NVIDIA Releases Imaginaire: A Universal PyTorch Library Designed For Various GAN-Based Tasks And Methods

Source: https://www.youtube.com/watch?v=jgTX5OnAsYQ&feature=youtu.be

NVIDIA has released Imaginaire, a universal PyTorch library with optimized implementations of various GAN-based image and video synthesis methods.

The Imaginaire library currently covers three types of models, providing tutorials for each of them:

  • Supervised image-to-image translation
  • Unsupervised image-to-image translation
  • Video-to-video translation

Imaginaire provides different algorithms depending on the model type, including COCO-FUNIT, SPADE/GauGAN, and MUNIT (Multimodal Unsupervised Image-to-image Translation), among others.

One of the projects developed using Imaginaire is COCO-FUNIT, which was trained on an NVIDIA DGX-1 with 8 V100 32GB GPUs. It applies the style of an example image to the content of an input image to produce a style-guided image-to-image translation.
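A common building block in style-guided translation models of this family is adaptive instance normalization (AdaIN): the content features are normalized, then re-scaled and re-shifted with statistics derived from the style image. The snippet below is a minimal NumPy sketch of that idea only; it is illustrative and does not reproduce Imaginaire's actual implementation or API.

```python
import numpy as np

def adain(content_feat, style_mean, style_std, eps=1e-5):
    """Adaptive instance normalization (illustrative sketch).

    content_feat: (C, H, W) feature map from a content encoder.
    style_mean, style_std: (C,) statistics predicted from a style image.
    Normalizes each channel of the content features, then re-styles
    them with the target mean and standard deviation.
    """
    c_mean = content_feat.mean(axis=(1, 2), keepdims=True)
    c_std = content_feat.std(axis=(1, 2), keepdims=True)
    normalized = (content_feat - c_mean) / (c_std + eps)
    return normalized * style_std[:, None, None] + style_mean[:, None, None]

# Toy usage: restyle a random 3-channel 4x4 feature map so each channel
# has mean ~1.0 and standard deviation ~2.0.
rng = np.random.default_rng(0)
feat = rng.normal(size=(3, 4, 4))
out = adain(feat, style_mean=np.ones(3), style_std=np.full(3, 2.0))
```

In COCO-FUNIT specifically, the style statistics are computed with a content-conditioned style encoder, which is the paper's main modification over FUNIT.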

Supervised Image-to-Image Translation

| Algorithm Name | Feature | Publication |
| --- | --- | --- |
| pix2pixHD | Learn a mapping that converts a semantic image to a high-resolution photorealistic image. | Wang et al., CVPR 2018 |
| SPADE | Improve pix2pixHD on handling diverse input labels and delivering better output quality. | Park et al., CVPR 2019 |
https://github.com/NVlabs/imaginaire#supervised-image-to-image-translation

Unsupervised Image-to-Image Translation

| Algorithm Name | Feature | Publication |
| --- | --- | --- |
| UNIT | Learn a one-to-one mapping between two visual domains. | Liu et al., NeurIPS 2017 |
| MUNIT | Learn a many-to-many mapping between two visual domains. | Huang et al., ECCV 2018 |
| FUNIT | Learn a style-guided image translation model that can generate translations in unseen domains. | Liu et al., ICCV 2019 |
| COCO-FUNIT | Improve FUNIT with a content-conditioned style encoding scheme for style code computation. | Saito et al., ECCV 2020 |
https://github.com/NVlabs/imaginaire#unsupervised-image-to-image-translation

Video-to-video Translation

| Algorithm Name | Feature | Publication |
| --- | --- | --- |
| vid2vid | Learn a mapping that converts a semantic video to a photorealistic video. | Wang et al., NeurIPS 2018 |
| fs-vid2vid | Learn a subject-agnostic mapping that converts a semantic video and an example image to a photorealistic video. | Wang et al., NeurIPS 2019 |
| wc-vid2vid | Improve vid2vid on view consistency and long-term consistency. | Mallya et al., ECCV 2020 |
https://github.com/NVlabs/imaginaire#video-to-video-translation

GitHub: https://github.com/NVlabs/imaginaire
