HoloGAN: a new generative model that learns 3D representations from natural images

Image Source: https://github.com/thunguyenphuoc/HoloGAN

A group of researchers has proposed HoloGAN, a new generative adversarial network (GAN) that learns 3D representations from natural images in an unsupervised manner.

Unlike most GAN models, which rely on 2D kernels to generate images and therefore tend to produce blurry results or artifacts on tasks that require a strong understanding of 3D structure, HoloGAN learns a 3D representation of the scene and renders it realistically, giving explicit control over the viewpoint of the generated image.
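The core idea can be illustrated with a minimal PyTorch sketch (the authors' implementation is in TensorFlow; the class name, volume sizes, and the simple z-modulation below are illustrative stand-ins, not the paper's exact architecture): a learned 3D feature volume is rotated by a rigid-body transform controlled by the pose, projected down to 2D, and rendered with 2D convolutions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyHoloGenerator(nn.Module):
    """Toy 3D-aware generator: learned 3D volume -> rigid rotation -> 2D projection."""
    def __init__(self, z_dim=128):
        super().__init__()
        # Learned constant 3D feature volume (channels x depth x height x width)
        self.const = nn.Parameter(torch.randn(1, 64, 4, 4, 4))
        self.z_map = nn.Linear(z_dim, 64)  # stand-in for HoloGAN's AdaIN-style modulation
        self.deconv3d = nn.Sequential(
            nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
        )
        # After projection, depth is folded into channels: 16 channels * 16 depth slices
        self.render2d = nn.Sequential(
            nn.ConvTranspose2d(16 * 16, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, z, yaw):
        b = z.shape[0]
        style = self.z_map(z).view(b, 64, 1, 1, 1)
        vol = self.deconv3d(self.const.expand(b, -1, -1, -1, -1) * style)  # (b,16,16,16,16)
        # Rigid-body rotation of the feature volume about the vertical axis
        cos, sin = torch.cos(yaw), torch.sin(yaw)
        zeros, ones = torch.zeros_like(cos), torch.ones_like(cos)
        rot = torch.stack([
            torch.stack([cos, zeros, -sin, zeros], dim=-1),
            torch.stack([zeros, ones, zeros, zeros], dim=-1),
            torch.stack([sin, zeros, cos, zeros], dim=-1),
        ], dim=-2)                                            # (b, 3, 4) affine matrices
        grid = F.affine_grid(rot, vol.shape, align_corners=False)
        vol = F.grid_sample(vol, grid, align_corners=False)
        # Project to 2D by folding the depth axis into channels, then render the image
        b, c, d, h, w = vol.shape
        return self.render2d(vol.reshape(b, c * d, h, w))

g = ToyHoloGenerator()
img = g(torch.randn(2, 128), yaw=torch.tensor([0.0, 1.0]))
print(img.shape)  # torch.Size([2, 3, 64, 64])
```

Because the rotation is applied to the 3D feature volume before projection, changing the pose input rotates the rendered object rather than distorting it, which is what lets HoloGAN separate identity from viewpoint without 3D supervision.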

Paper: https://arxiv.org/pdf/1904.01326.pdf

Github: https://github.com/thunguyenphuoc/HoloGAN

Source: https://www.youtube.com/watch?v=z2DnFOQNECM&feature=youtu.be

Datasets

CelebA: http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html

LSUN: dataset and pre-processing code at https://github.com/fyu/lsun

ShapeNet chairs: https://drive.google.com/file/d/18GXkDR5Fro8KCldYCcmJXoCEY9iunPME/view?usp=sharing

Cats: dataset and pre-processing code at https://github.com/AlexiaJM/RelativisticGAN/tree/master/code
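As a rough illustration of the kind of pre-processing these links provide, the sketch below center-crops and resizes images to 64x64 before training; the directory names and resolution are placeholders, not the settings of the linked scripts.

```python
from pathlib import Path
from PIL import Image

def preprocess(src_dir="raw_images", dst_dir="processed", size=64):
    """Center-crop each image to a square, then resize it to size x size."""
    out = Path(dst_dir)
    out.mkdir(parents=True, exist_ok=True)
    for path in Path(src_dir).glob("*.jpg"):
        img = Image.open(path).convert("RGB")
        w, h = img.size
        s = min(w, h)
        img = img.crop(((w - s) // 2, (h - s) // 2, (w + s) // 2, (h + s) // 2))
        img.resize((size, size), Image.LANCZOS).save(out / path.name)

if __name__ == "__main__":
    preprocess()
```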
