Nvidia has launched an upgraded version of StyleGAN that fixes characteristic artifacts and further improves the quality of generated images. StyleGAN, the first image generation method of its kind to produce highly realistic images, was introduced last year and open-sourced in February 2019.
StyleGAN2 redefines the state of the art in unconditional image modeling, both in terms of existing distribution quality metrics and perceived image quality. According to the research paper, StyleGAN2 improves several methods and characteristics, with changes to both the model architecture and the training methods.
Major Improvements in StyleGAN2:
- Faster Training Method
- Higher-quality generated images (better FID scores and fewer artifacts)
- Better Style-mixing
- Smoother interpolation (extra regularization)
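Smoother interpolation means that walking between two latent codes produces gradual, artifact-free image changes. A minimal NumPy sketch of such a latent-space walk is shown below; the generator itself is omitted and the function name `lerp_latents` is illustrative, not from the paper or any official codebase:

```python
import numpy as np

def lerp_latents(z0, z1, steps):
    """Linearly interpolate between two latent vectors.

    Returns an array of shape (steps, dim): a path from z0 to z1
    that a generator could map to a sequence of images.
    """
    t = np.linspace(0.0, 1.0, steps)[:, None]  # (steps, 1) blend weights
    return (1.0 - t) * z0 + t * z1

# Two random 512-dimensional latent codes (512 is the latent size
# used by StyleGAN's mapping network).
rng = np.random.default_rng(0)
z0 = rng.standard_normal(512)
z1 = rng.standard_normal(512)
path = lerp_latents(z0, z1, steps=8)
```

Feeding each row of `path` through a generator would yield a sequence of images morphing from one face to another; StyleGAN2's extra path length regularization encourages this mapping to be smooth.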
An unofficial implementation of StyleGAN2 using TensorFlow 2.0: https://github.com/manicman1999/StyleGAN2-Tensorflow-2.0