Blind face restoration typically relies on facial priors such as facial geometry or reference images. However, these priors are of limited use in real-world scenarios: a severely degraded input does not provide an accurate geometric prior, and high-quality references are often inaccessible.
Researchers from Tencent AI propose GFP-GAN, a model that achieves a good balance of realness and fidelity in a single forward pass. It consists of a degradation removal module and a pretrained face generator used as a prior, connected by a direct latent code mapping and channel-split spatial feature transform (CS-SFT) layers in a coarse-to-fine manner. The CS-SFT layers perform spatial modulation on one split of the features and let the remaining ones pass through directly for better information preservation, allowing the method to incorporate the generative prior while retaining high fidelity. In addition, the researchers introduce a facial component loss with local discriminators to further enhance perceptual facial details, and an identity preserving loss to improve overall quality.
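The channel-split idea can be illustrated with a minimal sketch: modulate only a fraction of the feature channels with spatially varying scale and shift maps, and pass the rest through untouched. The function name, shapes, and split ratio below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def cs_sft(features, scale, shift, split_ratio=0.5):
    """Sketch of a channel-split spatial feature transform.

    A fraction of the channels is spatially modulated (scale * x + shift),
    while the remaining channels pass through unchanged to preserve
    information from the degradation removal module.
    Shapes are assumed to be (channels, height, width).
    """
    c = features.shape[0]
    k = int(c * split_ratio)  # number of channels to modulate
    modulated = scale[:k] * features[:k] + shift[:k]
    identity = features[k:]  # identity branch: passed through for fidelity
    return np.concatenate([modulated, identity], axis=0)

# Toy example: 4 channels over a 2x2 spatial grid
feats = np.ones((4, 2, 2))
scale = np.full((4, 2, 2), 2.0)
shift = np.zeros((4, 2, 2))
out = cs_sft(feats, scale, shift)
# First half of the channels is doubled; second half is unchanged
```

In the full model these scale and shift maps would be predicted by convolutional layers from the restoration features, so the modulation varies per spatial location.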
The GFP-GAN framework leverages this rich and diverse generative facial prior to strike a good balance of realness and fidelity in blind face restoration. This is achieved with the channel-split spatial feature transform layers, which let the method surpass prior approaches in both accuracy and generalization to real-world images, as extensive comparisons demonstrate.
- Researchers use rich and diverse facial priors for blind face restoration.
- The proposed GFP-GAN with CS-SFT layers achieves a good balance of realness and fidelity in a single forward pass.
- According to the paper, the proposed method outperforms prior art on both synthetic and real-world datasets.