Researchers from UCLA and Snap Introduce Dual-Pivot Tuning: A Groundbreaking AI Approach for Personalized Facial Image Restoration

Image restoration is a complex challenge that has garnered significant attention from researchers. Its primary objective is to produce visually appealing, natural images while remaining faithful to the degraded input. When no information is available about the subject or the degradation (blind restoration), a strong model of the space of natural images is critical. For facial images specifically, it is essential to incorporate an identity prior to ensure that the output retains the individual's unique facial features. Previous research has explored reference-based face image restoration to address this requirement. However, integrating personalization into diffusion-based blind restoration systems remains a persistent challenge.

A team of researchers from the University of California, Los Angeles, and Snap Inc. has developed a method for personalized image restoration called Dual-Pivot Tuning, an approach that customizes a text-to-image prior in the context of blind image restoration. The process uses a small set of high-quality images of an individual to improve the restoration of that person's other, degraded images. The primary objectives are to ensure that the restored images exhibit high fidelity both to the person's identity and to the degraded input while maintaining a natural appearance.

The study observes that diffusion-based blind restoration methods may fail to preserve an individual's unique identity when applied to degraded facial images. The researchers survey previous efforts in reference-based face image restoration, citing methods such as GFRNet, GWAINet, ASFFNet, Wang et al., DMDNet, and MyStyle. These approaches leverage one or more reference images to achieve personalized restoration, improving fidelity to the distinct features of the person in the degraded images. The proposed technique differs from this prior work in that it builds on a diffusion-based personalized generative prior, whereas earlier methods rely on feedforward architectures or GAN-based priors.

The study outlines the method for personalizing guided diffusion models for image restoration. The Dual-Pivot Tuning technique involves two steps: text-based fine-tuning to embed identity-specific information within the diffusion prior, and model-centric pivoting to harmonize the guiding image encoder with the personalized prior. The authors define a personalization operator for text-to-image diffusion models, in which a model is fine-tuned around a fixed pivot to create a customized version. Concretely, the technique first performs in-context textual pivoting to inject identity information, followed by model-based pivoting, which leverages the generic restoration prior to achieve high-fidelity restored images.
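The two-stage schedule described above can be illustrated with a deliberately simplified sketch. The paper's actual pipeline operates on a full text-to-image diffusion model with a guiding image encoder; the toy code below uses tiny linear modules in place of those networks purely to show the pivoting order (all names, dimensions, and losses here are assumptions for illustration, not the authors' implementation):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-ins (assumed, not the paper's architecture):
# "prior" denoises an image feature conditioned on an embedding;
# "encoder" maps a degraded image feature to a guidance embedding.
prior = nn.Linear(16 + 8, 16)
encoder = nn.Linear(16, 8)
identity_token = torch.randn(8)   # fixed textual pivot, e.g. "<person>"
refs = torch.randn(4, 16)         # few high-quality reference "images"

# Stage 1: textual pivoting -- fine-tune the prior around the fixed
# identity token so it reconstructs the reference images.
stage1_losses = []
opt1 = torch.optim.Adam(prior.parameters(), lr=1e-2)
for _ in range(200):
    noisy = refs + 0.1 * torch.randn_like(refs)
    cond = identity_token.expand(len(refs), -1)
    loss = (prior(torch.cat([noisy, cond], dim=1)) - refs).pow(2).mean()
    opt1.zero_grad(); loss.backward(); opt1.step()
    stage1_losses.append(loss.item())

# Stage 2: model-centric pivoting -- freeze the personalized prior and
# fine-tune only the guiding encoder, so its guidance "speaks the
# language" of the personalized prior.
for p in prior.parameters():
    p.requires_grad_(False)
stage2_losses = []
opt2 = torch.optim.Adam(encoder.parameters(), lr=1e-2)
for _ in range(200):
    degraded = refs + 0.5 * torch.randn_like(refs)
    cond = encoder(degraded)
    loss = (prior(torch.cat([degraded, cond], dim=1)) - refs).pow(2).mean()
    opt2.zero_grad(); loss.backward(); opt2.step()
    stage2_losses.append(loss.item())
```

The key design point the sketch preserves is asymmetry: the text pivot stays fixed while the prior adapts in stage one, and the prior stays fixed while the encoder adapts in stage two.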

The proposed Dual-Pivot Tuning technique achieves high identity fidelity and a natural appearance in restored images. Qualitative comparisons show that generic diffusion-based blind restoration approaches may fail to retain the individual's identity, while the proposed technique maintains high identity fidelity without perceivable loss in fidelity to the degraded input. Quantitative evaluations using metrics such as PSNR, SSIM, and ArcFace similarity demonstrate the effectiveness of the proposed method in restoring images with high fidelity to the person's identity.
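Of the metrics mentioned, PSNR is the simplest to reproduce: it is a log-scaled inverse of the mean squared error between the restored image and a reference (SSIM and ArcFace similarity require dedicated implementations, e.g. scikit-image and a pretrained face-recognition network). A minimal NumPy version, assuming images normalized to [0, 1]:

```python
import numpy as np

def psnr(reference, restored, max_val=1.0):
    """Peak signal-to-noise ratio in dB for images in [0, max_val]."""
    mse = np.mean((reference - restored) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

clean = np.zeros((8, 8))
noisy = clean + 0.1              # uniform error of 0.1 -> MSE = 0.01
print(round(psnr(clean, noisy), 1))  # -> 20.0
```

Higher PSNR indicates a restoration closer to the reference pixel-wise; it complements identity-aware metrics like ArcFace similarity, which compare faces in an embedding space rather than pixel space.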

In conclusion, personalized restoration via Dual-Pivot Tuning achieves high identity fidelity and a natural appearance in restored images. Experiments demonstrate the superiority of the proposed method over various state-of-the-art alternatives for blind and few-shot personalized face image restoration. The customized model shows improved fidelity to the person's identity and outperforms generic priors in overall image quality. The method is agnostic to the type of degradation and provides consistent restoration while retaining identity.


Check out the Paper and Project. All credit for this research goes to the researchers of this project.
