AI systems are increasingly used to estimate and modify a person's age from images. Building models that are robust to aging variations requires large, high-quality longitudinal datasets: collections of images of many individuals gathered over several years.
Numerous AI models have been designed for such tasks, but many struggle to manipulate the age attribute while preserving the individual's facial identity. These systems also face the typical challenge of assembling a large training set of images that show the same people across many years.
Researchers at the NYU Tandon School of Engineering have developed a new artificial intelligence technique that changes a person's apparent age in images while preserving the individual's unique biometric identity.
The researchers trained the model on a small set of images of each individual, together with a separate collection of images captioned with the person's age category: child, teenager, young adult, middle-aged, elderly, or old. The image set contains photos of celebrities captured throughout their lives, while the captioned pictures teach the model the relationship between an image and an age. Once trained, the model can simulate either aging or de-aging by specifying a desired target age through a text prompt, which guides the image generation process.
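To make the prompt-guided control concrete, here is a minimal sketch of how a target-age text prompt might be assembled for such a model. The identity placeholder token (`sks`) and the exact prompt wording are assumptions for illustration, not the authors' actual implementation.

```python
# Age categories named in the article's caption set.
AGE_CATEGORIES = ["child", "teenager", "young adult", "middle-aged", "elderly", "old"]

def build_age_prompt(identity_token: str, target_age: str) -> str:
    """Compose a text prompt asking the model to render the learned
    identity at the requested age category (hypothetical prompt format)."""
    if target_age not in AGE_CATEGORIES:
        raise ValueError(f"unknown age category: {target_age}")
    return f"photo of {identity_token} person, {target_age}"

print(build_age_prompt("sks", "elderly"))  # prompt fed to the generator
```

In a DreamBooth-style setup, the rare token stands in for the learned identity, so varying only the age phrase steers the generation while the identity stays fixed.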
The researchers used a pre-trained latent diffusion model, a small set of 20 training face images of an individual (to learn that person's identity-specific information), and a small auxiliary set of 600 image-caption pairs (to learn the association between an image and its caption).
They fine-tuned the model with appropriate loss functions and by adding and removing random noise in the images. They also used the "DreamBooth" technique, which manipulates human facial images through a gradual, controlled transformation driven by a combination of neural network components.
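The "adding random noise" step corresponds to the forward process of a diffusion model: a clean latent is mixed with Gaussian noise according to a schedule, and the network is trained to remove it. Below is a minimal sketch of that forward step, using scalars in place of latent tensors and an illustrative linear beta schedule; the actual schedule and hyperparameters are assumptions, not the authors' settings.

```python
import math

def alpha_bar(t: int, T: int = 1000,
              beta_start: float = 1e-4, beta_end: float = 0.02) -> float:
    """Cumulative product of (1 - beta_s) up to step t, for a linear
    beta schedule (illustrative values)."""
    prod = 1.0
    for s in range(1, t + 1):
        beta = beta_start + (beta_end - beta_start) * (s - 1) / (T - 1)
        prod *= 1.0 - beta
    return prod

def add_noise(x0: float, t: int, eps: float) -> float:
    """DDPM forward process: x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps.
    At t = 0 the sample is untouched; at large t it is almost pure noise."""
    abar = alpha_bar(t)
    return math.sqrt(abar) * x0 + math.sqrt(1.0 - abar) * eps
```

During fine-tuning, the loss is typically the mean-squared error between the injected noise `eps` and the network's prediction of it from `x_t`, which is what "removing" the disturbance amounts to.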
They assessed the model's accuracy against alternative age-modification techniques. For this evaluation, 26 volunteers were asked to match each generated image with an actual photograph of the same individual, and the comparison was extended using ArcFace, a prominent facial recognition algorithm. The results showed that their method outperformed the other techniques, reducing the rate of incorrect rejections by up to 44%.
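The "incorrect rejections" metric can be sketched as a false rejection rate: a genuine same-person pair is falsely rejected when its embedding similarity falls below the verification threshold. The scores and threshold below are made-up illustrations, not values from the paper.

```python
def false_rejection_rate(genuine_scores, threshold: float) -> float:
    """Fraction of genuine (same-identity) pairs whose similarity score
    falls below the verification threshold, i.e. incorrect rejections."""
    rejected = sum(1 for s in genuine_scores if s < threshold)
    return rejected / len(genuine_scores)

# Hypothetical cosine similarities between an original photo and its
# age-modified counterpart, scored by a face-recognition model.
scores = [0.32, 0.55, 0.61, 0.28, 0.70]
print(false_rejection_rate(scores, threshold=0.5))  # 2 of 5 pairs rejected
```

A lower rate means the age-modified images are still recognized as the same person, which is the property the researchers' method improved.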
The researchers found that when the training dataset contains images from the middle-aged category, the generated images effectively cover a diverse range of age groups. If the training set consists mostly of elderly images, however, the model struggles to generate pictures at the opposite end of the spectrum, such as the child category. Furthermore, the model transforms training images into older age groups more convincingly for men than for women, a discrepancy that might arise from makeup in the training images. Variations in ethnicity or race did not yield noticeable effects in the generated outputs.
Check out the Paper. All credit for this research goes to the researchers on this project.
Rachit Ranjan is a consulting intern at MarktechPost. He is currently pursuing his B.Tech at the Indian Institute of Technology (IIT) Patna. He is actively shaping his career in Artificial Intelligence and Data Science and is passionate about exploring these fields.