Facebook AI Uses Reverse Engineering of Generative Models From a Single Deepfake Image to Study and Detect Deepfakes

Source: https://ai.facebook.com/blog/reverse-engineering-generative-model-from-a-single-deepfake-image/

Deepfakes have become increasingly convincing over the years. In collaboration with Michigan State University (MSU), Facebook has presented a research method for detecting and attributing deepfakes by reverse engineering, from a single AI-generated image, the generative model used to produce it. The technique will allow deepfake detection and tracing in real-world scenarios, where often the only information available to deepfake detectors is the image itself.

Current methods for dealing with deepfakes emphasize identifying whether an image is real or a deepfake (detection), or identifying whether it was generated by a model seen during training (image attribution).

Reverse engineering is a different way of approaching the problem of deepfakes. Facebook’s reverse engineering method is based on unearthing the unique patterns behind the AI model used to generate a single deepfake image. It starts with image attribution and then works toward discovering properties of the model used to create the image. By generalizing image attribution to open-set recognition, it can infer information about the generative model behind a deepfake even when that model was never seen during training.

It could also tell whether a series of images originated from a single source by tracing resemblances among patterns of a collection of deepfakes. The ability to detect which deepfakes have been created from the same AI model can help find instances of coordinated misinformation or other malicious attacks launched using deepfakes.
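The same-source idea can be sketched as a simple similarity search over fingerprint vectors. The snippet below is illustrative, not Facebook's implementation; the cosine metric, the greedy grouping, and the 0.9 threshold are all assumptions:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two fingerprint vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def group_by_source(fingerprints, threshold=0.9):
    """Greedily group fingerprints whose similarity to a group's first
    member exceeds the (hypothetical) threshold; each group then stands
    for images suspected to come from the same generative model."""
    groups = []  # each group is a list of indices into `fingerprints`
    for i, fp in enumerate(fingerprints):
        for g in groups:
            if cosine_similarity(fp, fingerprints[g[0]]) >= threshold:
                g.append(i)
                break
        else:
            groups.append([i])
    return groups
```

Two nearly parallel fingerprints end up in one group, while an orthogonal one starts its own, which is the behavior a coordinated-campaign search would rely on.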

The researchers start by running a deepfake image through a fingerprint estimation network (FEN) to extract details of the fingerprint left by the generative model. Like device fingerprints, image fingerprints are unique patterns left on images created by a generative model, and they can therefore be used to identify the generative model an image came from.
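The FEN itself is a learned network, but the intuition behind an image fingerprint can be illustrated with a classical high-pass residual, in the spirit of camera-noise fingerprints. This sketch (an assumption for illustration, not the FEN) subtracts a 3x3 box-blurred copy of the image, keeping the high-frequency residue where generator artifacts tend to concentrate:

```python
def estimate_fingerprint(image):
    """Rough stand-in for a learned fingerprint estimator: subtract a
    3x3 box-blurred copy of a grayscale image (list of rows of floats),
    leaving the high-frequency residual."""
    h, w = len(image), len(image[0])
    residual = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += image[ny][nx]
                        count += 1
            residual[y][x] = image[y][x] - total / count
    return residual
```

On a perfectly flat image the residual is zero everywhere; only fine-grained structure, which is where a generator's telltale patterns live, survives the subtraction.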

The researchers used the properties of image fingerprints as the basis for developing constraints for unsupervised training. They then used different loss functions to apply those constraints to the FEN, forcing the generated fingerprints to have the desired properties. The fingerprints can then be used as inputs for model parsing. Facebook’s reverse engineering technique is thus like recognizing the components of a car just from how it sounds, even if it is a new car.
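As a rough illustration of constraint-style losses, the sketch below penalizes two properties a fingerprint might be required to have: low magnitude, and (chosen arbitrarily here) symmetry under horizontal flipping. These constraints and the weights are illustrative stand-ins, not the paper's actual formulation:

```python
def magnitude_loss(fingerprint):
    """Penalize large fingerprint values: a fingerprint should be a
    low-magnitude perturbation on top of the image content."""
    values = [v for row in fingerprint for v in row]
    return sum(v * v for v in values) / len(values)

def symmetry_loss(fingerprint):
    """Penalize asymmetry under horizontal flipping (an illustrative
    constraint, not one taken from the paper)."""
    loss, n = 0.0, 0
    for row in fingerprint:
        for a, b in zip(row, row[::-1]):
            loss += (a - b) ** 2
            n += 1
    return loss / n

def total_constraint_loss(fingerprint, w_mag=1.0, w_sym=1.0):
    # The weights are hypothetical; in practice they would be tuned.
    return w_mag * magnitude_loss(fingerprint) + w_sym * symmetry_loss(fingerprint)
```

Minimizing a weighted sum of such terms is what "applying constraints through loss functions" means in practice: the network is free to produce any fingerprint, but gradients push it toward ones with the desired properties.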

Through the model parsing approach, the researchers estimated both the network architecture of the model used to create a deepfake and its training loss functions. They also normalized some continuous parameters of the network architecture to ease training, and performed hierarchical learning to classify the loss function types. Mapping from the generated image to the hyperparameter space allowed the researchers to gain a critical understanding of the features of the model used to create it.
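A minimal sketch of the model-parsing step: two heads map a fingerprint embedding to continuous (normalized) architecture estimates and to a score per loss-function type. The dimensions, the hyperparameter and loss-type sets, and the weights here are all hypothetical:

```python
import random

random.seed(0)

FEATURE_DIM = 8
NUM_ARCH_PARAMS = 3   # e.g. normalized layer count, kernel size, width (assumed)
NUM_LOSS_TYPES = 4    # e.g. adversarial, pixel, perceptual, cycle (assumed)

# Hypothetical "learned" weights; in the real system these come from training.
W_arch = [[random.uniform(-0.1, 0.1) for _ in range(FEATURE_DIM)]
          for _ in range(NUM_ARCH_PARAMS)]
W_loss = [[random.uniform(-0.1, 0.1) for _ in range(FEATURE_DIM)]
          for _ in range(NUM_LOSS_TYPES)]

def parse_model(fingerprint_vec):
    """Map a fingerprint embedding to (continuous architecture
    estimates, predicted loss-type index) with two linear heads."""
    arch = [sum(w * f for w, f in zip(row, fingerprint_vec)) for row in W_arch]
    loss_scores = [sum(w * f for w, f in zip(row, fingerprint_vec)) for row in W_loss]
    predicted_loss_type = max(range(NUM_LOSS_TYPES), key=lambda i: loss_scores[i])
    return arch, predicted_loss_type
```

The split into a regression head (continuous, normalized architecture parameters) and a classification head (loss-function types) mirrors the two kinds of hyperparameters the article describes.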

The research team put together a fake image data set with 100,000 synthetic images generated from 100 publicly available generative models to test the approach. Next, the research team replicated real-world applications by performing cross-validation to train and evaluate the models on different splits of their data sets.
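The cross-validation idea can be illustrated with model-level folds, so that the generative models used for evaluation never overlap with those used for training. The fold count and split logic below are assumptions for illustration; the paper's exact protocol may differ:

```python
def model_level_folds(num_models=100, k=5):
    """Split generative-model indices into k folds; each fold yields a
    (train, test) pair with no model shared between the two sides."""
    fold_size = num_models // k
    folds = []
    for i in range(k):
        test = list(range(i * fold_size, (i + 1) * fold_size))
        test_set = set(test)
        train = [m for m in range(num_models) if m not in test_set]
        folds.append((train, test))
    return folds
```

Splitting by model rather than by image is what makes the evaluation "real-world": at test time the detector faces images from generators it has never seen.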

Since Facebook’s team is the first to conduct model parsing, there were no existing baselines for comparison. They therefore formed a baseline, called random ground truth, by randomly shuffling each hyperparameter in the ground-truth set. The results showed that their method performs better than this random ground-truth baseline, indicating a robust, generalizable correlation between generated images and the embedding space of meaningful architecture hyperparameters and loss function types.
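The random ground-truth baseline can be reproduced in miniature: shuffle each hyperparameter independently across models, which destroys the image-to-hyperparameter correspondence while preserving each hyperparameter's marginal distribution. The table layout below is an assumption for illustration:

```python
import random

def random_ground_truth(hyperparam_table, seed=0):
    """Shuffle each hyperparameter column independently across models.
    `hyperparam_table` is a list of rows, one row of hyperparameter
    values per generative model."""
    rng = random.Random(seed)
    num_params = len(hyperparam_table[0])
    columns = [[row[j] for row in hyperparam_table] for j in range(num_params)]
    for col in columns:
        rng.shuffle(col)
    return [[columns[j][i] for j in range(num_params)]
            for i in range(len(hyperparam_table))]
```

Because each column keeps the same multiset of values, any predictor that beats this baseline must be exploiting a genuine link between the image and its model's hyperparameters, not just the distribution of hyperparameter values.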

Along with model parsing, the FEN can be used for deepfake detection and image attribution. For both tasks, the researchers added a shallow network that takes the estimated fingerprint as input and performs binary (detection) or multi-class (attribution) classification.
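For the binary detection case, a "shallow network" on top of the fingerprint can be as simple as a single logistic layer. This is an illustrative stand-in with hypothetical weights, not the actual classifier:

```python
import math

def shallow_detector(fingerprint_vec, weights, bias=0.0):
    """One-layer logistic head: returns a score in (0, 1) that the
    image is a deepfake, given its estimated fingerprint vector.
    Weights and bias are hypothetical learned parameters."""
    z = bias + sum(w * f for w, f in zip(weights, fingerprint_vec))
    return 1.0 / (1.0 + math.exp(-z))
```

With a zero fingerprint (no detectable generator residue) the score sits at 0.5, i.e., the head is undecided; attribution would replace the single logistic output with one score per known generative model.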

All the experiments on the reverse engineering process and the fake face image generation process were conducted at MSU. MSU will open-source the data set, code, and trained models to the broader research community to facilitate research in various domains.

Facebook’s research deepens the understanding of deepfake detection, introducing the concept of model parsing, which is better suited to real-world applications. The work will also give researchers and practitioners tools to better investigate incidents of coordinated misinformation that use deepfakes.
