CMU Researchers Introduce a Content-based Search Engine for Modelverse, a Model-Sharing Platform that Contains a Diverse Set of Deep Generative Models

This work introduces the task of content-based model search: given a user's input query, retrieve the most relevant deep image generative models. As shown in the graphic below, a user can retrieve a model based on its ability to synthesize images that match an image query (e.g., a landscape photo), a text query (e.g., African animals), or a sketch query (e.g., a drawing of a standing cat), or based on its resemblance to a provided query model. But why is content-based model search helpful? Deep generative models are no longer just the output of scientific studies; they are becoming the foundation for content-creation software and applications.

The search method (first row) supports queries in four distinct modalities, from left to right: text, images, sketches, and existing models. The top two retrieved models are shown in the second and third rows, and the color of each model icon indicates the model type. Across all modalities, the technique retrieves applicable models with comparable semantic concepts. | Source: https://arxiv.org/pdf/2210.03116v1.pdf

Each model represents a miniature world of carefully chosen themes: realistic renderings of people and landscapes, images of ancient pottery, cartoon caricatures, or the aesthetic of a single artist. More recently, various techniques have made it possible to creatively alter and customize existing models, whether through human-in-the-loop interfaces or by fine-tuning GANs and text-to-image models. Each generative model can thus reflect its creator's deep engagement with a particular concept. It is becoming increasingly difficult for a user to be aware of every interesting generative model, even though selecting the best model for a particular need can be vital.

Each generative model can quickly synthesize an unbounded set of images, interpolations, or latent-variable manipulations. Still, the researchers have found that selecting the right model from an extensive collection can produce outcomes significantly better than those obtained from an inappropriate model. Just as information and image retrieval let users find relevant items within vast collections of conventional media, model search lets users locate the model that most closely matches their unique requirements. The challenge of content-based model search is hard: even the straightforward question of whether a single model can generate a particular image can be computationally demanding.


Unfortunately, many deep generative models neither provide an efficient or accurate method of estimating density nor natively support measuring cross-modal similarity (e.g., between text and images). A naïve Monte Carlo technique can compare the input query against a large number of samples from each generative model and select the model whose samples match the query most frequently, but such a sampling-based search would be prohibitively slow. To address these issues, the researchers first provide a general probabilistic formulation of the model search problem, along with a Monte Carlo baseline. To save time and space, they then "compress" each model's distribution into pre-computed first- and second-order moments of the deep feature embeddings of its samples.
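The "compression" step can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `compress_model` is hypothetical, and random vectors stand in for real deep feature embeddings (the paper uses features from a pretrained encoder).

```python
import numpy as np

def compress_model(features):
    """Summarize a model's sample distribution by its first two moments.

    features: (N, D) array of deep feature embeddings of N generated samples.
    Returns the mean (D,) and covariance (D, D), so the full sample set never
    needs to be stored or re-sampled at query time.
    """
    mu = features.mean(axis=0)
    centered = features - mu
    sigma = centered.T @ centered / (len(features) - 1)
    return mu, sigma

# Hypothetical example: 10,000 samples with 512-dim embeddings.
rng = np.random.default_rng(0)
feats = rng.normal(size=(10_000, 512))
mu, sigma = compress_model(feats)
print(mu.shape, sigma.shape)  # (512,) (512, 512)
```

Once every model in the collection is summarized this way, search-time cost no longer depends on how many samples each model was summarized from.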

From this representation, they derive closed-form solutions for model retrieval given an input image, text, sketch, or model query, and the final formula can be computed in real time. They evaluate their method and conduct ablation studies on 133 deep generative models, including GANs (e.g., StyleGAN-family models), diffusion models (e.g., DDPM), and auto-regressive models (e.g., VQGAN). Compared to the Monte Carlo baseline, their approach offers a significantly faster search (within 0.08 milliseconds, a 5x speedup) while maintaining good accuracy. Finally, they demonstrate GAN inversion and few-shot model fine-tuning as applications of model search.
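A closed-form retrieval of this flavor might rank models by the Gaussian log-density of the query embedding under each model's pre-computed moments. The sketch below is an assumption-laden toy (hypothetical function names, tiny 8-dimensional embeddings, identity covariances), not the paper's exact scoring rule:

```python
import numpy as np

def log_density(query, mu, sigma, eps=1e-4):
    """Multivariate Gaussian log-density of a query embedding under a model
    summarized by (mu, sigma). A small ridge keeps sigma invertible."""
    d = len(mu)
    sigma = sigma + eps * np.eye(d)
    diff = query - mu
    _, logdet = np.linalg.slogdet(sigma)
    maha = diff @ np.linalg.solve(sigma, diff)  # Mahalanobis distance term
    return -0.5 * (maha + logdet + d * np.log(2 * np.pi))

def rank_models(query, model_stats):
    """Return model names sorted by descending log-density of the query."""
    scores = {name: log_density(query, mu, sig)
              for name, (mu, sig) in model_stats.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Toy collection: model_A's embeddings center at 0, model_B's at 5.
d = 8
stats = {
    "model_A": (np.zeros(d), np.eye(d)),
    "model_B": (np.full(d, 5.0), np.eye(d)),
}
query = np.random.default_rng(1).normal(size=d) * 0.1  # close to model_A
print(rank_models(query, stats))  # model_A should rank first
```

Because each score only touches a (D,) mean and (D, D) covariance, ranking an entire collection is a handful of small linear-algebra operations per model, which is what makes sub-millisecond search plausible.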

To the researchers' knowledge, theirs is the first content-based search algorithm for machine learning models. The search method is deployed on Modelverse, an online platform for academics, students, and artists to efficiently use and share generative models, available at https://modelverse.cs.cmu.edu/.

This article is written as a research summary by Marktechpost staff based on the research paper 'Content-Based Search for Deep Generative Models'. All credit for this research goes to the researchers on this project. Check out the paper, GitHub link, Modelverse, and project.


Aneesh Tickoo is a consulting intern at MarktechPost. He is currently pursuing his undergraduate degree in Data Science and Artificial Intelligence from the Indian Institute of Technology (IIT), Bhilai. He spends most of his time working on projects aimed at harnessing the power of machine learning. His research interest lies in image processing, and he is passionate about building solutions around it. He loves to connect with people and collaborate on interesting projects.