DeepMind, in collaboration with YouTube, has unveiled a cutting-edge AI model, Flamingo, designed to enhance the searchability of YouTube Shorts videos. These short clips, similar to videos on the popular platform TikTok, often lack descriptive text and meaningful titles, making it difficult for users to find specific content. With the introduction of Flamingo, users will have a far easier way to discover these videos.
Flamingo employs its advanced visual language model to generate explanatory text by analyzing the initial frames of a YouTube Shorts video. For instance, it might describe a scene as “a cat playing with a wool ball.” The generated text is stored as metadata, enabling more accurate video classification and making the clips accessible to the search engine.
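The flow described above can be sketched as a small pipeline: caption a clip's opening frames, store the caption as metadata, and match search queries against it. Note that Flamingo's actual API is not public; `describe_frames`, the index structure, and the search logic below are all illustrative stand-ins, not YouTube's implementation.

```python
def describe_frames(frames):
    """Stand-in for the Flamingo visual language model.
    In reality this would caption the video's opening frames; here we
    return a canned description keyed on a fake frame label."""
    if "cat" in frames[0]:
        return "a cat playing with a wool ball"
    return "unlabeled clip"

def index_short(video_id, frames, index):
    """Generate a description for a Short and store it as searchable metadata."""
    caption = describe_frames(frames)
    index[video_id] = {"description": caption}
    return caption

def search(query, index):
    """Match a text query against the AI-generated descriptions."""
    return [vid for vid, meta in index.items()
            if query.lower() in meta["description"].lower()]

index = {}
index_short("short_123", ["cat_frame_0", "cat_frame_1"], index)
print(search("wool ball", index))  # ['short_123']
```

The key design point the article describes is that the caption never surfaces to creators; it lives only in the metadata store that the search backend queries.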
The impact of Flamingo has already been felt, as hundreds of thousands of newly uploaded Shorts videos have benefited from AI-generated descriptions. YouTube intends to gradually implement this technology across all Shorts videos, making them easier to find for viewers worldwide.
Flamingo is the latest collaboration between DeepMind and YouTube, following Google’s announcement in April that DeepMind and Google Brain would merge into a unified AI business group. Their previous joint ventures include using DeepMind’s AI model MuZero to improve compression in YouTube’s VP9 codec. Additionally, DeepMind and YouTube teamed up in 2018 to help video creators maximize revenue by aligning advertisements with YouTube’s policies. That partnership produced a label quality model (LQM) that labels content more accurately, improving advertising precision and fostering trust among viewers, creators, and advertisers on the platform.
Continuing their collaboration, DeepMind and YouTube worked on improving the user experience through video chapters. This effort produced an AI system that autonomously processes video and audio transcriptions and suggests chapter segmentation and titles. The feature, known as AutoChapters, was unveiled by CEO Sundar Pichai during Google I/O 2022. With AutoChapters, users no longer need to painstakingly scrub through lengthy videos, as the AI system quickly identifies key sections. The feature is already applied to 8 million videos, and DeepMind plans to expand it to 80 million videos in the coming year.
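To make the chapter-suggestion idea concrete, here is a toy sketch in the spirit of AutoChapters. The real system is not public; this illustrative version simply starts a new chapter whenever consecutive transcript segments share few words (a crude proxy for a topic shift) and titles each chapter with its opening words. The function name, the overlap heuristic, and the threshold are all assumptions for illustration.

```python
def suggest_chapters(segments, min_overlap=2):
    """segments: list of (start_seconds, text) transcript pieces.
    Returns a list of (start_seconds, title) chapter suggestions.
    A new chapter begins when the current segment shares fewer than
    `min_overlap` words with the previous one."""
    chapters = []
    prev_words = set()
    for start, text in segments:
        words = set(text.lower().split())
        if not chapters or len(words & prev_words) < min_overlap:
            # Title the chapter with its first few words.
            title = " ".join(text.split()[:4])
            chapters.append((start, title))
        prev_words = words
    return chapters

transcript = [
    (0, "welcome to the cooking show today we make pasta"),
    (45, "first boil the pasta water with salt"),
    (120, "now the guitar solo section begins"),
    (150, "tune the guitar before the solo"),
]
print(suggest_chapters(transcript))
# [(0, 'welcome to the cooking'), (120, 'now the guitar solo')]
```

A production system would of course use learned models over both the transcript and the visual stream rather than word overlap, but the input/output shape, timed text in, timestamped chapter titles out, matches what the article describes.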
Regarding Flamingo, the YouTube Shorts production team has clarified that the metadata generated by the AI model will not be visible to creators. The primary focus is to enhance search accuracy significantly. Additionally, Google ensures that the text produced by Flamingo adheres to its strict responsibility standards, avoiding any negative representation of video content.
As Flamingo begins its journey to revolutionize the searchability of YouTube Shorts videos, the accuracy of its AI labeling capabilities will be closely observed. In this era of advancing AI technologies, Flamingo is a testament to the collaboration between DeepMind and YouTube. Through their joint efforts, they continue to redefine the boundaries of AI innovation, fostering a more engaging and accessible environment for creators and viewers alike.
Check out the Twitter Thread and Blog.
Niharika is a technical consulting intern at Marktechpost. She is a third-year undergraduate pursuing her B.Tech at the Indian Institute of Technology (IIT), Kharagpur. She is a highly enthusiastic individual with a keen interest in machine learning, data science, and AI, and an avid reader of the latest developments in these fields.