Google’s Multitask Unified Model (MUM) Transforms How Google AI Understands Complex Queries

Google has launched the Multitask Unified Model (MUM), which aims to deliver expert-like answers to complex questions while requiring fewer searches. Google Search can usually find what a user is looking for, but complex tasks force users to type out many questions and run many searches: on average, people issue eight queries to complete such a task. Present-day search engines are not advanced enough to respond with expert-like answers.

For instance, consider a hiker who has climbed Mt. Adams, now wants to hike Mt. Fuji in the fall, and is looking for guidance on how to prepare differently. With Google, the hiker would have to run separate queries about each mountain's elevation, average temperature, trail difficulty, the right gear, and so on. Eventually, after multiple searches, the hiker might piece together the desired answer. A hiking expert, by contrast, would answer the same question thoughtfully, considering the nuances of the task at hand and guiding the hiker through the things to consider.

Google bridges this gap with MUM. While MUM is built on a Transformer architecture like BERT, it is 1,000 times more powerful, and it not only understands language but also generates it. The researchers trained the model across about 75 different languages and many different tasks at once, giving it a more comprehensive understanding of information and world knowledge than previous models. MUM can therefore grasp the full context of the hiking question and surface related essentials, such as fitness preparation, breathing exercises, and the right equipment. A minimal sketch of this multitask, text-to-text training setup appears below.
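MUM itself is not publicly available, so the sketch below uses the open multilingual mT5 model from Hugging Face as a stand-in; the prompts and targets are illustrative assumptions, not MUM's actual training data. It shows the general idea of training a single text-to-text Transformer on several tasks and languages with one shared loss.

```python
# Illustrative sketch only: MUM is not publicly available. The open
# multilingual mT5 model stands in to show how one text-to-text
# Transformer can be trained on many tasks and languages at once.
from transformers import MT5ForConditionalGeneration, MT5Tokenizer

tokenizer = MT5Tokenizer.from_pretrained("google/mt5-small")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")

# Different tasks are expressed as plain-text prompts, so the model
# learns them jointly; these task framings are hypothetical examples.
examples = [
    ("answer: How should I prepare to hike Mt. Fuji in the fall?",
     "Pack layered clothing and waterproof gear; autumn weather is variable."),
    ("translate English to Japanese: hiking trail",
     "ハイキングコース"),
]

for prompt, target in examples:
    inputs = tokenizer(prompt, return_tensors="pt")
    labels = tokenizer(target, return_tensors="pt").input_ids
    loss = model(**inputs, labels=labels).loss  # one loss across all tasks
    loss.backward()
```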

Removing Language Barriers

Language is often a barrier to accessing information. MUM can transfer knowledge across languages, learning from sources written in languages other than the one used in the query, and then surface valuable insights and the most relevant results in the user's preferred language. The sketch below illustrates the underlying idea of cross-lingual matching.
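MUM's internals are not public, so this sketch approximates cross-lingual retrieval with an open multilingual sentence encoder; the model name and example documents are assumptions chosen for illustration. The point is that an English query can be matched against documents written in Japanese.

```python
# Sketch of cross-lingual retrieval, the general idea behind MUM's
# language transfer, using an open multilingual sentence encoder.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

query = "What should I know before hiking Mt. Fuji in the fall?"  # English
docs = [
    "富士山の秋の登山では防寒着が必須です。",  # Japanese: warm clothing is essential
    "Der Eiffelturm ist 330 Meter hoch.",      # German: unrelated content
]

q_emb = encoder.encode(query, convert_to_tensor=True)
d_emb = encoder.encode(docs, convert_to_tensor=True)
scores = util.cos_sim(q_emb, d_emb)  # the Japanese hiking doc should score higher
print(scores)
```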


The multimodal feature: enables the model to understand information across several formats

Moreover, being multimodal, MUM understands information across both text and images, and the team plans to extend this to other formats such as video and audio. In the future, a user could take a photo of their hiking boots and ask, "Can I use these to hike Mt. Fuji?" MUM would understand the image, connect it with the query, and respond with an answer. A rough illustration of joint image-and-text understanding follows.
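MUM's multimodal component is not public; the sketch below uses OpenAI's open CLIP model, a different system, simply to show how an image and candidate text descriptions can be scored jointly. The file name and captions are hypothetical.

```python
# Illustration of joint image-and-text understanding using the open
# CLIP model (not MUM, whose multimodal internals are not public).
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("hiking_boots.jpg")  # hypothetical local photo
texts = [
    "sturdy hiking boots suitable for mountain trails",
    "casual sneakers for city walking",
]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
# Higher probability indicates the caption that better matches the photo.
probs = outputs.logits_per_image.softmax(dim=1)
print(probs)
```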


The researchers state that every improvement to Google Search undergoes a rigorous evaluation process to ensure that only relevant and helpful information reaches users. MUM will undergo the same process, and the team will also watch for patterns that indicate bias in machine learning (ML), to avoid introducing bias into the systems. They also mention reducing the carbon footprint of training systems like MUM so that Search remains efficient.

Source: https://blog.google/products/search/introducing-mum/

Tanushree Shenwai is a consulting intern at MarktechPost. She is currently pursuing her B.Tech at the Indian Institute of Technology (IIT), Bhubaneswar. She is a data science enthusiast with a keen interest in the applications of artificial intelligence across various fields, and she is passionate about exploring new advancements in technology and their real-life applications.