Microsoft AI Researchers Introduce A Neural Network With 135 Billion Parameters And Deployed It On Bing To Improve Search Results

Source: https://www.microsoft.com/en-us/research/blog/make-every-feature-binary-a-135b-parameter-sparse-neural-network-for-massively-improved-search-relevance/

Transformer-based deep learning models like GPT-3 have been getting much attention in the machine learning world. These models excel at understanding semantic relationships, and they have contributed to large improvements in Microsoft Bing’s search experience. However, these models can fail to capture more nuanced relationships between query and document terms beyond pure semantics.

The Microsoft research team developed a neural network with 135 billion parameters, the largest "universal" AI model the company has running in production. That parameter count makes it one of the most sophisticated AI models publicly detailed to date, although OpenAI's GPT-3 natural language processing model, at 175 billion parameters, remains the larger network.

Microsoft researchers are calling their latest AI project MEB (Make Every Feature Binary). The 135-billion-parameter model analyzes the queries that Bing users enter and helps identify the most relevant pages from around the web. It does not perform this task entirely on its own; it works alongside a set of other machine learning algorithms in Bing's ranking stack.

MEB is a great complement to Transformer-based deep learning models because it can map individual facts to individual features, which gives it a more nuanced understanding. For example, many DNN (deep neural network) language models may overgeneralize when filling in the blank in the sentence "(blank) can fly." Because most training examples involve birds flying, some DNNs will only ever fill that blank with "bird." MEB's feature mapping avoids this by weighing every possible completion individually rather than relying on the one or two most common examples.
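Microsoft has not published MEB's implementation, but the core idea behind "make every feature binary" can be sketched in a few lines: hash query-term/document-term pairs into a large sparse space where each feature is simply present or absent, then score a page as a sparse dot product with learned weights. The function names (`binary_features`, `score`), the MD5 hashing, and the 2^20-dimensional feature space below are illustrative assumptions, not details of MEB itself.

```python
import hashlib

def binary_features(query, doc_title, num_bits=2**20):
    """Hash (query term, document term) pairs into indices of a sparse
    binary feature vector. Each feature is either present (1) or absent
    (0) -- hence 'make every feature binary'. (Illustrative sketch, not
    Microsoft's actual featurization.)"""
    indices = set()
    for q in query.lower().split():
        for d in doc_title.lower().split():
            key = f"{q}|{d}".encode()
            indices.add(int(hashlib.md5(key).hexdigest(), 16) % num_bits)
    return indices

def score(feature_indices, weights):
    """Score a query-document pair as the sum of learned weights over
    the binary features that are present (a sparse dot product)."""
    return sum(weights.get(i, 0.0) for i in feature_indices)
```

Because each memorized fact gets its own feature and weight, the model can remember that "penguin" pairs poorly with "fly" even while "bird" pairs well with it, rather than collapsing both into one generalization.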


MEB now runs on 100% of Bing searches. Unlike other models, which can be rigid and static in their features, it learns continuously from vast amounts of data while reliably remembering the facts represented by its binary features.
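The "learning continuously" aspect can likewise be sketched as online stochastic gradient descent over sparse binary features, a standard approach for click-prediction models. This is an assumed, simplified training step for illustration, not Microsoft's actual procedure; `sgd_update` and the logistic-loss form are inventions of this example.

```python
import math

def sgd_update(weights, feature_indices, clicked, lr=0.05):
    """One online update from a single search impression: nudge the
    weights of the active binary features toward the observed click
    signal, using the logistic-loss gradient. Only the handful of
    features present in this impression are touched, which is what
    makes continuous learning over billions of weights tractable."""
    z = sum(weights.get(i, 0.0) for i in feature_indices)
    p = 1.0 / (1.0 + math.exp(-z))        # predicted click probability
    g = p - (1.0 if clicked else 0.0)     # gradient of the logistic loss
    for i in feature_indices:
        weights[i] = weights.get(i, 0.0) - lr * g
    return weights
```

Each impression updates only the features it activates, so the model can absorb a continuous stream of fresh search data without retraining from scratch.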

The Microsoft Bing team found that adding MEB to its search engine yielded a 2% increase in clickthrough rates and a more than 1% reduction in users rewriting queries after failing to find relevant results.
