Baidu researchers published a paper on version 3.0 of Enhanced Language RepresentatioN with Informative Entities (ERNIE), a deep-learning model for natural language processing (NLP). The 10B-parameter model outperformed the human baseline score on the SuperGLUE benchmark, achieving a new state-of-the-art result.
Baidu described the model and several experiments in a blog post on its website. Unlike most deep-learning NLP models, which are trained only on unstructured text, ERNIE incorporates structured knowledge graph data into its training data, which helps the model produce more coherent responses.
The model consists of a Transformer-XL “backbone” that encodes the input into a latent representation, along with two decoder networks: one for natural language understanding (NLU) and the other for natural language generation (NLG).
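To make that split concrete, below is a minimal PyTorch sketch of a shared encoder feeding two task-specific networks. The vanilla nn.TransformerEncoder standing in for Transformer-XL, the layer counts, and the sizes are all simplifying assumptions for illustration, not Baidu’s implementation; among other things, a real generation branch would use causal attention.

```python
import torch
import torch.nn as nn

def make_encoder(d_model: int, n_heads: int, n_layers: int) -> nn.TransformerEncoder:
    layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
    return nn.TransformerEncoder(layer, n_layers)

class BackboneWithTwoHeads(nn.Module):
    """Illustrative sketch: one shared encoder ("backbone") feeding two
    task-specific networks, one for understanding and one for generation."""

    def __init__(self, vocab_size: int = 30000, d_model: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Stand-in for ERNIE 3.0's Transformer-XL backbone.
        self.backbone = make_encoder(d_model, n_heads=4, n_layers=6)
        # Task-specific networks on top of the shared representation.
        self.nlu_net = make_encoder(d_model, n_heads=4, n_layers=2)
        self.nlg_net = make_encoder(d_model, n_heads=4, n_layers=2)
        self.lm_head = nn.Linear(d_model, vocab_size)  # token logits for generation

    def forward(self, input_ids: torch.Tensor, task: str = "nlu") -> torch.Tensor:
        h = self.backbone(self.embed(input_ids))  # shared latent representation
        if task == "nlu":
            return self.nlu_net(h)                # features for classification heads
        return self.lm_head(self.nlg_net(h))      # vocabulary logits for generation

tokens = torch.randint(0, 30000, (1, 16))         # one batch of 16 token ids
features = BackboneWithTwoHeads()(tokens, task="nlu")
```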
ERNIE not only established a new high score on SuperGLUE, displacing Microsoft and Google, but also set new high scores on 54 Chinese-language NLP tasks.
Although large deep-learning models trained solely on text, such as OpenAI’s GPT-3 or Google’s T5, excel at a wide range of tasks, researchers have found that these models struggle with NLU tasks that require world knowledge not contained in the input text.
To address this, Tsinghua University researchers released the first version of ERNIE, which combines text and knowledge graph data, in early 2019; Baidu released version 2.0, the first model to score above 90 on the GLUE benchmark, later that year.
ERNIE 3.0 is a deep neural network that can be trained on text using the same unsupervised techniques used for other models, such as GPT-3. The Baidu team created a new pre-training task called universal knowledge-text prediction (UKTP) to incorporate knowledge graph data into the training process. In this task, the model is given a sentence from an encyclopedia and a knowledge graph representation of the sentence. Part of the data is randomly masked; the model must then predict the correct value for the masked data. Overall, the training dataset was 4TB, the most extensive Chinese text corpus to date, according to Baidu.
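As a rough illustration of how a UKTP training example might be constructed, here is a Python sketch: a (head, relation, tail) triple is concatenated with its source sentence, and tokens are masked at random for the model to recover. The whitespace tokenization, the 15% mask rate, and the specific triple/sentence pair are assumptions made for brevity, not the paper’s exact recipe.

```python
import random

MASK = "[MASK]"

def build_uktp_example(triple, sentence, mask_prob=0.15, seed=None):
    """Concatenate a knowledge graph triple with the encyclopedia sentence
    it came from, then randomly mask tokens for the model to predict."""
    rng = random.Random(seed)
    tokens = list(triple) + sentence.split()  # whitespace "tokenizer" for brevity
    inputs, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            inputs.append(MASK)
            labels.append(tok)    # the model is trained to predict this token
        else:
            inputs.append(tok)
            labels.append(None)   # position not scored
    return inputs, labels

# Illustrative triple and sentence (hypothetical, not from the paper's data).
triple = ("Andersen", "wrote", "Nightingale")
sentence = "Nightingale is a fairy tale written by Danish author Andersen."
inputs, labels = build_uktp_example(triple, sentence, seed=0)
```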
The researchers assessed ERNIE’s performance on a variety of downstream tasks. For NLU, the team fine-tuned the model on 45 datasets spanning 14 tasks, including sentiment analysis, news classification, named-entity recognition, and document retrieval; for NLG, it was evaluated on nine datasets covering seven tasks, including text summarization, closed-book question answering, machine translation, and dialogue synthesis. ERNIE set new state-of-the-art scores on every task. To measure zero-shot NLG performance, human annotators were asked to score the output of ERNIE and three other models; according to these results, ERNIE generated “the most coherent, fluent and accurate texts on average.”
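The NLU evaluations follow the standard pre-train-then-fine-tune recipe; the sketch below shows what fine-tuning a classification head on a pre-trained backbone typically looks like for tasks such as sentiment analysis. The head design, hidden size, and training step are generic assumptions, not details taken from the paper, and the backbone is any module mapping token ids to hidden states.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClassificationHead(nn.Module):
    """A linear classifier over a backbone's first-token representation,
    the usual recipe for sentence-level NLU tasks. `backbone` is any
    module returning a (batch, seq_len, hidden) tensor; nothing here
    is ERNIE-specific."""

    def __init__(self, backbone: nn.Module, hidden: int = 256, n_classes: int = 2):
        super().__init__()
        self.backbone = backbone
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        h = self.backbone(input_ids)       # (batch, seq_len, hidden)
        return self.classifier(h[:, 0])    # classify from the first position

def finetune_step(model, optimizer, input_ids, labels) -> float:
    """One standard supervised fine-tuning step with cross-entropy loss."""
    logits = model(input_ids)
    loss = F.cross_entropy(logits, labels)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```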

MIT researchers recently integrated a GPT-3 deep-learning model with a symbolic world state model to improve the coherence of GPT-3’s text generation, and Berkeley researchers combined a neural question-answering system with Dr. Fill, a “traditional AI” crossword-puzzle solver.
Although Baidu has not shared the code and models for ERNIE 3.0, version 2.0 is available on GitHub. On Baidu’s website, there is also an interactive demo of ERNIE 3.0.
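For readers who want to experiment with the released 2.0 model, the sketch below shows one plausible way to load it through PaddleNLP, the library behind the GitHub repository. The ErnieModel and ErnieTokenizer classes exist in PaddleNLP, but the pretrained-weight identifier ("ernie-2.0-en") and the tuple return signature are assumptions that may vary by library version; check the repository’s model list before running.

```python
import paddle
from paddlenlp.transformers import ErnieModel, ErnieTokenizer

# Assumption: PaddleNLP exposes the released ERNIE 2.0 English weights
# under the identifier "ernie-2.0-en".
tokenizer = ErnieTokenizer.from_pretrained("ernie-2.0-en")
model = ErnieModel.from_pretrained("ernie-2.0-en")
model.eval()

encoded = tokenizer("Baidu released ERNIE 2.0 in 2019.")
input_ids = paddle.to_tensor([encoded["input_ids"]])
token_type_ids = paddle.to_tensor([encoded["token_type_ids"]])

# sequence_output holds per-token embeddings; pooled_output is a
# single sentence-level vector.
sequence_output, pooled_output = model(input_ids, token_type_ids)
```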
Paper: https://arxiv.org/abs/2107.02137
GitHub (ERNIE 2.0): https://github.com/PaddlePaddle/ERNIE
Demo: https://wenxin.baidu.com/wenxin/ernie
Baidu Blog: http://research.baidu.com/Blog/index-view?id=160
Source: https://www.infoq.com/news/2021/08/baidu-ernie-superhuman-ai/
Sanskriti is currently pursuing her bachelor’s in Journalism, Psychology, and English and is enthusiastic about getting to know new people, uncovering their stories, and engaging with the atmosphere. She has an inclination towards news affairs, writing and teaching.