Question-answering systems are the backbone of our digital lives. From search engines to personal assistants, we use them every day, often without realizing it. When you ask a question like "Where was Leonardo da Vinci born?", these systems need both background knowledge about the subject (Leonardo was born in Italy) and the ability to reason computationally over that knowledge to produce an answer, and all of this usually happens behind the scenes.
In recent AI research, background knowledge is usually available in two forms: Knowledge Graphs (KGs) and Language Models (LMs) pre-trained on large collections of documents. KGs represent entities as nodes and relations between them as edges, e.g., (Leonardo da Vinci, born in, Italy). Examples of KGs include Freebase (general-purpose facts) and ConceptNet (commonsense knowledge). Examples of pre-trained LMs include BERT (trained on Wikipedia articles and a large corpus of books) and RoBERTa (which extends BERT's pre-training).
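To make the node-and-edge picture concrete, here is a toy sketch (our own illustration, not the paper's code) of how such triples map onto a graph:

```python
# A toy illustration (not from the paper's code): KG facts stored as
# (head, relation, tail) triples, matching the node-and-edge structure
# described above.
triples = [
    ("Leonardo da Vinci", "born_in", "Italy"),
    ("Leonardo da Vinci", "profession", "painter"),
]

# Entities become nodes; each triple adds a labeled edge between two nodes.
nodes = {e for h, _, t in triples for e in (h, t)}
edges = [(h, t, {"relation": r}) for h, r, t in triples]
print(nodes)   # {'Leonardo da Vinci', 'Italy', 'painter'}
print(edges)
```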
The two knowledge sources have complementary strengths. LMs can be pre-trained on any unstructured text and thus cover a broad scope of information, while KGs are more structured and support logical reasoning, since their edges trace explicit paths between related facts, e.g., helping distinguish a generic statement like "People breathe" from question-relevant ones like "The birthplace of the painter is Italy."
In this research paper, published at NAACL 2021, the researchers found that combining LMs and KGs makes it possible to answer questions more effectively. Existing systems that combine LMs and KGs tend to retrieve noisy KG subgraphs, and they do not model the interaction between the QA context and the KG. This work offers two solutions: (a) estimating the relevance of KG nodes conditioned on the question being asked, and (b) connecting the QA context and the KG as a joint graph so that their relationship can be modeled directly.
The researchers designed a system that uses an LM and a KG together to answer questions. First, as in existing systems, they use the LM to obtain a vector representation of the QA context and retrieve a relevant subgraph from the KG via entity linking. Then, to reduce noise in the retrieved subgraph, they estimate the relevance of each KG node conditioned on the QA context, so that only informative nodes carry weight in later reasoning. See the image below for an illustration of KG relevance scoring.

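The paper scores each retrieved node with the pre-trained LM; the sketch below captures the idea under simplifying assumptions. It uses an off-the-shelf GPT-2 from Hugging Face transformers and treats the LM log-likelihood of the entity text, given the QA context, as the relevance score; the model choice and scoring head here are illustrative, not the authors' exact setup.

```python
# A minimal sketch of KG node relevance scoring (illustrative, not the
# authors' exact method): score each entity by how likely its name is
# under a pre-trained LM when appended to the QA context.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def relevance_score(qa_context: str, entity: str) -> float:
    """Score = negative LM loss of (context + entity); higher = more relevant."""
    ids = tokenizer(qa_context + " " + entity, return_tensors="pt").input_ids
    with torch.no_grad():
        # labels=ids makes the model return the average next-token loss
        loss = model(ids, labels=ids).loss
    return -loss.item()

context = "Where was Leonardo da Vinci born?"
for node in ["Italy", "painter", "breathing"]:
    print(node, round(relevance_score(context, node), 3))
```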
Next, the researchers connected the QA context and the KG as a joint graph: the QA context becomes an extra node linked to the retrieved KG entities. They then jointly update the representations of both over this graph, letting the two sources inform each other before the combined representation is used to predict the answer, better than either source alone. A minimal sketch of this joint update appears below.
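The following is a minimal message-passing sketch of the joint-graph idea, assuming plain PyTorch and a simple mean aggregator in place of the paper's attention-based GNN; node 0 plays the role of the QA context node, and the adjacency matrix encodes both context-to-entity links and KG edges. This is a sketch of the concept, not the QA-GNN implementation.

```python
# Joint graph over QA context + KG entities: one message-passing layer
# that updates all node representations together (simplified stand-in
# for the paper's attention-based GNN).
import torch
import torch.nn as nn

class JointGraphLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(dim, dim)       # transform neighbor messages
        self.upd = nn.Linear(2 * dim, dim)   # combine self + aggregated

    def forward(self, h, adj):
        # h: [num_nodes, dim] node features; adj: [num_nodes, num_nodes] 0/1
        deg = adj.sum(-1, keepdim=True).clamp(min=1)
        agg = adj @ self.msg(h) / deg        # mean over neighbor messages
        return torch.relu(self.upd(torch.cat([h, agg], dim=-1)))

# Node 0 is the QA context node (from the LM); nodes 1..3 are KG entities.
dim = 8
h = torch.randn(4, dim)
adj = torch.tensor([[0, 1, 1, 1],   # context node links to all entities
                    [1, 0, 1, 0],   # remaining rows: KG edges
                    [1, 1, 0, 1],
                    [1, 0, 1, 0]], dtype=torch.float)
layer = JointGraphLayer(dim)
h_new = layer(h, adj)   # joint update of context + KG representations
print(h_new.shape)      # torch.Size([4, 8])
```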
They applied their question answering model, called QA-GNN, to two challenging QA benchmarks that require reasoning with knowledge: (1) CommonsenseQA and (2) OpenBookQA, where it outperforms both fine-tuned LM baselines and prior LM+KG models.
In conclusion, through this study the researchers created a new model that answers questions better by combining two sources of background knowledge. Their model, QA-GNN, has two innovative aspects: (1) KG relevance scoring: the pre-trained LM is used to score KG nodes conditioned on the question, providing a general framework for weighting information on Knowledge Graphs (KGs); and (2) joint reasoning over text and KGs: the QA context and the KG are connected in a single graph, and their representations are mutually updated with a graph neural network.
Github: https://github.com/michiyasunaga/qagnn
Worksheet: https://worksheets.codalab.org/worksheets/0xf215deb05edf44a2ac353c711f52a25f
Paper: https://arxiv.org/pdf/2104.06378.pdf
Stanford Blog: https://ai.stanford.edu/blog/qagnn/
Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence media platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among audiences.