New Machine Learning Model Predicts Sentence Comprehension and Production Ability

Humans find some sentences hard to understand: our capacity to comprehend language has limits, and certain constructions reliably cause trouble. Scientists have now trained a model that can explain where these comprehension difficulties come from.

In recent years, researchers have developed two models describing two distinct categories of sentence production and comprehension difficulty: expectation-based models and memory-based models. Each can accurately predict particular patterns of comprehension trouble, but their predictions are limited and fall short of the results of behavioral experiments. Moreover, until recently, researchers could not combine these two models into a unified explanation. Scientists have therefore explored trading off the precision of memory representations against better prediction.

A recent study by researchers from the MIT Department of Brain and Cognitive Sciences (BCS) offers a comprehensive explanation for language comprehension difficulties. Building on recent developments in machine learning, the researchers created a model that better predicts how easily people produce and interpret sentences. They reported their findings in the Proceedings of the National Academy of Sciences.

Lossy-context surprisal proposes that human processing difficulty is governed by expectations obtained from probabilistic inference over imperfect memory representations of the context, rather than over a veridical context. In principle, this approach can account for the predictions of both expectation-based and memory-based models. As expectation-based models predict, words are easy to process when they are easy to anticipate. However, if the relevant contextual information is poorly represented in memory, upcoming words may be hard to anticipate correctly, producing the processing difficulty predicted by traditional memory-based theories. Such a model of resource-rational language processing can be scaled to the complex statistical structure of actual language, and the underlying machine-learning strategy may pave the way for fitting sophisticated rational models to naturalistic data in other areas of human cognition.
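The core idea can be sketched in a few lines of code. The following is a minimal illustration, not the authors' implementation: it uses a hypothetical toy next-word distribution (`NEXT_WORD_PROBS`) and simulates lossy memory by randomly forgetting context words, then estimates the surprisal, negative log probability, of the next word under that noisy memory.

```python
import math
import random

# Hypothetical next-word probabilities conditioned on the most recent
# remembered context word (invented numbers, for illustration only).
NEXT_WORD_PROBS = {
    "fact":   {"that": 0.9, "was": 0.1},
    "report": {"that": 0.5, "was": 0.5},
    "<none>": {"that": 0.3, "was": 0.7},
}

def lossy_context_surprisal(context, word, erase_prob=0.4, n_samples=5000, seed=0):
    """Monte-Carlo estimate of -log2 P(word | noisy memory of context)."""
    rng = random.Random(seed)
    total_prob = 0.0
    for _ in range(n_samples):
        # Lossy memory: each context word is forgotten independently.
        remembered = [w for w in context if rng.random() > erase_prob]
        cue = remembered[-1] if remembered else "<none>"
        dist = NEXT_WORD_PROBS.get(cue, NEXT_WORD_PROBS["<none>"])
        total_prob += dist[word]
    return -math.log2(total_prob / n_samples)

# A continuation that is predictable from a well-remembered cue ("fact")
# comes out less surprising than one after a weaker cue ("report").
s_fact = lossy_context_surprisal(["the", "fact"], "that")
s_report = lossy_context_surprisal(["the", "report"], "that")
print(round(s_fact, 2), round(s_report, 2))
```

When memory is intact, surprisal reduces to the expectation-based prediction; when the cue is forgotten, the model falls back on weaker expectations, which is where memory-based difficulty emerges.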

Researchers assess comprehension difficulty by measuring how long readers take to complete various comprehension tasks: the longer the reading time, the harder a given sentence is to understand. Prior research demonstrated that Futrell's unified model predicted readers' comprehension problems more accurately than the two earlier models. However, his model did not identify which portions of a sentence we tend to forget, or how this failure of memory retrieval impedes understanding.

The researchers used GPT-2, an AI natural language tool based on neural network modeling, to test whether this prediction fits human linguistic behavior. This machine-learning technology, first released to the public in 2019, enabled the researchers to evaluate the model on large-scale text data in a previously impossible way. However, GPT-2's advanced language-modeling capability posed a problem. In contrast to humans, GPT-2's flawless memory accurately represents every word of even very long and complicated texts. To represent human language understanding more faithfully, the researchers added a component that simulates human-like limits on memory resources, as in Futrell's original model, and used machine-learning techniques to optimize how those resources are allocated, as in their new model. The resulting model retains GPT-2's ability to predict words accurately most of the time, but exhibits human-like failures on sentences with uncommon word combinations.
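The architecture described above, a powerful perfect-memory predictor combined with a lossy-memory component, can be sketched generically. This is an illustrative simplification, not the study's code: `toy_lm` is a hypothetical stand-in for a neural language model such as GPT-2, and `lossy_wrapper` degrades its context exactly the way a limited memory would.

```python
import random

def lossy_wrapper(next_word_prob, erase_prob=0.5, n_samples=2000, seed=0):
    """Wrap a perfect-memory language model with a lossy-memory component.

    `next_word_prob(context, word)` can be any model returning
    P(word | context); here it will be a toy stand-in for GPT-2."""
    rng = random.Random(seed)
    def noisy_prob(context, word):
        est = 0.0
        for _ in range(n_samples):
            # Each context word survives in memory with prob. 1 - erase_prob.
            kept = [w for w in context if rng.random() > erase_prob]
            est += next_word_prob(kept, word)
        return est / n_samples
    return noisy_prob

# Hypothetical predictor: confident about "him" only while the distant
# cue "patient" is still available in the remembered context.
def toy_lm(context, word):
    if word != "him":
        return 0.01
    return 0.8 if "patient" in context else 0.2

lossy_lm = lossy_wrapper(toy_lm)
perfect = toy_lm(["patient", "report", "lawyer"], "him")    # full memory
degraded = lossy_lm(["patient", "report", "lawyer"], "him") # lossy memory
print(perfect, round(degraded, 2))
```

The wrapped model's confidence drops whenever the forgetting process deletes the cue it relied on, mirroring how the constrained GPT-2 keeps its usual accuracy on common material but fails in a human-like way when prediction depends on fragile context.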

The researchers presented the machine-learning model with a series of sentences containing complex embedded clauses, such as "It was surprising that the patient was disturbed by the report that the lawyer's mistrust of the doctor had irritated him." They then substituted the noun at the beginning of each sentence ("report" in the example above) with other nouns, each with its own probability of occurring with a complement clause. Certain nouns made it easier for the model to "comprehend" the sentences in which they appeared. For example, the model correctly predicted the endings of these sentences more often when they began with the more frequent frame "The fact that" than when they began with the less frequent frame "The report that."
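The manipulation hinges on how often each noun co-occurs with a complement clause. The toy example below (with invented counts, purely for illustration) estimates that co-occurrence probability from a tiny corpus and converts it to surprisal, showing why a clause-taking noun like "fact" makes the following "that" cheap to process while "report" makes it expensive.

```python
import math
from collections import Counter

# Hypothetical (noun, next-word) counts standing in for corpus statistics.
corpus = [
    ("fact", "that"), ("fact", "that"), ("fact", "that"), ("fact", "was"),
    ("report", "that"), ("report", "was"), ("report", "was"), ("report", "on"),
]

noun_counts = Counter(noun for noun, _ in corpus)
pair_counts = Counter(corpus)

def clause_surprisal(noun):
    """Surprisal (bits) of seeing "that" immediately after the noun."""
    p = pair_counts[(noun, "that")] / noun_counts[noun]
    return -math.log2(p)

print(round(clause_surprisal("fact"), 2))    # clause-taking noun: low surprisal
print(round(clause_surprisal("report"), 2))  # rarely takes a clause: high surprisal
```

In this sketch "fact" is followed by "that" three times out of four, so the clause is cheap to predict; "report" takes a clause only once in four, so the same continuation costs far more bits.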

This account generates a number of empirical predictions. A fundamental one is that readers compensate for their imperfect memory representations by using their knowledge of the statistical co-occurrences of words to implicitly reconstruct the sentences they read. Sentences containing uncommon words and phrases are therefore harder to reconstruct from memory, which in turn makes the next word harder to predict. Consequently, such sentences are often more difficult to interpret.

Check out the Paper and MIT Article. All credit for this research goes to the researchers on this project.

I am an undergraduate student at IIIT Hyderabad pursuing a BTech in computer science and an MS in Computational Humanities. I am interested in machine learning and data science. I am also actively involved in research on AI solutions for road safety.
