This article is based on the Meta AI research article 'Studying the brain to build AI that processes language as people do'.
Meta AI is embarking on a long-term research initiative to better understand how the human brain interprets language. The research will be conducted in collaboration with NeuroSpin, a neuroimaging center, and Inria, the French national research institute for digital science and technology. Meta AI will compare how AI language models and human brains respond to identical spoken or written words.
Artificial neural networks for language are coming ever closer to matching human brain function, shedding fresh light on how thinking may be implemented in neural tissue. The AI models that best match human language processing do so by systematically breaking down sentences, examining their context, and attempting to predict the next word using machine learning.
Although these technologies may give users a false impression of “humanness,” the models forecast the next word from statistical patterns in large databases of how previous text progressed. Human brains, on the other hand, anticipate words and thoughts well in advance, considering everything the statement or concept may imply.
Giving an AI model the phrase “Once upon a” and having it guess the following word is a one-shot prediction: the model outputs “time.” When a person who has been raised on fairy tales hears “Once upon,” their brain does more than simply forecast “time” as the next word. It also conjures all of the magical concepts that come with it: wicked witches, dragons, castles, heroes, and other culturally significant figures.
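The next-word prediction described above can be illustrated with a toy model. The sketch below is a minimal bigram predictor trained on a two-sentence corpus; it is only a stand-in for the large transformer models Meta actually studies, and the corpus, function names, and data are invented for illustration:

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count word-pair frequencies from a toy corpus (a stand-in for
    the large text databases real language models are trained on)."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(model, context):
    """Return the most frequent follower of the last context word."""
    last = context.lower().split()[-1]
    candidates = model.get(last)
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

corpus = [
    "once upon a time there was a dragon",
    "once upon a time a hero lived in a castle",
]
model = train_bigram_model(corpus)
print(predict_next(model, "Once upon a"))  # → time
```

Unlike a human reader, the model knows nothing about witches or castles beyond word co-occurrence counts, which is exactly the contrast the article draws.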
Brains create particular “brain states” when they make these predictions, which can be visualized during brain imaging. Snapshots of brain activity were taken with functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG) scanners while volunteers read or listened to a narrative. When researchers applied machine learning to brain scans from public data sets paired with new fMRI and MEG images, they noticed something striking: language processing in the human brain follows ordered hierarchies, comparable to how AI language models operate.
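Comparisons of this kind are typically done with an encoding analysis: a linear model is fit to map a language model's activations onto recorded brain responses, and the correlation on held-out data serves as a "brain score." The sketch below uses synthetic NumPy arrays in place of real fMRI/MEG recordings and real model embeddings, and closed-form ridge regression in place of whatever pipeline the researchers actually used; all shapes and names are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: X = model activations per word, Y = brain responses per word.
n_words, n_features, n_voxels = 200, 16, 8
X = rng.standard_normal((n_words, n_features))
true_map = rng.standard_normal((n_features, n_voxels))
Y = X @ true_map + 0.5 * rng.standard_normal((n_words, n_voxels))

# Hold out the last 50 words for evaluation.
X_tr, X_te = X[:150], X[150:]
Y_tr, Y_te = Y[:150], Y[150:]

# Ridge regression, closed form: W = (X'X + alpha*I)^-1 X'Y
alpha = 1.0
W = np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(n_features), X_tr.T @ Y_tr)
Y_pred = X_te @ W

# "Brain score": per-voxel correlation between predicted and actual responses.
scores = [np.corrcoef(Y_pred[:, v], Y_te[:, v])[0, 1] for v in range(n_voxels)]
print(f"mean brain score: {np.mean(scores):.2f}")
```

A high score for a brain region means that region's activity is well predicted from the language model's internal representations, which is the sense in which the two are said to "resemble" each other.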
There are brain regions that activate like visual-processing algorithms when words evoke visual imagery, areas similar to word-comprehension algorithms, and whole networks that operate much like AI language transformers.
Specific brain areas are involved in vision and language processing, and their interactions build networks for generating narratives and representations for comprehension. The findings revealed that particular brain areas, such as the prefrontal (front of the brain) and parietal (middle of the brain) cortices, were best accounted for by language models that predict words far into the future.
This suggests that the internal representations shared by brains and algorithms help the algorithms process language. The result was quickly validated by studying the brain activations of 200 participants in a simple reading test. Then, approximately a week later, a Massachusetts Institute of Technology team ran an independent investigation with remarkably similar results.
This research provides new insight into brain processes by building quantitative parallels between human brains and AI models. AI that behaves and reacts more in sync with human language use will connect with humans more naturally. To learn more, refer here.