Facebook parent Meta announced that it is launching a long-term research project to build a new generation of artificial intelligence that learns and processes speech and text the way the human brain does. Meta described the project as an attempt to create human-level artificial intelligence.
Meta is partnering with NeuroSpin, a research center that images the human brain, and with the research institute Inria, to study how the human brain processes speech and text and then compare that to how AI language models do.
NeuroSpin is a research center focused specifically on brain imaging. Its researchers include physicists, mathematicians, neuroscientists, and clinicians who work together to build tools for studying the human brain in different ways.
NeuroSpin describes its work this way:

“The research conducted focuses on neuroimaging, and ranges from technological and methodological advances (data acquisition and processing) to preclinical and clinical neuroscience, including cognitive neuroscience.”
Meta wrote:

“Today, we’re announcing a long-term AI research initiative to better understand how the human brain processes speech and text. In collaboration with the neuroimaging center NeuroSpin (CEA) and Inria, we are comparing how AI language models and the brain respond to the same spoken or written sentences.
We will use insights from this work to guide the development of AI that processes speech and text as efficiently as people.”
A key limitation of current AI language models is that they require enormous amounts of training data, while human minds can learn language from comparatively few examples.
Meta shared findings from current research into brain-like AI language models:
“Language models that most closely resemble brain activity are those that best predict the next word from context (e.g., once upon a… time).

While the brain predicts words and ideas ahead of time, most language models are trained only to predict the very next word. Unlocking this long-range prediction ability could help improve modern AI language models.”
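The training objective described above, predicting the next word from context, can be illustrated with a minimal sketch. This is a toy bigram frequency counter, not anything resembling Meta's actual models, but it shows the basic idea of learning which word tends to follow a given context:

```python
from collections import Counter, defaultdict


def train_bigram(corpus: str) -> dict:
    """Count, for each word, which words follow it in the corpus."""
    words = corpus.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following


def predict_next(model: dict, context_word: str):
    """Return the word most frequently observed after context_word."""
    counts = model.get(context_word.lower())
    return counts.most_common(1)[0][0] if counts else None


corpus = (
    "once upon a time there was a fox . "
    "once upon a time there was a crow ."
)
model = train_bigram(corpus)
print(predict_next(model, "upon"))  # -> "a"
```

Real language models replace the frequency table with a neural network conditioned on a much longer context, but the objective, guess the next word, is the same one the quoted research found correlates with brain-like activity.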
The announcement cited current research into modeling artificial intelligence on human brain activity, which uses MRI scans and other imaging tools to record brain activity while people perform various language-related tasks.
The cited 2021 paper is titled “Language processing in brains and deep neural networks: Computational convergence and its limits” (PDF).
The opening paragraphs of the research paper summarize the findings:
“The results show that (1) layer position in the network and (2) the ability of the network to accurately predict words from context are the two main factors responsible for the emergence of brain-like representations in artificial neural networks.
Together, these results show how perceptual, lexical, and syntactic representations unfold precisely within each cortical region and contribute to revealing the governing principles of language processing in brains and algorithms.”
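Comparisons like the one described in the paper are commonly made by testing whether a network layer's activations can linearly predict recorded brain responses, an approach often called an encoding model. The following is a minimal NumPy sketch using synthetic data (the actual analyses use real fMRI/MEG recordings and activations from trained language models; the ridge-regression setup here is an illustrative assumption, not the paper's exact pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: activations of one network layer for 200 stimuli
# (50 features) and simulated brain responses at 10 recording sites.
layer_activations = rng.normal(size=(200, 50))
true_mapping = rng.normal(size=(50, 10))
brain_responses = layer_activations @ true_mapping + 0.1 * rng.normal(size=(200, 10))

# Split into a fitting set and a held-out evaluation set.
X_fit, X_eval = layer_activations[:150], layer_activations[150:]
Y_fit, Y_eval = brain_responses[:150], brain_responses[150:]

# Ridge regression in closed form: W = (X'X + lam*I)^-1 X'Y
lam = 1.0
n_features = X_fit.shape[1]
W = np.linalg.solve(X_fit.T @ X_fit + lam * np.eye(n_features), X_fit.T @ Y_fit)

# "Brain score": correlation between predicted and actual responses
# at each recording site, evaluated on held-out data.
predictions = X_eval @ W
scores = [
    np.corrcoef(predictions[:, i], Y_eval[:, i])[0, 1]
    for i in range(Y_eval.shape[1])
]
print(float(np.mean(scores)))
```

Running this kind of analysis separately for each network layer is what lets researchers ask which layers produce the most brain-like representations.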
The significance of this research is that studying how the brain processes language can yield insights for building similar processes into algorithms.
Meta’s research teams used thousands of scans of human brain activity to see which brain regions were activated during specific language tasks.

This research was said to reveal the “computational organization of the human brain,” yielding useful insights toward Meta’s goal of developing “human-level artificial intelligence.”
The benefits run in both directions: beyond advancing the goal of human-level AI, the research is also helping neuroscientists better understand the human brain.