When we read, converse, or listen to an audiobook, our brain continually tries to predict the next word, much like the autocomplete feature on a phone. Unlike speech recognition algorithms, however, our brains make these predictions at several levels at once, from meaning and grammar down to individual speech sounds. That is the finding of a recent study by scientists from the Donders Institute at Radboud University and the Max Planck Institute for Psycholinguistics. Their research is published in PNAS.
The finding is consistent with a current hypothesis about how the brain functions: that it is a prediction machine, constantly comparing the sensory information we take in, such as sights, sounds, and language, with internal predictions. "This theoretical framework is quite popular in neuroscience, but the evidence for it is often indirect and limited to contrived scenarios," says lead author Micha Heilbron. "I really wanted to know exactly how this works and to test it in different scenarios."
Heilbron explains that most brain research on this phenomenon takes place in a controlled environment: to elicit predictions, participants are asked to listen to simple sound patterns such as "beep beep boop, beep beep boop", or to watch a single pattern of moving dots, for 30 minutes. "Studies of this kind do show that our brains can make predictions, but not necessarily in the complicated situations that arise in daily life. We are trying to take it out of the lab setting. We are examining the same kind of phenomenon, how the brain processes unexpected information, but in natural settings that are far less controlled."
Holmes and Hemingway
The scientists examined the brain activity of people listening to stories by Hemingway or about Sherlock Holmes. In parallel, they analyzed the texts of the books using computer models known as deep neural networks. In this way, they could determine how unexpected each word was in its context.
The brain turned out to make intricate statistical predictions for each word or sound and to be highly sensitive to the degree of unpredictability: its response is stronger whenever a word is unexpected in context.
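The quantity the researchers compared against brain activity, the "unexpectedness" of a word, is commonly formalized as surprisal: the negative log-probability a language model assigns to a word given its context. The study used deep neural networks; the sketch below uses a toy bigram model on a made-up corpus purely to illustrate the arithmetic, and every name and value in it is an assumption, not the authors' method.

```python
import math
from collections import Counter

# Hypothetical toy corpus; the real study used deep neural networks
# trained on large text collections, not a bigram count like this.
corpus = ("the dog chased the cat and the cat chased the mouse "
          "and the mouse ran away").split()

# Count bigram and unigram (context) frequencies.
bigrams = Counter(zip(corpus, corpus[1:]))
contexts = Counter(corpus[:-1])
vocab = len(set(corpus))

def surprisal(prev_word, word):
    """Return -log2 P(word | prev_word) in bits, under the bigram
    model with add-one smoothing over the observed vocabulary."""
    p = (bigrams[(prev_word, word)] + 1) / (contexts[prev_word] + vocab)
    return -math.log2(p)

# A continuation seen often after "the" carries less surprisal than
# one never seen there, mirroring the stronger brain response the
# study reports for unexpected words.
print(f"the -> cat:  {surprisal('the', 'cat'):.2f} bits")
print(f"the -> away: {surprisal('the', 'away'):.2f} bits")
```

In this sketch "cat" follows "the" twice in the corpus while "away" never does, so the model assigns "away" a lower probability and hence a higher surprisal.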
"In itself, this is not very surprising, because everyone knows that you can sometimes predict upcoming language. For instance, if someone starts to talk very slowly, stutters, or cannot think of a word, your brain may immediately 'fill in the blank' and mentally complete their sentence. But what we have shown here is that this happens continuously. The predictive machinery in our brain is constantly active, making guesses at words."
Beyond software
"Our brain really performs a function akin to speech recognition software. Like the autocomplete feature on your phone, speech recognizers powered by artificial intelligence constantly make predictions and let those guesses guide them. However, we found a significant difference: brains anticipate not just words, but also things like abstract meaning, grammar, and even specific sounds."
There is good reason for tech companies' continued interest in using fresh insights of this sort to improve software for tasks like language and image recognition. But these applications are not Heilbron's primary focus. "I genuinely want to understand the underlying principles of how our predictive machinery operates. I am now using the same research setup for auditory and visual perception, such as music."