How does our brain manage to complete the melody of a song, for example, regardless of whether it is played at a fast or a slow tempo? To answer this question, scientists from the UKD, in collaboration with New York University, generated sequences of tones whose temporal regularities closely resemble those of natural stimuli such as speech or music. In the study, subjects listened to these tone sequences and were asked to predict the final tone of each sequence while their brain activity was measured using magnetoencephalography (MEG). The speed at which the tone sequences were presented was systematically varied.
This allowed the scientists to test two opposing hypotheses: Does the brain base its prediction of future stimuli on a fixed amount of time (e.g., the last 30 seconds of the tone sequence) or on a constant amount of information (e.g., the last five tones of the sequence)?
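To make the contrast concrete, here is a minimal sketch; the tone durations, window sizes, and item counts are illustrative assumptions, not values taken from the study. It shows that a fixed-time window contains a different number of tones at fast versus slow tempos, whereas a fixed-item window always contains the same number of tones.

```python
# Minimal sketch contrasting the two hypothesized integration windows.
# All numbers (tone durations, window size, item count) are illustrative
# assumptions, not values taken from the study.

def fixed_time_window(tones, tone_duration_s, window_s=2.0):
    """Integrate over a fixed span of time: the number of tones
    inside the window depends on the presentation rate."""
    n = int(window_s / tone_duration_s)
    return tones[-n:] if n > 0 else []

def fixed_item_window(tones, n_items=5):
    """Integrate over a fixed number of items: always the same
    number of tones, regardless of presentation rate."""
    return tones[-n_items:]

sequence = list(range(1, 21))  # a 20-tone sequence, labeled 1..20

for label, tone_duration_s in [("fast", 0.25), ("slow", 0.50)]:
    by_time = fixed_time_window(sequence, tone_duration_s)
    by_items = fixed_item_window(sequence)
    print(f"{label} tempo ({tone_duration_s:.2f} s per tone): "
          f"fixed-time window -> {len(by_time)} tones, "
          f"fixed-item window -> {len(by_items)} tones")
```

Under these assumed numbers, the fixed-time window holds 8 tones at the fast tempo but only 4 at the slow tempo, while the fixed-item window holds 5 tones in both cases; varying the presentation speed is what lets the two accounts make different predictions.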
The results showed that, in order to predict what will happen next, our brain relies on the number of past stimuli rather than on a fixed duration of time. Using a novel and realistic paradigm, this study reveals a fundamental principle of human perception and sheds light on the underlying neural mechanisms.
Original publication: Thomas J. Baumgarten, Brian Maniscalco, Jennifer F. Lee, Matthew W. Flounders, Patrice Abry, Biyu J. He: Neural integration underlying naturalistic prediction flexibly adapts to varying sensory input rate. Nature Communications (2021). DOI: 10.1038/s41467-021-22632-z
Contact: Dr. rer. nat. Thomas J. Baumgarten,
Research Associate / Postdoctoral Researcher, Institute for Clinical Neuroscience and Medical Psychology,
University Hospital Düsseldorf,