Research has revealed how the auditory system distinguishes music from speech using simple acoustic parameters. This understanding could improve therapies for language disorders such as aphasia, using music to help patients regain their speaking skills.
The study shows that our brain uses basic sound frequencies and patterns to distinguish music from speech, providing insights into improving therapies for speech disorders like aphasia.
Music and speech are among the most common types of sounds we hear. But how do we identify what we perceive as the differences between the two?
An international team of researchers mapped this process through a series of experiments, generating insights that offer a potential way to optimize therapy programs that use music to restore speech in the treatment of aphasia. This language disorder affects more than one in 300 Americans each year, including Wendy Williams and Bruce Willis.
Auditory insights from research
“Although music and speech differ in many ways, from pitch to sound texture, our results show that the auditory system uses surprisingly simple acoustic parameters to distinguish between music and speech,” explains Andrew Chang, a postdoctoral researcher in New York University’s Department of Psychology and lead author of the paper, which was published today (May 28) in the journal PLOS Biology. “Overall, slower, regular sound clips consisting of simple sounds sound more like music, while faster, irregular sound clips sound more like speech.”
Scientists gauge the rate of such signals with a precise unit of measurement: Hertz (Hz). A higher Hz value means more occurrences (or cycles) per second. For example, people typically walk at a pace of 1.5 to 2 steps per second, or 1.5-2 Hz; the beat of Stevie Wonder’s 1972 hit “Superstition” is around 1.6 Hz, and Anna Karina’s 1967 hit “Roller Girl” falls in the same range. Speech, in contrast, is generally two to three times faster, at 4-5 Hz.
It is well documented that the volume, or intensity, of music over time (its “amplitude modulation”) is relatively stable, at 1-2 Hz. In contrast, the amplitude modulation of speech is usually 4-5 Hz, meaning that its volume changes frequently.
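To make the idea of amplitude modulation concrete, here is a minimal Python sketch, not part of the study, assuming NumPy and SciPy are available. It estimates the dominant amplitude-modulation rate of an audio signal from its loudness envelope and checks the estimate on noise whose volume is modulated at a music-like rate (2 Hz) versus a speech-like rate (5 Hz).

```python
import numpy as np
from scipy.signal import hilbert

def dominant_am_rate(signal, sample_rate, max_rate_hz=20.0):
    """Estimate the dominant amplitude-modulation rate of an audio signal.

    The loudness envelope is extracted with the Hilbert transform, and the
    frequency below max_rate_hz carrying the most envelope energy is returned.
    Illustrative only; not the analysis pipeline used in the study.
    """
    envelope = np.abs(hilbert(signal))        # slow "volume" contour of the signal
    envelope -= envelope.mean()               # drop the constant (DC) component
    spectrum = np.abs(np.fft.rfft(envelope))
    freqs = np.fft.rfftfreq(len(envelope), d=1.0 / sample_rate)
    mask = (freqs > 0) & (freqs <= max_rate_hz)
    return freqs[mask][np.argmax(spectrum[mask])]

# Demo: white noise whose loudness rises and falls at 2 Hz (music-like)
# versus 5 Hz (speech-like).
sr = 16000
t = np.arange(0, 4.0, 1.0 / sr)
rng = np.random.default_rng(0)
carrier = rng.standard_normal(len(t))
music_like = (1 + np.sin(2 * np.pi * 2 * t)) * carrier
speech_like = (1 + np.sin(2 * np.pi * 5 * t)) * carrier
print(dominant_am_rate(music_like, sr))   # ~2 Hz
print(dominant_am_rate(speech_like, sr))  # ~5 Hz
```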
Despite the ubiquity and familiarity of music and speech, scientists have not clearly understood how we automatically and effortlessly identify sound as music or speech.
Experimental results of sound perception
To better understand this process, in their PLOS Biology study Chang and colleagues conducted a series of four experiments in which more than 300 participants listened to audio segments of synthesized music-like and speech-like noise with varying amplitude-modulation rates and regularities.
The noise-only clips allowed listeners to detect nothing but loudness and speed. Participants were asked to judge whether these ambiguous clips, which they believed were music or speech masked by noise, actually sounded like music or speech. Observing how participants classified hundreds of such clips revealed the extent to which speed and/or regularity influenced their judgments of music versus speech. This is the auditory version of “seeing a face in a cloud,” the scientists conclude: if a sound wave contains a feature that matches the listener’s idea of how music or speech should sound, even a clip of white noise can sound like music or speech. Examples of the music-like and speech-like noise can be downloaded from the research page.
The results showed that our auditory system uses surprisingly simple, basic acoustic parameters to distinguish music from speech: to participants, clips with slower rates (<2 Hz) and more regular amplitude modulation sounded more like music, while clips with higher rates (~4 Hz) and more irregular amplitude modulation sounded more like speech.
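As a rough illustration of what such stimuli might look like, the Python snippet below builds noise clips whose loudness rises and falls at a chosen rate, with an irregularity parameter that jitters the duration of each loudness cycle. This is a simplified sketch under assumed parameters, not the stimulus-generation code used in the study.

```python
import numpy as np

def modulated_noise(rate_hz, irregularity=0.0, duration_s=4.0, sr=16000, seed=0):
    """Illustrative noise clip whose loudness rises and falls roughly rate_hz
    times per second. irregularity (0 to 1) jitters the length of each
    loudness cycle, making the rhythm less regular (more speech-like).
    A simplified sketch, not the study's actual stimulus code."""
    rng = np.random.default_rng(seed)
    n_cycles = 2 * int(np.ceil(duration_s * rate_hz)) + 2   # extra cycles so the clip is long enough
    base = 1.0 / rate_hz                                    # nominal cycle length in seconds
    cycle_lengths = base * (1 + irregularity * rng.uniform(-0.5, 0.5, n_cycles))
    # Each cycle is a smooth raised-cosine loudness bump: silence -> full volume -> silence.
    envelope = np.concatenate([
        0.5 * (1 - np.cos(2 * np.pi * np.linspace(0, 1, max(int(length * sr), 2), endpoint=False)))
        for length in cycle_lengths
    ])
    n_samples = int(duration_s * sr)
    return envelope[:n_samples] * rng.standard_normal(n_samples)

# Slow, regular modulation tends to sound music-like; fast, irregular modulation speech-like.
music_like = modulated_noise(rate_hz=1.5, irregularity=0.0)
speech_like = modulated_noise(rate_hz=4.5, irregularity=0.6)
```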
Implications for therapy and rehabilitation
Knowing how the human brain differentiates music from speech could potentially benefit people with hearing or language disorders such as aphasia, the authors note. Melodic intonation therapy, for example, is a promising approach to training people with aphasia to sing what they want to say, using their intact “musical mechanisms” to bypass impaired speech mechanisms. Therefore, knowing what makes music and speech similar or different in the brain can help design more effective rehabilitation programs.
Other authors of the paper were Xiangbin Teng of the Chinese University of Hong Kong, M. Florencia Assaneo of the National Autonomous University of Mexico (UNAM), and David Poeppel, professor in the Department of Psychology at NYU and director of the Ernst Strüngmann Institute for Neuroscience in Frankfurt, Germany.
The research was funded by a grant from the National Institute on Deafness and Other Communication Disorders, part of the National Institutes of Health (F32DC018205), and by Leon Levy Fellowships in Neuroscience.