Hebrew is a difficult language, claim new immigrants from English-speaking countries; English is difficult to learn and speak, say sabras who struggle with the ABCs; and both find it almost impossible to become fluent in Arabic. But being able to speak any two languages – or even three – is worth the effort, according to two studies recently published in different journals by researchers at the University of Haifa.
While bilingual speakers take longer to complete auditory tasks and do them with less accuracy, they are more cognitively agile than their monolingual counterparts and show more “brain plasticity” – the ability of the brain to modify its connections or rewire itself – than those who speak only one language.
How were the studies conducted?
In two studies of 59 and 60 normal-hearing adults between the ages of 19 and 35, researcher Dr. Hanin Karawani Khoury and her doctoral student Dana Bsharat-Maalouf documented the differences in perception and physiological reactions between native Arabic speakers who are fluent in Hebrew as a second language and native Hebrew speakers.
Karawani and the team in her AudioNeuro Lab are investigating perceptual and neural processing in Arabic-Hebrew-English multilinguals and Hebrew-English bilinguals, in collaboration with Dr. Tamar Degani.
“The synchrony between neural and cognitive-perceptual measures makes the research unique and reveals direct brain-behavior links, serving as the basis for a fuller understanding of bilingual speech perception in challenging listening conditions,” Karawani said.
“Whereas the effects of bilingualism on speech perception in noise are widely studied, few studies to date have compared bilingual-monolingual performance when all participants are operating in their dominant language,” she noted. “This important aspect of the research will inform us whether bilinguals perform less well in challenging listening conditions, such as noisy environments, because of reduced second-language proficiency, or whether increased competition due to language co-activation contributes to bilingual performance in noise.”
The first study
One of their studies, published in PLOS ONE under the title “Bilinguals’ speech perception in noise: Perceptual and neural associations,” marks the first attempt to examine both perceptual performance and brain activity in bilingual populations.
To that end, the researchers tested auditory brain stem responses and the perception of words and sentences, both to assess perceptual performance and to evaluate the relationship between what people hear and how the brain reacts when exposed to that stimulus. All testing was done in both quiet and noisy conditions.
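The article does not specify the noise levels the lab used, but “noisy conditions” in experiments like these are typically created by mixing the target speech with background noise at a controlled signal-to-noise ratio (SNR). The sketch below, in Python with a made-up signal, shows the general idea; the function name and parameters are illustrative assumptions, not details from the studies:

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Mix speech with background noise at a target signal-to-noise
    ratio (dB). Lower SNR makes the listening condition harder."""
    noise = np.resize(noise, len(speech))   # match the noise length to the speech
    p_speech = np.mean(speech ** 2)         # average speech power
    p_noise = np.mean(noise ** 2)           # average noise power
    # Scale the noise so 10*log10(p_speech / scaled noise power) == snr_db.
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise

# Illustrative use: a synthetic one-second "word" in random noise at 0 dB SNR.
fs = 16000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 440 * t)        # stand-in for a recorded word
noise = np.random.default_rng(0).standard_normal(fs)
noisy = mix_at_snr(speech, noise, snr_db=0.0)
```

Lowering snr_db makes the mixture harder to decipher, which is how task difficulty is usually graded in speech-in-noise testing.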
The study demonstrates the value of using neural brain stem responses to speech sounds to differentiate individuals with different language histories and to explain inter-subject variability in bilinguals’ perceptual abilities in everyday listening situations.
Both groups – the bilinguals and the monolingual native Hebrew speakers – performed better in quiet conditions than in noisy ones. However, mixed results were observed among bilinguals in perceptual and physiological outcomes under noisy conditions. In speech perception, bilinguals were significantly less accurate than their monolingual counterparts when attempting to decipher their second language; in neural responses, however, bilinguals demonstrated earlier auditory neural timing than monolinguals.
RESEARCHERS THEORIZE that bilinguals’ heightened brain activity may result from the enriched language environment they cultivate for themselves in managing two linguistic systems. This may explain the earlier neural timing observed, as these listeners become faster at detecting the characteristics of speech stimuli.
These correlations advance the understanding of the neural processes underlying speech perception among bilinguals, especially given that the correlations were not significant in the monolingual group. They also suggest that sub-cortical processes could be one source of the variability across bilingual individuals in challenging everyday listening conditions.
It can thus be argued that bilinguals, who tend to recruit more cortical resources in background noise, may have more efficient activation and top-down processes; consequently, their brain stem responses were found to be less susceptible to the effects of noise.
The second study
A second study, published in the prestigious journal Cognition under the title “Learning and bilingualism in challenging listening conditions: How challenging can it be?” follows a similar line of thought, examining speech perception under degraded listening conditions (speech in noise or vocoded speech) as well as quiet ones.
Like the previous study, this one also showed more sophisticated brain activity in the bilingual group, which fared much better at deciphering vocoded speech. These findings also suggest that bilinguals use a shared mechanism for speech processing under challenging listening conditions.
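For readers unfamiliar with the term, vocoded speech is speech deliberately degraded by discarding its fine spectral detail: the signal is split into a handful of frequency bands, and each band’s slow loudness contour is used to modulate noise. The Python sketch below shows a generic noise vocoder; the band count, frequency edges and filter settings are illustrative assumptions, not the parameters used in the study:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_bands=8, f_lo=100.0, f_hi=7000.0):
    """Noise-vocode a signal: split it into log-spaced frequency bands,
    extract each band's slow amplitude envelope, and use that envelope
    to modulate band-limited noise. Fewer bands = more degraded speech.
    Assumes fs > 2 * f_hi so the top band stays below the Nyquist limit."""
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)    # log-spaced band edges
    rng = np.random.default_rng(0)
    vocoded = np.zeros(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)              # speech content in this band
        envelope = np.abs(hilbert(band))             # amplitude envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(len(signal)))
        vocoded += envelope * carrier                # envelope-modulated noise
    return vocoded / np.max(np.abs(vocoded))         # normalize the output level
```

With only a few bands the output sounds like harsh whispering, yet listeners can often learn to understand it with exposure, which is the kind of perceptual learning a study of “learning and bilingualism in challenging listening conditions” is positioned to probe.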
This coincides with the first study’s results, which show that noise had a relatively greater effect on bilinguals’ performance than on that of the monolingual group, even when the bilinguals were tested in their dominant language. This is an innovative finding: previous studies limited their comparisons to the perceptual performance of bilinguals in their second language versus that of monolingual speakers, without examining what happens to bilinguals in their native language.
The authors suggested that the results of both studies provide insight into the mechanisms that contribute to speech perception in challenging listening conditions, and indicate that bilinguals’ language proficiency and the age at which they learned the language are not the only factors that affect performance. Rather, the duration of exposure to languages and the ability to benefit from exposure to novel stimuli affect the perceptual performance of bilinguals, even when they are speaking their dominant language. “Our findings suggest that bilinguals use a shared mechanism for speech processing under challenging listening conditions,” they concluded.