The ability to produce or understand speech is affected in a third of strokes.
Impairment of the ability to speak or understand speech, a condition called aphasia, is one of the most common aftereffects of a stroke, affecting roughly a third of patients.1 Recovery is possible but usually requires speech and language therapy.
While the impact of stroke on speech is well documented, the changes in brain activity that underlie these deficits remain unclear. This knowledge gap prompted researchers at Stanford University and KU Leuven to study speech processing in healthy volunteers and stroke patients.2 The research was published in The Journal of Neuroscience.
The new work, led by first author and Stanford University psychologist Jill Kries, demonstrated that stroke patients with aphasia struggle not with hearing words, but with processing how individual speech sounds combine to impart meaning. They also found that patients’ brains did not process speech sounds long enough to fully understand unclear words. The authors hope that their findings will improve post-stroke diagnostics by identifying relevant brain activity signals.
A Simple Story to Detect Stroke Damage
The researchers recruited 39 people with post-stroke aphasia and 24 age-matched healthy volunteers. The team asked the participants to listen to 25 minutes of spoken stories while wearing an EEG cap studded with electrodes to record their brain activity.
While the study protocol was simple and quick for their participants, the complex analysis happened behind the scenes. The team broke down their stories into individual units of sound, called phonemes, that the brain uses to understand meaning. These include velar sounds made using the back of the tongue, like the “ng” sound in “sing,” and fricatives, which are made by forcing air through a narrow opening, like the buzzing “v” in “van.” The team first showed that all 18 of the phonemes they examined were encoded in brain activity detectable from participants’ EEG caps.
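The paper's exact modeling pipeline is not described here, but neural encoding of speech features in EEG is commonly quantified with a time-lagged linear regression, often called a temporal response function (TRF). The sketch below illustrates that general idea on simulated data; the sampling rate, lag window, ridge parameter, and all signals are invented for illustration and are not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 10 s of single-channel "EEG" at 128 Hz and a
# binary phoneme-onset feature (all values simulated, not study data).
fs = 128
n = 10 * fs
phoneme_onsets = (rng.random(n) < 0.05).astype(float)

# Simulate a neural response: onsets convolved with a kernel that
# peaks around 100 ms, plus noise.
kernel = np.exp(-0.5 * ((np.arange(32) - 13) / 4.0) ** 2)
eeg = np.convolve(phoneme_onsets, kernel)[:n] + 0.5 * rng.standard_normal(n)

# Build a time-lagged design matrix (lags 0 to ~250 ms) and fit a
# ridge-regularized TRF: one weight per lag.
lags = np.arange(32)
X = np.column_stack([np.roll(phoneme_onsets, lag) for lag in lags])
for i, lag in enumerate(lags):
    X[:lag, i] = 0  # zero out samples wrapped around by np.roll
lam = 1.0
trf = np.linalg.solve(X.T @ X + lam * np.eye(len(lags)), X.T @ eeg)

# The recovered TRF should peak near the simulated ~100 ms latency.
peak_ms = lags[np.argmax(trf)] / fs * 1000
```

A stronger or weaker TRF weight in a given lag window is one way such analyses distinguish early (under ~80 ms) from later (80–250 ms) processing stages.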
Next, they looked at how people affected by stroke processed these different phonemes. They found that initial processing, in the first 80 or so milliseconds after hearing a phoneme, was unaffected in people with aphasia. But in a key later processing step, between 80 and 250 milliseconds after hearing a phoneme, processing was weakened. The authors observed that stroke patients' brain activity became more diffuse and less concentrated, resembling an exaggerated version of changes that occur during normal aging.
Importantly, they noted that people with aphasia could distinguish individual phonemes just as well as healthy participants, casting doubt on the previously proposed hypothesis that aphasia results from the brain blurring sounds together.
The Aphasic Brain Struggles with Uncertainty
The team also noted that the brains of people with aphasia handled uncertainty less well. A word has high "lexical entropy" when the sounds heard so far are consistent with many different words, leaving its identity unresolved. The researchers found that healthy volunteers' brains spent longer encoding these uncertain words, whereas the brains of people with aphasia did not adapt to spend longer processing them. This rigidity, the authors said in a press release, likely impairs stroke patients' ability to understand hard-to-distinguish words.
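Lexical entropy can be illustrated with a toy calculation: given probabilities for the candidate words still consistent with the sounds heard so far, Shannon entropy quantifies how unresolved the word remains. The candidate words and probabilities below are invented for illustration and are not taken from the study.

```python
import math

def lexical_entropy(probs):
    """Shannon entropy (in bits) over candidate-word probabilities."""
    return -sum(p * math.log2(p) for p in probs.values() if p > 0)

# Hypothetical example: after hearing "ca...", several completions
# remain plausible, so the word's identity is still uncertain.
candidates = {"cat": 0.4, "cap": 0.3, "can": 0.2, "cab": 0.1}

h_uncertain = lexical_entropy(candidates)    # several live candidates
h_resolved = lexical_entropy({"cat": 1.0})   # word fully identified: 0 bits
```

Higher entropy means more competing words share the sounds heard so far; the study's finding is that healthy brains spend extra time on such words while aphasic brains do not.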
In sum, the researchers suggest their findings show that aphasia doesn’t slow the brain’s understanding of speech so much as it affects the strength with which it encodes individual sounds. While a small sample size limits the study, the authors said it was encouraging that a straightforward protocol could produce useful results that point neuroscience toward a better understanding of how stroke alters the brain.
