Prosody, the rhythm and pitch changes associated with spoken language, may improve spoken-language comprehension in persons with aphasia by recruiting intact cognitive abilities (e.g., attention and working memory) and their associated non-lesioned brain regions post-stroke. Experiment 2 therefore explored the relationships among cognition, two distinct prosody manipulations, lesion location, and auditory sentence comprehension in persons with chronic stroke and matched controls. The combined results from Experiments 2a and 2b indicate that stroke participants with better auditory orienting attention and an intact left fronto-parietal network showed greater comprehension of sentences spoken with typical sentence prosody. In contrast, participants with deficits in auditory executive control and/or short-term memory, but with the left angular gyrus and globus pallidus relatively intact, demonstrated better comprehension of sentences spoken with list prosody. Overall, the results from Experiment 2 indicate that following a left-hemisphere stroke, individuals need good auditory attention and an intact left fronto-parietal network to benefit from typical sentence prosody; when cognitive deficits are present and this fronto-parietal network is damaged, list prosody may be more beneficial.
The distinctions between the neural resources supporting speech and music comprehension have long been studied using contexts like aphasia and amusia, as well as neuroimaging in control subjects. While many models have emerged to describe the different networks uniquely recruited in response to speech and music stimuli, many questions remain, especially regarding left-hemispheric strokes that disrupt typical speech-processing brain networks and how musical training might affect the brain networks recruited for speech after a stroke. Our study aims to explore these questions. We collected task-based functional MRI data from 12 subjects who had previously experienced a left-hemispheric stroke. Subjects listened to blocks of spoken sentences and novel piano melodies during scanning to examine differences in brain activation in response to speech and music. We hypothesized that speech stimuli would activate right frontal regions, and that music stimuli would activate right superior temporal regions more strongly than speech (neither finding seen in previous studies of control subjects), reflecting functional changes in the brain following the left-hemispheric stroke, particularly the loss of function in the left temporal lobe. We also hypothesized that music stimuli would produce stronger activation in the right temporal cortex for participants with musical training than for those without. Our results indicate that speech stimuli, compared to rest, activated the anterior superior temporal gyri bilaterally and the right inferior frontal lobe. Music stimuli, compared to rest, did not activate the brain bilaterally, but activated only the right middle temporal gyrus.
When the group analysis was performed with music experience as a covariate, we found that musical training did not affect activations to music stimuli specifically, but greater right-hemisphere activation appeared in several regions in response to speech stimuli as a function of more years of musical training. The results of the study support our hypotheses regarding functional changes in the brain but conflict with our hypothesis about musical expertise. Overall, the study has generated promising starting points for further exploration of how musical neural resources may be recruited for speech processing after damage to typical language networks.
Aphasia is an acquired speech-language disorder resulting from post-stroke damage to the left hemisphere of the brain. Treating these speech production impairments can be challenging for clinicians because language recovery after stroke is highly variable and lesion size does not predict language outcome (Lazar et al., 2008). Adequate integration between the sensory and motor systems is also critical for many aspects of fluent speech and for correcting speech errors. The present study investigates how delayed auditory feedback paradigms, which alter the time scale of sensorimotor interactions in speech, might be useful in characterizing the speech production impairments of individuals with aphasia. To this end, six individuals with aphasia and nine age-matched control subjects received delayed auditory feedback at four different intervals during a sentence-reading task. The aphasia group generated more errors in three of the four linguistic categories measured across all delay lengths, but there was no significant main effect of delay and no interaction between group and delay. Acoustic analyses revealed variability in scores within both the control and aphasia groups on all phoneme types. For example, the individual with conduction aphasia showed significantly larger amplitudes at all delays and significantly longer durations at no delay, but this significance diminished as delay periods increased. Overall, this study suggests that the effects of delayed auditory feedback vary across individuals with aphasia and provides a base of research to be built on by future testing of individuals with varying aphasia types and levels of severity.
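The delay manipulation at the heart of such paradigms can be illustrated with a minimal sketch (hypothetical function and parameter names, not the study's actual software): the speaker's microphone signal is shifted by a fixed interval before being routed back to their headphones, so they hear their own speech slightly late.

```python
import numpy as np

def delayed_feedback(signal: np.ndarray, fs: int, delay_ms: float) -> np.ndarray:
    """Return a copy of `signal` delayed by `delay_ms` milliseconds.

    `signal` is a mono audio buffer sampled at `fs` Hz. Samples before the
    delay interval are silence (zeros), mimicking what a delayed-auditory-
    feedback device would play back to the speaker.
    """
    delay_samples = int(round(fs * delay_ms / 1000.0))
    out = np.zeros_like(signal)
    if delay_samples < len(signal):
        out[delay_samples:] = signal[: len(signal) - delay_samples]
    return out

# Example: a 100-sample ramp at 1 kHz sampling, delayed by 50 ms (50 samples).
fs = 1000
speech = np.arange(100, dtype=float)
feedback = delayed_feedback(speech, fs, delay_ms=50)
```

In a real experiment the delay is applied in real time with low-latency audio hardware; this offline buffer shift only conveys the timing relationship between produced and perceived speech.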