Prosody, the rhythm and pitch changes associated with spoken language, may improve spoken language comprehension in persons with aphasia by recruiting intact cognitive abilities (e.g., attention and working memory) and their associated non-lesioned brain regions post-stroke. Therefore, Experiment 2 explored the relationship between cognition, two distinct prosody manipulations, lesion location, and auditory sentence comprehension in persons with chronic stroke and matched controls. The combined results from Experiments 2a and 2b indicate that stroke participants with better auditory orienting attention and a specific intact left fronto-parietal network had greater comprehension of sentences spoken with sentence prosody. In contrast, participants with deficits in auditory executive control and/or short-term memory, but with the left angular gyrus and globus pallidus relatively intact, demonstrated better comprehension of sentences spoken with list prosody. Overall, the results from Experiment 2 indicate that following a left hemisphere stroke, individuals need good auditory attention and an intact left fronto-parietal network to benefit from typical sentence prosody; when cognitive deficits are present and this fronto-parietal network is damaged, list prosody may be more beneficial.
The neurobiology of sentence comprehension: an fMRI study of late American Sign Language acquisition
The distinctions between the neural resources supporting speech and music comprehension have long been studied using contexts such as aphasia and amusia, as well as neuroimaging in control subjects. While many models have emerged to describe the different networks uniquely recruited in response to speech and music stimuli, open questions remain, especially regarding left-hemispheric strokes that disrupt typical speech-processing brain networks, and how musical training might affect the brain networks recruited for speech after a stroke. Our study explores these questions. We collected task-based functional MRI data from 12 subjects who previously experienced a left-hemispheric stroke. Subjects listened to blocks of spoken sentences and novel piano melodies during scanning to examine the differences in brain activations in response to speech and music. We hypothesized that speech stimuli would activate right frontal regions, and that music stimuli would activate the right superior temporal regions more than speech (neither finding has been reported in previous studies of control subjects), as a result of functional changes in the brain following the left-hemispheric stroke, and particularly the loss of functionality in the left temporal lobe. We also hypothesized that the music stimuli would cause stronger activation in right temporal cortex for participants who have had musical training than for those who have not. Our results indicate that speech stimuli compared to rest activated the anterior superior temporal gyrus bilaterally and the right inferior frontal lobe. Music stimuli compared to rest did not activate the brain bilaterally, but rather activated only the right middle temporal gyrus.
When the group analysis was performed with music experience as a covariate, we found that musical training did not affect activations to music stimuli specifically, but there was greater right hemisphere activation in several regions in response to speech stimuli as a function of more years of musical training. The results of the study agree with our hypotheses regarding the functional changes in the brain, but they conflict with our hypothesis about musical expertise. Overall, the study has generated interesting starting points for further explorations of how musical neural resources may be recruited for speech processing after damage to typical language networks.
Aphasia is an impairment that affects many different aspects of language and makes it more difficult for a person to communicate with those around them. Treatment for aphasia is often administered by a speech-language pathologist in a clinical setting, but researchers have recently begun exploring the potential of virtual reality (VR) interventions. VR provides an immersive environment and can allow multiple users to interact with digitized content. This exploratory paper proposes the design of a VR rehabilitation game, called Pact, for adults with aphasia that aims to improve users' word-finding and picture-naming abilities and, in turn, their communication skills. Additionally, a study is proposed that will assess how well Pact improves the word-finding and picture-naming abilities of users when it is used in conjunction with speech therapy. If the results of the study show an increase in word-finding and picture-naming scores compared to the control group (patients receiving traditional speech therapy alone), the results would indicate that Pact can achieve its goal of promoting improvement in these domains. There is a further need to examine VR interventions for aphasia, particularly studies with larger sample sizes that explore the gains and design issues associated with multi-user VR programs.