Matching Items (7)
Description

The distinctions between the neural resources supporting speech and music comprehension have long been studied in contexts such as aphasia and amusia, and through neuroimaging of control subjects. While many models have emerged to describe the different networks uniquely recruited in response to speech and music stimuli, many questions remain, especially regarding left-hemispheric strokes that disrupt typical speech-processing brain networks, and how musical training might affect the brain networks recruited for speech after a stroke. Our study explores several of these questions. We collected task-based functional MRI data from 12 subjects who had previously experienced a left-hemispheric stroke. Subjects listened to blocks of spoken sentences and novel piano melodies during scanning so that we could examine differences in brain activations in response to speech and music. We hypothesized that speech stimuli would activate right frontal regions, and that music stimuli would activate right superior temporal regions more than speech (neither finding has been reported in previous studies of control subjects), as a result of functional changes in the brain following the left-hemispheric stroke, particularly the loss of functionality in the left temporal lobe. We also hypothesized that music stimuli would elicit stronger activation in the right temporal cortex in participants with musical training than in those without. Our results indicate that speech stimuli, compared to rest, activated the anterior superior temporal gyrus bilaterally as well as the right inferior frontal lobe. Music stimuli compared to rest did not activate the brain bilaterally; rather, they activated only the right middle temporal gyrus.
When the group analysis was performed with music experience as a covariate, we found that musical training did not affect activations to music stimuli specifically, but there was greater right hemisphere activation in several regions in response to speech stimuli as a function of more years of musical training. The results of the study agree with our hypotheses regarding the functional changes in the brain, but they conflict with our hypothesis about musical expertise. Overall, the study has generated interesting starting points for further explorations of how musical neural resources may be recruited for speech processing after damage to typical language networks.

ContributorsKarthigeyan, Vishnu R (Author) / Rogalsky, Corianne (Thesis director) / Daliri, Ayoub (Committee member) / Harrington Bioengineering Program (Contributor) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created2021-05
Description
The International Dyslexia Association defines dyslexia as a learning disorder characterized by poor spelling, decoding, and word-recognition abilities. There is still no known cause of dyslexia, although it is a very common disability that affects 1 in 10 people. Previous fMRI and MRI research in dyslexia has explored the neural correlates of hemispheric lateralization and phonemic awareness. The present study investigated the underlying neurobiology of five adults with dyslexia compared to age- and sex-matched control subjects using structural and functional magnetic resonance imaging. All subjects completed a large battery of behavioral tasks as part of a larger study and underwent functional and structural MRI acquisition. These data were collected and preprocessed at the University of Washington. Analyses focused on examining the neural correlates of hemispheric lateralization, letter-reversal mistakes, reduced processing speed, and phonemic awareness. There were no significant findings of hemispheric differences between subjects with dyslexia and controls. The subject who made the largest number of letter-reversal errors showed deactivation in the cerebellum during the fMRI language task. Cerebellar white matter volume and premotor cortex surface area were largest in the individual with the slowest reaction time to tapping. Phonemic decoding efficiency was highly correlated with neural activation in the primary motor cortex during the fMRI motor task (r = 0.6). Findings from the present study suggest that brain regions utilized during motor control, such as the cerebellum, premotor cortex, and primary motor cortex, may have a larger role in dyslexia than previously considered. Future studies are needed to further distinguish the role of the cerebellum and other motor regions in relation to the motor control and language processing deficits related to dyslexia.
ContributorsHoulihan, Chloe Carissa Prince (Author) / Rogalsky, Corianne (Thesis director) / Peter, Beate (Committee member) / Harrington Bioengineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2016-12
Description
Language acquisition is a phenomenon we all experience, and though it is well studied, many questions remain regarding the neural bases of language. Whether in a hearing speaker or a Deaf signer, spoken and signed language acquisition (and eventual proficiency) develop similarly and share common neural networks. While signed language and spoken language engage completely different sensory modalities (visual-manual versus the more common auditory-oromotor), both share grammatical structures and contain the syntactic intricacies innate to all languages. Thus, studies of multi-modal bilingualism (e.g., a native English speaker learning American Sign Language) can lead to a better understanding of the neurobiology of second language acquisition, and of language more broadly. For example, can the well-developed visual-spatial processing networks of English speakers support grammatical processing in sign language, which relies heavily on location and movement? The present study furthers the understanding of the neural correlates of second language acquisition by studying normal-hearing late L2 learners of American Sign Language (ASL). Twenty English-speaking ASU students enrolled in advanced American Sign Language coursework participated in our functional magnetic resonance imaging (fMRI) study. The aim was to identify the brain networks engaged in syntactic processing of ASL sentences in late L2 ASL learners. While many studies have addressed the neurobiology of acquiring a second spoken language, no previous study to our knowledge has examined the brain networks supporting syntactic processing in bimodal bilinguals. We examined the brain networks engaged while perceiving ASL sentences compared to ASL word lists, as well as written English sentences and word lists.
We hypothesized that our findings in late bimodal bilinguals would largely coincide with the unimodal bilingual literature, but with a few notable differences, including additional attention networks engaged by ASL processing. Our results suggest a high degree of overlap in the sentence-processing networks for ASL and English. There are also important differences with regard to the recruitment of speech comprehension, visual-spatial, and domain-general brain networks. Our findings suggest that well-known sentence comprehension and syntactic processing regions for spoken languages are flexible and modality-independent.
ContributorsMickelsen, Soren Brooks (Co-author) / Johnson, Lisa (Co-author) / Rogalsky, Corianne (Thesis director) / Azuma, Tamiko (Committee member) / Howard, Pamela (Committee member) / Department of Speech and Hearing Science (Contributor) / School of Human Evolution and Social Change (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05
Description
This pilot study evaluated whether Story Champs and Puente de Cuentos helped bilingual preschoolers increase their use of emotional terms and their ability to tell stories. Participants included 10 Spanish-English bilingual preschoolers. Intervention was conducted in 9 sessions over 3 days, with the Test of Narrative Retell used to measure outcomes. The results did not show significant gains in either emotional-term usage or storytelling ability, but they were promising for a pilot study.
ContributorsSato, Leslie Mariko (Author) / Restrepo, Maria (Thesis director) / Dixon, Maria (Committee member) / Department of Speech and Hearing Science (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05
Description

Aphasia is an impairment that affects many different aspects of language and makes it more difficult for a person to communicate with those around them. Treatment for aphasia is often administered by a speech-language pathologist in a clinical setting, but researchers have recently begun exploring the potential of virtual reality (VR) interventions. VR provides an immersive environment and can allow multiple users to interact with digitized content. This exploratory paper proposes the design of a VR rehabilitation game, called Pact, for adults with aphasia that aims to strengthen users' word-finding and picture-naming abilities in order to improve their communication skills. Additionally, a study is proposed to assess how well Pact improves word-finding and picture-naming when used in conjunction with speech therapy. If the study were to show an increase in word-finding and picture-naming scores compared to the control group (patients receiving traditional speech therapy alone), the results would indicate that Pact achieves its goal of promoting improvement in these domains. There is a further need to examine VR interventions for aphasia, particularly with larger sample sizes, and to explore the gains and design issues associated with multi-user VR programs.

ContributorsGringorten, Rachel (Author) / Johnson, Mina (Thesis director) / Rogalsky, Corianne (Committee member) / English, Stephen (Committee member) / Barrett, The Honors College (Contributor) / Department of Psychology (Contributor) / College of Health Solutions (Contributor) / School of Music, Dance and Theatre (Contributor)
Created2023-05
Description

Aphasia is an acquired speech-language disorder caused by post-stroke damage to the left hemisphere of the brain. Treatment for individuals with these speech production impairments can be challenging for clinicians because there is high variability in language recovery after stroke, and lesion size does not predict language outcome (Lazar et al., 2008). It is also important to note that adequate integration between the sensory and motor systems is critical for many aspects of fluent speech and for correcting speech errors. The present study investigates how delayed auditory-feedback paradigms, which alter the time scale of sensorimotor interactions in speech, might be useful in characterizing the speech production impairments of individuals with aphasia. To this end, six individuals with aphasia and nine age-matched control subjects were exposed to delayed auditory feedback at 4 different intervals during a sentence reading task. Our study found that the aphasia group generated more errors in 3 out of the 4 linguistic categories measured across all delay lengths, but that there was no significant main effect of delay and no interaction between group and delay. Acoustic analyses revealed variability among scores within the control and aphasia groups on all phoneme types. For example, these analyses highlighted how the individual with conduction aphasia showed significantly larger amplitudes at all delays and significantly longer durations at no delay, with significance diminishing as delay periods increased. Overall, this study suggests that the effects of delayed auditory feedback vary across individuals with aphasia, and it provides a base of research to be built on by future testing of individuals with varying aphasia types and levels of severity.

ContributorsPettijohn, Madilyn (Author) / Rogalsky, Corianne (Thesis director) / Daliri, Ayoub (Committee member) / Barrett, The Honors College (Contributor) / College of Health Solutions (Contributor)
Created2022-05
Description
The purpose of my study was to determine whether there were significant relationships between performance on cognitive assessments and compensation in the altered auditory feedback (AAF) paradigm in people with aphasia. Aphasia is a language disorder typically caused by a stroke in the left hemisphere. The cognitive assessments evaluated working memory, processing speed, repetition, speech production, and speech comprehension. We hypothesized that those who performed worse on the cognitive assessments would show lower-magnitude compensation in the AAF paradigm. We found a significant relationship between Digit Span Task performance and the sudden-adaptation AAF paradigm: those who performed worse on the Digit Span Task showed lower-magnitude compensation or compensated in the positive direction.
ContributorsUgarte, Isabelle (Author) / Rogalsky, Corianne (Thesis director) / Daliri, Ayoub (Committee member) / Barrett, The Honors College (Contributor) / Department of Psychology (Contributor)
Created2024-05