Matching Items (33)
Description
Language and music are fundamentally entwined within human culture. The two domains share similar properties including rhythm, acoustic complexity, and hierarchical structure. Although language and music have commonalities, abilities in these two domains have been found to dissociate after brain damage, leaving unanswered questions about their interconnectedness, including whether one domain can support the other when damage occurs. Evidence bearing on this question exists for speech production: musical pitch and rhythm are employed in Melodic Intonation Therapy to improve expressive language recovery, but little is known about the effects of music on the recovery of speech perception and receptive language. This research is one of the first to address the effects of music on speech perception. Two groups of participants, an older adult group (n=24; M = 71.63 yrs) and a younger adult group (n=50; M = 21.88 yrs), took part in the study. A native female speaker of Standard American English created four types of stimuli: pseudoword sentences of normal speech, simultaneous music-speech, rhythmic speech, and music-primed speech. The stimuli were presented binaurally, and participants were instructed to repeat what they heard following a 15-second delay. Results were analyzed using standard parametric techniques. Musical priming of speech, but not simultaneous synchronized music and speech, facilitated speech perception in both the younger adult and older adult groups; this effect may be driven by rhythmic information. The younger adults outperformed the older adults in all conditions. Because the speech perception task relied heavily on working memory, and there is a known working memory decline associated with aging, participants completed a working memory task to be used as a covariate in analyses of differences across stimulus types and age groups. Working memory ability was found to correlate with speech perception performance, but the age-related performance differences remained significant once working memory differences were taken into account. These results provide new avenues for facilitating speech perception in stroke patients and shed light upon the underlying mechanisms of Melodic Intonation Therapy for speech production.
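The working memory covariate analysis described above can be illustrated with an ANCOVA-style linear model. A minimal sketch in Python, assuming a hypothetical long-format file and placeholder column names (not the study's actual variables):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per participant per stimulus type.
# 'score' = repetition accuracy, 'group' = younger/older adult,
# 'stimulus' = normal / simultaneous / rhythmic / primed,
# 'wm' = working memory task score used as the covariate.
df = pd.read_csv("speech_perception.csv")  # placeholder filename

# Test group and stimulus-type effects on perception scores while
# controlling for working memory ability.
model = smf.ols("score ~ C(group) * C(stimulus) + wm", data=df).fit()
print(model.summary())
```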
ContributorsLaCroix, Arianna (Author) / Rogalsky, Corianne (Thesis advisor) / Gray, Shelley (Committee member) / Liss, Julie (Committee member) / Arizona State University (Publisher)
Created2015
Description
The activation of the primary motor cortex (M1) is common in speech perception tasks that involve difficult listening conditions. Although the challenge of recognizing and discriminating non-native speech sounds appears to be an instantiation of listening under difficult circumstances, it is still unknown whether M1 recruitment facilitates second language speech perception. The purpose of this study was to investigate the role of M1 speech motor centers in processing acoustic inputs in the native (L1) and second language (L2), using repetitive Transcranial Magnetic Stimulation (rTMS) to selectively alter neural activity in M1. Thirty-six healthy English/Spanish bilingual subjects participated in the experiment. Performance on a listening word-to-picture matching task was measured before and after real- and sham-rTMS over the M1 representation of the orbicularis oris (lip) muscle. Vowel Space Area (VSA), obtained from recordings of participants reading a passage in L2 before and after real-rTMS, was calculated to determine its utility as an rTMS aftereffect measure. There was high variability in the aftereffect of the rTMS protocol among the participants. Approximately 50% of participants showed an inhibitory effect of rTMS, evidenced by smaller motor evoked potential (MEP) areas, whereas the other 50% showed a facilitatory effect, with larger MEPs. This suggests that rTMS has a complex influence on M1 excitability, and that relying on grand-average results can obscure important individual differences in rTMS physiological and functional outcomes. Evidence of motor support for word recognition in the L2 was found: participants showing an inhibitory aftereffect of rTMS on M1 produced slower and less accurate responses in the L2 task, whereas those showing a facilitatory aftereffect produced more accurate responses in L2. In contrast, no effect of rTMS was found in the L1, where accuracy and speed were very similar after sham- and real-rTMS. The L2 VSA measure was indicative of the aftereffect of rTMS on M1 speech production areas, supporting its utility as an rTMS aftereffect measure. This result reveals an interesting and novel relation between cerebral motor cortex activation and speech measures.
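The abstract does not state how VSA was computed; a common convention is the area of the polygon spanned by the corner vowels in F1-F2 space. A minimal sketch under that assumption, using the shoelace formula and hypothetical formant values:

```python
import numpy as np

def vowel_space_area(corners):
    """Polygon (shoelace) area of the vowel space in F1-F2 space.

    corners: (F1, F2) pairs in Hz for the corner vowels, listed in
    order around the polygon.
    """
    f1 = np.array([c[0] for c in corners], dtype=float)
    f2 = np.array([c[1] for c in corners], dtype=float)
    return 0.5 * abs(np.dot(f1, np.roll(f2, -1)) - np.dot(f2, np.roll(f1, -1)))

# Hypothetical mean formants for one speaker: /i/, /u/, /a/, /ae/.
corners = [(270, 2290), (300, 870), (730, 1090), (660, 1720)]
print(vowel_space_area(corners))  # area in Hz^2
```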
ContributorsBarragan, Beatriz (Author) / Liss, Julie (Thesis advisor) / Berisha, Visar (Committee member) / Rogalsky, Corianne (Committee member) / Restrepo, Adelaida (Committee member) / Arizona State University (Publisher)
Created2018
Description
Cognitive deficits often accompany language impairments post-stroke. Past research has focused on working memory in aphasia, but attention remains largely underexplored. Therefore, this dissertation first quantifies attention deficits post-stroke and then investigates whether preserved cognitive abilities, including attention, can improve auditory sentence comprehension post-stroke. In Experiment 1a, three components of attention (alerting, orienting, and executive control) were measured in persons with aphasia and matched controls using visual and auditory versions of the well-studied Attention Network Test. Experiment 1b then explored the neural resources supporting each component of attention in the visual and auditory modalities in chronic stroke participants. The results from Experiment 1a indicate that alerting, orienting, and executive control are each uniquely affected by presentation modality. The lesion-symptom mapping results from Experiment 1b associated the left angular gyrus with visual executive control, the left supramarginal gyrus with auditory alerting, and Broca’s area (pars opercularis) with auditory orienting attention post-stroke. Overall, these findings indicate that perceptual modality may impact the lateralization of some aspects of attention; thus, auditory attention may be more susceptible to impairment after a left hemisphere stroke.

Prosody, the rhythm and pitch changes associated with spoken language, may improve spoken language comprehension in persons with aphasia by recruiting intact cognitive abilities (e.g., attention and working memory) and their associated non-lesioned brain regions post-stroke. Therefore, Experiment 2 explored the relationship between cognition, two unique prosody manipulations, lesion location, and auditory sentence comprehension in persons with chronic stroke and matched controls. The combined results from Experiments 2a and 2b indicate that stroke participants with better auditory orienting attention and a specific left fronto-parietal network intact had greater comprehension of sentences spoken with sentence prosody. For list prosody, participants with deficits in auditory executive control and/or short-term memory, but with the left angular gyrus and globus pallidus relatively intact, demonstrated better comprehension. Overall, the results from Experiment 2 indicate that following a left hemisphere stroke, individuals need good auditory attention and an intact left fronto-parietal network to benefit from typical sentence prosody; when cognitive deficits are present and this fronto-parietal network is damaged, list prosody may be more beneficial.
ContributorsLaCroix, Arianna (Author) / Rogalsky, Corianne (Thesis advisor) / Azuma, Tamiko (Committee member) / Braden, B. Blair (Committee member) / Liss, Julie (Committee member) / Arizona State University (Publisher)
Created2019
Description
Language acquisition is a phenomenon we all experience, and though it is well studied, many questions remain regarding the neural bases of language. Whether in a hearing speaker or a Deaf signer, spoken and signed language acquisition (with eventual proficiency) develop similarly and share common neural networks. While signed language and spoken language engage completely different sensory modalities (visual-manual versus the more common auditory-oromotor), both share grammatical structures and contain syntactic intricacies innate to all languages. Thus, studies of multi-modal bilingualism (e.g., a native English speaker learning American Sign Language) can lead to a better understanding of the neurobiology of second language acquisition, and of language more broadly. For example, can the well-developed visual-spatial processing networks of English speakers support grammatical processing in sign language, which relies heavily on location and movement? The present study furthers the understanding of the neural correlates of second language acquisition by studying normal-hearing late L2 learners of American Sign Language (ASL). Twenty English-speaking ASU students enrolled in advanced American Sign Language coursework participated in our functional Magnetic Resonance Imaging (fMRI) study. The aim was to identify the brain networks engaged in syntactic processing of ASL sentences in late L2 ASL learners. While many studies have addressed the neurobiology of acquiring a second spoken language, no previous study to our knowledge has examined the brain networks supporting syntactic processing in bimodal bilinguals. We examined the brain networks engaged while perceiving ASL sentences compared to ASL word lists, as well as written English sentences and word lists. We hypothesized that our findings in late bimodal bilinguals would largely coincide with the unimodal bilingual literature, but with a few notable differences, including additional attention networks being engaged by ASL processing. Our results suggest a high degree of overlap in sentence processing networks for ASL and English, along with important differences in the recruitment of speech comprehension, visual-spatial, and domain-general brain networks. Our findings suggest that well-known sentence comprehension and syntactic processing regions for spoken languages are flexible and modality-independent.
ContributorsMickelsen, Soren Brooks (Co-author) / Johnson, Lisa (Co-author) / Rogalsky, Corianne (Thesis director) / Azuma, Tamiko (Committee member) / Howard, Pamela (Committee member) / Department of Speech and Hearing Science (Contributor) / School of Human Evolution and Social Change (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05
Description
Speech perception and production are bidirectionally related: each influences the other. The purpose of this study was to better understand this relationship. It is known that applying auditory perturbations during speech production causes subjects to alter their productions (e.g., change their formant frequencies); in other words, previous studies have examined the effects of altered speech perception on speech production. In this study, we examined the reverse: potential effects of speech production on speech perception. Subjects completed a block of a categorical perception task, followed by a block of a speaking or listening task, followed by another block of the categorical perception task. Subjects completed three blocks of the speaking task and three blocks of the listening task. In the three blocks of a given task (speaking or listening), auditory feedback was (1) normal, (2) altered to be less variable, or (3) altered to be more variable. Unlike previous studies, we used each subject’s own speech samples to generate the stimuli for the perception task. For each categorical perception block, we calculated the subject’s psychometric function and determined their categorical boundary. The results showed that subjects’ perceptual boundaries remained stable in all conditions and all blocks. Overall, our results did not provide evidence for effects of speech production on speech perception.
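The abstract does not give the fitting procedure; a standard approach fits a logistic psychometric function to the identification responses and takes the 50% point as the categorical boundary. A sketch under that assumption, with hypothetical response proportions:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, boundary, slope):
    # Probability of a "category B" response along the stimulus continuum.
    return 1.0 / (1.0 + np.exp(-slope * (x - boundary)))

# Hypothetical 7-step continuum between two of a subject's own speech
# tokens, with the proportion of "B" responses at each step.
steps = np.arange(1, 8)
p_b = np.array([0.02, 0.05, 0.15, 0.55, 0.90, 0.97, 1.00])

(boundary, slope), _ = curve_fit(logistic, steps, p_b, p0=[4.0, 1.0])
print(f"categorical boundary at step {boundary:.2f}")  # the 50% crossover
```

Comparing this boundary estimate before and after each speaking or listening block is one way to test whether production shifts perception.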
ContributorsDaugherty, Allison (Author) / Daliri, Ayoub (Thesis director) / Rogalsky, Corianne (Committee member) / College of Health Solutions (Contributor) / Barrett, The Honors College (Contributor)
Created2019-05
Description
The International Dyslexia Association defines dyslexia as a learning disorder characterized by poor spelling, decoding, and word recognition abilities. There is still no known cause of dyslexia, although it is a very common disability that affects 1 in 10 people. Previous fMRI and MRI research in dyslexia has explored the neural correlates of hemispheric lateralization and phonemic awareness. The present study investigated the underlying neurobiology of five adults with dyslexia compared to age- and sex-matched control subjects using structural and functional magnetic resonance imaging. All subjects completed a large battery of behavioral tasks as part of a larger study and underwent functional and structural MRI acquisition. These data were collected and preprocessed at the University of Washington. Analyses focused on the neural correlates of hemispheric lateralization, letter reversal mistakes, reduced processing speed, and phonemic awareness. There were no significant hemispheric differences between subjects with dyslexia and controls. The subject making the largest number of letter reversal errors showed deactivation in the cerebellum during the fMRI language task. Cerebellar white matter volume and premotor cortex surface area were largest in the individual with the slowest tapping reaction time. Phonemic decoding efficiency correlated highly with neural activation in the primary motor cortex during the fMRI motor task (r = 0.6). Findings from the present study suggest that brain regions involved in motor control, such as the cerebellum, premotor cortex, and primary motor cortex, may have a larger role in dyslexia than previously considered. Future studies are needed to further distinguish the role of the cerebellum and other motor regions in relation to motor control and language processing deficits in dyslexia.
ContributorsHoulihan, Chloe Carissa Prince (Author) / Rogalsky, Corianne (Thesis director) / Peter, Beate (Committee member) / Harrington Bioengineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2016-12
Description
Music is part of cultures all over the world and is entrenched in our daily lives, and yet little is known about the neural pathways responsible for how we perceive music. The property of "dissonance" is central to our understanding of the emotional meaning in music, and this study is a preliminary step toward understanding how this property is perceived. Twenty-four participants with normal hearing listened to melodies and ranked their degree of dissonance. Melodies categorized as "dissonant" according to Western music theory were ranked as significantly more dissonant across the nine conditions (three scale conditions: Major, Neapolitan Minor, and Oriental; three wrong-note conditions: no wrong notes, diatonic wrong notes, and non-diatonic wrong notes). As expected, the familiar Major scale was identified as more consonant across all wrong-note conditions than the other scales. Notably, a significant interaction was found, with diatonic and non-diatonic wrong notes not perceived differently in either of the unfamiliar scales, Neapolitan Minor and Oriental. This study suggests that the context of musical scale does influence how we form expectations of music and perceive dissonance. Future studies are necessary to understand the mechanisms by which scales drive these expectations.
ContributorsBlumenstein, Nicole Rose (Author) / Rogalsky, Corianne (Thesis director) / Peter, Beate (Committee member) / FitzPatrick, Carole (Committee member) / School of Music (Contributor) / Barrett, The Honors College (Contributor)
Created2016-12
Description
The label-feedback hypothesis (Lupyan, 2007) proposes that language can modulate low- and high-level visual processing, such as “priming” a visual object. Lupyan and Swingley (2012) found that repeating target names facilitates visual search, resulting in shorter reaction times (RTs) and higher accuracy. However, a design limitation made their results challenging to assess. This study evaluated whether self-directed speech influences locating the target (i.e., attentional guidance) or identifying the target once located (i.e., decision time), testing whether the label-feedback effect reflects changes in visual attention or some other mechanism (e.g., template maintenance in working memory). Across three experiments, search RTs and eye movements were analyzed from four within-subject conditions in which people spoke target names, nonwords, irrelevant (absent) object names, or irrelevant (present) object names. Speaking target names weakly facilitated visual search, whereas speaking different names strongly inhibited it. The most parsimonious account is that language affects target maintenance during search, rather than visual perception.
ContributorsHebert, Katherine P (Author) / Goldinger, Stephen D (Thesis advisor) / Rogalsky, Corianne (Committee member) / McClure, Samuel M. (Committee member) / Arizona State University (Publisher)
Created2016
Description
Audiovisual (AV) integration is a fundamental component of face-to-face communication. Visual cues generally aid auditory comprehension of communicative intent through our innate ability to “fuse” auditory and visual information. However, our capacity for multisensory integration can be affected by damage to the brain. Previous neuroimaging studies have identified the superior temporal sulcus (STS) as the center for AV integration, while others suggest inferior frontal and motor regions. However, few studies have analyzed the effect of stroke or other brain damage on multisensory integration in humans. The present study examines the effect of lesion location on auditory and AV speech perception through behavioral and structural imaging methodologies in 41 left-hemisphere participants with chronic focal cerebral damage. Participants completed two behavioral tasks of speech perception: an auditory speech perception task and a classic McGurk paradigm measuring congruent (auditory and visual stimuli match) and incongruent (auditory and visual stimuli do not match, creating a “fused” percept of a novel stimulus) AV speech perception. Overall, participants performed well above chance on both tasks. Voxel-based lesion symptom mapping (VLSM) across all 41 participants identified several regions as critical for speech perception depending on trial type: Heschl’s gyrus and the supramarginal gyrus were critical for auditory speech perception, the basal ganglia were critical for speech perception in AV congruent trials, and the middle temporal gyrus/STS were critical in AV incongruent trials. VLSM analyses of the AV incongruent trials were used to further clarify the origin of “errors”, i.e., lack of fusion. Auditory capture (auditory stimulus) responses were attributed to visual processing deficits caused by lesions in the posterior temporal lobe, whereas visual capture (visual stimulus) responses were attributed to lesions in the anterior temporal cortex, including the temporal pole, which is widely considered to be an amodal semantic hub. The implication of anterior temporal regions in AV integration is novel and warrants further study. The behavioral and VLSM results are discussed in relation to previous neuroimaging and case-study evidence; broadly, our findings coincide with previous work indicating that multisensory superior temporal cortex, not frontal motor circuits, is critical for AV integration.
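The core VLSM logic is a mass-univariate comparison: at each voxel, the behavioral scores of participants with a lesion there are compared against those without. A minimal sketch of that logic (real analyses add permutation-based correction for multiple comparisons and typically control for lesion volume; all names here are placeholders):

```python
import numpy as np
from scipy import stats

def vlsm(lesion_maps, scores, min_n=5):
    """Voxelwise lesion-symptom mapping via two-sample t-tests.

    lesion_maps: (n_subjects, n_voxels) binary array, 1 = lesioned voxel
    scores: (n_subjects,) behavioral scores (e.g., McGurk fusion rate)
    Returns a t-value per voxel (NaN where either group is too small).
    """
    t_map = np.full(lesion_maps.shape[1], np.nan)
    for v in range(lesion_maps.shape[1]):
        lesioned = lesion_maps[:, v] == 1
        if lesioned.sum() >= min_n and (~lesioned).sum() >= min_n:
            t_map[v], _ = stats.ttest_ind(scores[~lesioned], scores[lesioned])
    return t_map
```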
ContributorsCai, Julia (Author) / Rogalsky, Corianne (Thesis advisor) / Azuma, Tamiko (Committee member) / Liss, Julie (Committee member) / Arizona State University (Publisher)
Created2017
Description
Most theories of cognitive control assume that goal-directed behavior takes the form of a performance monitoring-executive function-action loop. Recent theories focus on how a single performance monitoring mechanism recruits executive function (dubbed single-process accounts). Namely, the conflict-monitoring hypothesis proposes that a single performance monitoring mechanism, housed in the anterior cingulate cortex, recruits executive functions for top-down control. This top-down control manifests as trial-to-trial micro-adjustments to the speed and accuracy of responses. If these effects are produced by a single performance monitoring mechanism, then the size of these sequential trial-to-trial effects should be correlated across tasks. To this end, we conducted a large-scale (N=125) individual differences experiment to examine whether two sequential effects, the Gratton effect and the error-related slowing effect, are correlated across a Simon, Flanker, and Stroop task. We find weak correlations for these effects across tasks, which is inconsistent with single-process accounts.
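As a sketch of how such a sequential effect can be quantified per subject, the Gratton effect is commonly computed as the reduction in the congruency effect (incongruent minus congruent RT) following incongruent versus congruent trials; per-subject values can then be correlated across the three tasks. A minimal illustration with hypothetical column names:

```python
import pandas as pd

def gratton_effect(trials):
    # trials: one subject's correct trials in order, with columns
    # 'rt' (ms) and 'congruent' (bool).
    prev = trials["congruent"].shift(1)
    cur = trials["congruent"]
    # Congruency effect after congruent vs. after incongruent trials.
    ce_after_c = (trials.loc[(prev == True) & ~cur, "rt"].mean()
                  - trials.loc[(prev == True) & cur, "rt"].mean())
    ce_after_i = (trials.loc[(prev == False) & ~cur, "rt"].mean()
                  - trials.loc[(prev == False) & cur, "rt"].mean())
    return ce_after_c - ce_after_i  # larger = stronger sequential modulation

# Hypothetical across-task consistency check: one Gratton effect per
# subject per task, then the between-task correlation matrix.
# effects = pd.DataFrame({task: [gratton_effect(t) for t in data[task]]
#                         for task in ["simon", "flanker", "stroop"]})
# print(effects.corr())
```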
ContributorsWhitehead, Peter Stefan Sekerere (Author) / Brewer, Gene (Thesis director) / Blais, Chris (Committee member) / Rogalsky, Corianne (Committee member) / Department of Psychology (Contributor) / School of Music (Contributor) / Barrett, The Honors College (Contributor)
Created2015-12