Matching Items (70)
Description
Language and music are fundamentally entwined within human culture. The two domains share similar properties, including rhythm, acoustic complexity, and hierarchical structure. Although language and music have commonalities, abilities in these two domains have been found to dissociate after brain damage, leaving unanswered questions about their interconnectedness, including whether one domain can support the other when damage occurs. Evidence bearing on this question exists for speech production: musical pitch and rhythm are employed in Melodic Intonation Therapy to improve expressive language recovery, but little is known about the effects of music on the recovery of speech perception and receptive language. This research is one of the first to address the effects of music on speech perception. Two groups of participants, an older adult group (n=24; M = 71.63 yrs) and a younger adult group (n=50; M = 21.88 yrs), took part in the study. A native female speaker of Standard American English created four types of stimuli: pseudoword sentences of normal speech, simultaneous music-speech, rhythmic speech, and music-primed speech. The stimuli were presented binaurally, and participants were instructed to repeat what they heard following a 15-second delay. Results were analyzed using standard parametric techniques. Musical priming of speech, but not simultaneous synchronized music and speech, facilitated speech perception in both the younger and older adult groups; this effect may be driven by rhythmic information. The younger adults outperformed the older adults in all conditions. Because the speech perception task relied heavily on working memory, and working memory is known to decline with age, participants also completed a working memory task to be used as a covariate in analyses of differences across stimulus types and age groups. Working memory ability correlated with speech perception performance, but the age-related performance differences remained significant once working memory differences were taken into account. These results provide new avenues for facilitating speech perception in stroke patients and shed light on the underlying mechanisms of Melodic Intonation Therapy for speech production.
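
The abstract does not name the specific model, but an analysis of stimulus-type and age-group differences with working memory as a covariate maps naturally onto a linear mixed model with a per-participant random intercept. Below is a minimal sketch of that approach; the file and column names (`participant`, `age_group`, `stimulus_type`, `wm_score`, `accuracy`) are illustrative assumptions, not details from the study.

```python
# Hedged sketch: covariate-adjusted analysis of repeated-measures
# speech perception scores. Assumes a hypothetical long-format table
# with one row per participant x stimulus type.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("speech_perception_long.csv")  # hypothetical file

# Random intercept per participant accounts for the repeated measures;
# the working memory score enters as a covariate.
model = smf.mixedlm(
    "accuracy ~ C(stimulus_type) * C(age_group) + wm_score",
    data=df,
    groups=df["participant"],
)
print(model.fit().summary())
```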
ContributorsLaCroix, Arianna (Author) / Rogalsky, Corianne (Thesis advisor) / Gray, Shelley (Committee member) / Liss, Julie (Committee member) / Arizona State University (Publisher)
Created2015
Description
The activation of the primary motor cortex (M1) is common in speech perception tasks that involve difficult listening conditions. Although the challenge of recognizing and discriminating non-native speech sounds appears to be an instantiation of listening under difficult circumstances, it is still unknown whether M1 recruitment facilitates second-language speech perception. The purpose of this study was to investigate the role of M1 regions associated with speech motor centers in processing acoustic input in the native (L1) and second (L2) language, using repetitive transcranial magnetic stimulation (rTMS) to selectively alter neural activity in M1. Thirty-six healthy English/Spanish bilingual subjects participated in the experiment. Performance on a listening word-to-picture matching task was measured before and after real and sham rTMS to the M1 representation of the orbicularis oris (lip) muscle. Vowel space area (VSA), obtained from recordings of participants reading a passage in the L2 before and after real rTMS, was calculated to determine its utility as an rTMS aftereffect measure. There was high variability in the aftereffect of the rTMS protocol to the lip muscle among the participants. Approximately 50% of participants showed an inhibitory effect of rTMS, evidenced by smaller motor evoked potential (MEP) areas, whereas the other 50% showed a facilitatory effect, with larger MEPs. This suggests that rTMS has a complex influence on M1 excitability, and relying on grand-average results can obscure important individual differences in rTMS physiological and functional outcomes. Evidence of motor support for word recognition in the L2 was found. Participants showing an inhibitory aftereffect of rTMS on M1 produced slower and less accurate responses in the L2 task, whereas those showing a facilitatory aftereffect produced more accurate responses in the L2. In contrast, no effect of rTMS was found on the L1, where accuracy and speed were very similar after sham and real rTMS. The L2 VSA measure was indicative of the aftereffect of rTMS to M1 associated with speech production, supporting its utility as an rTMS aftereffect measure. This result revealed an interesting and novel relation between cerebral motor cortex activation and speech measures.
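
The VSA computation is not spelled out in the abstract; conventionally, vowel space area is the area of the quadrilateral whose vertices are the speaker's mean (F1, F2) values for the corner vowels, computed with the shoelace formula. A minimal sketch under that assumption, with illustrative formant values rather than data from the study:

```python
# Conventional vowel space area (VSA) sketch: the shoelace formula
# applied to mean (F1, F2) values of the corner vowels, listed in
# order around the quadrilateral. Formant values are illustrative.
def polygon_area(vertices):
    """Shoelace formula; vertices must be in polygon order."""
    area = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# Corner vowels as (F1, F2) in Hz, ordered /i/ -> /ae/ -> /a/ -> /u/.
corners = [(300, 2300), (700, 1800), (750, 1100), (350, 900)]
print(f"VSA = {polygon_area(corners):.0f} Hz^2")
```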
ContributorsBarragan, Beatriz (Author) / Liss, Julie (Thesis advisor) / Berisha, Visar (Committee member) / Rogalsky, Corianne (Committee member) / Restrepo, Adelaida (Committee member) / Arizona State University (Publisher)
Created2018
Description
Cognitive deficits often accompany language impairments post-stroke. Past research has focused on working memory in aphasia, but attention is largely underexplored. Therefore, this dissertation first quantifies attention deficits post-stroke before investigating whether preserved cognitive abilities, including attention, can improve auditory sentence comprehension. In Experiment 1a, three components of attention (alerting, orienting, executive control) were measured in persons with aphasia and matched controls using visual and auditory versions of the well-studied Attention Network Test. Experiment 1b then explored the neural resources supporting each component of attention in the visual and auditory modalities in chronic stroke participants. The results from Experiment 1a indicate that alerting, orienting, and executive control are each uniquely affected by presentation modality. The lesion-symptom mapping results from Experiment 1b associated the left angular gyrus with visual executive control, the left supramarginal gyrus with auditory alerting, and Broca’s area (pars opercularis) with auditory orienting attention post-stroke. Overall, these findings indicate that perceptual modality may impact the lateralization of some aspects of attention; thus, auditory attention may be more susceptible to impairment after a left-hemisphere stroke.
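
The scoring of the Attention Network Test is standard (Fan et al., 2002): each network score is a subtraction between cue or flanker conditions. A sketch of those subtractions follows, using hypothetical mean reaction times rather than values from this dissertation:

```python
# Standard Attention Network Test (ANT) subtractions (Fan et al., 2002).
# Mean reaction times (ms) per condition are hypothetical placeholders.
mean_rt = {
    "no_cue": 620.0, "double_cue": 580.0,
    "center_cue": 600.0, "spatial_cue": 560.0,
    "congruent": 555.0, "incongruent": 675.0,
}

alerting = mean_rt["no_cue"] - mean_rt["double_cue"]        # benefit of a warning cue
orienting = mean_rt["center_cue"] - mean_rt["spatial_cue"]  # benefit of spatial information
executive = mean_rt["incongruent"] - mean_rt["congruent"]   # cost of flanker conflict

print(f"alerting={alerting:.0f} ms, orienting={orienting:.0f} ms, "
      f"executive={executive:.0f} ms")
```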

Prosody, the rhythm and pitch changes associated with spoken language, may improve spoken language comprehension in persons with aphasia by recruiting intact cognitive abilities (e.g., attention and working memory) and their associated non-lesioned brain regions post-stroke. Therefore, Experiment 2 explored the relationship between cognition, two unique prosody manipulations, lesion location, and auditory sentence comprehension in persons with chronic stroke and matched controls. The combined results from Experiments 2a and 2b indicate that stroke participants with better auditory orienting attention and a specific left fronto-parietal network intact had greater comprehension of sentences spoken with sentence prosody. Participants with deficits in auditory executive control and/or short-term memory, but with the left angular gyrus and globus pallidus relatively intact, demonstrated better comprehension of sentences spoken with list prosody. Overall, the results from Experiment 2 indicate that following a left-hemisphere stroke, individuals need good auditory attention and an intact left fronto-parietal network to benefit from typical sentence prosody; when cognitive deficits are present and this fronto-parietal network is damaged, list prosody may be more beneficial.
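
The lesion-symptom mapping procedure is not detailed in the abstract; in the common voxel-based approach, each voxel receives a test of whether participants lesioned there score differently from those spared there. A toy sketch under that assumption (binary lesion masks and random placeholder data; real analyses add lesion-volume control and multiple-comparison correction):

```python
# Toy voxel-based lesion-symptom mapping: at each voxel, compare
# behavioral scores of participants whose lesion covers the voxel
# against those whose lesion does not. Data are random placeholders.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_subjects, n_voxels = 30, 1000
lesion = rng.integers(0, 2, size=(n_subjects, n_voxels))  # binary lesion masks
behavior = rng.normal(size=n_subjects)                    # e.g., comprehension scores

t_map = np.full(n_voxels, np.nan)
for v in range(n_voxels):
    lesioned, spared = behavior[lesion[:, v] == 1], behavior[lesion[:, v] == 0]
    if len(lesioned) >= 5 and len(spared) >= 5:  # minimum-overlap criterion
        t_map[v], _ = ttest_ind(spared, lesioned)

print(f"voxels tested: {int(np.sum(~np.isnan(t_map)))}")
```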
ContributorsLaCroix, Arianna (Author) / Rogalsky, Corianne (Thesis advisor) / Azuma, Tamiko (Committee member) / Braden, B. Blair (Committee member) / Liss, Julie (Committee member) / Arizona State University (Publisher)
Created2019
Description
22q11.2 Deletion Syndrome (22q11.2DS) is one of the most frequent chromosomal microdeletion syndromes in humans. This case study focuses on the language and reading profile of a female adult with 22q11.2DS who was undiagnosed until the age of 27. To comprehensively describe the participant's profile, a series of assessment measures was administered across the speech, language, cognition, reading, and motor domains. Understanding how 22q11.2DS has impacted the life of a recently diagnosed adult will provide insight into how best to facilitate long-term language and educational support for this population and inform future research.
ContributorsPhilp, Jennifer Lynn (Author) / Scherer, Nancy (Thesis director) / Peter, Beate (Committee member) / Department of Speech and Hearing Science (Contributor) / Sanford School of Social and Family Dynamics (Contributor) / Barrett, The Honors College (Contributor)
Created2018-05
Description
The purpose of the present study was to determine whether vocabulary knowledge is related to degree of hearing loss. The Test of Contemporary Vocabulary (TCV), a 50-question multiple-choice vocabulary test composed of words with old and new definitions, was developed for this study and administered to 43 adults with hearing loss (19 to 92 years old) and 51 adults with normal hearing (20 to 40 years old). Degree of hearing loss ranged from mild to moderately severe as determined by bilateral pure-tone thresholds. Education levels ranged from some high school to graduate degrees. It was predicted that knowledge of new words would decrease with increasing hearing loss, whereas knowledge of old words would be unaffected. The vocabulary scores were subjected to a repeated-measures ANOVA with definition type (old and new) as the within-subjects factor, hearing level and education as between-subjects factors, and age as a covariate. The results revealed no main effect of age or education level, but a significant main effect of hearing level: performance for new words decreased significantly as degree of hearing loss increased, whereas no similar effect was observed for old words. These results indicate that knowledge of new definitions is inversely related to degree of hearing loss.
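
The design described above (within-subjects definition type crossed with between-subjects hearing level) maps onto a mixed-design ANOVA. A minimal sketch with pingouin follows; the file and column names are assumptions, and because pingouin's `mixed_anova` takes no covariate argument, the age covariate reported in the study would need a different model (e.g., a linear mixed model):

```python
# Hedged sketch of the within x between design: definition type
# (old vs. new) repeated within subjects, hearing level between
# subjects. Column and file names are illustrative assumptions.
import pandas as pd
import pingouin as pg

df = pd.read_csv("tcv_scores_long.csv")  # hypothetical long-format file

aov = pg.mixed_anova(
    data=df,
    dv="score",
    within="def_type",        # old vs. new definitions
    subject="subject",
    between="hearing_level",  # e.g., normal / mild / moderately severe
)
print(aov.round(3))
```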
ContributorsMarzan, Nicole Ann (Author) / Pittman, Andrea (Thesis director) / Azuma, Tamiko (Committee member) / Wexler, Kathryn (Committee member) / Department of Speech and Hearing Science (Contributor) / Barrett, The Honors College (Contributor)
Created2018-05
Description
Language acquisition is a phenomenon we all experience, and though it is well studied, many questions remain regarding the neural bases of language. In hearing speakers and Deaf signers alike, spoken and signed language acquisition (with eventual proficiency) develop similarly and share common neural networks. While signed and spoken language engage completely different sensory modalities (visual-manual versus the more common auditory-oromotor), both share grammatical structures and contain syntactic intricacies innate to all languages. Thus, studies of multi-modal bilingualism (e.g., a native English speaker learning American Sign Language) can lead to a better understanding of the neurobiology of second language acquisition, and of language more broadly. For example, can the well-developed visual-spatial processing networks in English speakers support grammatical processing in sign language, which relies heavily on location and movement? The present study furthers the understanding of the neural correlates of second language acquisition by studying normal-hearing late L2 learners of American Sign Language (ASL). Twenty English-speaking ASU students enrolled in advanced American Sign Language coursework participated in our functional magnetic resonance imaging (fMRI) study. The aim was to identify the brain networks engaged in syntactic processing of ASL sentences in late L2 ASL learners. While many studies have addressed the neurobiology of acquiring a second spoken language, no previous study to our knowledge has examined the brain networks supporting syntactic processing in bimodal bilinguals. We examined the brain networks engaged while perceiving ASL sentences compared to ASL word lists, as well as written English sentences and word lists. We hypothesized that our findings in late bimodal bilinguals would largely coincide with the unimodal bilingual literature, but with a few notable differences, including additional attention networks being engaged by ASL processing. Our results suggest a high degree of overlap in sentence processing networks for ASL and English, along with important differences in the recruitment of speech comprehension, visual-spatial, and domain-general brain networks. Our findings suggest that well-known sentence comprehension and syntactic processing regions for spoken languages are flexible and modality-independent.
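
The abstract does not describe the fMRI model, but a standard way to isolate sentence-level processing is a first-level GLM contrasting sentences against word lists. A hedged sketch with nilearn; the file name, TR, onsets, and condition labels are assumptions for illustration, not details from the thesis:

```python
# Sketch of a first-level GLM contrast: ASL sentences minus ASL word
# lists. File names, TR, onsets, and condition labels are hypothetical.
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

events = pd.DataFrame({
    "onset":      [0.0, 12.0, 24.0, 36.0],   # seconds
    "duration":   [8.0, 8.0, 8.0, 8.0],
    "trial_type": ["asl_sentence", "asl_wordlist",
                   "asl_sentence", "asl_wordlist"],
})

model = FirstLevelModel(t_r=2.0, noise_model="ar1")
model = model.fit("sub01_preprocessed_bold.nii.gz", events=events)

z_map = model.compute_contrast("asl_sentence - asl_wordlist",
                               output_type="z_score")
z_map.to_filename("asl_sentence_vs_wordlist_zmap.nii.gz")
```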
ContributorsMickelsen, Soren Brooks (Co-author) / Johnson, Lisa (Co-author) / Rogalsky, Corianne (Thesis director) / Azuma, Tamiko (Committee member) / Howard, Pamela (Committee member) / Department of Speech and Hearing Science (Contributor) / School of Human Evolution and Social Change (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05
Description
An increasing number of military veterans are enrolling in college, primarily due to the Post-9/11 GI Bill, which provides educational benefits to veterans who served on active duty since September 11, 2001. With rigorous training, active combat situations, and exposure to unexpected situations, the veteran population is at a higher risk for traumatic brain injury (TBI), post-traumatic stress disorder (PTSD), and depression. All of these conditions are associated with cognitive consequences, including attention deficits, working memory problems, and episodic memory impairments. Some conditions, particularly mild TBI, are not diagnosed or treated until long after the injury, when the person realizes they have cognitive difficulties. Even mild cognitive problems can hinder learning in an academic setting, but there is little data on the frequency and severity of cognitive deficits in veteran college students. The current study examines self-reported cognitive symptoms in veteran students compared to civilian students, and how those symptoms relate to service-related conditions. A better understanding of the pattern of self-reported symptoms will help researchers and clinicians identify the veterans who are at higher risk for cognitive and academic difficulties.
ContributorsAllen, Kelly Anne (Author) / Azuma, Tamiko (Thesis director) / Gallagher, Karen (Committee member) / Department of Speech and Hearing Science (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05
Description
The purpose of this study was to examine swallowing patterns using ultrasound technology subsequent to the implementation of two therapeutic interventions. Baseline swallow patterns were compared to swallows after implementation of therapeutic interventions common to both feeding therapy (FT) and orofacial myofunctional therapy (OMT): stimulation of the tongue with a Z-Vibe, and tongue pops. Changes in swallowing patterns are described, and similarities of interventions across the two professions are discussed. Ultrasound research on swallowing is sparse, despite its potential clinical application in both professions. This study outlines a protocol for the use of a hand-held ultrasound probe and reinforces a particular protocol described in the literature. Real-time ultrasound recordings of swallows were made for 19 adult female subjects. Participants with orofacial myofunctional disorder are compared to a group with typical swallowing, and differences in swallowing patterns are described. Three stages of the oral phase of the swallow were assigned based on ultrasonic observation of tongue shape. Analysis involves the total duration of the swallow, the duration of each of the three stages relative to the total duration, and the number of swallows required to clear the bolus from the oral cavity. No significant effects of either intervention were found, though swallowing patterns showed a general trend toward shorter total duration after each intervention. An unexpected finding was a significant change in the relationship between the bolus preparation stage and the bolus transportation stage when comparing the group classified as having a single swallow with the group classified as having multiple swallows.
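
The duration analysis above reduces to simple bookkeeping: each stage's duration expressed as a proportion of the total swallow duration. A small sketch of that computation, with hypothetical per-swallow timings rather than study data:

```python
# Stage durations as proportions of total swallow duration.
# Timings (seconds) are hypothetical placeholders.
import pandas as pd

swallows = pd.DataFrame({
    "preparation":    [0.40, 0.55],  # bolus preparation stage
    "transportation": [0.30, 0.25],  # bolus transportation stage
    "final":          [0.50, 0.45],  # third oral-phase stage
})
swallows["total"] = swallows.sum(axis=1)
for stage in ("preparation", "transportation", "final"):
    swallows[f"{stage}_prop"] = swallows[stage] / swallows["total"]
print(swallows.round(2))
```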
ContributorsMckay, Michelle Diane (Author) / Weinhold, Juliet (Thesis director) / Scherer, Nancy (Committee member) / Department of Speech and Hearing Science (Contributor) / Sanford School of Social and Family Dynamics (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05
Description
This study evaluated whether the Story Champs intervention is effective for bilingual kindergarten children who speak Spanish as their native language. Previous research by Spencer and Slocum (2010) found that monolingual, English-speaking participants made significant gains in narrative retelling after intervention. The present study implemented the intervention in two languages and examined its effects after ten sessions. Results indicate that some children benefited from the intervention, with variability across languages.
ContributorsFernandez, Olga E (Author) / Restrepo, Laida (Thesis director) / Mesa, Carol (Committee member) / Barrett, The Honors College (Contributor) / Department of Speech and Hearing Science (Contributor) / School of International Letters and Cultures (Contributor)
Created2014-05
Description
Early childhood language environment has an important effect on developmental language outcomes. Intervention and parent training for children who have speech and language delays often focus on the implementation of strategies designed to enhance the language environment. With quantitative information on different aspects of the language environment, intervention and parent training can be better tailored to the needs of each child and made easier for parents to implement. This study uses the Language ENvironment Analysis (LENA) system to explore differences in language environment across participants, settings (toddler group and home), and activities (general, outside, and organized playtime, story time, meal time, naptime, transition, public outside visits, travel time, TV time, personal care, and other). Participants were five children, ages 20-35 months, who had speech and language delays. The children wore the LENA recorder for one day, and adult words addressed to the child, child vocalizations, and conversational turn-taking were analyzed for each activity and setting. We found that general and outside playtime, meal time, and personal care consistently resulted in high levels of child vocalization across participants, whereas structured play and story time did not. We also found that, for some children, there were differences in the quantity of adult language addressed to the child between the group and home settings. These findings have implications for training parents to provide language-rich environments for their child.
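
LENA exports are essentially time-stamped counts, so comparing activities and settings reduces to a grouped aggregation. A sketch over a hypothetical export using LENA's standard measures (AWC, adult word count; CVC, child vocalization count; CTC, conversational turn count); the file and column names are assumptions:

```python
# Grouped aggregation of a hypothetical LENA export: one row per
# recording segment, with standard LENA counts per segment.
import pandas as pd

df = pd.read_csv("lena_segments.csv")  # hypothetical export

summary = (
    df.groupby(["participant", "setting", "activity"])[["AWC", "CVC", "CTC"]]
      .sum()
      .reset_index()
)

# Activities with the most child vocalizations for each participant:
top = (summary.sort_values("CVC", ascending=False)
              .groupby("participant")
              .head(3))
print(top)
```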
ContributorsGlavee, Kelsey Marie (Author) / Scherer, Nancy (Thesis director) / Greer, Dawn (Committee member) / Bacon, Cathy (Committee member) / Barrett, The Honors College (Contributor) / Department of Speech and Hearing Science (Contributor) / Department of Psychology (Contributor)
Created2015-05