Description
It is commonly known that the left hemisphere (LH) of the brain is more efficient at processing verbal information than the right hemisphere (RH). One proposal suggests that hemispheric asymmetries in verbal processing are due in part to the efficient use of top-down mechanisms by the left hemisphere. Most evidence for this comes from hemispheric semantic priming; fewer studies have investigated verbal memory in the cerebral hemispheres. The goal of the current investigations was to examine how top-down mechanisms influence hemispheric asymmetries in verbal memory and to determine the specific nature of the hypothesized top-down mechanisms. Five experiments explored the influence of top-down mechanisms on hemispheric asymmetries in verbal memory. Experiments 1 and 2 used item-method directed forgetting to examine maintenance and inhibition mechanisms. In Experiment 1, participants were cued to remember or forget certain words, with cues presented either simultaneously with or after the target words. In Experiment 2, participants were again cued to remember or forget words, but each word was repeated once or four times. Experiments 3 and 4 examined the influence of cognitive load on hemispheric asymmetries in true and false memory: in Experiment 3, cognitive load was imposed during memory encoding, while in Experiment 4 it was imposed during memory retrieval. Finally, Experiment 5 investigated the association between controlled processing in hemispheric semantic priming and the top-down mechanisms used for hemispheric verbal memory. Across all experiments, divided visual field presentation was used to probe verbal memory in the cerebral hemispheres. The results revealed several important findings. First, top-down mechanisms in the LH primarily facilitate verbal processing, but they also operate in a domain-general manner in the face of increasing processing demands. Second, the evidence indicates that the RH uses top-down mechanisms minimally and processes verbal information in a more bottom-up manner. These data help clarify the nature of the top-down mechanisms used in hemispheric memory and language processing, and they build upon current theories that attempt to explain hemispheric asymmetries in language processing.
Contributors: Tat, Michael J (Author) / Azuma, Tamiko (Thesis advisor) / Goldinger, Stephen D (Committee member) / Liss, Julie M (Committee member) / Arizona State University (Publisher)
Created: 2013

Description
Two groups of cochlear implant (CI) listeners were tested for sound source localization and for speech recognition in complex listening environments. One group (n = 11) wore bilateral CIs and, potentially, had access to interaural level difference (ILD) cues but not interaural timing difference (ITD) cues. The second group (n = 12) wore a single CI and had low-frequency acoustic hearing both in the ear contralateral to the CI and in the implanted ear. These 'hearing preservation' listeners, potentially, had access to ITD cues but not to ILD cues. At issue in this dissertation was the value of the two types of information about sound sources, ITDs and ILDs, for localization and for speech perception when speech and noise sources were separated in space. For Experiment 1, normal-hearing (NH) listeners and the two groups of CI listeners were tested for sound source localization using a 13-loudspeaker array. The mean RMS localization error was 7 degrees for the NH listeners, 20 degrees for the bilateral CI listeners, and 23 degrees for the hearing preservation listeners. The scores for the two CI groups did not differ significantly; thus, both CI groups showed equivalent, but poorer than normal, localization. This outcome, obtained using filtered noise bands for the NH listeners, suggests that ILD and ITD cues can support equivalent levels of localization. For Experiment 2, the two groups of CI listeners were tested for speech recognition in noise when the noise sources and targets were spatially separated in a simulated 'restaurant' environment and in two versions of a 'cocktail party' environment. At issue was whether either CI group would show benefits from binaural hearing, i.e., better performance when the noise and targets were separated in space. Neither CI group showed spatial release from masking. However, both groups showed a significant binaural advantage (a combination of squelch and summation) that persisted when the target and noise were spatially separated, indicating the presence of some binaural processing or 'unmasking' of speech in noise. Finally, localization ability in Experiment 1 was not correlated with binaural advantage in Experiment 2.
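
The RMS error reported in Experiment 1 summarizes how far, on average, response azimuths fell from the target loudspeakers. A minimal sketch of how such a score is computed; the 13-speaker geometry, trial count, and response scatter below are illustrative assumptions, not values from the study:

```python
import numpy as np

def rms_localization_error(target_az, response_az):
    """RMS error, in degrees, between target and response azimuths.

    target_az, response_az: per-trial azimuths in degrees. The scoring
    details here are an assumption, not taken from the dissertation.
    """
    target_az = np.asarray(target_az, dtype=float)
    response_az = np.asarray(response_az, dtype=float)
    return float(np.sqrt(np.mean((response_az - target_az) ** 2)))

# Hypothetical trials: targets drawn from a 13-speaker arc spanning +/-90 deg.
speakers = np.linspace(-90, 90, 13)
rng = np.random.default_rng(0)
targets = rng.choice(speakers, size=50)
responses = targets + rng.normal(0, 20, size=50)  # ~20 deg scatter, like the CI groups
print(f"RMS error: {rms_localization_error(targets, responses):.1f} degrees")
```
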
Contributors: Loiselle, Louise (Author) / Dorman, Michael F. (Thesis advisor) / Yost, William A. (Thesis advisor) / Azuma, Tamiko (Committee member) / Liss, Julie (Committee member) / Arizona State University (Publisher)
Created: 2013

Description
A multitude of individuals across the globe suffer from hearing loss, and that number continues to grow. Cochlear implants, while having limitations, provide electrical input that enables users to "hear" and to interact more fully with their social environment. There has been a clinical shift toward bilateral implantation and toward bimodal fitting, in which a hearing aid is placed in the contralateral ear when residual hearing is present. However, there is potentially more to speech perception for bilateral and bimodal cochlear implant users than the electric and acoustic input received via these modalities. For normal-hearing listeners, vision plays a role in speech perception; Rosenblum (2005) points out that it is a key feature of an integrated perceptual process. Logically, cochlear implant users should also benefit from integrated visual input. The question is how, exactly, vision benefits bilateral and bimodal users. Eight bilateral and five bimodal participants received randomized experimental phrases, previously generated by Liss et al. (1998), in auditory and audiovisual conditions, and recorded their perception of the input. Data were then analyzed for percent words correct, consonant errors, and lexical boundary error types. Overall, vision improved speech perception for both bilateral and bimodal cochlear implant participants: each group showed a significant increase in percent words correct when visual input was added. With vision, bilateral participants reduced consonant place errors and made greater use of the syllabic stress cues that drive lexical segmentation. These results suggest that vision benefits bilateral cochlear implant users by granting access to place information and by augmenting cues for syllabic stress in the absence of acoustic input. Vision did not, however, provide the bimodal participants significantly increased access to place and stress cues, so the exact mechanism by which bimodal implant users improved speech perception with the addition of vision remains unknown. These results point to the complexities of audiovisual integration during speech perception and to the need for continued research on the benefit vision provides to bilateral and bimodal cochlear implant users.
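
Percent words correct is the primary intelligibility score above. A minimal sketch of one plausible scoring rule, assuming simple position-independent word matching; the actual scoring conventions of Liss et al. (1998) are more detailed, and the example phrase is hypothetical:

```python
def percent_words_correct(target: str, response: str) -> float:
    """Score a transcribed phrase as the percentage of target words reported.

    Position-independent matching is an illustrative assumption; published
    lexical-boundary-error analyses score word and syllable boundaries in
    more detail than this.
    """
    target_words = target.lower().split()
    response_words = response.lower().split()
    hits = 0
    for word in target_words:
        if word in response_words:
            response_words.remove(word)  # each response word credits one target word
            hits += 1
    return 100.0 * hits / len(target_words)

# Hypothetical target phrase and listener transcription.
print(percent_words_correct("amend estate for a tunic",
                            "a men the state for a tunic"))  # 60.0
```
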
Contributors: Ludwig, Cimarron (Author) / Liss, Julie (Thesis advisor) / Dorman, Michael (Committee member) / Azuma, Tamiko (Committee member) / Arizona State University (Publisher)
Created: 2015

Description
In the noise and commotion of daily life, people achieve effective communication partly because spoken messages are replete with redundant information. Listeners exploit available contextual, linguistic, phonemic, and prosodic cues to decipher degraded speech. When other cues are absent or ambiguous, phonemic and prosodic cues are particularly important because they help identify word boundaries, a process known as lexical segmentation. Individuals vary in the degree to which they rely on phonemic or prosodic cues for lexical segmentation in degraded conditions.

Deafened individuals who use a cochlear implant have diminished access to fine frequency information in the speech signal and consequently have difficulty perceiving phonemic and prosodic cues. Auditory training on phonemic elements improves word recognition for some listeners. Little is known, however, about the potential benefits of prosodic training, or about the degree to which individual differences in cue use affect outcomes.

The present study used simulated cochlear implant stimulation to examine the effects of phonemic and prosodic training on lexical segmentation. Participants completed targeted training with either phonemic or prosodic cues, and received passive exposure to the non-targeted cue. Results show that acuity to the targeted cue improved after training. In addition, both targeted attention and passive exposure to prosodic features led to increased use of these cues for lexical segmentation. Individual differences in degree and source of benefit point to the importance of personalizing clinical intervention to increase flexible use of a range of perceptual strategies for understanding speech.
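
The "simulated cochlear implant stimulation" above is commonly implemented as a noise-band vocoder, which preserves temporal envelopes while degrading the fine frequency information that real implants cannot transmit. A sketch under that assumption; the channel count, filter orders, and cutoff values are illustrative, not the dissertation's parameters:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(signal, fs, n_channels=8, lo=100.0, hi=7000.0, env_cutoff=160.0):
    """Noise-band vocoder, a common cochlear implant simulation.

    Analysis bands are spaced logarithmically between lo and hi Hz; the
    envelope of each band (rectified, then low-pass filtered) modulates a
    band-limited noise carrier. All parameter values are assumptions.
    """
    edges = np.geomspace(lo, hi, n_channels + 1)
    env_sos = butter(4, env_cutoff, btype="low", fs=fs, output="sos")
    rng = np.random.default_rng(0)
    out = np.zeros(len(signal))
    for f_lo, f_hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [f_lo, f_hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, signal)
        envelope = np.clip(sosfiltfilt(env_sos, np.abs(band)), 0, None)
        carrier = sosfiltfilt(band_sos, rng.standard_normal(len(signal)))
        out += envelope * carrier
    return out / np.max(np.abs(out))  # normalize to avoid clipping

# Hypothetical use: vocode one second of a synthetic, speech-like signal.
fs = 16000
t = np.arange(fs) / fs
speech_like = np.sin(2 * np.pi * 220 * t) * (1 + np.sin(2 * np.pi * 3 * t)) / 2
vocoded = noise_vocode(speech_like, fs)
```
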
Contributors: Helms Tillery, Augusta Katherine (Author) / Liss, Julie M. (Thesis advisor) / Azuma, Tamiko (Committee member) / Brown, Christopher A. (Committee member) / Dorman, Michael F. (Committee member) / Utianski, Rene L. (Committee member) / Arizona State University (Publisher)
Created: 2015

Description
Older adults often experience communication difficulties, including poorer comprehension of auditory speech when it contains complex sentence structures or occurs in noisy environments. Previous work has linked cognitive abilities, and the engagement of domain-general cognitive resources such as the cingulo-opercular and frontoparietal brain networks, to the processing of challenging speech. However, the degree to which these networks can support comprehension remains unclear. Furthermore, it is unknown how hearing loss relates to the cognitive resources recruited during challenging speech comprehension. This dissertation investigated how hearing, cognitive performance, and functional brain networks contribute to challenging auditory speech comprehension in older adults. Experiment 1 characterized how age and hearing loss modulate resting-state functional connectivity between Heschl’s gyrus and several sensory and cognitive brain networks. The results indicate that, compared to younger adults, older adults exhibit decreased functional connectivity between Heschl’s gyrus and sensory and attention networks. Within the older adults, greater hearing loss was associated with increased functional connectivity between right Heschl’s gyrus and the cingulo-opercular and language networks. Experiments 2 and 3 investigated how hearing, working memory, attentional control, and functional magnetic resonance imaging (fMRI) measures predict comprehension of complex sentence structures and of speech in noisy environments. Experiment 2 utilized resting-state fMRI and behavioral measures of working memory and attentional control. Experiment 3 used activation-based fMRI to examine the brain regions recruited, as a function of hearing and cognitive abilities, in response to sentences that both had complex structures and occurred in noisy background environments. The results suggest that working memory abilities and the functionality of the frontoparietal and language networks support the comprehension of speech in multi-speaker environments, whereas attentional control and the cingulo-opercular network support comprehension of complex sentence structures. Hearing loss was shown to decrease activation within right Heschl’s gyrus in response to all sentence conditions and to increase activation within frontoparietal and cingulo-opercular regions. Hearing loss was also associated with poorer sentence comprehension in energetic, but not informational, masking. Together, these three experiments identify the unique contributions of cognition and brain networks to challenging auditory speech comprehension in older adults, and further probe how hearing loss affects these relationships.
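
Resting-state functional connectivity between a seed region such as Heschl's gyrus and a network is typically quantified as a Fisher z-transformed Pearson correlation between mean time series. A generic sketch of that computation; the preprocessing pipeline, network definitions, and data below are assumptions, not the dissertation's:

```python
import numpy as np

def roi_network_connectivity(roi_ts, network_ts):
    """Fisher z-transformed Pearson correlations between an ROI time series
    (e.g., Heschl's gyrus) and each network's mean time series.

    roi_ts: (T,) array; network_ts: dict mapping name -> (T,) array.
    A generic seed-based sketch, not the dissertation's exact pipeline.
    """
    z = {}
    for name, ts in network_ts.items():
        r = np.corrcoef(roi_ts, ts)[0, 1]
        z[name] = np.arctanh(r)  # Fisher z, the usual scale for group statistics
    return z

# Hypothetical data: 200 volumes, three networks named after those in the text.
rng = np.random.default_rng(1)
heschl = rng.standard_normal(200)
networks = {name: heschl * w + rng.standard_normal(200)
            for name, w in [("cingulo-opercular", 0.5),
                            ("frontoparietal", 0.3),
                            ("language", 0.4)]}
print(roi_network_connectivity(heschl, networks))
```
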
Contributors: Fitzhugh, Megan (Author) / (Reddy) Rogalsky, Corianne (Thesis advisor) / Baxter, Leslie C (Thesis advisor) / Azuma, Tamiko (Committee member) / Braden, Blair (Committee member) / Arizona State University (Publisher)
Created: 2019

Description
With a growing number of adults with autism spectrum disorder (ASD), more and more research has been conducted on majority-male cohorts with ASD across young, adolescent, and, to a lesser extent, older ages. Currently, males make up the majority of individuals diagnosed with ASD; however, recent research suggests that the gender gap is closing due to more advanced screening and a better understanding of how females with ASD present their symptoms. Little research has been published on the neurocognitive differences between older adults with ASD and neurotypical (NT) counterparts, and none has specifically addressed older women with ASD. This study utilized neuroimaging and neuropsychological tests to examine differences by diagnosis and sex across four distinct groups: older men with ASD, older women with ASD, older NT men, and older NT women. In each group, hippocampal size (measured via FreeSurfer) was analyzed for group differences as well as for correlations with neuropsychological tests. Participants (ASD female, n = 12; NT female, n = 14; ASD male, n = 30; NT male, n = 22) were similar in age, IQ, and education. The results indicated that the ASD group as a whole performed worse than the NT group on executive functioning tasks (Wisconsin Card Sorting Test [WCST], Trail Making Test) and memory-related tasks (Rey Auditory Verbal Learning Test [AVLT], Wechsler Memory Scale: Visual Reproduction). Sex-by-diagnosis interactions approached significance only for WCST non-perseverative errors, with women with ASD performing worse than NT women but no corresponding difference between the male groups. Effect sizes between the female groups (ASD female vs. NT female) were more than double those between the male groups (ASD male vs. NT male) for all WCST and AVLT measures. Participants with ASD had significantly smaller right hippocampal volumes than NT participants. In addition, older women showed larger hippocampal volumes than older men when volumes were corrected for total intracranial volume (TIV). Overall, only NT females showed significant correlations between hippocampal volumes and performance across all neuropsychological tests. These results suggest a tighter coupling between hippocampal size and cognition in NT females than in NT males or in either sex with ASD. This study promotes further understanding of the neuropsychological differences between older men and women, both with and without ASD. Further research is needed with larger samples of older women with and without ASD.
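
Hippocampal volumes above were corrected for total intracranial volume (TIV). One common approach, assumed here since the abstract does not name the method, is a residual-based correction that removes the linear relation between regional volume and head size:

```python
import numpy as np

def tiv_corrected_volume(volume, tiv):
    """Residual-based head-size correction for regional brain volumes.

    Regress volume on TIV across the sample and subtract the TIV-predicted
    component, so corrected values are uncorrelated with head size. Whether
    the dissertation used this residual method or a simple volume/TIV ratio
    is an assumption.
    """
    volume = np.asarray(volume, dtype=float)
    tiv = np.asarray(tiv, dtype=float)
    slope = np.polyfit(tiv, volume, 1)[0]
    return volume - slope * (tiv - tiv.mean())

# Hypothetical sample: larger heads tend to have larger hippocampi.
rng = np.random.default_rng(4)
tiv = rng.normal(1500, 150, 40)                              # cm^3
hippocampus = 2.5 + 0.001 * tiv + rng.normal(0, 0.2, 40)     # cm^3
adjusted = tiv_corrected_volume(hippocampus, tiv)
print(round(float(np.corrcoef(adjusted, tiv)[0, 1]), 3))     # ~0 after correction
```
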
Contributors: Webb, Christen Len (Author) / Braden, B. Blair (Thesis advisor) / Azuma, Tamiko (Committee member) / Dixon, Maria (Committee member) / Arizona State University (Publisher)
Created: 2019

Description
An increasing number of military veterans are enrolling in college, primarily due to the Post-9/11 GI Bill, which provides educational benefits to veterans who served on active duty since September 11, 2001. With rigorous training, active combat, and exposure to unexpected situations, the veteran population is at a higher risk for traumatic brain injury (TBI), post-traumatic stress disorder (PTSD), and depression. All of these conditions are associated with cognitive consequences, including attention deficits, working memory problems, and episodic memory impairments. Some conditions, particularly mild TBI, are not diagnosed or treated until long after the injury, when the person realizes they have cognitive difficulties. Even mild cognitive problems can hinder learning in an academic setting, yet there is little data on the frequency and severity of cognitive deficits in veteran college students. The current study examines self-reported cognitive symptoms in veteran students compared to civilian students, and how those symptoms relate to service-related conditions. A better understanding of the pattern of self-reported symptoms will help researchers and clinicians identify the veterans who are at higher risk for cognitive and academic difficulties.
Contributors: Allen, Kelly Anne (Author) / Azuma, Tamiko (Thesis director) / Gallagher, Karen (Committee member) / Department of Speech and Hearing Science (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05

Description
The rise in traumatic brain injury (TBI) cases in recent wars has increased the urgency of research on how veterans are affected by TBIs. The purpose of this study was to evaluate the effects of TBI on speech recognition in noise. The AzBio Sentence Test was completed at signal-to-noise ratios (S/N) from -10 dB to +15 dB by a control group of ten participants and by one US military veteran with a history of service-connected TBI. All participants had normal hearing sensitivity, defined as thresholds of 20 dB or better at frequencies from 250 to 8000 Hz, in addition to tympanograms within normal limits. Comparison of the control group's data with the veteran's suggested that the veteran performed worse than the majority of the control group on the AzBio Sentence Test. Further research with more participants would be beneficial to our understanding of how veterans with TBI perform on speech recognition tests in the presence of background noise.
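
Presenting sentences at fixed S/N values from -10 to +15 dB amounts to scaling the noise relative to the speech before mixing. A minimal sketch of that scaling, based on RMS levels; the signals below are stand-ins, and the AzBio test's own calibration procedure may differ:

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale noise so the speech-to-noise ratio equals snr_db, then mix.

    SNR is computed from RMS levels: SNR_dB = 20*log10(rms_speech/rms_noise).
    A generic sketch, not the AzBio test's calibration procedure.
    """
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    noise = noise[:len(speech)]                    # trim noise to speech length
    gain = rms(speech) / (rms(noise) * 10 ** (snr_db / 20.0))
    return speech + gain * noise

# Hypothetical signals at each S/N used in the study, -10 to +15 dB.
fs = 16000
rng = np.random.default_rng(2)
speech = rng.standard_normal(fs)                   # stand-in for a recorded sentence
noise = rng.standard_normal(2 * fs)
for snr in range(-10, 16, 5):
    mixed = mix_at_snr(speech, noise, snr)
    residual = mixed - speech                      # the scaled noise, for verification
    realized = 20 * np.log10(np.sqrt(np.mean(speech**2)) / np.sqrt(np.mean(residual**2)))
    print(snr, round(realized, 1))                 # realized SNR matches the target
```
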
Contributors: Corvasce, Erica Marie (Author) / Peterson, Kathleen (Thesis director) / Williams, Erica (Committee member) / Azuma, Tamiko (Committee member) / Barrett, The Honors College (Contributor) / Department of Speech and Hearing Science (Contributor)
Created: 2015-05

Description
Working memory and other cognitive functions contribute to speech recognition in normal-hearing and hearing-impaired listeners. In this study, auditory and cognitive functions are measured in young normal-hearing adults, elderly normal-hearing adults, and elderly cochlear implant users. The effects of age and hearing on the different measures are investigated, and the correlations between auditory/cognitive functions and speech/music recognition are examined. The results may indicate which factors best explain the variable performance across elderly cochlear implant users.
Contributors: Kolberg, Courtney Elizabeth (Author) / Luo, Xin (Thesis director) / Azuma, Tamiko (Committee member) / Department of Speech and Hearing Science (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05

Description
Audiovisual (AV) integration is a fundamental component of face-to-face communication. Visual cues generally aid auditory comprehension of communicative intent through our innate ability to “fuse” auditory and visual information. However, our ability for multisensory integration can be affected by damage to the brain. Previous neuroimaging studies have indicated the superior temporal sulcus (STS) as the center of AV integration, while others point to inferior frontal and motor regions; however, few studies have analyzed the effect of stroke or other brain damage on multisensory integration in humans. The present study examines the effect of lesion location on auditory and AV speech perception through behavioral and structural imaging methodologies in 41 participants with chronic focal left-hemisphere damage. Participants completed two behavioral tasks of speech perception: an auditory speech perception task and a classic McGurk paradigm measuring congruent (auditory and visual stimuli match) and incongruent (auditory and visual stimuli do not match, creating a “fused” percept of a novel stimulus) AV speech perception. Overall, participants performed well above chance on both tasks. Voxel-based lesion-symptom mapping (VLSM) across all 41 participants identified several regions as critical for speech perception, depending on trial type: Heschl’s gyrus and the supramarginal gyrus were critical for auditory speech perception, the basal ganglia were critical for speech perception in AV congruent trials, and the middle temporal gyrus/STS were critical in AV incongruent trials. VLSM analyses of the AV incongruent trials were used to further clarify the origin of “errors”, i.e., lack of fusion. Auditory capture (auditory stimulus) responses were attributed to visual processing deficits caused by lesions in the posterior temporal lobe, whereas visual capture (visual stimulus) responses were attributed to lesions in the anterior temporal cortex, including the temporal pole, which is widely considered to be an amodal semantic hub. The implication of anterior temporal regions in AV integration is novel and warrants further study. The behavioral and VLSM results are discussed in relation to previous neuroimaging and case-study evidence; broadly, our findings coincide with previous work indicating that multisensory superior temporal cortex, not frontal motor circuits, is critical for AV integration.
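
VLSM tests, voxel by voxel, whether participants with a lesion at that voxel score worse than those without one. A bare-bones sketch of the core comparison; real VLSM pipelines add lesion registration and multiple-comparison correction (e.g., permutation thresholds), and all data below are hypothetical:

```python
import numpy as np
from scipy import stats

def vlsm_t_map(lesion_masks, scores, min_lesioned=5):
    """Voxel-based lesion-symptom mapping via independent-samples t-tests.

    lesion_masks: (n_subjects, n_voxels) binary array; scores: (n_subjects,)
    behavioral scores. At each voxel with at least `min_lesioned` lesioned
    subjects, compare scores of lesioned vs. spared groups. A sketch only:
    it omits the corrections a published VLSM analysis would include.
    """
    n_subjects, n_voxels = lesion_masks.shape
    t_map = np.full(n_voxels, np.nan)
    for v in range(n_voxels):
        lesioned = scores[lesion_masks[:, v] == 1]
        spared = scores[lesion_masks[:, v] == 0]
        if len(lesioned) >= min_lesioned and len(spared) >= min_lesioned:
            # positive t: lesioned subjects score lower, implicating the voxel
            t_map[v], _ = stats.ttest_ind(spared, lesioned)
    return t_map

# Hypothetical data: 41 subjects, 1000 voxels, one voxel that truly matters.
rng = np.random.default_rng(3)
masks = rng.random((41, 1000)) < 0.2
scores = rng.normal(80, 10, 41) - 15 * masks[:, 100]
print(np.nanargmax(vlsm_t_map(masks, scores)))   # recovers voxel ~100
```
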
Contributors: Cai, Julia (Author) / Rogalsky, Corianne (Thesis advisor) / Azuma, Tamiko (Committee member) / Liss, Julie (Committee member) / Arizona State University (Publisher)
Created: 2017