This collection includes both ASU Theses and Dissertations, submitted by graduate students, and Barrett, The Honors College theses submitted by undergraduate students.


Description
When people look for things in their environment, they use a target template - a mental representation of the object they are attempting to locate - to guide their attention around a scene and to assess incoming visual input to determine whether they have found what they are searching for. However, unlike in laboratory experiments, searchers in the real world rarely have perfect knowledge of their target's appearance. In five experiments (with nearly 1,000 participants), we examined how the precision of the observer's template affects the ability to conduct visual search. Specifically, we simulated template imprecision in two ways: first, by contaminating our searchers' templates with inaccurate features, and second, by introducing extraneous, unhelpful features to the template. In those experiments we recorded the eye movements of our searchers in order to make inferences regarding the extent to which attentional guidance and decision-making are hindered by template imprecision. We also examined a third way in which templates may become imprecise; namely, that they may deteriorate over time. Overall, our findings support a dual-function theory of the target template and highlight the importance of examining template precision in future research.
Contributors: Hout, Michael C (Author) / Goldinger, Stephen D (Thesis advisor) / Azuma, Tamiko (Committee member) / Homa, Donald (Committee member) / Reichle, Erik (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Two groups of cochlear implant (CI) listeners were tested for sound source localization and for speech recognition in complex listening environments. One group (n=11) wore bilateral CIs and, potentially, had access to interaural level difference (ILD) cues but not interaural timing difference (ITD) cues. The second group (n=12) wore a single CI and had low-frequency acoustic hearing both in the ear contralateral to the CI and in the implanted ear. These 'hearing preservation' listeners, potentially, had access to ITD cues but not to ILD cues. At issue in this dissertation was the value of the two types of information about sound sources, ITDs and ILDs, for localization and for speech perception when speech and noise sources were separated in space. For Experiment 1, normal-hearing (NH) listeners and the two groups of CI listeners were tested for sound source localization using a 13-loudspeaker array. The mean RMS localization error was 7 degrees for the NH listeners, 20 degrees for the bilateral CI listeners, and 23 degrees for the hearing preservation listeners. The scores for the two CI groups did not differ significantly; thus, both CI groups showed equivalent, but poorer than normal, localization. This outcome, obtained using filtered noise bands for the NH listeners, suggests that ILD and ITD cues can support equivalent levels of localization. For Experiment 2, the two groups of CI listeners were tested for speech recognition in noise when the noise sources and targets were spatially separated in a simulated 'restaurant' environment and in two versions of a 'cocktail party' environment. At issue was whether either CI group would show a benefit from binaural hearing, i.e., better performance when the noise and targets were separated in space. Neither CI group showed spatial release from masking.
However, both groups showed a significant binaural advantage (a combination of squelch and summation) that persisted when the target and noise were spatially separated, indicating the presence of some binaural processing or 'unmasking' of speech in noise. Finally, localization ability in Experiment 1 was not correlated with binaural advantage in Experiment 2.
Contributors: Loiselle, Louise (Author) / Dorman, Michael F. (Thesis advisor) / Yost, William A. (Thesis advisor) / Azuma, Tamiko (Committee member) / Liss, Julie (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
In the noise and commotion of daily life, people achieve effective communication partly because spoken messages are replete with redundant information. Listeners exploit available contextual, linguistic, phonemic, and prosodic cues to decipher degraded speech. When other cues are absent or ambiguous, phonemic and prosodic cues are particularly important because they help identify word boundaries, a process known as lexical segmentation. Individuals vary in the degree to which they rely on phonemic or prosodic cues for lexical segmentation in degraded conditions.

Deafened individuals who use a cochlear implant have diminished access to fine frequency information in the speech signal, and show resulting difficulty perceiving phonemic and prosodic cues. Auditory training on phonemic elements improves word recognition for some listeners. Little is known, however, about the potential benefits of prosodic training, or the degree to which individual differences in cue use affect outcomes.

The present study used simulated cochlear implant stimulation to examine the effects of phonemic and prosodic training on lexical segmentation. Participants completed targeted training with either phonemic or prosodic cues, and received passive exposure to the non-targeted cue. Results show that acuity to the targeted cue improved after training. In addition, both targeted attention and passive exposure to prosodic features led to increased use of these cues for lexical segmentation. Individual differences in degree and source of benefit point to the importance of personalizing clinical intervention to increase flexible use of a range of perceptual strategies for understanding speech.
Contributors: Helms Tillery, Augusta Katherine (Author) / Liss, Julie M. (Thesis advisor) / Azuma, Tamiko (Committee member) / Brown, Christopher A. (Committee member) / Dorman, Michael F. (Committee member) / Utianski, Rene L. (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Fibromyalgia (FM) is a chronic musculoskeletal disorder characterized by widespread pain, fatigue, and a variety of other comorbid physiological and psychological characteristics, including a deficit of positive affect. Recently, research on the pathophysiology of FM has considered the role of a number of genomic variants. In the current manuscript, case-control analyses did not support the hypothesis that FM patients would differ from other chronic pain groups in catechol-O-methyltransferase (COMT) and mu-opioid receptor (OPRM1) genotype. However, evidence is provided in support of the hypothesis that functional single nucleotide polymorphisms on the COMT and OPRM1 genes would be associated with risk and resilience, respectively, in a dual-processing model of pain-related positive affective regulation in FM. Forty-six female patients with a physician-confirmed diagnosis of FM completed an electronic diary that included once-daily assessments of positive affect and soft tissue pain. Multilevel modeling yielded a significant gene × environment interaction, such that individuals with the met/met genotype on COMT experienced a greater decline in positive affect as daily pain increased than did either val/met or val/val individuals. A gene × environment interaction for OPRM1 also emerged, indicating that individuals with at least one asp allele were more resilient to elevations in daily pain than those homozygous for the asn allele. In sum, the findings offer researchers ample reason to further investigate the contribution of the catecholamine and opioid systems, and their associated genomic variants, to the still poorly understood experience of FM.
Contributors: Finan, Patrick Hamilton (Author) / Zautra, Alex (Thesis advisor) / Davis, Mary (Committee member) / Lemery-Chalfant, Kathryn (Committee member) / Presson, Clark (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Distorted vowel production is a hallmark characteristic of dysarthric speech, irrespective of the underlying neurological condition or dysarthria diagnosis. A variety of acoustic metrics have been used to study the nature of vowel production deficits in dysarthria; however, not all demonstrate sensitivity to the exhibited deficits. Less attention has been paid to quantifying the vowel production deficits associated with the specific dysarthrias. Attempts to characterize the relationship between naturally degraded vowel production in dysarthria and overall intelligibility have met with mixed results, leading some to question the nature of this relationship. It has been suggested that aberrant vowel acoustics may be an index of overall severity of the impairment and not an "integral component" of the intelligibility deficit. A limitation of previous work detailing the perceptual consequences of disordered vowel acoustics is that overall intelligibility, not vowel identification accuracy, has been the perceptual measure of interest. A series of three experiments was conducted to address these problems. The goals of the first experiment were to identify subsets of vowel metrics that reliably distinguish speakers with dysarthria from non-disordered speakers and differentiate the dysarthria subtypes. Vowel metrics that capture vowel centralization and reduced spectral distinctiveness among vowels differentiated dysarthric from non-disordered speakers. Vowel metrics generally failed to differentiate speakers according to their dysarthria diagnosis. The second and third experiments were conducted to evaluate the relationship between degraded vowel acoustics and the resulting percept. In the second experiment, correlation and regression analyses revealed that vowel metrics capturing vowel centralization, vowel distinctiveness, and movement of the second formant frequency were most predictive of vowel identification accuracy and overall intelligibility.
The third experiment was conducted to evaluate the extent to which the nature of the acoustic degradation predicts the resulting percept. Results suggest that distinctive vowel tokens are better identified and, likewise, better-identified tokens are more distinctive. Further, above-chance agreement between the nature of vowel misclassifications and misidentification errors was demonstrated for all vowels, suggesting that degraded vowel acoustics are not merely an index of severity in dysarthria, but rather an integral component of the resultant intelligibility disorder.
Contributors: Lansford, Kaitlin L (Author) / Liss, Julie M (Thesis advisor) / Dorman, Michael F. (Committee member) / Azuma, Tamiko (Committee member) / Lotto, Andrew J (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
In rehabilitation settings, activity limitation can be a significant barrier to recovery. This study sought to examine the effects of state- and trait-level benefit finding, positive affect, and catastrophizing on activity limitation among individuals with a physician-confirmed diagnosis of either osteoarthritis (OA), fibromyalgia (FM), or a dual diagnosis of OA/FM. Participants (106 OA, 53 FM, and 101 OA/FM) who had no diagnosed autoimmune disorder, a pain rating above 20 on a 0-100 scale, and no involvement in litigation regarding their condition were recruited in the Phoenix metropolitan area. After completing initial questionnaires, participants were trained to complete daily diaries on a laptop computer and instructed to do so half an hour before bed each night for 30 days. In each diary, participants rated their average daily pain, benefit finding, positive affect, catastrophizing, and activity limitation. A single item, "I thought about some of the good things that have come from living with my pain," was used to examine the broader construct of benefit finding. It was hypothesized that state- and trait-level benefit finding would have both a direct relation with activity limitation and a partially mediated relationship through positive affect. Multilevel modeling with SAS PROC MIXED revealed that benefit finding was not directly related to activity limitation. Increases in benefit finding were associated, however, with decreases in activity limitation through a significant mediated relationship with positive affect: individuals who engaged in benefit finding had higher levels of positive affect, which were associated with decreased activity limitation. A suppression effect involving pain and benefit finding at the trait level was also found, in which pain appeared to increase the predictive validity of the relation of benefit finding to activity limitation.
These findings have important implications for rehabilitation psychologists and should encourage clinicians to help patients increase positive affect by employing active, approach-oriented coping strategies such as benefit finding to reduce activity limitation.
Contributors: Kinderdietz, Jeffrey Scott (Author) / Zautra, Alex (Thesis advisor) / Davis, Mary (Committee member) / Barrera, Manuel (Committee member) / Okun, Morris (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Otoacoustic emissions (OAEs) are soft sounds generated by the inner ear that can be recorded within the ear canal. Because OAEs reflect the functional status of the inner ear, OAE measurements have been widely used for hearing loss screening in the clinic. However, current clinical OAE measurements have limitations, such as a restricted frequency range, low efficiency, and inaccurate calibration. In this dissertation project, a new method of OAE measurement, which used a swept tone to evoke stimulus frequency OAEs (SFOAEs), was developed to overcome the limitations of current methods. In addition, an in-situ calibration was applied to equalize the spectral level of the swept-tone stimulus at the tympanic membrane (TM). With this method, SFOAEs could be recorded with high resolution over a wide frequency range within one or two minutes. Two experiments were conducted to verify the accuracy of the in-situ calibration and to test the performance of the swept-tone SFOAEs. In Experiment I, the calibration of the TM sound pressure was verified in both acoustic cavities and real ears using a second probe microphone. In addition, the benefits of the in-situ calibration were investigated by measuring OAEs under different calibration conditions. Results showed that the TM pressure could be predicted correctly, and the in-situ calibration provided the most reliable results in OAE measurements. In Experiment II, a three-interval paradigm with a tracking-filter technique was used to record the swept-tone SFOAEs in 20 normal-hearing subjects. The test-retest reliability of the swept-tone SFOAEs was examined using a repeated-measures design under various stimulus levels and durations. The accuracy of the swept-tone method was evaluated by comparison with a standard method using discrete pure tones. Results showed that SFOAEs could be reliably and accurately measured with the swept-tone method.
Compared with the pure-tone approach, the swept-tone method showed significantly improved efficiency. Swept-tone SFOAEs with in-situ calibration may thus serve as an alternative to current clinical OAE measurements, allowing more detailed evaluation of inner ear function and more accurate diagnosis.
Contributors: Chen, Shixiong (Author) / Bian, Lin (Thesis advisor) / Yost, William (Committee member) / Azuma, Tamiko (Committee member) / Dorman, Michael (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Older adults often experience communication difficulties, including poorer comprehension of auditory speech when it contains complex sentence structures or occurs in noisy environments. Previous work has linked cognitive abilities and the engagement of domain-general cognitive resources, such as the cingulo-opercular and frontoparietal brain networks, to the processing of challenging speech. However, the degree to which these networks can support comprehension remains unclear. Furthermore, how hearing loss may be related to the cognitive resources recruited during challenging speech comprehension is unknown. This dissertation investigated how hearing, cognitive performance, and functional brain networks contribute to challenging auditory speech comprehension in older adults. Experiment 1 characterized how age and hearing loss modulate resting-state functional connectivity between Heschl’s gyrus and several sensory and cognitive brain networks. The results indicate that older adults exhibit decreased functional connectivity between Heschl’s gyrus and sensory and attention networks compared to younger adults. Within older adults, greater hearing loss was associated with increased functional connectivity between right Heschl’s gyrus and the cingulo-opercular and language networks. Experiments 2 and 3 investigated how hearing, working memory, attentional control, and functional magnetic resonance imaging (fMRI) measures predict comprehension of complex sentence structures and of speech in noisy environments. Experiment 2 utilized resting-state fMRI and behavioral measures of working memory and attentional control. Experiment 3 used activation-based fMRI to examine the brain regions recruited in response to sentences with complex structures presented in noisy background environments, as a function of hearing and cognitive abilities.
The results suggest that working memory abilities and the functionality of the frontoparietal and language networks support the comprehension of speech in multi-speaker environments, whereas attentional control and the cingulo-opercular network support comprehension of complex sentence structures. Hearing loss was shown to decrease activation within right Heschl’s gyrus in response to all sentence conditions and to increase activation within frontoparietal and cingulo-opercular regions. Hearing loss was also associated with poorer sentence comprehension under energetic, but not informational, masking. Together, these three experiments identify the unique contributions of cognition and brain networks to challenging auditory speech comprehension in older adults, further probing how hearing loss affects these relationships.
Contributors: Fitzhugh, Megan (Author) / (Reddy) Rogalsky, Corianne (Thesis advisor) / Baxter, Leslie C (Thesis advisor) / Azuma, Tamiko (Committee member) / Braden, Blair (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Previous research by Rajsic et al. (2015, 2017) suggests that a visual form of confirmation bias arises during visual search for simple stimuli, under certain conditions, wherein people are biased to seek stimuli matching an initial cue color even when this strategy is not optimal. Furthermore, recent research from our lab suggests that varying the prevalence of cue-colored targets does not attenuate this visual confirmation bias, although people still fail to detect rare targets regardless of whether they match the initial cue (Walenchok et al., under review). The present investigation examines the boundary conditions of the visual confirmation bias under conditions of equal, low, and high cued-target frequency. Across experiments, I found that: (1) people are strongly susceptible to the low-prevalence effect, often failing to detect rare targets regardless of whether they match the cue (Wolfe et al., 2005); (2) they are nevertheless still biased to seek cue-colored stimuli, even when such targets are rare; and (3) regardless of target prevalence, people employ strategies when search is made sufficiently burdensome by distributed items and large search sets. These results further support previous findings that the low-prevalence effect arises from a failure to perceive rare items (Hout et al., 2015), whereas the visual confirmation bias is a bias of attentional guidance (Rajsic et al., 2015, 2017).
Contributors: Walenchok, Stephen Charles (Author) / Goldinger, Stephen D (Thesis advisor) / Azuma, Tamiko (Committee member) / Homa, Donald (Committee member) / Hout, Michael C (Committee member) / McClure, Samuel M. (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Cognitive deficits often accompany language impairments post-stroke. Past research has focused on working memory in aphasia, but attention remains largely underexplored. Therefore, this dissertation first quantifies attention deficits post-stroke before investigating whether preserved cognitive abilities, including attention, can improve auditory sentence comprehension post-stroke. In Experiment 1a, three components of attention (alerting, orienting, and executive control) were measured in persons with aphasia and matched controls using visual and auditory versions of the well-studied Attention Network Test. Experiment 1b then explored the neural resources supporting each component of attention in the visual and auditory modalities in chronic stroke participants. The results of Experiment 1a indicate that alerting, orienting, and executive control are uniquely affected by presentation modality. The lesion-symptom mapping results of Experiment 1b associated the left angular gyrus with visual executive control, the left supramarginal gyrus with auditory alerting, and Broca’s area (pars opercularis) with auditory orienting attention post-stroke. Overall, these findings indicate that perceptual modality may impact the lateralization of some aspects of attention; thus, auditory attention may be more susceptible to impairment after a left hemisphere stroke.

Prosody, the rhythm and pitch changes associated with spoken language, may improve spoken language comprehension in persons with aphasia by recruiting intact cognitive abilities (e.g., attention and working memory) and their associated non-lesioned brain regions post-stroke. Therefore, Experiment 2 explored the relationship between cognition, two unique prosody manipulations, lesion location, and auditory sentence comprehension in persons with chronic stroke and matched controls. The combined results of Experiments 2a and 2b indicate that stroke participants with better auditory orienting attention and with a specific left fronto-parietal network intact had greater comprehension of sentences spoken with sentence prosody. For list prosody, participants with deficits in auditory executive control and/or short-term memory, but with the left angular gyrus and globus pallidus relatively intact, demonstrated better comprehension of sentences spoken with list prosody. Overall, the results of Experiment 2 indicate that following a left hemisphere stroke, individuals need good auditory attention and an intact left fronto-parietal network to benefit from typical sentence prosody; when cognitive deficits are present and this fronto-parietal network is damaged, list prosody may be more beneficial.
Contributors: LaCroix, Arianna (Author) / Rogalsky, Corianne (Thesis advisor) / Azuma, Tamiko (Committee member) / Braden, B. Blair (Committee member) / Liss, Julie (Committee member) / Arizona State University (Publisher)
Created: 2019