This collection includes both ASU Theses and Dissertations, submitted by graduate students, and Barrett, The Honors College theses, submitted by undergraduate students.

Displaying 1 - 10 of 105
Description
When people look for things in their environment they use a target template - a mental representation of the object they are attempting to locate - to guide their attention around a scene and to assess incoming visual input to determine if they have found what they are searching for. However, unlike participants in laboratory experiments, searchers in the real world rarely have perfect knowledge regarding the appearance of their target. In five experiments (with nearly 1,000 participants), we examined how the precision of the observer's template affects their ability to conduct visual search. Specifically, we simulated template imprecision in two ways: first, by contaminating our searchers' templates with inaccurate features, and second, by introducing extraneous, unhelpful features to the template. In these experiments we recorded the eye movements of our searchers in order to make inferences regarding the extent to which attentional guidance and decision-making are hindered by template imprecision. We also examined a third way in which templates may become imprecise; namely, that they may deteriorate over time. Overall, our findings support a dual-function theory of the target template and highlight the importance of examining template precision in future research.
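A toy simulation can make the two manipulations concrete. The sketch below is a hypothetical illustration, not the authors' materials or analysis: items are feature vectors, the template is degraded either by perturbing its features (inaccurate features) or by overwriting some dimensions with irrelevant values (extraneous features), and guidance quality is proxied by how much better the template matches the target than the best distractor.

```python
import numpy as np

rng = np.random.default_rng(1)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Items are random feature vectors; the template starts as a copy of the
# target and is then degraded in one of the two ways described above.
target = rng.normal(size=32)
distractors = rng.normal(size=(9, 32))

def make_template(target, inaccuracy=0.0, n_extraneous=0):
    template = target + inaccuracy * rng.normal(size=target.size)
    if n_extraneous:  # overwrite some dimensions with unhelpful values
        idx = rng.choice(target.size, n_extraneous, replace=False)
        template[idx] = rng.normal(size=n_extraneous)
    return template

for inacc in (0.0, 0.5, 1.0):
    t = make_template(target, inaccuracy=inacc)
    margin = cosine(t, target) - max(cosine(t, d) for d in distractors)
    print(f"inaccuracy={inacc:.1f}: guidance margin = {margin:+.2f}")
```

As template inaccuracy grows, the target's similarity advantage over distractors shrinks, which is the intuition behind impaired attentional guidance.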
Contributors: Hout, Michael C (Author) / Goldinger, Stephen D (Thesis advisor) / Azuma, Tamiko (Committee member) / Homa, Donald (Committee member) / Reichle, Erik (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Two groups of cochlear implant (CI) listeners were tested for sound source localization and for speech recognition in complex listening environments. One group (n=11) wore bilateral CIs and, potentially, had access to interaural level difference (ILD) cues, but not interaural timing difference (ITD) cues. The second group (n=12) wore a single CI and had low-frequency, acoustic hearing both in the ear contralateral to the CI and in the implanted ear. These "hearing preservation" listeners, potentially, had access to ITD cues but not to ILD cues. At issue in this dissertation was the value of the two types of information about sound sources, ITDs and ILDs, for localization and for speech perception when speech and noise sources were separated in space. For Experiment 1, normal hearing (NH) listeners and the two groups of CI listeners were tested for sound source localization using a 13-loudspeaker array. The mean RMS error for localization was 7 degrees for the NH listeners, 20 degrees for the bilateral CI listeners, and 23 degrees for the hearing preservation listeners. The scores for the two CI groups did not differ significantly; thus, both CI groups showed equivalent, but poorer than normal, localization. This outcome, obtained using filtered noise bands for the normal hearing listeners, suggests that ILD and ITD cues can support equivalent levels of localization. For Experiment 2, the two groups of CI listeners were tested for speech recognition in noise when the noise sources and targets were spatially separated in a simulated "restaurant" environment and in two versions of a "cocktail party" environment. At issue was whether either CI group would show benefits from binaural hearing, i.e., better performance when the noise and targets were separated in space. Neither CI group showed spatial release from masking. However, both groups showed a significant binaural advantage (a combination of squelch and summation), which was also maintained with separation of the target and noise, indicating the presence of some binaural processing or "unmasking" of speech in noise. Finally, localization ability in Experiment 1 was not correlated with binaural advantage in Experiment 2.
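For readers unfamiliar with the localization metric, the following sketch shows how an RMS error in degrees can be computed from loudspeaker-array responses. The array geometry and the simulated responses are assumptions for illustration; the study's actual speaker spacing is not specified here.

```python
import numpy as np

# Hypothetical array: 13 loudspeakers evenly spaced across the frontal
# hemifield (the actual spacing in the study may differ).
speaker_azimuths = np.linspace(-90, 90, 13)      # degrees

def rms_error(target_idx, response_idx):
    """RMS of signed azimuth errors across trials, in degrees."""
    errors = speaker_azimuths[response_idx] - speaker_azimuths[target_idx]
    return np.sqrt(np.mean(errors ** 2))

# Simulated responses scattered around the true source, for demonstration.
rng = np.random.default_rng(0)
targets = rng.integers(0, 13, size=200)
responses = np.clip(targets + rng.integers(-2, 3, size=200), 0, 12)
print(f"RMS error: {rms_error(targets, responses):.1f} degrees")
```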
Contributors: Loiselle, Louise (Author) / Dorman, Michael F. (Thesis advisor) / Yost, William A. (Thesis advisor) / Azuma, Tamiko (Committee member) / Liss, Julie (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Concussion, a subset of mild traumatic brain injury (mTBI), has recently been brought to the forefront of the media due to a large lawsuit filed against the National Football League. Concussion resulting from injury varies in severity, duration, and type, based on many individual characteristics that research does not presently understand. Chronic fatigue, poor working memory, impaired self-awareness, and lack of attention to task are symptoms commonly present post-concussion. Currently, there is no standard method of assessing concussion, nor is there a way to track an individual's recovery, leaving treatment poorly guided and prognosis uncertain. The aim of the following study was to determine patient-specific higher-order cognitive processing deficits for clinical diagnosis and prognosis of concussion. Six individuals (N=6) were seen during the acute phase of concussion, two of whom were seen subsequently when their symptoms were deemed clinically resolved. Subjective information was collected from each patient, along with results from neurological testing. Each individual completed a task in which they were presented with degraded speech, taxing their higher-order cognitive processing. Patient-specific behavioral patterns are noted, creating a unique paradigm for mapping subjective and objective data onto each patient's strategy for compensating for deficits and understanding speech in a difficult listening situation. Keywords: concussion, cognitive processing
Contributors: Berg, Dena (Author) / Liss, Julie M (Committee member) / Azuma, Tamiko (Committee member) / Caviness, John (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Previous research has shown that people can implicitly learn repeated visual contexts and use this information when locating relevant items. For example, when people are presented with repeated spatial configurations of distractor items or distractor identities in visual search, they become faster to find target stimuli in these repeated contexts over time (Chun and Jiang, 1998; 1999). Given that people learn these repeated distractor configurations and identities, might they also implicitly encode semantic information about distractors, if this information is predictive of the target location? We investigated this question with a series of visual search experiments using real-world stimuli within a contextual cueing paradigm (Chun and Jiang, 1998). Specifically, we tested whether participants could learn, through experience, that the target images they are searching for are always located near specific categories of distractors, such as food items or animals. We also varied the spatial consistency of target locations, in order to rule out implicit learning of repeated target locations. Results suggest that participants implicitly learned the target-predictive categories of distractors and used this information during search, although these results failed to reach significance. This lack of significance may have been due to the relative simplicity of the search task, however, and several new experiments are proposed to further investigate whether repeated category information can benefit search.
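The core contextual-cueing measure is a simple difference score. Below is a minimal sketch of that analysis on made-up reaction times (the data, block structure, and effect size are assumptions, not results from the study): the cueing effect in each epoch is the mean RT for novel displays minus the mean RT for repeated displays.

```python
import numpy as np

rng = np.random.default_rng(2)
n_epochs, trials = 6, 24

# Simulated RTs (ms): repeated displays speed up across epochs, novel do not.
novel = rng.normal(1000, 80, (n_epochs, trials))
repeated = rng.normal(1000, 80, (n_epochs, trials)) - 30 * np.arange(n_epochs)[:, None]

# Contextual cueing effect per epoch: RT(novel) - RT(repeated).
cueing_effect = novel.mean(axis=1) - repeated.mean(axis=1)
for epoch, effect in enumerate(cueing_effect, start=1):
    print(f"epoch {epoch}: contextual cueing effect = {effect:6.1f} ms")
```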
Contributors: Walenchok, Stephen C (Author) / Goldinger, Stephen D (Thesis advisor) / Azuma, Tamiko (Committee member) / Homa, Donald (Committee member) / Hout, Michael C (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
For decades, microelectronics manufacturing has been concerned with failures related to electromigration phenomena in conductors experiencing high current densities. With reduced individual solder interconnect volumes, the influence of interconnect microstructure on device failures related to electromigration in BGA and flip chip solder interconnects has become a significant interest. A survey of the literature indicates that x-ray computed micro-tomography (µXCT) is an emerging, novel means for characterizing microstructure's role in governing electromigration failures. This work details the design and construction of a lab-scale µXCT system to characterize electromigration in the Sn-0.7Cu lead-free solder system by leveraging in situ imaging.

In order to enhance the attenuation contrast observed in multi-phase material systems, a modeling approach has been developed to predict settings of the controllable imaging parameters that yield relatively high detection rates over the range of x-ray energies for which maximum attenuation contrast is expected in the polychromatic x-ray imaging system. To develop this predictive tool, a model has been constructed for the Bremsstrahlung spectrum of an x-ray tube, the detector's efficiency has been calculated over the relevant range of x-ray energies, and the product of the emitted and detected spectra has been used to calculate the effective x-ray imaging spectrum. An approach has also been established for filtering "zinger" noise in x-ray radiographs, which has proven problematic at the high x-ray energies used for solder imaging. The performance of this filter has been compared with a known existing method, and the results indicate a significant increase in the accuracy of zinger-filtered radiographs.
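Both ideas can be sketched in a few lines. In the sketch below, the tube spectrum, detector model, and filter parameters are all invented placeholders (a Kramers'-law shape for the Bremsstrahlung and a toy efficiency curve), standing in for the dissertation's calibrated models.

```python
import numpy as np
from scipy.ndimage import median_filter

# Effective imaging spectrum = emitted spectrum x detector efficiency.
# Both curves below are illustrative stand-ins, not calibrated models.
E = np.linspace(5.0, 160.0, 500)                   # photon energies (keV)
E_max, Z = 160.0, 74                               # tube voltage (kVp), W anode

emitted = Z * np.clip(E_max / E - 1.0, 0.0, None)  # Kramers'-law shape
detected = 1.0 - np.exp(-(40.0 / E) ** 2)          # toy detector efficiency
effective = emitted * detected
print(f"effective spectrum peaks near {E[np.argmax(effective)]:.0f} keV")

def remove_zingers(radiograph, threshold=4.0):
    """Replace isolated bright outliers with the local 3x3 median
    (one common zinger-filtering strategy; parameters are illustrative)."""
    med = median_filter(radiograph, size=3)
    scale = radiograph.std() + 1e-12
    zingers = (radiograph - med) > threshold * scale
    cleaned = radiograph.copy()
    cleaned[zingers] = med[zingers]
    return cleaned
```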

The results demonstrate a powerful means for studying failure-causing processes in solder systems used as interconnects in microelectronic packaging devices. These results include the volumetric quantification of parameters which are indicative of both the electromigration tolerance of solders and the dominant mechanisms for atomic migration in response to current stressing. This work aims to further the community's understanding of failure-causing electromigration processes in industrially relevant material systems for microelectronic interconnect applications and to advance the capability of available characterization techniques for their interrogation.
Contributors: Mertens, James Charles Edwin (Author) / Chawla, Nikhilesh (Thesis advisor) / Alford, Terry (Committee member) / Jiao, Yang (Committee member) / Neithalath, Narayanan (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
A multitude of individuals across the globe suffer from hearing loss, and that number continues to grow. Cochlear implants, while having limitations, provide electrical input for users, enabling them to "hear" and interact more fully with their environment. There has been a clinical shift toward bilateral placement of implants in both ears and toward bimodal placement of a hearing aid in the contralateral ear if residual hearing is present. However, there is potentially more to subsequent speech perception for bilateral and bimodal cochlear implant users than the electric and acoustic input received via these modalities. For normal-hearing listeners, vision plays a role in speech perception; Rosenblum (2005) points out that it is a key feature of an integrated perceptual process. Logically, cochlear implant users should also benefit from integrated visual input. The question is how, exactly, vision provides benefit to bilateral and bimodal users. Eight bilateral and five bimodal participants received randomized experimental phrases, previously generated by Liss et al. (1998), in auditory and audiovisual conditions. The participants recorded their perception of the input. Data were then analyzed for percent words correct, consonant errors, and lexical boundary error types. Overall, vision was found to improve speech perception for bilateral and bimodal cochlear implant participants. Each group experienced a significant increase in percent words correct when visual input was added. With vision, bilateral participants reduced consonant place errors and demonstrated increased use of the syllabic stress cues used in lexical segmentation. Therefore, results suggest vision might provide perceptual benefits for bilateral cochlear implant users by granting access to place information and by augmenting cues for syllabic stress in the absence of acoustic input. On the other hand, vision did not provide the bimodal participants significantly increased access to place and stress cues, so the exact mechanism by which bimodal implant users improved speech perception with the addition of vision is unknown. These results point to the complexities of audiovisual integration during speech perception and the need for continued research regarding the benefit vision provides to bilateral and bimodal cochlear implant users.
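As a concrete illustration of the primary outcome measure, here is a minimal percent-words-correct scorer. The matching rule and the example phrase are simplifications invented for this sketch; clinical scoring conventions (homophones, morphological variants, word order) are stricter.

```python
def percent_words_correct(target: str, transcript: str) -> float:
    """Share of target words that appear anywhere in the listener's
    transcript (a deliberately simple matching rule)."""
    target_words = target.lower().split()
    heard = set(transcript.lower().split())
    hits = sum(1 for w in target_words if w in heard)
    return 100.0 * hits / len(target_words)

# Hypothetical target phrase and listener transcript:
print(percent_words_correct("address her meeting time",
                            "a dress for meeting time"))  # -> 50.0
```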
Contributors: Ludwig, Cimarron (Author) / Liss, Julie (Thesis advisor) / Dorman, Michael (Committee member) / Azuma, Tamiko (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
In the noise and commotion of daily life, people achieve effective communication partly because spoken messages are replete with redundant information. Listeners exploit available contextual, linguistic, phonemic, and prosodic cues to decipher degraded speech. When other cues are absent or ambiguous, phonemic and prosodic cues are particularly important because they help identify word boundaries, a process known as lexical segmentation. Individuals vary in the degree to which they rely on phonemic or prosodic cues for lexical segmentation in degraded conditions.

Deafened individuals who use a cochlear implant have diminished access to fine frequency information in the speech signal, and consequently have difficulty perceiving phonemic and prosodic cues. Auditory training on phonemic elements improves word recognition for some listeners. Little is known, however, about the potential benefits of prosodic training, or about the degree to which individual differences in cue use affect outcomes.

The present study used simulated cochlear implant stimulation to examine the effects of phonemic and prosodic training on lexical segmentation. Participants completed targeted training with either phonemic or prosodic cues, and received passive exposure to the non-targeted cue. Results show that acuity to the targeted cue improved after training. In addition, both targeted attention and passive exposure to prosodic features led to increased use of these cues for lexical segmentation. Individual differences in degree and source of benefit point to the importance of personalizing clinical intervention to increase flexible use of a range of perceptual strategies for understanding speech.
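Lexical boundary errors in this paradigm are typically cross-classified by error type and by the strength of the syllable at the error site (e.g., Liss et al., 1998). The tally below is a hedged sketch using hand-coded, hypothetical error codes; a stress-based segmentation strategy predicts proportionally more boundary insertions before strong syllables and more deletions before weak ones.

```python
from collections import Counter

# Hand-coded, hypothetical error codes: (boundary error type, strength of
# the syllable at the error site).
coded_errors = [("insert", "strong"), ("insert", "strong"),
                ("insert", "weak"), ("delete", "weak"),
                ("delete", "strong"), ("insert", "strong")]

counts = Counter(coded_errors)
for (etype, strength), n in sorted(counts.items()):
    print(f"{etype:>6} / {strength:<6}: {n}")

# A stress-based strategy predicts insertions mostly before strong syllables.
n_is, n_iw = counts[("insert", "strong")], counts[("insert", "weak")]
print(f"proportion of insertions at strong syllables: {n_is / (n_is + n_iw):.2f}")
```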
Contributors: Helms Tillery, Augusta Katherine (Author) / Liss, Julie M. (Thesis advisor) / Azuma, Tamiko (Committee member) / Brown, Christopher A. (Committee member) / Dorman, Michael F. (Committee member) / Utianski, Rene L. (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Often termed the "gold standard" in the differential diagnosis of dysarthria, the etiology-based Mayo Clinic classification approach has been used nearly exclusively by clinicians since the early 1970s. However, this descriptive method results in a distinct overlap of perceptual features across various etiologies, limiting the clinical utility of such a system for differential diagnosis. Acoustic analysis may provide a more objective measure, improving the overall reliability of classification (Guerra & Lovely, 2003). The following paper investigates the potential use of a taxonomical approach to dysarthria. The purpose of this study was to identify a set of acoustic correlates of the perceptual dimensions used to group similarly sounding speakers with dysarthria, irrespective of disease etiology. The present study utilized a free classification auditory perceptual task to identify a set of salient speech characteristics displayed by speakers with varying dysarthria types and perceived by listeners; these data were then analyzed using multidimensional scaling (MDS), correlation analysis, and cluster analysis. In addition, discriminant function analysis (DFA) was conducted to establish the feasibility of using the dimensions underlying perceptual similarity in dysarthria to classify speakers into both listener-derived clusters and etiology-based categories. The following hypothesis was identified: because of the presumed predictive link between the acoustic correlates and listener-derived clusters, the DFA classification results should resemble the perceptual clusters more closely than the etiology-based (Mayo System) classifications. The MDS revealed three dimensions, which were significantly correlated with 1) metrics capturing rate and rhythm, 2) intelligibility, and 3) all of the long-term average spectrum metrics in the 8000 Hz band, which has been linked to degree of phonemic distinctiveness (Utianski et al., 2012). A qualitative examination of listener notes supported the MDS and correlation results, with listeners overwhelmingly making reference to speaking rate/rhythm, intelligibility, and articulatory precision while participating in the free classification task. Additionally, the acoustic correlates revealed by the MDS and subjected to DFA indeed predicted listener group classification. These results support acoustic measurement as representative of listener perception, and represent the first phase in supporting the use of a perceptually relevant taxonomy of dysarthria.
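The analysis chain described above maps onto standard tools. The sketch below runs the same sequence on random placeholder data (the co-occurrence matrix, acoustic metrics, and cluster count are all assumptions): free-classification similarities are converted to dissimilarities, embedded with MDS, clustered, and then predicted from acoustic metrics, with a linear discriminant analysis standing in for the DFA.

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.cluster import AgglomerativeClustering
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
n_speakers = 30

# Placeholder free-classification data: how often listeners grouped each
# pair of speakers together, converted to a dissimilarity matrix.
cooccur = rng.random((n_speakers, n_speakers))
cooccur = (cooccur + cooccur.T) / 2
np.fill_diagonal(cooccur, 1.0)
dissimilarity = 1.0 - cooccur

coords = MDS(n_components=3, dissimilarity="precomputed",
             random_state=0).fit_transform(dissimilarity)
clusters = AgglomerativeClustering(n_clusters=4).fit_predict(coords)

# Placeholder acoustic metrics (e.g., rate, rhythm, LTAS bands).
acoustic_metrics = rng.normal(size=(n_speakers, 6))
dfa = LinearDiscriminantAnalysis().fit(acoustic_metrics, clusters)
print("DFA reclassification accuracy:", dfa.score(acoustic_metrics, clusters))
```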
Contributors: Norton, Rebecca (Author) / Liss, Julie (Thesis advisor) / Azuma, Tamiko (Committee member) / Ingram, David (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Distorted vowel production is a hallmark characteristic of dysarthric speech, irrespective of the underlying neurological condition or dysarthria diagnosis. A variety of acoustic metrics have been used to study the nature of vowel production deficits in dysarthria; however, not all demonstrate sensitivity to the exhibited deficits. Less attention has been paid to quantifying the vowel production deficits associated with specific dysarthrias. Attempts to characterize the relationship between naturally degraded vowel production in dysarthria and overall intelligibility have met with mixed results, leading some to question the nature of this relationship. It has been suggested that aberrant vowel acoustics may be an index of overall severity of the impairment and not an "integral component" of the intelligibility deficit. A limitation of previous work detailing the perceptual consequences of disordered vowel acoustics is that overall intelligibility, not vowel identification accuracy, has been the perceptual measure of interest. A series of three experiments was conducted to address these problems. The goals of the first experiment were to identify subsets of vowel metrics that reliably distinguish speakers with dysarthria from non-disordered speakers and that differentiate the dysarthria subtypes. Vowel metrics that capture vowel centralization and reduced spectral distinctiveness among vowels differentiated dysarthric from non-disordered speakers. Vowel metrics generally failed to differentiate speakers according to their dysarthria diagnosis. The second and third experiments evaluated the relationship between degraded vowel acoustics and the resulting percept. In the second experiment, correlation and regression analyses revealed that vowel metrics capturing vowel centralization and distinctiveness, along with movement of the second formant frequency, were most predictive of vowel identification accuracy and overall intelligibility. The third experiment evaluated the extent to which the nature of the acoustic degradation predicts the resulting percept. Results suggest distinctive vowel tokens are better identified and, likewise, better-identified tokens are more distinctive. Further, above-chance agreement between the nature of vowel misclassification and misidentification errors was demonstrated for all vowels, suggesting degraded vowel acoustics are not merely an index of severity in dysarthria, but rather an integral component of the resultant intelligibility disorder.
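Two of the metric families mentioned above, centralization and spectral distinctiveness, are easy to illustrate. The sketch below computes a quadrilateral vowel space area via the shoelace formula and a simple centralization index (mean distance from the F1/F2 centroid); the corner-vowel formant values are hypothetical, and the dissertation's actual metric set is broader.

```python
import numpy as np

# Hypothetical (F1, F2) corner-vowel formants in Hz, listed in polygon order.
corner_vowels = {
    "i": (300, 2300), "ae": (650, 1800), "a": (750, 1100), "u": (350, 900),
}
pts = np.array(list(corner_vowels.values()), dtype=float)

def shoelace_area(p):
    """Area of a simple polygon from ordered vertices (shoelace formula)."""
    x, y = p[:, 0], p[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

centroid = pts.mean(axis=0)
centralization = np.linalg.norm(pts - centroid, axis=1).mean()
print(f"vowel space area = {shoelace_area(pts):,.0f} Hz^2; "
      f"mean distance from centroid = {centralization:.0f} Hz")
```

Smaller areas and shorter centroid distances both indicate a more centralized, less distinctive vowel space.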
Contributors: Lansford, Kaitlin L (Author) / Liss, Julie M (Thesis advisor) / Dorman, Michael F. (Committee member) / Azuma, Tamiko (Committee member) / Lotto, Andrew J (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Otoacoustic emissions (OAEs) are soft sounds generated by the inner ear that can be recorded within the ear canal. Since OAEs reflect the functional status of the inner ear, OAE measurements have been widely used for hearing loss screening in the clinic. However, current clinical OAE measurements have limitations, such as a restricted frequency range, low efficiency, and inaccurate calibration. In this dissertation project, a new method of OAE measurement, which used a swept tone to evoke stimulus frequency OAEs (SFOAEs), was developed to overcome these limitations. In addition, an in-situ calibration was applied to equalize the spectral level of the swept-tone stimulus at the tympanic membrane (TM). With this method, SFOAEs could be recorded with high resolution over a wide frequency range within one or two minutes. Two experiments were conducted to verify the accuracy of the in-situ calibration and to test the performance of the swept-tone SFOAEs. In Experiment I, the calibration of the TM sound pressure was verified in both acoustic cavities and real ears by using a second probe microphone. In addition, the benefits of the in-situ calibration were investigated by measuring OAEs under different calibration conditions. Results showed that the TM pressure could be predicted correctly, and the in-situ calibration provided the most reliable results in OAE measurements. In Experiment II, a three-interval paradigm with a tracking-filter technique was used to record the swept-tone SFOAEs in 20 normal-hearing subjects. The test-retest reliability of the swept-tone SFOAEs was examined using a repeated-measures design under various stimulus levels and durations. The accuracy of the swept-tone method was evaluated by comparison with a standard method using discrete pure tones. Results showed that SFOAEs could be reliably and accurately measured with the swept-tone method. Compared with the pure-tone approach, the swept-tone method showed significantly improved efficiency. The swept-tone SFOAEs with in-situ calibration may be an alternative to current clinical OAE measurements for more detailed evaluation of inner ear function and accurate diagnosis.
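The stimulus side of such a measurement can be sketched briefly. Below, a logarithmic swept tone is generated and then shaped by the inverse of a probe-measured ear canal response, which is the spirit of the in-situ equalization described above; the sweep range, duration, and transfer function are invented for illustration.

```python
import numpy as np
from scipy.signal import chirp

fs = 44100
duration = 60.0                                   # one-minute sweep
t = np.arange(int(fs * duration)) / fs
sweep = chirp(t, f0=500, f1=8000, t1=duration, method="logarithmic")

# Hypothetical in-situ measurement: relative level at the TM vs. frequency.
freqs = np.geomspace(500, 8000, 32)
measured_tm_level = 1.0 / np.sqrt(freqs / 500.0)  # invented roll-off

# Instantaneous frequency of a log sweep: f(t) = f0 * (f1/f0)**(t/t1).
inst_freq = 500.0 * (8000.0 / 500.0) ** (t / duration)
gain = np.interp(inst_freq, freqs, 1.0 / measured_tm_level)

# Equalized stimulus: louder where the measured TM level was lower.
calibrated = sweep * gain
calibrated /= np.abs(calibrated).max()
```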
Contributors: Chen, Shixiong (Author) / Bian, Lin (Thesis advisor) / Yost, William (Committee member) / Azuma, Tamiko (Committee member) / Dorman, Michael (Committee member) / Arizona State University (Publisher)
Created: 2012