This collection includes both ASU theses and dissertations submitted by graduate students and theses submitted by undergraduate students of Barrett, The Honors College.


Description

To localize sound sources in an environment, the auditory system analyzes the acoustic properties of sounds reaching the ears. Successful sound localization is important for improving signal detection and speech intelligibility in noisy environments. Sound localization is not a unisensory experience and can be influenced by visual information (e.g., the ventriloquist effect): vision provides context and organizes auditory space for the auditory system. This investigation tracked the eye movements of human subjects using a non-invasive eye-tracking system and evaluated the impact of visual stimulation on localization of a phantom sound source generated through timing-based stereophony. It was hypothesized that gaze movement could reveal the way in which visual stimulation (LED lights) shifts the perception of a sound source. However, the results show that subjects do not always move their gaze toward the light, even when they experience strong visual capture. On average, gaze direction indicates the perceived sound location both with and without light stimulation.
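
For readers unfamiliar with timing-based stereophony, the sketch below illustrates the basic idea with a hypothetical two-channel setup: two loudspeakers play the same signal at the same level, and delaying one channel pulls the phantom image toward the leading loudspeaker. The sample rate, tone frequency, and delay are illustrative assumptions, not parameters from the thesis.

```python
import numpy as np

FS = 44100            # sample rate in Hz (assumed)
DELAY_MS = 0.4        # interchannel delay; sub-millisecond delays shift the image

t = np.arange(int(FS * 0.5)) / FS                 # 0.5 s time axis
tone = 0.5 * np.sin(2 * np.pi * 500.0 * t)        # 500 Hz tone (illustrative)

# Delaying the right channel makes the left channel lead, pulling the
# phantom source toward the left loudspeaker.
d = int(round(DELAY_MS * 1e-3 * FS))              # delay in samples
left = tone
right = np.concatenate([np.zeros(d), tone[:-d]])  # same tone, delayed
stereo = np.column_stack([left, right])           # phantom image left of center
```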
Contributors: Flores, Nancy Gloria (Author) / Zhou, Yi (Thesis director) / Azuma, Tamiko (Committee member) / Department of Speech and Hearing Science (Contributor) / School of International Letters and Cultures (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description

Spatial awareness (i.e., the sense of the space that we are in) involves the integration of auditory, visual, vestibular, and proprioceptive sensory information about environmental events. Hearing impairment has negative effects on spatial awareness and can result in deficits in communication and in the overall aesthetic experience of life, especially in noisy or reverberant environments. This deficit occurs because hearing impairment reduces the signal strength needed for auditory spatial processing and changes how auditory information is combined with other sensory inputs (e.g., vision). The influence of multisensory processing on spatial awareness in listeners with normal and impaired hearing is not assessed in clinical evaluations, and patients' everyday sensory experiences are currently not directly measurable. This dissertation investigated the role of vision in auditory localization in listeners with normal and impaired hearing in a naturalistic stimulus setting, using natural gaze orienting responses. Experiments examined two behavioral outcomes, response accuracy and response time, based on eye movements in response to simultaneously presented auditory and visual stimuli. The first set of experiments examined the effects of stimulus spatial saliency on response accuracy and response time, and the extent of visual dominance in both metrics during auditory localization. The results indicate that vision can significantly influence both the speed and the accuracy of auditory localization, especially when auditory stimuli are more ambiguous; this influence is shown for both normal-hearing and hearing-impaired listeners. The second set of experiments examined the effect of frontal visual stimulation on localizing an auditory target presented from in front of or behind a listener. The results show domain-specific effects of visual capture on both response time and response accuracy, supporting previous findings that auditory-visual interactions are not limited by the spatial rule of proximity. These results further suggest a strong influence of vision on both the processing and the decision-making stages of sound source localization for listeners with normal and impaired hearing.
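
As a rough illustration of how the two behavioral outcomes might be extracted from a horizontal gaze trace, here is a hypothetical analysis sketch: response time as the first post-onset moment gaze speed crosses a threshold, and accuracy as the error between final gaze azimuth and the target azimuth. The function name, velocity threshold, and trace format are assumptions, not the dissertation's actual pipeline.

```python
import numpy as np

def gaze_response(az_deg, t_s, target_az_deg, onset_s, vel_thresh=30.0):
    """Estimate response time and accuracy from a horizontal gaze trace.

    az_deg: gaze azimuth samples (deg); t_s: timestamps (s);
    vel_thresh: angular speed (deg/s) counted as a response onset.
    """
    speed = np.abs(np.gradient(az_deg, t_s))        # angular speed, deg/s
    moving = (t_s >= onset_s) & (speed > vel_thresh)
    rt = t_s[moving][0] - onset_s if moving.any() else np.nan
    err = abs(az_deg[-1] - target_az_deg)           # final localization error, deg
    return rt, err
```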
Contributors: Clayton, Colton (Author) / Zhou, Yi (Thesis advisor) / Azuma, Tamiko (Committee member) / Daliri, Ayoub (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

This study focuses on the properties of binaural beats (BBs) compared to monaural beats (MBs) and their steady-state responses at the level of the superior olivary complex (SOC). An auditory nerve simulator was used to simulate the response of the SOC; the simulator was fed either BB or MB stimuli so that the SOC responses could be compared. This was done at beat frequencies of 20, 40, and 60 Hz to compare the SOC response envelopes. The SOC response envelopes for both types of beats were correlated with the waveform resulting from adding the two tones together; the highest correlation occurred at 40 Hz for BBs and at 60 Hz for MBs. A fast Fourier transform (FFT) was also computed on the stimulus envelope and the SOC response envelopes. The FFT showed that, for the BB presentation, the envelopes of the original stimuli contained no difference frequency; however, the difference frequency was present in the binaural SOC response envelope. For MBs, the difference frequency was present in both the stimulus and the monaural SOC response envelope.
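
The asymmetry described here (no difference frequency in either ear's acoustic envelope for BBs, but a difference frequency in the MB envelope) can be verified directly on the stimuli. Below is a minimal sketch assuming 500 and 540 Hz tones and a Hilbert-envelope FFT; these are illustrative choices, not the study's parameters.

```python
import numpy as np
from scipy.signal import hilbert

FS = 16000
t = np.arange(FS) / FS                        # 1 s of signal
f1, f2 = 500.0, 540.0                         # 40 Hz difference frequency

bb_one_ear = np.sin(2 * np.pi * f1 * t)       # BB: a single tone per ear
mb = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)  # MB: tones mixed

def env_mag_at(x, f_target):
    """Magnitude of the Hilbert-envelope spectrum at f_target (Hz)."""
    env = np.abs(hilbert(x))                  # temporal envelope
    mag = np.abs(np.fft.rfft(env - env.mean())) / len(env)
    freqs = np.fft.rfftfreq(len(env), 1 / FS)
    return mag[np.argmin(np.abs(freqs - f_target))]

print("MB envelope at 40 Hz:", env_mag_at(mb, 40.0))                  # clear peak
print("BB one-ear envelope at 40 Hz:", env_mag_at(bb_one_ear, 40.0))  # ~0
```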
Contributors: Crawford, Taylor Janay (Author) / Brewer, Gene (Thesis advisor) / Zhou, Yi (Committee member) / Azuma, Tamiko (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

Cochlear implants (CIs) restore hearing to nearly one million individuals with severe-to-profound hearing loss. However, with limited spectral and temporal resolution, CI users may rely heavily on top-down processing, using cognitive resources for speech recognition in noise, and may change the weighting of different acoustic cues for pitch-related listening tasks such as Mandarin tone recognition. While auditory training is known to improve CI users' performance in these tasks as measured by percent-correct scores, the effects of training on cue weighting, listening effort, and untrained tasks need to be better understood in order to maximize training benefits. This dissertation addressed these questions by training normal-hearing (NH) listeners with CI simulations. Study 1 examined whether Mandarin tone recognition training with enhanced amplitude envelope cues would improve tone recognition scores and increase the weighting of amplitude envelope cues over fundamental frequency (F0) contours. Compared to no training or natural-amplitude-envelope training, enhanced-amplitude-envelope training increased the benefits of amplitude envelope enhancement for tone recognition but did not increase the weighting of amplitude or F0 cues. Listeners who attended more to amplitude envelope cues in the pre-test improved more in tone recognition after enhanced-amplitude-envelope training. Study 2 extended Study 1 to compare the generalization effects of tone recognition training alone, vowel recognition training alone, and combined tone and vowel recognition training. The results showed that tone recognition training did not improve vowel recognition or vice versa, even though tones and vowels are always produced together in Mandarin. Only combined tone and vowel recognition training improved sentence recognition, showing that both suprasegmental (i.e., tone) and segmental (i.e., vowel) cues are essential for sentence recognition in Mandarin. Study 3 investigated the impact of phoneme recognition training on the listening effort of sentence recognition in noise, as measured by a dual-task paradigm, pupillometry, and subjective ratings. Phoneme recognition training improved sentence recognition in noise. The dual-task paradigm and pupillometry indicated that, from pre-test to post-test, listening effort decreased in the control group without training but remained unchanged in the training group, suggesting that training may have motivated listeners to stay focused on the challenging task of sentence recognition in noise. Overall, non-clinical measures such as cue weighting and listening effort can enrich our understanding of training-induced perceptual and cognitive effects and allow us to better predict and assess training outcomes.
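
A common way to present CI simulations to NH listeners is noise-band vocoding, which discards fine spectral detail while preserving channel envelopes. The sketch below shows one minimal version; the channel count, filter design, and frequency range are generic assumptions, and this is not necessarily the processing used in these studies.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_vocode(x, fs, n_channels=8, f_lo=200.0, f_hi=7000.0):
    """Minimal noise-band vocoder as a CI-simulation sketch: band-pass
    analysis, envelope extraction, noise-carrier resynthesis.
    Assumes fs > 2 * f_hi so the top band is below Nyquist."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)  # log-spaced channel edges
    noise = np.random.randn(len(x))
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfilt(sos, x)                # analysis band of the input
        env = np.abs(hilbert(band))           # temporal envelope of the band
        out += env * sosfilt(sos, noise)      # modulate band-limited noise
    return out / (np.max(np.abs(out)) + 1e-12)
```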
Contributors: Kim, Seeon (Author) / Luo, Xin (Thesis advisor) / Azuma, Tamiko (Committee member) / Zhou, Yi (Committee member) / Arizona State University (Publisher)
Created: 2024