Description

Sound localization can be difficult in a reverberant environment. Fortunately, listeners can use various perceptual compensatory mechanisms to increase the reliability of sound localization when the physical evidence is ambiguous. For example, the directional information of echoes can be perceptually suppressed by the direct sound to achieve a single, fused auditory event in a process called the precedence effect (Litovsky et al., 1999). Visual cues also influence sound localization through a phenomenon known as the ventriloquist effect, classically demonstrated by a puppeteer who speaks without visible lip movements while moving the mouth of a puppet in synchrony with the speech (de Gelder and Bertelson, 2003). If the ventriloquist is successful, the sound is “captured” by vision and perceived as originating at the location of the puppet. This thesis investigates the influence of vision on the spatial localization of audio-visual stimuli. Two types of stereophonic phantom sound sources, created by modulating either the inter-stimulus time interval (ISI) or the level difference between two loudspeakers, were used as auditory stimuli. Participants seated in a sound-attenuated room indicated the perceived locations of the ISI or level-difference stimuli under free-field conditions. The results showed that light cues influenced auditory spatial perception to a greater extent for the ISI stimuli than for the level-difference stimuli. A binaural signal analysis further revealed that the greater visual bias for the ISI phantom sound sources was correlated with the increasingly ambiguous binaural cues of the ISI signals. This finding suggests that when sound localization cues are unreliable, perceptual decisions become increasingly biased toward vision for finding a sound source. These results support the cue-saliency theory underlying cross-modal bias and extend it to stereophonic phantom sound sources.
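The stereophonic phantom-source construction described above can be illustrated with a minimal numpy sketch. This is not the thesis's actual stimulus-generation code; the function name, sample rate, and parameter choices are assumptions for illustration. A phantom image between two loudspeakers is steered by delaying (ISI) and/or attenuating (level difference) one channel relative to the other:

```python
import numpy as np

FS = 44100  # assumed sample rate (Hz)

def phantom_source(signal, isi_ms=0.0, level_diff_db=0.0, fs=FS):
    """Two-channel loudspeaker stimulus whose phantom image is steered
    by an inter-stimulus time interval (ISI) and/or a level difference.

    Positive values delay and attenuate the right channel, pulling the
    perceived image toward the left loudspeaker (hypothetical sketch;
    negative values are not handled).
    """
    delay = int(round(isi_ms * 1e-3 * fs))        # ISI in samples
    gain = 10.0 ** (-level_diff_db / 20.0)        # dB -> linear attenuation

    left = np.concatenate([signal, np.zeros(delay)])
    right = np.concatenate([np.zeros(delay), signal]) * gain
    return np.stack([left, right], axis=1)        # columns: left, right

# 100 ms noise burst steered toward the left loudspeaker by a 0.5 ms ISI
burst = np.random.randn(int(0.1 * FS))
stereo = phantom_source(burst, isi_ms=0.5)
```

As the abstract notes, an ISI-steered image of this kind produces more ambiguous binaural cues at the ears than a level-difference image, which is the property the thesis relates to the strength of visual capture.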
Contributors: Montagne, Christopher (Author) / Zhou, Yi (Thesis advisor) / Buneo, Christopher A. (Thesis advisor) / Yost, William A. (Committee member) / Arizona State University (Publisher)
Created: 2015
Description

The propagation of waves in solids, especially when characterized by dispersion, remains a topic of profound interest in the field of signal processing. Dispersion is a phenomenon in which wave speed becomes a function of frequency, resulting in multiple oscillatory modes. Such signals find application in structural health monitoring for identifying potential damage-sensitive features in complex materials. Consequently, it becomes important to find matched time-frequency representations for characterizing the properties of the multiple frequency-dependent modes of propagation in dispersive materials. Various time-frequency representations have been used for dispersive signal analysis. However, some suffer from poor time-frequency localization, some are designed to match only specific dispersion modes with known characteristics, and others cannot reconstruct individual dispersive modes. This thesis proposes a new time-frequency representation, the nonlinear synchrosqueezing transform (NSST), designed to offer high localization for signals with nonlinear time-frequency group-delay signatures. The NSST follows the technique used by reassignment and synchrosqueezing methods, reassigning time-frequency points of the short-time Fourier transform and wavelet transform to specific localized regions in the time-frequency plane. As the NSST is designed to match signals with third-order polynomial phase functions in the frequency domain, we derive matched group-delay estimators for the time-frequency point reassignment. This leads to a highly localized representation of nonlinear time-frequency characteristics that also allows for the reconstruction of individual dispersive modes from multicomponent signals. For the reconstruction process, we propose a novel unsupervised learning approach that does not require prior information on the variation or number of modes in the signal. We also propose a Bayesian group-delay mode-merging approach for reconstructing modes that overlap in time and frequency. In addition to using simulated signals, we demonstrate the performance of the new NSST, together with mode extraction, using real experimental data of ultrasonic guided waves propagating through a composite plate.
Contributors: Ikram, Javaid (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Chattopadhyay, Aditi (Thesis advisor) / Bertoni, Mariana (Committee member) / Sinha, Kanu (Committee member) / Arizona State University (Publisher)
Created: 2023