Matching Items (3)
Description

In this study, the Bark transform and the Lobanov method were used to normalize vowel formants in speech produced by persons with dysarthria. The computer classification accuracy of the normalized data was then compared with human listeners' perceptual classification accuracy for the same vowels. These results were analyzed to determine whether the normalization techniques correlated with the human perceptual data. (An illustrative sketch of the two transforms follows this record.)
Contributors: Jones, Hanna Vanessa (Author) / Liss, Julie (Thesis director) / Dorman, Michael (Committee member) / Borrie, Stephanie (Committee member) / Barrett, The Honors College (Contributor) / Department of Speech and Hearing Science (Contributor) / Department of English (Contributor) / Speech and Hearing Science (Contributor)
Created: 2013-05
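A minimal sketch of the two normalization techniques named in the abstract above: the Bark transform (here Traunmüller's 1990 formula) and Lobanov z-score normalization. The formant values and speaker grouping are illustrative assumptions, not data from the study.

```python
import numpy as np

def bark(f_hz):
    """Convert frequency in Hz to the Bark scale (Traunmüller, 1990)."""
    return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

def lobanov(formants):
    """Z-score one formant across a single speaker's vowels (Lobanov, 1971)."""
    formants = np.asarray(formants, dtype=float)
    return (formants - formants.mean()) / formants.std(ddof=1)

# Hypothetical F1 values (Hz) for one speaker's vowel tokens.
f1 = np.array([310.0, 560.0, 850.0, 470.0])

print("Bark F1:   ", np.round(bark(f1), 2))
print("Lobanov F1:", np.round(lobanov(f1), 2))
```

Note the key design difference: the Bark transform is speaker-independent (a fixed warping of the frequency axis), while the Lobanov method is speaker-intrinsic and must be computed separately per speaker and per formant.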
Description

The purpose of this study was to use signal processing to identify acoustic markers that correlate with accurate and inaccurate /r/ production in children ages 5-8. In addition, the researcher aimed to identify predictive acoustic markers related to changes in /r/ accuracy. A total of 35 children (23 accurate, 12 inaccurate, 8 longitudinal) were recorded. Computerized stimuli were presented on a laptop, and the children completed five tasks to elicit spontaneous and imitated /r/ productions in all word positions. Files were edited and analyzed using a filter bank of 40 frequencies spaced on the Mel scale. T-tests were used to compare the spectral energy of tokens between the accurate and inaccurate groups, and additional t-tests compared the durations of accurate and inaccurate files. Results showed significant differences between accurate and inaccurate productions of /r/, notable differences in the 24-26 Mel bin range, and longer durations for inaccurate /r/ than for accurate /r/. Signal processing successfully identified acoustic features of accurate and inaccurate /r/ production, as well as candidate predictive markers that may be associated with the acquisition of /r/. (A sketch of this filter-bank analysis follows this record.)
Contributors: Becvar, Brittany Patricia (Author) / Azuma, Tamiko (Thesis advisor) / Weinhold, Juliet (Committee member) / Berisha, Visar (Committee member) / Arizona State University (Publisher)
Created: 2017
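A minimal sketch of the kind of filter-bank analysis described in the abstract above: 40 Mel-spaced filter-bank energies per token, followed by an independent-samples t-test per band. The file lists, preprocessing, and test parameters are illustrative assumptions, not the study's exact pipeline.

```python
import numpy as np
import librosa
from scipy.stats import ttest_ind

def mel_band_energies(path, n_mels=40):
    """Return the mean log energy in each of `n_mels` Mel bands for one token."""
    y, sr = librosa.load(path, sr=None)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return np.log(mel.mean(axis=1) + 1e-10)  # average over time, log-compress

# Hypothetical token lists for the two groups.
accurate_files = ["accurate_r_01.wav", "accurate_r_02.wav"]
inaccurate_files = ["inaccurate_r_01.wav", "inaccurate_r_02.wav"]

acc = np.array([mel_band_energies(f) for f in accurate_files])
inacc = np.array([mel_band_energies(f) for f in inaccurate_files])

# Per-band t-tests: which Mel bins separate accurate from inaccurate /r/?
for band in range(acc.shape[1]):
    t, p = ttest_ind(acc[:, band], inacc[:, band])
    if p < 0.05:
        print(f"Mel bin {band}: t = {t:.2f}, p = {p:.4f}")
```

Under this scheme, group differences concentrated in particular bins (e.g., the 24-26 range reported in the abstract) would surface as bands with significant t-statistics.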
Description

Hearing and vision are two senses that most individuals use daily. The simultaneous presentation of competing visual and auditory stimuli often affects sensory perception, and vision is commonly held to dominate audition in spatial localization tasks. Recent work suggests that visual information can influence auditory localization whether the sound emanates from a physical location or from a phantom location generated through stereophony (so-called "summing localization"; a sketch of such a stimulus follows this record). The present study investigates the role of cross-modal fusion in an auditory localization task. The experiments had two aims: (1) to reveal the extent of fusion between auditory and visual stimuli, and (2) to investigate how fusion correlates with the amount of visual bias a subject experiences. We found that fusion often occurred when the light flash and the "summing localization" stimuli were presented from the same hemifield. However, little correlation was observed between the magnitude of visual bias and the extent of perceived fusion between light and sound stimuli; in some cases, subjects reported distinct locations for light and sound yet still experienced visual capture.
Contributors: Balderas, Leslie Ann (Author) / Zhou, Yi (Thesis director) / Yost, William (Committee member) / Department of Speech and Hearing Science (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
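A minimal sketch of a "summing localization" stimulus as described in the abstract above: the same signal is sent to two loudspeakers (here, stereo channels) with a level difference, which listeners fuse into a single phantom source between the speakers. The tone frequency, duration, panning law, and output filename are illustrative assumptions, not the study's actual stimulus parameters.

```python
import numpy as np
from scipy.io import wavfile

sr = 44100
t = np.arange(int(0.3 * sr)) / sr         # 300 ms tone (assumed duration)
tone = 0.5 * np.sin(2 * np.pi * 500 * t)  # 500 Hz carrier (assumed frequency)

pan = 0.25                                # 0 = left speaker, 1 = right speaker
left = np.cos(pan * np.pi / 2) * tone     # constant-power panning law
right = np.sin(pan * np.pi / 2) * tone

# Identical waveforms at different levels in each channel: listeners typically
# localize one fused phantom source between the two speakers.
stereo = np.stack([left, right], axis=1)
wavfile.write("phantom_source.wav", sr, (stereo * 32767).astype(np.int16))
```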