Matching Items (5)

Description
Distorted vowel production is a hallmark characteristic of dysarthric speech, irrespective of the underlying neurological condition or dysarthria diagnosis. A variety of acoustic metrics have been used to study the nature of vowel production deficits in dysarthria; however, not all demonstrate sensitivity to the exhibited deficits. Less attention has been paid to quantifying the vowel production deficits associated with the specific dysarthrias. Attempts to characterize the relationship between naturally degraded vowel production in dysarthria and overall intelligibility have met with mixed results, leading some to question the nature of this relationship. It has been suggested that aberrant vowel acoustics may be an index of overall severity of the impairment and not an "integral component" of the intelligibility deficit. A limitation of previous work detailing the perceptual consequences of disordered vowel acoustics is that overall intelligibility, not vowel identification accuracy, has been the perceptual measure of interest. A series of three experiments was conducted to address the problems outlined herein. The goals of the first experiment were to identify subsets of vowel metrics that reliably distinguish speakers with dysarthria from non-disordered speakers and differentiate the dysarthria subtypes. Vowel metrics that capture vowel centralization and reduced spectral distinctiveness among vowels differentiated dysarthric from non-disordered speakers. Vowel metrics generally failed to differentiate speakers according to their dysarthria diagnosis. The second and third experiments were conducted to evaluate the relationship between degraded vowel acoustics and the resulting percept. In the second experiment, correlation and regression analyses revealed that vowel metrics capturing vowel centralization, vowel distinctiveness, and movement of the second formant frequency were most predictive of vowel identification accuracy and overall intelligibility. The third experiment was conducted to evaluate the extent to which the nature of the acoustic degradation predicts the resulting percept. Results suggest that distinctive vowel tokens are better identified and, likewise, that better-identified tokens are more distinctive. Further, above-chance agreement between the nature of vowel misclassification and misidentification errors was demonstrated for all vowels, suggesting that degraded vowel acoustics are not merely an index of severity in dysarthria, but rather are an integral component of the resultant intelligibility disorder.
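The abstract above refers to acoustic metrics of vowel centralization. As a purely illustrative sketch, not drawn from the dissertation, the Python snippet below computes one such common metric, the vowel space area, from corner-vowel formant values using the shoelace formula; the formant values are hypothetical placeholders, not study data.

```python
# Illustrative sketch: vowel space area from the corner vowels /i/, /ae/, /a/, /u/
# in F1-F2 space. Smaller areas indicate greater vowel centralization.

def vowel_space_area(corners):
    """Shoelace formula over (F1, F2) points ordered around the quadrilateral."""
    area = 0.0
    n = len(corners)
    for i in range(n):
        f1_a, f2_a = corners[i]
        f1_b, f2_b = corners[(i + 1) % n]
        area += f1_a * f2_b - f1_b * f2_a
    return abs(area) / 2.0

# Hypothetical mean formant values (Hz) for one speaker's corner vowels.
corners = [(300, 2300),   # /i/
           (700, 1800),   # /ae/
           (750, 1100),   # /a/
           (350, 900)]    # /u/
print(f"Vowel space area: {vowel_space_area(corners):.0f} Hz^2")
```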
Contributors: Lansford, Kaitlin L (Author) / Liss, Julie M (Thesis advisor) / Dorman, Michael F. (Committee member) / Azuma, Tamiko (Committee member) / Lotto, Andrew J (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
One of the leading concerns regarding the commercial and military applications of rotary-wing vehicles is blade-vortex interaction (BVI) noise occurring during forward descent. This impulsive noise-generating phenomenon occurs due to the close proximity and interference between the main rotor blades and the wake vortices generated by the blades during previous revolutions. Throughout the descent phase of a helicopter in forward flight, the rotating blades pass through these induced vortices, generating an impulsive "slap" noise that the general public commonly associates with helicopter flight. Therefore, parameterization of the variables of interest that affect BVI noise generation will allow for thorough analysis of the origins of the noise and open pathways for innovation that may offer significant improvements in acoustic performance. Gaining an understanding of the factors that govern the intensity of the BVI acoustic signature provides a strong analytical and experimental basis for enhanced rotor blade design.
Contributors: Ahlf, Rick James (Author) / Dahm, Werner (Thesis director) / Wells, Valana (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / School of Sustainability (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
Spatial awareness (i.e., the sense of the space that we are in) involves the integration of auditory, visual, vestibular, and proprioceptive sensory information about environmental events. Hearing impairment has negative effects on spatial awareness and can result in deficits in communication and the overall aesthetic experience of life, especially in noisy or reverberant environments. This deficit occurs because hearing impairment reduces the signal strength needed for auditory spatial processing and changes how auditory information is combined with other sensory inputs (e.g., vision). The influence of multisensory processing on spatial awareness in listeners with normal and impaired hearing is not assessed in clinical evaluations, and patients' everyday sensory experiences are currently not directly measurable. This dissertation investigated the role of vision in auditory localization in listeners with normal and impaired hearing in a naturalistic stimulus setting, using natural gaze orienting responses. Experiments examined two behavioral outcomes, response accuracy and response time, based on eye movements made in response to simultaneously presented auditory and visual stimuli. The first set of experiments examined the effects of stimulus spatial saliency on response accuracy and response time, and the extent of visual dominance in both metrics during auditory localization. The results indicate that vision can significantly influence both the speed and accuracy of auditory localization, especially when auditory stimuli are more ambiguous. The influence of vision is shown for both normal-hearing and hearing-impaired listeners. The second set of experiments examined the effect of frontal visual stimulation on localizing an auditory target presented from in front of or behind a listener. The results show domain-specific effects of visual capture on both response time and response accuracy, and they support previous findings that auditory-visual interactions are not limited by the spatial rule of proximity. These results further suggest a strong influence of vision on both the processing and the decision-making stages of sound source localization for listeners with normal and impaired hearing.
Contributors: Clayton, Colton (Author) / Zhou, Yi (Thesis advisor) / Azuma, Tamiko (Committee member) / Daliri, Ayoub (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
The following report provides details on the development of a protective enclosure and power system for an anti-poaching gunshot detection system to be implemented in Costa Rica. The development of a gunshot detection system is part of an ongoing project started by the Acoustic Ecology Lab at Arizona State University in partnership with the Phoenix Zoo. As a whole, the project entails the development of a gunshot detection algorithm, a wireless mesh alert system, a device enclosure, and a self-sustaining power system. For testing purposes, four devices with different power-system setups were developed. Future developments are discussed and include further testing, more specialized mounting techniques, and the eventual expansion of the initial device network. This report presents the initial development of the protective enclosure and power system for an anti-poaching system that can be implemented in wildlife sanctuaries around the world.
Contributors: Carver, Cameron River (Author) / Paine, Garth (Thesis director) / Schipper, Jan (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
Instrumental music has been used to evoke natural environments and their qualities for centuries, and composers have employed a variety of methods to evoke such sensations in their listeners. When composers and sound teams for video game soundtracks write pieces to accompany in-game settings, they may use a similar set of strategies. The nature of these tracks, as accompaniment to an interactive visual medium and as pieces that must be able to loop indefinitely, leads them to emphasize environment over emotion, and thus draws out or exaggerates these same techniques. Using the statistical method of multidimensional scaling, this study seeks to understand the relationships between the acoustics of various setting backing tracks and the perceptual qualities of the environments listeners feel they evoke. The relationships among three perceptual factors (coldness, brightness, wetness) and two acoustic factors (beats per minute, spectral envelope slope) are of greatest interest in this study.
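Since the abstract names multidimensional scaling as its statistical method, the following minimal Python sketch shows how pairwise dissimilarities among tracks can be projected into a low-dimensional perceptual configuration. It assumes scikit-learn's MDS implementation and uses a fabricated dissimilarity matrix purely for illustration, not the study's data.

```python
# Illustrative sketch: non-metric-style MDS on a precomputed dissimilarity matrix.
import numpy as np
from sklearn.manifold import MDS

# Hypothetical pairwise dissimilarity ratings among four setting backing tracks.
dissimilarities = np.array([
    [0.0, 0.6, 0.9, 0.4],
    [0.6, 0.0, 0.5, 0.7],
    [0.9, 0.5, 0.0, 0.8],
    [0.4, 0.7, 0.8, 0.0],
])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarities)  # 2-D coordinates for each track
print(coords)
```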
Contributors: Jackson, Jalen (Author) / Azuma, Tamiko (Thesis director) / Patten, Kristopher (Committee member) / Barrett, The Honors College (Contributor) / Speech & Hearing Science (Contributor)
Created: 2022-05