Matching Items (30)

Identification of the Origins of Blade-Vortex Interaction (BVI) Noise in Helicopters

Description

One of the leading concerns regarding the commercial and military applications of rotary-wing powered vehicles is the issue of blade-vortex interaction (BVI) noise occurring during forward descent. This impulsive noise-generating phenomenon occurs due to the close proximity and interference between the main rotor blades and the wake vortices generated by the rotor blades from previous revolutions. Throughout the descent phase of a helicopter in forward flight, the rotating blades pass through these induced vortices, thus generating the impulsive "slap" noise that the general population commonly associates with helicopter flight. Therefore, parameterization of the variables of interest that affect BVI noise generation will allow for thorough analysis of the origins of the noise and open pathways for innovation that may offer significant improvements in acoustic performance. Gaining an understanding of the factors that govern the intensity of the BVI acoustic signature provides a strong analytical and experimental basis for enhanced rotor blade design.

Date Created
2016-05

Incorporating auditory models in speech/audio applications

Description

Following the success in incorporating perceptual models in audio coding algorithms, their application in other speech/audio processing systems is expanding. In general, all perceptual speech/audio processing algorithms involve minimization of an objective function that directly/indirectly incorporates properties of human perception. This dissertation primarily investigates the problems associated with directly embedding an auditory model in the objective function formulation and proposes possible solutions to overcome high complexity issues for use in real-time speech/audio algorithms. Specific problems addressed in this dissertation include: 1) the development of approximate but computationally efficient auditory model implementations that are consistent with the principles of psychoacoustics, and 2) the development of a mapping scheme that allows synthesizing a time/frequency domain representation from its equivalent auditory model output. The first problem is aimed at addressing the high computational complexity involved in solving perceptual objective functions that require repeated application of the auditory model to evaluate different candidate solutions. In this dissertation, frequency-pruning and detector-pruning algorithms are developed that efficiently implement the various auditory model stages. The performance of the pruned model is compared to that of the original auditory model for different types of test signals in the SQAM database. Experimental results indicate only a 4-7% relative error in loudness while attaining up to 80-90% reduction in computational complexity. Similarly, a hybrid algorithm is developed specifically for use with sinusoidal signals; it employs the proposed auditory pattern combining technique together with a look-up table to store representative auditory patterns.
The second problem concerns obtaining an estimate of the auditory representation that minimizes a perceptual objective function and transforming that auditory pattern back to its equivalent time/frequency representation. This avoids the repeated application of auditory model stages to test different candidate time/frequency vectors in minimizing perceptual objective functions. In this dissertation, a constrained mapping scheme is developed by linearizing certain auditory model stages, ensuring that a time/frequency mapping corresponding to the estimated auditory representation is obtained. This paradigm was successfully incorporated in a perceptual speech enhancement algorithm and a sinusoidal component selection task.
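The frequency-pruning idea described above can be sketched in a few lines: per-band energies are computed once, bands far below the strongest band are discarded, and only the surviving bands pass through the (expensive) per-band loudness stage. This is an illustrative toy, not the dissertation's implementation; the equal-width bands, the 40 dB pruning margin, and the Stevens-style power-law loudness are all stand-in assumptions.

```python
import numpy as np

def band_energies(signal, n_bands=24):
    """Toy critical-band analysis: split the magnitude spectrum
    into n_bands equal-width groups and sum the power in each."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    bands = np.array_split(spec, n_bands)
    return np.array([b.sum() for b in bands])

def pruned_loudness(signal, prune_db=40.0):
    """Frequency pruning: skip bands more than prune_db below the
    strongest band before the costly per-band loudness stage."""
    e = band_energies(signal)
    kept = e >= e.max() * 10 ** (-prune_db / 10)
    # Stevens-style power law stands in for the specific-loudness stage.
    specific = e[kept] ** 0.23
    return specific.sum(), int(kept.sum())
```

For a narrowband signal almost all bands are pruned, which is where the complexity savings come from.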

Date Created
2011

Re-sonification of objects, events, and environments

Description

Digital sound synthesis allows the creation of a great variety of sounds. Focusing on interesting or ecologically valid sounds for music, simulation, aesthetics, or other purposes limits the otherwise vast digital audio palette. Tools for creating such sounds vary from arbitrary methods of altering recordings to precise simulations of vibrating objects. In this work, methods of sound synthesis by re-sonification are considered. Re-sonification, herein, refers to the general process of analyzing, possibly transforming, and resynthesizing or reusing recorded sounds in meaningful ways, to convey information. Applied to soundscapes, re-sonification is presented as a means of conveying activity within an environment. Applied to the sounds of objects, this work examines modeling the perception of objects as well as their physical properties and the ability to simulate interactive events with such objects. To create soundscapes to re-sonify geographic environments, a method of automated soundscape design is presented. Using recorded sounds that are classified based on acoustic, social, semantic, and geographic information, this method produces stochastically generated soundscapes to re-sonify selected geographic areas. Drawing on prior knowledge, local sounds and those deemed similar comprise a locale's soundscape. In the context of re-sonifying events, this work examines processes for modeling and estimating the excitations of sounding objects. These include plucking, striking, rubbing, and any interaction that imparts energy into a system, affecting the resultant sound. A method of estimating a linear system's input, constrained to a signal-subspace, is presented and applied toward improving the estimation of percussive excitations for re-sonification. To work toward robust recording-based modeling and re-sonification of objects, new implementations of banded waveguide (BWG) models are proposed for object modeling and sound synthesis. 
Previous implementations of BWGs use arbitrary model parameters and may produce a range of simulations that do not match digital waveguide or modal models of the same design. Subject to linear excitations, some models proposed here behave identically to other equivalently designed physical models. Under nonlinear interactions, such as bowing, many of the proposed implementations exhibit improvements in the attack characteristics of synthesized sounds.
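The signal-subspace-constrained input estimation mentioned above can be illustrated as a least-squares problem: with the output modeled as a convolution y = H x and the excitation restricted to the column space of a basis B, one solves for the coefficients c minimizing ||H B c - y|| and reconstructs x = B c. The convolution model, the basis choice, and the function names below are assumptions for illustration, not the dissertation's actual algorithm.

```python
import numpy as np

def conv_matrix(h, n):
    """Toeplitz matrix H such that H @ x == np.convolve(h, x) for len(x) == n."""
    H = np.zeros((len(h) + n - 1, n))
    for j in range(n):
        H[j:j + len(h), j] = h
    return H

def subspace_excitation(y, h, B):
    """Least-squares estimate of the excitation x = B @ c, with x
    constrained to the column space of the basis B, given the system
    impulse response h and the recorded output y."""
    H = conv_matrix(h, B.shape[0])
    c, *_ = np.linalg.lstsq(H @ B, y, rcond=None)
    return B @ c
```

Constraining the estimate to a subspace regularizes the deconvolution: instead of inverting for every sample of x, only the few coefficients c are estimated.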

Date Created
2013

Techniques for soundscape retrieval and synthesis

Description

The study of acoustic ecology is concerned with the manner in which life interacts with its environment as mediated through sound. As such, a central focus is that of the soundscape: the acoustic environment as perceived by a listener. This dissertation examines the application of several computational tools in the realms of digital signal processing, multimedia information retrieval, and computer music synthesis to the analysis of the soundscape. Namely, these tools include a) an open source software library, Sirens, which can be used to segment long environmental field recordings into individual sonic events and to compare these events in terms of acoustic content, b) a graph-based retrieval system that can use these measures of acoustic similarity, together with measures of semantic similarity from the lexical database WordNet, to perform both text-based retrieval and automatic annotation of environmental sounds, and c) new techniques for the dynamic, real-time parametric morphing of multiple field recordings, informed by the geographic paths along which they were recorded.
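A minimal sketch of the segmentation step, assuming a simple energy-based detector (Sirens itself uses richer per-frame features and probabilistic segmentation, so the threshold rule here is a stand-in): frames whose short-time energy rises a fixed margin above the estimated noise floor are grouped into events.

```python
import numpy as np

def segment_events(x, sr, frame=1024, hop=512, db_above_floor=10.0):
    """Group frames whose energy exceeds the noise floor (median frame
    energy) by db_above_floor into (start_s, end_s) events."""
    frames = [x[i:i + frame] for i in range(0, len(x) - frame, hop)]
    e_db = 10 * np.log10([np.mean(f ** 2) + 1e-12 for f in frames])
    active = e_db > np.median(e_db) + db_above_floor
    events, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i                       # event onset
        elif not a and start is not None:
            events.append((start * hop / sr, i * hop / sr))
            start = None
    if start is not None:                   # event runs to end of file
        events.append((start * hop / sr, len(active) * hop / sr))
    return events
```

Each detected event could then be described by acoustic features and compared against other sounds for retrieval.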

Date Created
2013

Acoustic Gunshot Detection Device Design and Power Management

Description

The following report provides details on the development of a protective enclosure and power system for an anti-poaching gunshot detection system to be implemented in Costa Rica. The development of a gunshot detection system is part of an ongoing project started by the Acoustic Ecology Lab at Arizona State University in partnership with the Phoenix Zoo. As a whole, the project entails the development of a gunshot detection algorithm, a wireless mesh alert system, a device enclosure, and a self-sustaining power system. For testing purposes, four devices with different power system setups were developed. Future developments are discussed and include further testing, more specialized mounting techniques, and the eventual expansion of the initial device network. This report presents the initial development of the protective enclosure and power system of the anti-poaching system that can be implemented in wildlife sanctuaries around the world.

Date Created
2020-05

Investigations of environmental effects on freeway acoustics

Description

The role of environmental factors that influence atmospheric propagation of sound originating from freeway noise sources is studied with a combination of field experiments and numerical simulations. Acoustic propagation models are developed and adapted to account for an acoustic refractive index that depends upon meteorological conditions. A high-resolution multi-nested environmental forecasting model forced by coarse global analysis is applied to predict real meteorological profiles at fine scales. These profiles are then used as input for the acoustic models. Numerical methods for producing higher resolution acoustic refractive index fields are proposed. These include spatial and temporal nested meteorological simulations with vertical grid refinement. It is shown that vertical nesting can improve the prediction of finer structures in near-ground temperature and velocity profiles, such as morning temperature inversions and low-level jet-like features. Accurate representation of these features is shown to be important for modeling sound refraction phenomena and for enabling accurate noise assessment. Comparisons are made using the acoustic model for predictions with profiles derived from meteorological simulations and from field experiment observations in Phoenix, Arizona. The challenges faced in simulating accurate meteorological profiles at high resolution for sound propagation applications are highlighted and areas for possible improvement are discussed.
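The refraction effects discussed above are governed by the effective sound speed profile, i.e. the standard combination of the adiabatic sound speed with the wind component along the propagation direction. A minimal sketch (the function name and the linear-inversion example are assumptions for illustration):

```python
import numpy as np

def effective_sound_speed(T, u, gamma=1.4, R=287.05):
    """c_eff(z) = sqrt(gamma * R * T(z)) + u(z): adiabatic sound speed
    for air temperature T (kelvin) plus the along-path wind component u
    (m/s). A c_eff that increases with height (e.g. a morning temperature
    inversion, or a downwind low-level jet) refracts sound back toward
    the ground, raising noise levels far from the road."""
    return np.sqrt(gamma * R * np.asarray(T, float)) + np.asarray(u, float)
```

This is why accurately resolving near-ground temperature and wind profiles matters: errors in T(z) or u(z) map directly into the refractive gradient.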

A detailed evaluation of the environmental forecast is conducted by comparing the Surface Energy Balance (SEB) obtained from observations made with an eddy-covariance flux tower against the SEB from simulations using several physical parameterizations of urban effects and planetary boundary layer schemes. Diurnal variations in the SEB constituent fluxes are examined in relation to surface layer stability and modeled diagnostic variables. Improvement is found when adapting the parameterizations for Phoenix, with reduced errors in the SEB components. Finer model resolution (to 333 m) is seen to have insignificant (<1σ) influence on the mean absolute percent difference of 30-minute diurnal mean SEB terms. A new method of representing inhomogeneous urban development density, derived from observations of impervious surfaces with sub-grid scale resolution, is then proposed for mesoscale applications. This method was implemented and evaluated within the environmental modeling framework. Finally, a new semi-implicit scheme based on Leapfrog and a fourth-order implicit time-filter is developed.

Date Created
2014

Investigating compensatory mechanisms for sound localization: visual cue integration and the precedence effect

Description

Sound localization can be difficult in a reverberant environment. Fortunately, listeners can utilize various perceptual compensatory mechanisms to increase the reliability of sound localization when provided with ambiguous physical evidence. For example, the directional information of echoes can be perceptually suppressed by the direct sound to achieve a single, fused auditory event in a process called the precedence effect (Litovsky et al., 1999). Visual cues also influence sound localization through a phenomenon known as the ventriloquist effect. It is classically demonstrated by a puppeteer who speaks without visible lip movements while moving the mouth of a puppet synchronously with his/her speech (Gelder and Bertelson, 2003). If the ventriloquist is successful, sound will be “captured” by vision and be perceived to be originating at the location of the puppet. This thesis investigates the influence of vision on the spatial localization of audio-visual stimuli. Two types of stereophonic phantom sound sources, created by modulating the inter-stimulus time interval (ISI) or the level difference between two loudspeakers, were used as auditory stimuli. Participants seated in a sound-attenuated room indicated their perceived locations of either the ISI or the level-difference stimuli under free-field conditions. The results showed that the light cues influenced auditory spatial perception to a greater extent for the ISI stimuli than for the level-difference stimuli. A binaural signal analysis further revealed that the greater visual bias for the ISI phantom sound sources was correlated with the increasingly ambiguous binaural cues of the ISI signals. This finding suggests that when sound localization cues are unreliable, perceptual decisions become increasingly biased towards vision for finding a sound source. These results support the cue saliency theory underlying cross-modal bias and extend this theory to include stereophonic phantom sound sources.
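The two stimulus types can be sketched as a simple two-channel generator: delaying one loudspeaker feed yields an ISI stimulus, attenuating it yields a level-difference stimulus, and in both cases the phantom source is pulled toward the leading or louder speaker. A hypothetical sketch, not the thesis's actual stimulus code:

```python
import numpy as np

def phantom_pair(signal, sr, isi_ms=0.0, level_db=0.0):
    """Two-loudspeaker stimulus: the right channel is delayed by isi_ms
    and attenuated by level_db relative to the left, so the phantom
    source shifts toward the left (leading / louder) speaker."""
    delay = int(round(isi_ms * 1e-3 * sr))
    left = np.concatenate([signal, np.zeros(delay)])
    right = np.concatenate([np.zeros(delay), signal]) * 10 ** (-level_db / 20)
    return left, right
```

Setting isi_ms=0 with a nonzero level_db gives a pure level-difference stimulus, and vice versa.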

Date Created
2015

Development of acoustic sensor for flow rate monitoring

Description

This project is mainly aimed at detecting gas flow rate in biosensor and medical health applications by means of an acoustic method using a whistle-based device. Considering the challenges involved in maintaining a particular flow rate and back pressure when detecting certain analytes in breath analysis, the proposed system, together with a cell phone, provides a suitable way to maintain the flow rate without any additional battery-driven device. To achieve this, a system-level approach is implemented which involves the development of a closed-end whistle placed inside a tightly fitted constant-back-pressure tube. A pressure vs. flow rate curve is first obtained experimentally and used for the development of the particular whistle. Finally, the flow rate vs. frequency characteristic curve is obtained by means of an FFT routine on a cell phone. When a person respires through the device, a whistle sound is generated that is captured by the cell phone microphone, and an FFT analysis is performed to determine the frequency and hence the flow rate from the characteristic curve. This approach can detect flow rates as low as 1 L/min. The concept has been applied for the first time in this work to the development and optimization of a breath analyzer.
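The phone-side processing reduces to two steps: find the dominant frequency of the whistle tone via an FFT, then read the flow rate off the measured characteristic curve. A minimal sketch; the calibration points are hypothetical, since the actual curve must be measured for the particular whistle:

```python
import numpy as np

def dominant_frequency(x, sr):
    """Peak of the windowed magnitude spectrum, as the phone-side FFT
    stage would estimate the whistle pitch (Hz)."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    return np.argmax(spec) * sr / len(x)

def flow_rate_from_frequency(freq_hz, cal_freqs, cal_flows):
    """Interpolate the (monotonic) calibrated frequency-vs-flow-rate
    characteristic curve to convert whistle pitch to flow rate."""
    return np.interp(freq_hz, cal_freqs, cal_flows)
```

With a monotonic characteristic curve, one FFT per analysis window is enough to track the flow rate in real time.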

Date Created
2012

Degraded vowel acoustics and the perceptual consequences in dysarthria

Description

Distorted vowel production is a hallmark characteristic of dysarthric speech, irrespective of the underlying neurological condition or dysarthria diagnosis. A variety of acoustic metrics have been used to study the nature of vowel production deficits in dysarthria; however, not all demonstrate sensitivity to the exhibited deficits. Less attention has been paid to quantifying the vowel production deficits associated with the specific dysarthrias. Attempts to characterize the relationship between naturally degraded vowel production in dysarthria and overall intelligibility have met with mixed results, leading some to question the nature of this relationship. It has been suggested that aberrant vowel acoustics may be an index of overall severity of the impairment and not an "integral component" of the intelligibility deficit. A limitation of previous work detailing perceptual consequences of disordered vowel acoustics is that overall intelligibility, not vowel identification accuracy, has been the perceptual measure of interest. A series of three experiments was conducted to address the problems outlined herein. The goals of the first experiment were to identify subsets of vowel metrics that reliably distinguish speakers with dysarthria from non-disordered speakers and differentiate the dysarthria subtypes. Vowel metrics that capture vowel centralization and reduced spectral distinctiveness among vowels differentiated dysarthric from non-disordered speakers. Vowel metrics generally failed to differentiate speakers according to their dysarthria diagnosis. The second and third experiments were conducted to evaluate the relationship between degraded vowel acoustics and the resulting percept. In the second experiment, correlation and regression analyses revealed that vowel metrics capturing vowel centralization and distinctiveness, along with movement of the second formant frequency, were most predictive of vowel identification accuracy and overall intelligibility.
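As an example of a centralization-sensitive metric of the kind evaluated here, the vowel space area can be computed from the corner-vowel formants with the shoelace formula; centralized (dysarthric) vowels shrink this area. The formant values in the usage below are illustrative, and this is only one of many candidate metrics:

```python
import numpy as np

def vowel_space_area(corner_formants):
    """Area (shoelace formula) of the quadrilateral traced in the
    F1-F2 plane by the corner vowels, given as (F1, F2) pairs in
    order around the quadrilateral."""
    pts = np.asarray(corner_formants, float)
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
```

Shrinking every corner vowel halfway toward the centroid, a crude model of centralization, reduces the area to one quarter of its original value.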
The third experiment was conducted to evaluate the extent to which the nature of the acoustic degradation predicts the resulting percept. Results suggest distinctive vowel tokens are better identified and, likewise, better-identified tokens are more distinctive. Further, above-chance agreement between the nature of vowel misclassification and misidentification errors was demonstrated for all vowels, suggesting degraded vowel acoustics are not merely an index of severity in dysarthria, but rather an integral component of the resultant intelligibility disorder.

Date Created
2012

Psychophysical and neural correlates of auditory attraction and aversion

Description

This study explores the psychophysical and neural processes associated with the perception of sounds as either pleasant or aversive. The underlying psychophysical theory is based on auditory scene analysis, the process through which listeners parse auditory signals into individual acoustic sources. The first experiment tests and confirms that a self-rated pleasantness continuum reliably exists for 20 varied stimuli (r = .48). In addition, the pleasantness continuum correlated with the physical acoustic characteristics of consonance/dissonance (r = .78), which can facilitate auditory parsing processes. The second experiment uses an fMRI block design to test blood oxygen level dependent (BOLD) changes elicited by a subset of 5 exemplar stimuli chosen from Experiment 1 that are evenly distributed over the pleasantness continuum. Specifically, it tests and confirms that the pleasantness continuum produces systematic changes in brain activity for unpleasant acoustic stimuli beyond what occurs with pleasant auditory stimuli. Results revealed that the combination of two positively and two negatively valenced experimental sounds, compared to one neutral baseline control, elicited BOLD increases in the primary auditory cortex, specifically the bilateral superior temporal gyrus, and the left dorsomedial prefrontal cortex; the latter is consistent with a frontal decision-making process common in identification tasks. The negatively-valenced stimuli yielded additional BOLD increases in the left insula, which typically indicates processing of visceral emotions. The positively-valenced stimuli did not yield any significant BOLD activation, consistent with consonant, harmonic stimuli being the prototypical acoustic pattern of auditory objects that is optimal for auditory scene analysis.
Both the psychophysical findings of Experiment 1 and the neural processing findings of Experiment 2 support that consonance is an important dimension of sound that is processed in a manner that aids auditory parsing and functional representation of acoustic objects and was found to be a principal feature of pleasing auditory stimuli.

Date Created
2014