Matching Items (3,414)
Description
The processing power and storage capacity of portable devices have improved considerably over the past decade. This has motivated the implementation of sophisticated audio and other signal processing algorithms on such mobile devices. Of particular interest in this thesis is audio/speech processing based on perceptual criteria. Specifically, the estimation of parameters from human auditory models, such as auditory patterns and loudness, involves computationally intensive operations that can strain device resources. Hence, strategies for implementing computationally efficient human auditory models for loudness estimation have been studied in this thesis. Existing algorithms for reducing computations in auditory pattern and loudness estimation have been examined, and improved algorithms have been proposed to overcome the limitations of these methods. In addition, real-time applications such as perceptual loudness estimation and loudness equalization using auditory models have also been implemented. A software implementation of loudness estimation on iOS devices is also reported in this thesis.

In addition to the loudness estimation algorithms and software, this thesis project also created new illustrations of speech and audio processing concepts for research and education. As a result, a new suite of speech/audio DSP functions was developed and integrated into the award-winning educational iOS app 'iJDSP'. These functions are described in detail in this thesis. Several enhancements to the architecture of the application have also been introduced to provide the supporting framework for speech/audio processing. Frame-by-frame processing and visualization functionalities have been developed to facilitate speech/audio processing. In addition, facilities for easy sound recording, processing and audio rendering have been developed to provide students, practitioners and researchers with an enriched DSP simulation tool. Simulations and assessments have also been developed for use in classes and in the training of practitioners and students.
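As a rough illustration of the kind of per-frame computation a loudness estimator performs, the sketch below frames a signal, groups FFT bins onto an ERB-spaced scale, applies a compressive nonlinearity per band, and sums across bands. It is a heavily simplified stand-in, not the auditory model or the iJDSP code described in the thesis; the band spacing, the 0.23 exponent and all parameter values are assumptions.

```python
# Illustrative sketch only: a heavily simplified frame-based loudness proxy
# inspired by excitation-pattern loudness models (not the thesis model).
import numpy as np

def erb_center_frequencies(n_bands=40, f_lo=50.0, f_hi=15000.0):
    """Center frequencies uniformly spaced on the ERB-rate scale."""
    erb = lambda f: 21.4 * np.log10(4.37e-3 * f + 1.0)      # Hz -> ERB number
    erb_inv = lambda e: (10 ** (e / 21.4) - 1.0) / 4.37e-3   # ERB number -> Hz
    return erb_inv(np.linspace(erb(f_lo), erb(f_hi), n_bands))

def frame_loudness(x, fs, frame_len=2048, hop=1024, n_bands=40):
    """Crude per-frame loudness proxy: compressed band powers summed over bands."""
    centers = erb_center_frequencies(n_bands)
    freqs = np.fft.rfftfreq(frame_len, 1.0 / fs)
    # Assign each FFT bin to the nearest ERB-spaced band center.
    band_of_bin = np.argmin(np.abs(freqs[:, None] - centers[None, :]), axis=1)
    window = np.hanning(frame_len)
    loudness = []
    for start in range(0, len(x) - frame_len + 1, hop):
        spec = np.abs(np.fft.rfft(window * x[start:start + frame_len])) ** 2
        band_power = np.bincount(band_of_bin, weights=spec, minlength=n_bands)
        # Stevens-style compressive nonlinearity per band, then sum across bands.
        loudness.append(np.sum(band_power ** 0.23))
    return np.array(loudness)

if __name__ == "__main__":
    fs = 16000
    t = np.arange(fs) / fs
    x = 0.1 * np.sin(2 * np.pi * 440.0 * t)   # hypothetical test tone
    print(frame_loudness(x, fs)[:5])
```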
Contributors: Kalyanasundaram, Girish (Author) / Spanias, Andreas S (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Berisha, Visar (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The recent spotlight on concussion has illuminated deficits in the current standard of care with regard to addressing acute and persistent cognitive signs and symptoms of mild brain injury. This stems, in part, from the diffuse nature of the injury, which tends not to produce focal cognitive or behavioral deficits that are easily identified or tracked. Indeed, it has been shown that patients with enduring symptoms have difficulty describing their problems; therefore, there is an urgent need for a sensitive measure of brain activity that corresponds with higher-order cognitive processing. The development of a neurophysiological metric that maps to clinical resolution would inform decisions about diagnosis and prognosis, including the need for clinical intervention to address cognitive deficits. The literature suggests the need for assessment of concussion under cognitively demanding tasks.

Here, a joint behavioral and high-density electroencephalography (EEG) paradigm was employed. This allows for the examination of cortical activity patterns during speech comprehension at various levels of degradation in a sentence verification task, imposing the need for higher-order cognitive processes. Eight participants with concussion listened to true-false sentences produced with either moderately or highly intelligible noise vocoders; behavioral data were collected simultaneously. The analysis of cortical activation patterns included 1) the examination of event-related potentials, including latency and source localization, and 2) measures of frequency spectra and associated power. Individual performance patterns were assessed during acute injury and at a return visit several months following injury. The results demonstrate that a combination of task-related electrophysiology measures corresponds to changes in task performance during the course of recovery. Further, a discriminant function analysis suggests that EEG measures are more sensitive than behavioral measures in distinguishing between individuals with concussion and healthy controls at both injury and recovery, indicating the robustness of neurophysiological measures obtained during a cognitively demanding task to both injury and persisting pathophysiology.
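For readers unfamiliar with this style of analysis, the sketch below shows, in simplified form, how band-power features can be computed from EEG epochs and fed to a linear discriminant analysis. The data, channel counts, frequency bands and classifier settings are hypothetical; this is not the thesis pipeline.

```python
# Illustrative sketch only: band-power features from EEG epochs followed by a
# linear discriminant analysis, loosely mirroring the frequency-spectra and
# discriminant analyses described above. All data and settings are assumed.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # Hz, assumed

def band_power_features(epochs, fs):
    """epochs: array (n_epochs, n_channels, n_samples) -> (n_epochs, n_features)."""
    n_epochs, n_channels, _ = epochs.shape
    feats = np.zeros((n_epochs, n_channels * len(BANDS)))
    for i in range(n_epochs):
        f, psd = welch(epochs[i], fs=fs, nperseg=256, axis=-1)
        cols = []
        for lo, hi in BANDS.values():
            mask = (f >= lo) & (f < hi)
            cols.append(psd[:, mask].mean(axis=-1))   # mean band power per channel
        feats[i] = np.concatenate(cols)
    return np.log(feats + 1e-12)                       # log power is customary

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fs = 250
    epochs = rng.standard_normal((40, 32, fs * 2))     # synthetic EEG epochs
    labels = np.repeat([0, 1], 20)                     # e.g., control vs concussion
    lda = LinearDiscriminantAnalysis().fit(band_power_features(epochs, fs), labels)
    print("training accuracy:", lda.score(band_power_features(epochs, fs), labels))
```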
Contributors: Utianski, Rene (Author) / Liss, Julie M (Thesis advisor) / Berisha, Visar (Committee member) / Caviness, John N (Committee member) / Dorman, Michael (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Everyday speech communication typically takes place face-to-face. Accordingly, the task of perceiving speech is a multisensory phenomenon involving both auditory and visual information. The current investigation examines how visual information influences recognition of dysarthric speech. It also explores whether the influence of visual information depends upon age. Forty adults participated in the study, which measured intelligibility (percent words correct) of dysarthric speech in auditory versus audiovisual conditions. Participants were then separated into two groups, older adults (age range 47 to 68) and young adults (age range 19 to 36), to examine the influence of age. Findings revealed that all participants, regardless of age, improved their ability to recognize dysarthric speech when visual speech was added to the auditory signal. The magnitude of this benefit, however, was greater for older adults than for younger adults. These results inform our understanding of how visual speech information influences understanding of dysarthric speech.
Contributors: Fall, Elizabeth (Author) / Liss, Julie (Thesis advisor) / Berisha, Visar (Committee member) / Gray, Shelley (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
The problem of cooperative radar and communications signaling is investigated. Each system typically considers the other a source of interference; consequently, the tradition has been to operate them in orthogonal frequency bands. By treating the radar and communications operations as a single joint system, performance bounds are derived for a receiver that observes communications and radar returns in the same frequency allocation. Performance of the joint system is measured in terms of the data information rate for communications and the estimation information rate for the radar. Inner bounds on performance are also constructed.
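The sketch below is a toy calculation, not the bound construction from the thesis: it evaluates two standard ingredients of joint radar-communications analyses, the Shannon data rate of the communications link and the Cramér-Rao lower bound on radar delay estimation, under assumed bandwidth, SNR and waveform parameters.

```python
# Toy calculation only (assumed parameters): two standard quantities that feed
# into joint radar-communications analyses -- the communications data rate and
# the Cramer-Rao lower bound (CRLB) on radar delay estimation. This is not the
# specific bound construction from the thesis.
import numpy as np

B = 5e6               # shared bandwidth in Hz (assumed)
snr_comm_db = 10.0    # communications receive SNR in dB (assumed)
snr_radar_db = 15.0   # integrated radar-return SNR in dB (assumed)
beta_rms = 0.3 * B    # RMS bandwidth of the radar waveform in Hz (assumed)

snr_comm = 10 ** (snr_comm_db / 10)
snr_radar = 10 ** (snr_radar_db / 10)

# Shannon rate for the communications link over the shared band.
data_rate = B * np.log2(1 + snr_comm)                  # bits per second

# CRLB on time-delay estimation for a waveform with RMS bandwidth beta_rms.
delay_var = 1.0 / (8 * np.pi ** 2 * beta_rms ** 2 * snr_radar)
range_std = 3e8 * np.sqrt(delay_var) / 2               # meters (two-way delay)

print(f"communications rate: {data_rate / 1e6:.1f} Mb/s")
print(f"radar range std (CRLB): {range_std:.2f} m")
```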
Contributors: Chiriyath, Alex (Author) / Bliss, Daniel W (Thesis advisor) / Kosut, Oliver (Committee member) / Berisha, Visar (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Head movement is known to improve the accuracy of sound localization for humans and animals. The marmoset is a small-bodied New World monkey species that has become an emerging model for studying auditory function. This thesis aims to detect the horizontal and vertical rotation of head movement in marmoset monkeys.

Experiments were conducted in a sound-attenuated acoustic chamber. Head movement of the marmoset monkeys was studied under various auditory and visual stimulation conditions. With increasing complexity, these conditions were (1) idle, (2) sound alone, (3) sound and visual signals, and (4) an alert signal produced by opening and closing the chamber door. All of these conditions were tested with the house light either on or off. An infrared camera with a frame rate of 90 Hz was used to capture the head movement of the monkeys. To assist the signal detection, two circular markers were attached to the top of the monkey's head. The data analysis used an image-based marker detection scheme. Images were processed using the Computer Vision Toolbox in MATLAB. The markers and their positions were detected using blob detection techniques. Based on the frame-by-frame information of marker positions, the angular position, velocity and acceleration were extracted in the horizontal and vertical planes. Adaptive Otsu thresholding, Kalman filtering and bound setting for marker properties were used to overcome a number of challenges encountered during this analysis, such as finding the image segmentation threshold, continuously tracking markers during large head movements, and rejecting false detections.
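The sketch below illustrates this style of processing in simplified form: an Otsu threshold, connected-component blob detection, and a constant-velocity Kalman filter for a single marker. It is not the MATLAB pipeline used in the thesis; frame sizes, noise parameters and the minimum blob area are assumptions.

```python
# Illustrative sketch only: Otsu thresholding, blob detection via connected
# components, and a constant-velocity Kalman filter for one marker centroid.
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def detect_marker(frame, min_area=20):
    """Return the (row, col) centroid of the largest bright blob, or None."""
    mask = frame > threshold_otsu(frame)           # adaptive Otsu threshold
    labels, n = ndimage.label(mask)                # connected-component blobs
    if n == 0:
        return None
    areas = ndimage.sum(mask, labels, index=range(1, n + 1))
    best = int(np.argmax(areas)) + 1
    if areas[best - 1] < min_area:                 # reject small false alarms
        return None
    return np.array(ndimage.center_of_mass(mask, labels, best))

class ConstantVelocityKF:
    """Minimal 2-D constant-velocity Kalman filter for one marker position."""
    def __init__(self, dt=1 / 90.0, q=50.0, r=2.0):
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.zeros((2, 4)); self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = q * np.eye(4); self.R = r * np.eye(2)
        self.x = np.zeros(4); self.P = 1e3 * np.eye(4)

    def step(self, z):
        # Predict, then update if a measurement is available (z may be None).
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        if z is not None:
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + K @ (z - self.H @ self.x)
            self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                           # filtered (row, col) position

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.random((120, 160)) * 0.2
    frame[40:48, 60:68] = 1.0                       # synthetic bright marker
    kf = ConstantVelocityKF()
    print("smoothed centroid:", kf.step(detect_marker(frame)))
```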

The results show that the blob detection method together with Kalman filtering yielded better performance than other image-based techniques such as optical flow and SURF features. The median of the maximal head turn in the horizontal plane was in the range of 20 to 70 degrees, and the median of the maximal velocity in the horizontal plane was in the range of a few hundred degrees per second. In comparison, the natural alert signal (door opening and closing) evoked faster head turns than the other stimulus conditions. These results suggest that behaviorally relevant stimuli such as alert signals evoke faster head-turn responses in marmoset monkeys.
Contributors: Simhadri, Sravanthi (Author) / Zhou, Yi (Thesis advisor) / Turaga, Pavan (Thesis advisor) / Berisha, Visar (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
As the number of devices with wireless capabilities and their proximity to each other increase, better ways to handle the interference they cause need to be explored. It is also important for these devices to keep up with the demand for data rates while not compromising on industry-established expectations of power consumption and mobility. Current methods of distributing the spectrum among all participants are not expected to cope with the demand in the very near future. In this thesis, the effect of employing sophisticated multiple-input, multiple-output (MIMO) systems in this regard is explored. Systems that can make intelligent decisions on transmission mode usage and power allocation to these modes become relevant in the current scenario, where the need for performance far exceeds the cost expendable on hardware. The effect of adding multiple antennas at either end is examined, and the capacity of such systems and of networks comprising many such participants is evaluated. Methods of simulating these networks, and ways to achieve better performance by making intelligent transmission decisions, are proposed. Finally, a form of access control closer to the physical layer (a 'statistical MAC') and a possible metric to be used for such a MAC are suggested.
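As background for the capacity evaluations mentioned above, the sketch below computes the capacity of a single point-to-point MIMO link with water-filling power allocation over the channel eigenmodes. Antenna counts, noise level and power budget are assumed values, and this textbook calculation is not the network-level scheme studied in the thesis.

```python
# Illustrative sketch only: point-to-point MIMO capacity with water-filling
# power allocation over the channel eigenmodes (textbook calculation).
import numpy as np

def waterfill(gains, total_power):
    """Allocate total_power across eigenmode gains (singular value^2 / noise)."""
    gains = np.sort(gains)[::-1]
    for k in range(len(gains), 0, -1):
        # Candidate water level using the k strongest modes.
        mu = (total_power + np.sum(1.0 / gains[:k])) / k
        p = mu - 1.0 / gains[:k]
        if np.all(p > 0):
            alloc = np.zeros(len(gains)); alloc[:k] = p
            return alloc, gains
    return np.zeros(len(gains)), gains

def mimo_capacity(H, total_power=1.0, noise_power=1.0):
    """Capacity in bits/s/Hz of y = Hx + n under a sum power constraint."""
    s = np.linalg.svd(H, compute_uv=False)
    gains = s ** 2 / noise_power
    p, g = waterfill(gains, total_power)
    return float(np.sum(np.log2(1.0 + p * g)))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
    print(f"4x4 MIMO capacity: {mimo_capacity(H, total_power=10.0):.2f} bits/s/Hz")
```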
Contributors: Thontadarya, Niranjan (Author) / Bliss, Daniel W (Thesis advisor) / Berisha, Visar (Committee member) / Ying, Lei (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Audio signals, such as speech and ambient sounds, convey rich information pertaining to a user’s activity, mood or intent. Enabling machines to understand this contextual information is necessary to bridge the gap in human-machine interaction. This is challenging due to its subjective nature and hence requires sophisticated techniques. This dissertation presents a set of computational methods that generalize well across different conditions, for speech-based applications involving emotion recognition and keyword detection, and for ambient-sound-based applications such as lifelogging.

The expression and perception of emotions vary across speakers and cultures; thus, features and classification methods that generalize well to different conditions are strongly desired. A method based on latent topic models is proposed to learn supra-segmental features from low-level acoustic descriptors. The derived features outperform state-of-the-art approaches over multiple databases. Cross-corpus studies are conducted to determine the ability of these features to generalize well across different databases. The proposed method is also applied to derive features from facial expressions; a multi-modal fusion overcomes the deficiencies of a speech-only approach and further improves recognition performance.
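A simplified sketch of this idea is given below: low-level descriptors are quantized into an acoustic vocabulary with k-means, each utterance is represented as a bag of acoustic words, and latent Dirichlet allocation yields topic posteriors as supra-segmental features. The vocabulary size, topic count and descriptor dimensions are assumptions, not the settings used in the dissertation.

```python
# Illustrative sketch only: utterance-level "topic" features from low-level
# acoustic descriptors via vector quantization and latent Dirichlet allocation.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation

def topic_features(utterances, n_words=64, n_topics=8, seed=0):
    """utterances: list of (n_frames, n_descriptors) arrays -> (n_utts, n_topics)."""
    all_frames = np.vstack(utterances)
    km = KMeans(n_clusters=n_words, random_state=seed, n_init=10).fit(all_frames)
    # Bag-of-acoustic-words count vector per utterance.
    counts = np.stack([
        np.bincount(km.predict(u), minlength=n_words) for u in utterances
    ])
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=seed)
    return lda.fit_transform(counts)   # topic posteriors as supra-segmental features

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    utts = [rng.standard_normal((rng.integers(80, 200), 13)) for _ in range(10)]
    print(topic_features(utts).shape)  # (10, 8)
```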

Besides affecting the acoustic properties of speech, emotions have a strong influence over speech articulation kinematics. A learning approach that constrains a classifier trained on acoustic descriptors to also model articulatory data is proposed here. This method requires articulatory information only during the training stage, thus overcoming the challenges inherent to large-scale data collection, while simultaneously exploiting the correlations between articulation kinematics and acoustic descriptors to improve the accuracy of emotion recognition systems.

Identifying context from ambient sounds in a lifelogging scenario requires feature extraction, segmentation and annotation techniques capable of efficiently handling long-duration audio recordings; a complete framework for such applications is presented. The performance is evaluated on real-world data and accompanied by a prototypical Android-based user interface.

The proposed methods are also assessed in terms of computation and implementation complexity. Software and field-programmable gate array (FPGA) based implementations are considered for emotion recognition, while virtual platforms are used to model the complexities of lifelogging. The derived metrics are used to determine the feasibility of these methods for applications requiring real-time capabilities and low power consumption.
Contributors: Shah, Mohit (Author) / Spanias, Andreas (Thesis advisor) / Chakrabarti, Chaitali (Thesis advisor) / Berisha, Visar (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
The present study describes audiovisual sentence recognition in normal-hearing listeners, bimodal cochlear implant (CI) listeners and bilateral CI listeners. This study explores a new set of sentences (the AzAV sentences) that were created to have equal auditory intelligibility and equal gain from visual information.

The aims of Experiment I were to (i) compare the lip-reading difficulty of the AzAV sentences to that of other sentence materials, (ii) compare the speech-reading ability of CI listeners to that of normal-hearing listeners and (iii) assess the gain in speech understanding when listeners have both auditory and visual information from easy-to-lip-read and difficult-to-lip-read sentences. In addition, the sentence lists were subjected to a multi-level text analysis to determine the factors that make sentences easy or difficult to speech-read.

The results of Experiment I showed that (i) the AzAV sentences were relatively difficult to lip-read, (ii) CI listeners and normal-hearing listeners did not differ in lip-reading ability and (iii) sentences with low lip-reading intelligibility (10-15% correct) provide about a 30 percentage point improvement in speech understanding when added to the acoustic stimulus, while sentences with high lip-reading intelligibility (30-60% correct) provide about a 50 percentage point improvement in the same comparison. The multi-level text analyses showed that the familiarity of phrases in the sentences was the primary factor affecting lip-reading difficulty.

The aim of Experiment II was to investigate the value, when visual information is present, of bimodal hearing and bilateral cochlear implants. The results of Experiment II showed that when visual information is present, low-frequency acoustic hearing can be of value to speech understanding for patients fit with a single CI. However, when visual information was available, no gain was seen from the provision of a second CI, i.e., bilateral CIs. As was the case in Experiment I, visual information provided about a 30 percentage point improvement in speech understanding.
Contributors: Wang, Shuai (Author) / Dorman, Michael (Thesis advisor) / Berisha, Visar (Committee member) / Liss, Julie (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
This work considers the problem of multiple-target detection and tracking in two complex time-varying environments, urban terrain and underwater. Tracking multiple radar targets in urban environments is first investigated by exploiting multipath signal returns, wideband underwater acoustic (UWA) communications channels are estimated using adaptive learning methods, and multiple UWA communications users are detected by designing the transmit signal to match the environment. For the urban environment, a multi-target tracking algorithm is proposed that integrates multipath-to-measurement association and the probability hypothesis density method implemented using particle filtering. The algorithm is designed to track an unknown, time-varying number of targets by extracting information from multiple measurements due to multipath returns in the urban terrain. The path likelihood probability is calculated by considering associations between measurements and multipath returns, and an adaptive clustering algorithm is used to estimate the number of targets and their corresponding parameters. The performance of the proposed algorithm is demonstrated for different multiple-target scenarios and evaluated using the optimal subpattern assignment metric.

The underwater environment provides a very challenging communication channel due to its highly time-varying nature, resulting in large distortions due to multipath and Doppler scaling, and frequency-dependent path loss. A model-based wideband UWA channel estimation algorithm is first proposed to estimate the channel support and the wideband spreading function coefficients. A nonlinear frequency-modulated signaling scheme is proposed that is matched to the wideband characteristics of the underwater environment. Constraints on the signal parameters are derived to optimally reduce multiple-access interference and UWA channel effects. The signaling scheme is compared to a code division multiple access (CDMA) scheme to demonstrate its improved bit error rate performance. The overall multi-user communication system performance is finally analyzed by first estimating the UWA channel and then designing the signaling scheme for multiple communications users.
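As a reference for the evaluation metric mentioned above, the sketch below implements a generic optimal subpattern assignment (OSPA) distance for 2-D point estimates, with an assumed cutoff and order; it is a textbook formulation, not the evaluation code from the thesis.

```python
# Illustrative sketch only: the optimal subpattern assignment (OSPA) metric
# for comparing a set of true target positions with a set of estimates.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def ospa(X, Y, c=10.0, p=2):
    """OSPA distance between truth set X (m, d) and estimate set Y (n, d)."""
    m, n = len(X), len(Y)
    if m == 0 and n == 0:
        return 0.0
    if m == 0 or n == 0:
        return c                                      # pure cardinality error
    if m > n:                                         # keep m <= n by symmetry
        X, Y, m, n = Y, X, n, m
    D = np.minimum(cdist(X, Y), c) ** p               # cutoff distance matrix
    rows, cols = linear_sum_assignment(D)             # optimal assignment
    localization = D[rows, cols].sum()
    cardinality = (c ** p) * (n - m)
    return float(((localization + cardinality) / n) ** (1.0 / p))

if __name__ == "__main__":
    truth = np.array([[0.0, 0.0], [5.0, 5.0]])
    estimate = np.array([[0.3, -0.2], [5.5, 4.8], [20.0, 20.0]])  # one false track
    print(f"OSPA: {ospa(truth, estimate):.2f}")
```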
Contributors: Zhou, Meng (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Kovvali, Narayan (Committee member) / Berisha, Visar (Committee member) / Arizona State University (Publisher)
Created: 2014