This collection includes both ASU Theses and Dissertations, submitted by graduate students, and Barrett, The Honors College theses, submitted by undergraduate students.

Description

Audio signals, such as speech and ambient sounds, convey rich information about a user’s activity, mood, or intent. Enabling machines to understand this contextual information is necessary to bridge the gap in human-machine interaction. This is challenging due to its subjective nature and hence requires sophisticated techniques. This dissertation presents a set of computational methods that generalize well across different conditions for speech-based applications, such as emotion recognition and keyword detection, and for ambient-sound-based applications such as lifelogging.

The expression and perception of emotions vary across speakers and cultures; features and classification methods that generalize well to different conditions are therefore strongly desired. A method based on latent topic models is proposed to learn supra-segmental features from low-level acoustic descriptors. The derived features outperform state-of-the-art approaches on multiple databases. Cross-corpus studies are conducted to determine how well these features generalize across databases. The proposed method is also applied to derive features from facial expressions; a multi-modal fusion overcomes the deficiencies of a speech-only approach and further improves recognition performance.
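A pipeline of this general shape can be sketched as follows. This is a minimal illustration on synthetic data with assumed parameters (a 32-word acoustic codebook, 5 topics), using scikit-learn's KMeans and LatentDirichletAllocation as stand-ins; it is not the dissertation's actual implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)

# Toy stand-in for frame-level low-level descriptors (e.g. MFCC frames):
# 20 utterances, each a variable-length sequence of 13-dim frames.
utterances = [rng.normal(size=(rng.integers(50, 100), 13)) for _ in range(20)]

# 1) Quantize all frames into an "acoustic word" codebook.
all_frames = np.vstack(utterances)
codebook = KMeans(n_clusters=32, n_init=5, random_state=0).fit(all_frames)

# 2) Represent each utterance as a bag of acoustic words (count vector).
counts = np.array([
    np.bincount(codebook.predict(u), minlength=32) for u in utterances
])

# 3) Fit a latent topic model; the per-utterance topic proportions serve as
#    supra-segmental features for a downstream emotion classifier.
lda = LatentDirichletAllocation(n_components=5, random_state=0)
features = lda.fit_transform(counts)  # one topic distribution per utterance
```

The utterance-level topic proportions summarize the whole segment, which is what makes them supra-segmental relative to the frame-level descriptors they are built from.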

Besides affecting the acoustic properties of speech, emotions strongly influence speech articulation kinematics. A learning approach is proposed that constrains a classifier trained on acoustic descriptors to also model articulatory data. This method requires articulatory information only during the training stage, overcoming the challenges inherent to large-scale articulatory data collection while exploiting the correlations between articulation kinematics and acoustic descriptors to improve the accuracy of emotion recognition systems.
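One common way to realize such a training-time constraint is multi-task learning over a shared representation. The NumPy sketch below (toy synthetic data; the dimensions, weighting lambda, and model form are all assumptions, not the dissertation's actual model) trains an emotion classifier jointly with an articulatory regression head that is simply discarded at inference.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data: X = acoustic descriptors, y = binary emotion label,
# A = articulatory measurements (available only at training time).
n, d_x, d_a, d_h = 200, 10, 4, 6
X = rng.normal(size=(n, d_x))
y = (X @ rng.normal(size=d_x) > 0).astype(float)
A = X @ rng.normal(size=(d_x, d_a)) + 0.1 * rng.normal(size=(n, d_a))

W = 0.1 * rng.normal(size=(d_x, d_h))  # shared acoustic representation
u = np.zeros(d_h)                      # emotion head (used at test time)
V = np.zeros((d_h, d_a))               # articulatory head (training only)
lam, lr = 0.5, 0.1

for _ in range(500):
    h = X @ W
    p = 1.0 / (1.0 + np.exp(-(h @ u)))  # P(emotion = 1)
    r = h @ V - A                       # articulatory reconstruction error
    g = (p - y) / n                     # cross-entropy gradient wrt logits
    # Joint loss: cross-entropy + lam * mean squared articulatory error.
    W -= lr * (X.T @ (np.outer(g, u) + lam * (2 / n) * (r @ V.T)))
    u -= lr * (h.T @ g)
    V -= lr * lam * (2 / n) * (h.T @ r)

# Inference uses the acoustic path alone; no articulatory data is required.
pred = ((X @ W) @ u > 0).astype(float)
accuracy = (pred == y).mean()
```

The articulatory loss only shapes the shared weights `W` during training, which is what lets the deployed classifier benefit from articulatory correlations without ever seeing articulatory input.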

Identifying context from ambient sounds in a lifelogging scenario requires feature extraction, segmentation, and annotation techniques capable of efficiently handling long-duration audio recordings; a complete framework for such applications is presented. Its performance is evaluated on real-world data, and it is accompanied by a prototypical Android-based user interface.
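As a rough illustration of the segmentation step, a short-time-energy detector can split a long recording into active segments. This is an assumed, simplified stand-in for the framework's actual segmentation, with hypothetical frame and threshold parameters.

```python
import numpy as np

def segment_by_energy(signal, sr, frame_ms=25, hop_ms=10, threshold_db=-30):
    """Split a long recording into active (start, end) sample ranges
    by thresholding short-time energy relative to the loudest frame."""
    frame = int(sr * frame_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    n_frames = 1 + max(0, (len(signal) - frame) // hop)
    energy = np.array([
        np.mean(signal[i * hop: i * hop + frame] ** 2) for i in range(n_frames)
    ])
    ref = energy.max() + 1e-12
    active = 10 * np.log10(energy / ref + 1e-12) > threshold_db
    # Merge consecutive active frames into (start, end) sample segments.
    segments, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i * hop
        elif not a and start is not None:
            segments.append((start, i * hop + frame))
            start = None
    if start is not None:
        segments.append((start, len(signal)))
    return segments
```

Because only frame-level energies are kept in memory, a detector of this kind scales to the long recordings a lifelogging application produces; the resulting segments are then candidates for feature extraction and annotation.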

The proposed methods are also assessed in terms of computational and implementation complexity. Software and field-programmable gate array (FPGA) based implementations are considered for emotion recognition, while virtual platforms are used to model the complexities of lifelogging. The derived metrics are used to determine the feasibility of these methods for applications requiring real-time operation and low power consumption.
Contributors: Shah, Mohit (Author) / Spanias, Andreas (Thesis advisor) / Chakrabarti, Chaitali (Thesis advisor) / Berisha, Visar (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2015
Description

As the application of interactive media systems expands to address broader problems in health, education, and creative practice, these systems fall within a higher-dimensional space that is inherently more complex to design for. In response to this need, an emerging area of interactive system design, referred to as experiential media systems, applies hybrid knowledge synthesized across multiple disciplines to address challenges relevant to daily experience. Interactive neurorehabilitation (INR) aims to enhance functional movement therapy by integrating detailed motion capture with interactive feedback in a manner that facilitates engagement and sensorimotor learning for those who have suffered neurologic injury. While INR shows great promise to advance the current state of therapies, a cohesive media design methodology for INR is missing due to the present lack of substantial evidence within the field.

Using an experiential media-based approach to draw knowledge from external disciplines, this dissertation proposes a compositional framework for authoring visual media for INR systems across contexts and applications within upper-extremity stroke rehabilitation. The compositional framework is applied across systems for supervised training, unsupervised training, and assisted reflection, which reflect the collective work of the Adaptive Mixed Reality Rehabilitation (AMRR) Team at Arizona State University, of which the author is a member. Formal structures and a methodology for applying them are described in detail for the visual media environments designed by the author. Data collected from studies conducted by the AMRR team to evaluate these systems in both supervised and unsupervised training contexts are also discussed, in terms of the extent to which they support the application of the compositional framework and which aspects require further investigation.
The potential broader implications of the proposed compositional framework and methodology are the dissemination of interdisciplinary information to accelerate the informed development of INR applications, and the demonstration of the potential benefit of generalizing integrative approaches, merging arts- and science-based knowledge, to other complex problems related to embodied learning.
Contributors: Lehrer, Nicole (Author) / Rikakis, Thanassis (Committee member) / Olson, Loren (Committee member) / Wolf, Steven L. (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2014