This collection includes both ASU Theses and Dissertations, submitted by graduate students, and Barrett, The Honors College theses, submitted by undergraduate students.

Description
Audio signals, such as speech and ambient sounds, convey rich information pertaining to a user's activity, mood, or intent. Enabling machines to understand this contextual information is necessary to bridge the gap in human-machine interaction. This is challenging due to the subjective nature of such information and therefore requires sophisticated techniques. This dissertation presents a set of computational methods that generalize well across different conditions, covering speech-based applications such as emotion recognition and keyword detection as well as ambient-sound applications such as lifelogging.

The expression and perception of emotions vary across speakers and cultures, so features and classification methods that generalize well to different conditions are strongly desired. A latent topic model-based method is proposed to learn supra-segmental features from low-level acoustic descriptors. The derived features outperform state-of-the-art approaches over multiple databases. Cross-corpus studies are conducted to determine how well these features generalize across different databases. The proposed method is also applied to derive features from facial expressions; a multi-modal fusion overcomes the deficiencies of a speech-only approach and further improves recognition performance.
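As a rough illustration of this idea, the sketch below treats each utterance as a bag of quantized low-level descriptors, fits a topic model over the resulting codeword counts, and uses the per-utterance topic posterior as a supra-segmental feature vector. The codebook size, number of topics, and downstream classifier are illustrative assumptions, not the configuration used in the dissertation.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.svm import SVC

def topic_features(utterance_frames, n_codewords=64, n_topics=16):
    # utterance_frames: list of (n_frames_i, n_descriptors) arrays, one per utterance
    codebook = KMeans(n_clusters=n_codewords, random_state=0).fit(np.vstack(utterance_frames))
    # bag-of-codewords count vector per utterance
    counts = np.array([np.bincount(codebook.predict(f), minlength=n_codewords)
                       for f in utterance_frames])
    # per-utterance topic posteriors serve as supra-segmental features
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    return lda.fit_transform(counts)

# Hypothetical usage: X = topic_features(train_frames); SVC().fit(X, emotion_labels)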

Besides affecting the acoustic properties of speech, emotions have a strong influence on speech articulation kinematics. A learning approach is proposed that constrains a classifier trained on acoustic descriptors to also model articulatory data. This method requires articulatory information only during the training stage, thus overcoming the challenges inherent in large-scale articulatory data collection, while exploiting the correlations between articulation kinematics and acoustic descriptors to improve the accuracy of emotion recognition systems.
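A minimal sketch of this train-time-only use of articulatory data is shown below, framed as an auxiliary reconstruction objective on a shared acoustic encoder. The architecture, layer sizes, and loss weighting are assumptions for illustration and may differ from the constrained-learning formulation actually proposed.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConstrainedEmotionNet(nn.Module):
    def __init__(self, n_acoustic, n_articulatory, n_emotions, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_acoustic, hidden), nn.ReLU())
        self.emotion_head = nn.Linear(hidden, n_emotions)    # used at train and test time
        self.artic_head = nn.Linear(hidden, n_articulatory)  # used only during training

    def forward(self, acoustic):
        h = self.encoder(acoustic)
        return self.emotion_head(h), self.artic_head(h)

def training_loss(model, acoustic, emotion_labels, articulatory, aux_weight=0.3):
    # cross-entropy on emotions plus an articulatory reconstruction penalty
    emotion_logits, artic_pred = model(acoustic)
    return (F.cross_entropy(emotion_logits, emotion_labels)
            + aux_weight * F.mse_loss(artic_pred, articulatory))

# At test time only the acoustic input is needed: logits, _ = model(acoustic_batch)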

Identifying context from ambient sounds in a lifelogging scenario requires feature extraction, segmentation, and annotation techniques capable of efficiently handling long-duration audio recordings; a complete framework for such applications is presented. The performance is evaluated on real-world data and accompanied by a prototypical Android-based user interface.
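One way to picture the segmentation step is sketched below: a long recording is chunked into fixed-length windows, each window is summarized by MFCC statistics, and the windows are clustered into coarse acoustic "scenes". The window length, features, and clustering method are placeholder choices, not the framework's actual pipeline.

import numpy as np
import librosa
from sklearn.cluster import KMeans

def segment_lifelog(path, window_s=10.0, n_scenes=8, sr=16000):
    y, sr = librosa.load(path, sr=sr)
    hop = int(window_s * sr)
    features = []
    for start in range(0, len(y) - hop + 1, hop):
        # summarize each window with MFCC means and standard deviations
        mfcc = librosa.feature.mfcc(y=y[start:start + hop], sr=sr, n_mfcc=13)
        features.append(np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)]))
    # cluster windows into acoustic "scenes"; one label per window
    return KMeans(n_clusters=n_scenes, random_state=0).fit_predict(np.array(features))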

The proposed methods are also assessed in terms of computational and implementation complexity. Software- and field-programmable gate array (FPGA)-based implementations are considered for emotion recognition, while virtual platforms are used to model the complexities of lifelogging. The derived metrics are used to determine the feasibility of these methods for applications requiring real-time capabilities and low power consumption.
Contributors: Shah, Mohit (Author) / Spanias, Andreas (Thesis advisor) / Chakrabarti, Chaitali (Thesis advisor) / Berisha, Visar (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
The human hand is a complex biological system. Humans have evolved a unique ability to use the hand for a wide range of tasks, including activities of daily living such as successfully grasping and manipulating objects, e.g., lifting a cup of coffee without spilling it. Despite the ubiquitous nature of hand use in everyday activities involving object manipulation, there is currently an incomplete understanding of the cortical sensorimotor mechanisms underlying this important behavior. One critical aspect of natural object grasping is the coordination of where the fingers make contact with an object and how much force is applied following contact. Such force-to-position modulation is critical for successful manipulation. However, the neural mechanisms underlying these motor processes remain poorly understood, as previous experiments have utilized protocols with fixed contact points, which likely rely on different neural mechanisms than those involved in grasping at unconstrained contacts. To address this gap in the motor neuroscience field, transcranial magnetic stimulation (TMS) and electroencephalography (EEG) were used to investigate the role of primary motor cortex (M1), as well as other important cortical regions in the grasping network, during the planning and execution of object grasping and manipulation. Virtual lesions induced by TMS, together with EEG recordings, revealed grasp-context-specific cortical mechanisms underlying digit force-to-position coordination, as well as the spatial and temporal dynamics of cortical activity during planning and execution. Together, the present findings provide the foundation for a novel framework accounting for how the central nervous system controls dexterous manipulation. This new knowledge can potentially benefit research in neuroprosthetics and improve the efficacy of neurorehabilitation techniques for patients affected by sensorimotor impairments.
Contributors: McGurrin, Patrick M (Author) / Santello, Marco (Thesis advisor) / Helms-Tillery, Steve (Committee member) / Kleim, Jeff (Committee member) / Davare, Marco (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
The tradition of building musical robots and automata is thousands of years old. Despite this rich history, even today musical robots do not play with as much nuance and subtlety as human musicians. In particular, most instruments allow the player to manipulate timbre while playing; if a violinist is told to sustain an E, they will select which string to play it on, how much bow pressure and velocity to use, whether to use the entire bow or only the portion near the tip or the frog, how close to the bridge or fingerboard to contact the string, whether or not to use a mute, and so forth. Each one of these choices affects the resulting timbre, and navigating this timbre space is part of the art of playing the instrument. Nonetheless, this type of timbral nuance has been largely ignored in the design of musical robots. Therefore, this dissertation introduces a suite of techniques that deal with timbral nuance in musical robots. Chapter 1 provides the motivating ideas and introduces Kiki, a robot designed by the author to explore timbral nuance. Chapter 2 provides a long history of musical robots, establishing the under-researched nature of timbral nuance. Chapter 3 is a comprehensive treatment of dynamic timbre production in percussion robots and, using Kiki as a case study, provides a variety of techniques for designing striking mechanisms that produce a range of timbres similar to those produced by human players. Chapter 4 introduces a machine-learning algorithm for recognizing timbres, so that a robot can transcribe timbres played by a human during live performance. Chapter 5 introduces a technique that allows a robot to learn how to produce isolated instances of particular timbres by listening to a human play examples of those timbres. Chapter 6, the final chapter, introduces a method that allows a robot to learn the musical context of different timbres; this is done in real time during interactive improvisation between a human and a robot, wherein the robot builds a statistical model of which timbres the human plays in which contexts and uses this model to inform its own playing.
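As a toy illustration of the kind of statistical context model described in Chapter 6, the sketch below keeps bigram-style counts of which timbre label tends to follow which, updates them online as the human plays, and lets the robot sample its own timbre from the learned conditional distribution. The notion of "context" here is a deliberate simplification of whatever the dissertation actually models.

import random
from collections import defaultdict

class TimbreContextModel:
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.prev = None

    def observe(self, timbre):
        # call each time a timbre played by the human is recognized
        if self.prev is not None:
            self.counts[self.prev][timbre] += 1
        self.prev = timbre

    def sample(self, context):
        # pick a timbre for the robot, conditioned on the most recent timbre
        options = self.counts.get(context)
        if not options:
            return context  # no data yet: echo the human
        timbres, weights = zip(*options.items())
        return random.choices(timbres, weights=weights)[0]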
Contributors: Krzyzaniak, Michael Joseph (Author) / Coleman, Grisha (Thesis advisor) / Turaga, Pavan (Committee member) / Artemiadis, Panagiotis (Committee member) / Arizona State University (Publisher)
Created: 2016