Matching Items (5)
Description
This study consisted of several related projects on dynamic spatial hearing by both human and robot listeners. The first experiment investigated the maximum number of sound sources that human listeners could localize at the same time. Speech stimuli were presented simultaneously from different loudspeakers at multiple time intervals. The maximum number of perceived sound sources was close to four. The second experiment asked whether the amplitude modulation of multiple static sound sources could lead to the perception of auditory motion. On the horizontal and vertical planes, four independent noise sources with 60° spacing were amplitude modulated with consecutively larger phase delays. At lower modulation rates, human listeners perceived motion in both cases. The third experiment asked whether several sources at static positions could serve as "acoustic landmarks" to improve the localization of other sources. Four continuous speech sources were placed on the horizontal plane with 90° spacing and served as the landmarks. The task was to localize a noise played for only three seconds while the listener was passively rotated in a chair at the center of the loudspeaker array. The listeners localized the sound sources better with landmarks than without. The remaining experiments used an acoustic manikin to fuse binaural recordings with motion data for sound-source localization. A dummy head with recording devices was mounted on top of a rotating chair, and motion data were collected. The fourth experiment showed that an Extended Kalman Filter could localize sound sources recursively. The fifth experiment demonstrated a fitting method for separating multiple sound sources.
Contributors: Zhong, Xuan (Author) / Yost, William (Thesis advisor) / Zhou, Yi (Committee member) / Dorman, Michael (Committee member) / Helms Tillery, Stephen (Committee member) / Arizona State University (Publisher)
Created: 2015
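The abstract above names an Extended Kalman Filter for recursive localization but gives no formulation. Below is a minimal sketch of how such a recursive update could look, assuming a single static source azimuth as the state, the Woodworth approximation of the interaural time difference (ITD) as the nonlinear measurement model, and head-yaw readings from the rotating chair's motion data. The head radius, noise variances, and function names are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

HEAD_RADIUS = 0.0875    # meters; assumed, not from the thesis
SPEED_OF_SOUND = 343.0  # m/s

def itd_model(theta_rel):
    """Woodworth approximation of ITD for a source at relative azimuth theta_rel (rad)."""
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (np.sin(theta_rel) + theta_rel)

def ekf_step(theta_est, P, itd_meas, head_yaw, Q=1e-6, R=1e-9):
    """One recursive EKF update for a static source azimuth in world coordinates.

    theta_est: current azimuth estimate (rad); P: its variance
    itd_meas:  measured interaural time difference (s)
    head_yaw:  chair/head orientation from the motion data (rad)
    """
    # Predict: the source is static, so the state carries over,
    # with a small process noise Q keeping the filter responsive.
    P = P + Q

    # Linearize the nonlinear ITD measurement around the prediction.
    theta_rel = theta_est - head_yaw
    H = (HEAD_RADIUS / SPEED_OF_SOUND) * (np.cos(theta_rel) + 1.0)

    # Innovation, gain, and correction (all scalar for a 1-D state).
    innovation = itd_meas - itd_model(theta_rel)
    S = H * P * H + R
    K = P * H / S
    theta_est = theta_est + K * innovation
    P = (1.0 - K * H) * P
    return theta_est, P
```

Each incoming ITD sample, paired with the yaw at which it was recorded, refines the azimuth estimate; rotating the head samples the ITD curve at multiple relative angles, which is what makes the recursive estimate converge.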
Description
Robotic rehabilitation for upper limb post-stroke recovery is a developing technology. However, major implementation issues decrease its efficacy. Two of the major approaches currently being explored for upper limb post-stroke rehabilitation are socially assistive rehabilitative robots, which directly interact with patients, and exoskeleton-based systems. While both techniques show great promise, neither is currently effective enough to objectively justify its cost. The overall efficacy of both techniques is about the same as that of conventional therapy, yet each carries higher overhead costs than conventional therapy does. However, each offers associated long-term cost savings, leaving the current viability of either technique somewhat nebulous. In both cases, the problems that reduce viability are largely related to joint action, the interaction between robot and human in completing specific tasks, and to issues in robot adaptability that make joint action difficult. As such, the largest part of current research in rehabilitative robotics aims to make robots behave in more "human-like" ways or to bypass the joint action problem entirely.
Contributors: Ramakrishna, Vijay Kambhampati (Author) / Helms Tillery, Stephen (Thesis director) / Buneo, Christopher (Committee member) / Barrett, The Honors College (Contributor) / Economics Program in CLAS (Contributor) / W. P. Carey School of Business (Contributor) / School of Life Sciences (Contributor)
Created: 2015-05
Description
The goal of this project was to investigate the tactile cues available during multidigit rotational manipulation of objects. A robotic arm and hand equipped with three multimodal tactile sensors were used to gather data about skin deformation during rotation of a haptic knob. Three different rotation speeds and two levels of rotation resistance were tested. In the future, this multidigit task can be generalized to similar rotational tasks, such as opening a bottle or turning a doorknob.
Contributors: Challa, Santhi Priya (Author) / Santos, Veronica (Thesis director) / Helms Tillery, Stephen (Committee member) / Barrett, The Honors College (Contributor) / Mechanical and Aerospace Engineering Program (Contributor) / School of Earth and Space Exploration (Contributor)
Created: 2014-05
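The abstract above specifies a 3 × 2 design (three rotation speeds, two resistance levels) but not the condition values or the logging procedure. The sketch below only illustrates how such a session could be structured; the speed values, trial count, and the `rotate_knob`/`read_tactile` hooks are hypothetical placeholders for the robot-control and tactile-sensor interfaces.

```python
import itertools
import random

# Hypothetical condition values; the abstract states only that three
# rotation speeds and two resistance levels were used.
SPEEDS_DEG_PER_S = [10, 30, 60]
RESISTANCES = ["low", "high"]

def collect_session(rotate_knob, read_tactile, trials_per_condition=5):
    """Log tactile frames over a randomized 3 x 2 factorial session.

    rotate_knob(speed, resistance) -> iterator over control ticks, and
    read_tactile() -> one skin-deformation frame per sensor, are
    hypothetical hooks for the robot and tactile-sensor interfaces.
    """
    trials = list(itertools.product(SPEEDS_DEG_PER_S, RESISTANCES)) * trials_per_condition
    random.shuffle(trials)  # randomize presentation order across conditions

    log = []
    for speed, resistance in trials:
        frames = [read_tactile() for _ in rotate_knob(speed, resistance)]
        log.append({"speed": speed, "resistance": resistance, "frames": frames})
    return log
```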
Description
Biofeedback music is the integration of physiological signals with audible sound for aesthetic purposes, in which an individual's mental state corresponds to the musical output. This project looks into how sounds can be drawn from the meditative and attentive states of the brain using the MindWave Mobile EEG biosensor from NeuroSky. With the MindWave and an Arduino microcontroller, sonic output is attained by reading the data the MindWave collects and, in real time, running code that translates it into user-constructed sound. The input, a value scaled from 0 to 100 that measures the 'attentive' state of the mind by observing alpha waves, is passed to the microcontroller. The sound itself is produced by routing this value to the Musical Instrument Shield and varying the musical tonality with different chords and different delays between notes. The manipulation of alpha states highlights the performer's control, or lack thereof, and touches on the question of how much control over the output there really is, much as the experimentalist Alvin Lucier demonstrated with his brainwave music.
Contributors: Quach, Andrew Duc (Author) / Helms Tillery, Stephen (Thesis director) / Feisst, Sabine (Committee member) / Barrett, The Honors College (Contributor) / Herberger Institute for Design and the Arts (Contributor) / Harrington Bioengineering Program (Contributor)
Created: 2014-05
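The actual system described above ran on an Arduino driving a Musical Instrument Shield; as a language-neutral illustration of the mapping the abstract describes, here is a minimal Python sketch translating a 0-100 attention value into a chord and an inter-note delay. The thresholds, chord choices, and MIDI note numbers are illustrative assumptions, not the project's actual mapping.

```python
# Hypothetical mapping from a NeuroSky attention value (0-100) to musical output.

CHORDS = {
    "calm":    [60, 64, 67],  # C major triad (MIDI note numbers)
    "neutral": [57, 60, 64],  # A minor
    "focused": [62, 66, 69],  # D major
}

def map_attention(attention):
    """Translate a 0-100 attention reading into (chord, delay between notes)."""
    if not 0 <= attention <= 100:
        raise ValueError("NeuroSky attention values are scaled 0-100")
    if attention < 34:
        chord = CHORDS["calm"]
    elif attention < 67:
        chord = CHORDS["neutral"]
    else:
        chord = CHORDS["focused"]
    # Higher attention -> shorter delay between notes (faster playing).
    delay_s = 1.0 - 0.8 * (attention / 100.0)
    return chord, delay_s
```

Under this scheme, higher attention shifts the chord and shortens the delay between notes, so the performer's mental state, rather than a keyboard, shapes the music.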
Description
Humans have an inherent capability for performing highly dexterous and skillful tasks with their arms, which involve maintaining posture, moving, and interacting with the environment. The latter requires them to control the dynamic characteristics of the upper limb musculoskeletal system. Inertia, damping, and stiffness, the components of mechanical impedance, give a strong representation of these characteristics. Many previous studies have shown that arm posture is a dominant factor in determining end-point impedance in the horizontal (transverse) plane. The objective of this thesis is to characterize the end-point impedance of the human arm in three-dimensional (3D) space. Moreover, it investigates and models the control of arm impedance under increasing levels of muscle co-contraction. The characterization was done through experimental trials in which human subjects maintained an arm posture while being perturbed by a robot arm. The subjects were also asked to control the level of their arm muscles' co-contraction, using visual feedback of muscle activation, in order to investigate its effect on arm impedance. The results showed an anisotropic increase of arm stiffness with muscle co-contraction. This finding informs arm biomechanics and has implications for human motor control, specifically the control of arm impedance through muscle co-contraction, and for the EMG-based control of robots that physically interact with humans.
Contributors: Patel, Harshil Naresh (Author) / Artemiadis, Panagiotis (Thesis advisor) / Berman, Spring (Committee member) / Helms Tillery, Stephen (Committee member) / Arizona State University (Publisher)
Created: 2013
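As a concrete illustration of how end-point stiffness can be characterized from perturbation trials like those above, here is a minimal least-squares sketch. It assumes steady-state force-displacement pairs (inertial and damping contributions already removed), so F ≈ K·Δx for a 3 × 3 stiffness matrix K; an eigendecomposition then exposes the kind of anisotropy the abstract reports. This is a generic estimation recipe, not the thesis's actual identification procedure.

```python
import numpy as np

def estimate_stiffness(displacements, forces):
    """Least-squares fit of a 3x3 end-point stiffness matrix K,
    assuming steady-state data where forces ≈ displacements @ K.T.

    displacements: (N, 3) end-point displacements imposed by the robot
    forces:        (N, 3) restoring forces measured at the hand
    """
    K_T, *_ = np.linalg.lstsq(displacements, forces, rcond=None)
    return K_T.T  # K such that F = K @ dx

def stiffness_anisotropy(K):
    """Ratio of largest to smallest principal stiffness (1.0 = isotropic)."""
    eigvals = np.linalg.eigvalsh((K + K.T) / 2.0)  # symmetrized part
    return eigvals.max() / eigvals.min()
```

Fitting K separately at each co-contraction level and comparing the anisotropy ratios is one simple way to quantify the direction-dependent stiffening the study describes.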