Matching Items (5)
Description
Robust and stable decoding of neural signals is imperative for implementing a useful neuroprosthesis capable of carrying out dexterous tasks. A nonhuman primate (NHP) was trained to perform combined flexions of the thumb, index, and middle fingers in addition to individual flexions and extensions of the same digits. An array of microelectrodes was implanted in the hand area of the motor cortex of the NHP and used to record action potentials during finger movements. A Support Vector Machine (SVM) was used to classify which finger movement the NHP was making based upon action potential firing rates. The effects of four feature selection techniques (Wilcoxon signed-rank test, Relative Importance, Principal Component Analysis, and Mutual Information Maximization) were compared on the basis of SVM classification performance. SVM classification was used to examine the functional parameters of (i) efficacy, (ii) endurance to simulated failure, and (iii) longevity of classification. The effect of using isolated-neuron versus multi-unit firing rates as the feature vector supplied to the SVM was also compared. The best classification performance occurred on post-implantation day 36. On that day, using multi-unit firing rates, the worst classification accuracy resulted from features selected with the Wilcoxon signed-rank test (51.12 ± 0.65%) and the best from Mutual Information Maximization (93.74 ± 0.32%). Using single-unit firing rates on the same day, classification accuracy was 88.85 ± 0.61% with the Wilcoxon signed-rank test and 95.60 ± 0.52% with Mutual Information Maximization (degrees of freedom = 10, level of chance = 10%).
Contributors: Padmanaban, Subash (Author) / Greger, Bradley (Thesis advisor) / Santello, Marco (Thesis advisor) / Helms Tillery, Stephen (Committee member) / Arizona State University (Publisher)
Created: 2015
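
As a rough illustration of the pipeline this abstract describes (firing-rate features, feature selection, SVM classification), here is a minimal sketch using scikit-learn. The data are synthetic stand-ins; the unit count, class count, kernel, and the k for feature selection are illustrative assumptions, not the thesis's actual configuration.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in data: firing rates for 96 units over 500 trials,
# with 10 movement classes (matching the 10% chance level cited above).
rng = np.random.default_rng(0)
X = rng.poisson(lam=5.0, size=(500, 96)).astype(float)  # trials x units
y = rng.integers(0, 10, size=500)                       # movement labels

# Select the most informative units via Mutual Information Maximization,
# standardize, then classify with an SVM.
clf = make_pipeline(
    SelectKBest(mutual_info_classif, k=32),
    StandardScaler(),
    SVC(kernel="rbf", C=1.0),
)

# On random data this hovers near the 10% chance level; real recordings
# would be needed to approach the accuracies reported in the abstract.
scores = cross_val_score(clf, X, y, cv=5)
print(f"accuracy: {scores.mean():.2%} +/- {scores.std():.2%}")
```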
Description
The tradition of building musical robots and automata is thousands of years old. Despite this rich history, even today musical robots do not play with as much nuance and subtlety as human musicians. In particular, most instruments allow the player to manipulate timbre while playing; if a violinist is told to sustain an E, they will select which string to play it on, how much bow pressure and velocity to use, whether to use the entire bow or only the portion near the tip or the frog, how close to the bridge or fingerboard to contact the string, whether or not to use a mute, and so forth. Each one of these choices affects the resulting timbre, and navigating this timbre space is part of the art of playing the instrument. Nonetheless, this type of timbral nuance has been largely ignored in the design of musical robots. Therefore, this dissertation introduces a suite of techniques that deal with timbral nuance in musical robots. Chapter 1 provides the motivating ideas and introduces Kiki, a robot designed by the author to explore timbral nuance. Chapter 2 provides a long history of musical robots, establishing the under-researched nature of timbral nuance. Chapter 3 is a comprehensive treatment of dynamic timbre production in percussion robots and, using Kiki as a case study, provides a variety of techniques for designing striking mechanisms that produce a range of timbres similar to those produced by human players. Chapter 4 introduces a machine-learning algorithm for recognizing timbres, so that a robot can transcribe timbres played by a human during live performance. Chapter 5 introduces a technique that allows a robot to learn how to produce isolated instances of particular timbres by listening to a human play examples of those timbres. Chapter 6 introduces a method that allows a robot to learn the musical context of different timbres; this is done in real time during interactive improvisation between a human and a robot, wherein the robot builds a statistical model of which timbres the human plays in which contexts, and uses this model to inform its own playing.
Contributors: Krzyzaniak, Michael Joseph (Author) / Coleman, Grisha (Thesis advisor) / Turaga, Pavan (Committee member) / Artemiadis, Panagiotis (Committee member) / Arizona State University (Publisher)
Created: 2016
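
A sketch of how a timbre recognizer like the one described in Chapter 4 might be structured, assuming simple spectral features and a k-nearest-neighbor classifier; the dissertation's actual features and algorithm are not specified here, and the audio below is random noise standing in for recorded strikes.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def spectral_features(strike, sr=44100):
    """Crude timbre descriptor: spectral centroid and spread of one strike."""
    spectrum = np.abs(np.fft.rfft(strike * np.hanning(len(strike))))
    freqs = np.fft.rfftfreq(len(strike), d=1.0 / sr)
    power = spectrum / (spectrum.sum() + 1e-12)  # normalize to a distribution
    centroid = float(np.sum(freqs * power))
    spread = float(np.sqrt(np.sum(power * (freqs - centroid) ** 2)))
    return [centroid, spread]

# Stand-in training set: strikes labeled with hypothetical timbre classes
# (e.g., where on the drum a human player struck).
rng = np.random.default_rng(1)
strikes = [rng.standard_normal(2048) for _ in range(30)]
labels = ["center", "rim", "muted"] * 10

model = KNeighborsClassifier(n_neighbors=3)
model.fit([spectral_features(s) for s in strikes], labels)

# During live performance, each detected strike would be classified:
print(model.predict([spectral_features(strikes[0])]))
```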
Description
In the last 15 years, there has been a significant increase in the number of motor neural prostheses used for restoring limb function lost due to neurological disorders or accidents. The aim of this technology is to enable patients to control a motor prosthesis using their residual neural pathways (central or peripheral). Recent studies in non-human primates and humans have shown the possibility of controlling a prosthesis to accomplish varied tasks such as self-feeding, typing, reaching, grasping, and performing fine dexterous movements. A neural decoding system consists mainly of three components: (i) sensors to record neural signals, (ii) an algorithm to map neural recordings to upper limb kinematics, and (iii) a prosthetic arm actuated by control signals generated by the algorithm. Machine learning algorithms that map input neural activity to the output kinematics (such as finger trajectory) form the core of the neural decoding system, and the choice of algorithm is imposed mainly by the neural signal of interest and the output parameter being decoded. The stages of such a system are thus neural data acquisition, feature extraction, feature selection, and the machine learning algorithm itself. There have been significant advances in the field of neural prosthetic applications, but challenges remain in translating a neural prosthesis from a laboratory setting to a clinical environment; to achieve a fully functional prosthetic device with maximum user compliance and acceptance, these challenges must be addressed. This work addresses three challenges in developing robust neural decoding systems: exploring neural variability in the peripheral nervous system for dexterous finger movements, developing feature selection methods based on clinically relevant metrics, and introducing a novel method for decoding dexterous finger movements based on ensemble methods.
Contributors: Padmanaban, Subash (Author) / Greger, Bradley (Thesis advisor) / Santello, Marco (Committee member) / Helms Tillery, Stephen (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Crook, Sharon (Committee member) / Arizona State University (Publisher)
Created: 2017
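
The abstract's closing point, decoding dexterous finger movements with ensemble methods, could look schematically like the following scikit-learn sketch. The soft-voting ensemble of generic base decoders and the synthetic data are assumptions for illustration; the thesis's actual ensemble method may differ substantially.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in features: firing rates (trials x units) and labels
# for six hypothetical finger-movement classes.
rng = np.random.default_rng(2)
X = rng.poisson(lam=5.0, size=(400, 64)).astype(float)
y = rng.integers(0, 6, size=400)

# A generic ensemble: three base decoders combined by soft voting.
ensemble = VotingClassifier(
    estimators=[
        ("svm", SVC(kernel="rbf", probability=True)),
        ("lda", LinearDiscriminantAnalysis()),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ],
    voting="soft",
)
print(f"cv accuracy: {cross_val_score(ensemble, X, y, cv=5).mean():.2%}")
```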
Description
As robots become more prevalent, the need is growing for efficient yet stable control systems for applications with humans in the loop. As such, it is a challenge for scientists and engineers to develop robust and agile systems capable of detecting instability in teleoperated systems. Although much research has been done to characterize the spatiotemporal parameters of human arm motions for reaching and grasping, little has been done to characterize the behavior of human arm motion in response to control errors in a system. The aim of this investigation is to characterize human corrective actions in response to error in an anthropomorphic teleoperated robot limb. Characterizing human corrective actions contributes to the development of control strategies capable of mitigating potential instabilities inherent in human-machine control interfaces. This characterization requires simulating a teleoperated anthropomorphic armature and comparing a human subject's arm kinematics in response to error against the same subject's arm kinematics without error. This was achieved using OpenGL software to simulate a teleoperated robot arm and an NDI motion tracking system to acquire the subject's arm position and orientation. Error was intermittently and programmatically introduced to the virtual robot's joints as the subject attempted to reach for several targets located around the arm. Comparing error-free and error-prone human arm kinematics revealed the addition of a bell-shaped velocity peak in the subject's tangential velocity profile. The size, extent, and location of the additional velocity peak depended on target location and joint angle error. Some joint angle and target location combinations did not produce an additional peak but simply maintained the end effector velocity at a low value until the target was reached. Additional joint angle error parameters and degrees of freedom are needed to continue this investigation.
Contributors: Bevilacqua, Vincent Frank (Author) / Artemiadis, Panagiotis (Thesis director) / Santello, Marco (Committee member) / Trimble, Steven (Committee member) / Barrett, The Honors College (Contributor) / Mechanical and Aerospace Engineering Program (Contributor)
Created: 2013-05
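
The key measurement here, counting bell-shaped peaks in the tangential velocity profile to detect corrective submovements, can be sketched as follows. The trajectory is synthetic (one primary movement plus one corrective submovement), and the peak-detection thresholds are illustrative assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def tangential_velocity(positions, dt):
    """Speed of the end effector from sampled 3-D positions."""
    velocity = np.gradient(positions, dt, axis=0)  # (samples, 3)
    return np.linalg.norm(velocity, axis=1)

# Synthetic reach along one axis: a bell-shaped primary movement plus a
# smaller corrective submovement, as described in the abstract above.
dt = 0.01
t = np.arange(0.0, 2.0, dt)
speed = (0.8 * np.exp(-((t - 0.6) ** 2) / 0.02)
         + 0.3 * np.exp(-((t - 1.3) ** 2) / 0.02))
direction = np.array([1.0, 0.0, 0.0])
positions = np.cumsum(speed[:, None] * direction[None, :] * dt, axis=0)

# More than one velocity peak suggests a corrective action occurred.
profile = tangential_velocity(positions, dt)
peaks, _ = find_peaks(profile, height=0.1, distance=int(0.2 / dt))
print(f"{len(peaks)} velocity peak(s) at t = {t[peaks]} s")
```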
Description

The importance of nonverbal communication has been well established through several theories, including Albert Mehrabian's 7-38-55 rule, which proposes the respective importance of semantics, tonality, and facial expressions in communication. Although several studies have examined how emotions are expressed and perceived in communication, limited research has investigated the relationship between how emotions are expressed through semantics and through facial expressions. Using facial expression analysis software to deconstruct facial expressions into features and a K-Nearest-Neighbor (KNN) machine learning classifier, we explored whether facial expressions can be clustered based on semantics. Our findings indicate that facial expressions can be clustered based on semantics and that there is an inherent congruence between facial expressions and semantics. These results are novel and significant in the context of nonverbal communication and are applicable to several areas of research, including the vast field of emotion AI and machine emotional communication.

Contributors: Everett, Lauren (Author) / Coza, Aurel (Thesis director) / Santello, Marco (Committee member) / Barrett, The Honors College (Contributor) / Harrington Bioengineering Program (Contributor) / Dean, W.P. Carey School of Business (Contributor)
Created: 2022-05
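
A minimal sketch of the KNN approach this abstract describes, assuming the facial-analysis software emits a numeric feature vector (e.g., action-unit intensities) per observation and that each observation carries a semantic-category label. The data below are random stand-ins, so the score sits near chance; the congruence the study reports would show up as accuracy above chance on real data.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in data: 17 facial-feature intensities per observation (as a
# facial-analysis tool might emit) and 4 hypothetical semantic categories.
rng = np.random.default_rng(3)
X = rng.uniform(0.0, 5.0, size=(300, 17))  # observations x features
y = rng.integers(0, 4, size=300)           # semantic category labels

# Standardize features, then predict semantic category from the face;
# cross-validated accuracy above chance would indicate that facial
# expressions cluster by semantics.
knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
scores = cross_val_score(knn, X, y, cv=5)
print(f"mean accuracy: {scores.mean():.2%}")
```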