Matching Items (45)
Description
The development of advanced, anthropomorphic artificial hands aims to provide upper extremity amputees with improved functionality for activities of daily living. However, many state-of-the-art hands have a large number of degrees of freedom that can be challenging to control in an intuitive manner. Automated grip responses could be built into artificial hands in order to enhance grasp stability and reduce the cognitive burden on the user. To this end, three studies were conducted to understand how human hands respond, passively and actively, to unexpected perturbations of a grasped object along and about different axes relative to the hand. The first study investigated the effect of magnitude, direction, and axis of rotation on precision grip responses to unexpected rotational perturbations of a grasped object. A robust "catch-up response" (a rapid, pulse-like increase in grip force rate previously reported only for translational perturbations) was observed, whose strength varied with the axis of rotation. Using two haptic robots, we then investigated the effects of grip surface friction, axis, and direction of perturbation on precision grip responses to unexpected translational and rotational perturbations along three different hand-centric axes. A robust catch-up response was observed for all axes and directions for both translational and rotational perturbations. Grip surface friction had no effect on the stereotypical catch-up response. Finally, we characterized the passive properties of the precision grip-object system via robot-imposed impulse perturbations. The hand-centric axis associated with the greatest translational stiffness differed from the axis associated with the greatest rotational stiffness. This work expands our understanding of the passive and active features of precision grip, a hallmark of human dexterous manipulation. Biological insights such as these could be used to enhance the functionality of artificial hands and the quality of life for upper extremity amputees.
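The passive-stiffness characterization described in the final study can be sketched in simplified form: given displacement and restoring-force samples from an impulse perturbation, a single linear stiffness value along one axis can be fit by least squares. The function name, spring model, and synthetic data below are illustrative assumptions, not the dissertation's actual apparatus or analysis.

```python
import numpy as np

def estimate_stiffness(displacements, restoring_forces):
    """Least-squares estimate of a linear stiffness K along one
    hand-centric axis, modeling the grip-object system about its
    equilibrium as F = -K * x (fit constrained through the origin)."""
    x = np.asarray(displacements, dtype=float)
    f = np.asarray(restoring_forces, dtype=float)
    # Minimizing sum (f + K x)^2 over K gives K = -(x . f) / (x . x)
    return -np.dot(x, f) / np.dot(x, x)

# Synthetic impulse-response data: a 250 N/m spring plus sensor noise
rng = np.random.default_rng(0)
x = rng.uniform(-0.01, 0.01, 200)            # displacement samples (m)
f = -250.0 * x + rng.normal(0.0, 0.05, 200)  # restoring force (N)
k_hat = estimate_stiffness(x, f)
```

Fitting separately along each hand-centric axis would then allow the axes of greatest translational and rotational stiffness to be compared, as in the study.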
ContributorsDe Gregorio, Michael (Author) / Santos, Veronica J. (Thesis advisor) / Artemiadis, Panagiotis K. (Committee member) / Santello, Marco (Committee member) / Sugar, Thomas (Committee member) / Helms Tillery, Stephen I. (Committee member) / Arizona State University (Publisher)
Created2013
Description
Intracortical microstimulation (ICMS) within somatosensory cortex can produce artificial sensations including touch, pressure, and vibration. There is significant interest in using ICMS to provide sensory feedback for a prosthetic limb. In such a system, information recorded from sensors on the prosthetic would be translated into electrical stimulation and delivered directly to the brain, providing feedback about features of objects in contact with the prosthetic. To achieve this goal, multiple simultaneous streams of information will need to be encoded by ICMS in a manner that produces robust, reliable, and discriminable sensations. The first segment of this work focuses on the discriminability of sensations elicited by ICMS within somatosensory cortex. Stimulation on multiple single electrodes and near-simultaneous stimulation across multiple electrodes, driven by a multimodal tactile sensor, were both used in these experiments. A SynTouch BioTac sensor was moved across a flat surface in several directions, and a subset of the sensor's electrode impedance channels were used to drive multichannel ICMS in the somatosensory cortex of a non-human primate. The animal performed a behavioral task during this stimulation to indicate the discriminability of sensations evoked by the electrical stimulation. The animal's responses to ICMS were somewhat inconsistent across experimental sessions but indicated that discriminable sensations were evoked by both single and multichannel ICMS. The factors that affect the discriminability of stimulation-induced sensations are not well understood, in part because the relationship between ICMS and the neural activity it induces is poorly defined. The second component of this work was to develop computational models that describe the populations of neurons likely to be activated by ICMS. Models of several neurons were constructed, and their responses to ICMS were calculated. 
A three-dimensional cortical model was constructed using these cell models and used to identify the populations of neurons likely to be recruited by ICMS. Stimulation activated neurons in a sparse and discontinuous fashion; additionally, the type, number, and location of neurons likely to be activated by stimulation varied with electrode depth.
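The cell models in the dissertation are far richer than the toy model below, but the basic logic of computing a neuron's response to a stimulation current can be illustrated with a leaky integrate-and-fire unit; all parameters here are illustrative assumptions, not values from the work.

```python
def lif_spike_count(i_stim, dt=1e-4, t_end=0.1, tau=0.02,
                    r_m=1e7, v_th=0.02, v_reset=0.0):
    """Spike count of a leaky integrate-and-fire neuron driven by a
    constant stimulation current i_stim (A). r_m is membrane
    resistance (ohm), v_th the firing threshold (V); a deliberately
    simple stand-in for multicompartment cortical cell models."""
    v, spikes = 0.0, 0
    for _ in range(int(t_end / dt)):
        v += (dt / tau) * (-v + r_m * i_stim)  # forward-Euler update
        if v >= v_th:
            spikes += 1
            v = v_reset
    return spikes

strong = lif_spike_count(5e-9)  # suprathreshold current: fires
weak = lif_spike_count(1e-9)    # subthreshold current: stays silent
```

Whether a given model cell fires at all, as in this sketch, is the per-neuron question that a population model aggregates to estimate which neurons ICMS recruits.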
ContributorsOverstreet, Cynthia K (Author) / Helms Tillery, Stephen I (Thesis advisor) / Santos, Veronica (Committee member) / Buneo, Christopher (Committee member) / Otto, Kevin (Committee member) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created2013
Description
Humans' ability to perform fine object and tool manipulation is a defining feature of their sensorimotor repertoire. How the central nervous system builds and maintains internal representations of such skilled hand-object interactions has attracted significant attention over the past three decades. Nevertheless, two major gaps exist: a) how digit positions and forces are coordinated during natural manipulation tasks, and b) what mechanisms underlie the formation and retention of internal representations of dexterous manipulation. This dissertation addresses these two questions through five experiments that are based on novel grip devices and experimental protocols. It was found that high-level representation of manipulation tasks can be learned in an effector-independent fashion. Specifically, when challenged by trial-to-trial variability in finger positions or using digits that were not previously engaged in learning the task, subjects could adjust finger forces to compensate for this variability, thus leading to consistent task performance. The results from a follow-up experiment conducted in a virtual reality environment indicate that haptic feedback is sufficient to implement the above coordination between digit position and forces. However, it was also found that the generalizability of a learned manipulation is limited across tasks. Specifically, when subjects learned to manipulate the same object across different contexts that require different motor output, interference was found at the time of switching contexts. Data from additional studies provide evidence for parallel learning processes, which are characterized by different rates of decay and learning. These experiments have provided important insight into the neural mechanisms underlying learning and control of object manipulation. The present findings have potential biomedical applications including brain-machine interfaces, rehabilitation of hand function, and prosthetics.
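The digit position-force compensation described above can be illustrated with a toy moment balance: when variability places the thumb contact higher or lower than the finger contact, the load (tangential) forces must be redistributed so the object does not tilt. The geometry, sign convention, and function below are illustrative assumptions, not the dissertation's grip device or analysis.

```python
def zero_tilt_load_share(d_cop, grip_width, f_load, f_normal):
    """Thumb/finger load-force split that zeroes the net moment about
    the object center in a two-digit precision grip.
    Convention: thumb on the left (normal force +x), finger on the
    right (normal force -x); d_cop = thumb CoP height minus finger
    CoP height (m). Summing planar moments about the center gives
        f_t_finger - f_t_thumb = 2 * f_normal * d_cop / grip_width.
    Returns (thumb, finger) vertical load forces summing to f_load."""
    delta = 2.0 * f_normal * d_cop / grip_width
    f_t_finger = (f_load + delta) / 2.0
    return f_load - f_t_finger, f_t_finger

# Thumb contact 1 cm higher on a 5 cm wide object held with 10 N grip:
# under this convention the finger takes the larger load share
thumb, finger = zero_tilt_load_share(0.01, 0.05, 4.0, 10.0)
```

Adjusting the split trial by trial as contact positions vary is the kind of position-dependent force modulation the experiments probe.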
ContributorsFu, Qiushi (Author) / Santello, Marco (Thesis advisor) / Helms Tillery, Stephen (Committee member) / Buneo, Christopher (Committee member) / Santos, Veronica (Committee member) / Artemiadis, Panagiotis (Committee member) / Arizona State University (Publisher)
Created2013
Description
Reaching movements are subject to noise in both the planning and execution phases of movement production. Although the effects of these noise sources on estimating and/or controlling endpoint position have been examined in many studies, the independent effects of limb configuration on endpoint variability have been largely ignored. The present study investigated the effects of arm configuration on the interaction between planning noise and execution noise. Subjects performed reaching movements to three targets located in a frontal plane. At the starting position, subjects matched one of two desired arm configuration 'templates', namely "adducted" and "abducted". These arm configurations were obtained by rotations about the shoulder-hand axis, thereby maintaining endpoint position. Visual feedback of the hand was varied from trial to trial, thereby increasing uncertainty in movement planning and execution. It was hypothesized that 1) the pattern of endpoint variability would depend on arm configuration and 2) these differences would be most apparent in conditions without visual feedback. It was found that there were differences in endpoint variability between arm configurations in both visual conditions, but these differences were much larger when visual feedback was withheld. The overall results suggest that patterns of endpoint variability are highly dependent on arm configuration, particularly in the absence of visual feedback. This suggests that in the presence of vision, movement planning in 3D space is performed using coordinates that are largely independent of arm configuration (i.e., extrinsic coordinates). In contrast, in the absence of vision, movement planning in 3D space reflects a substantial contribution of intrinsic coordinates.
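A common way to quantify patterns of endpoint variability like those analyzed above is to summarize the scatter of movement endpoints by the principal axes of its covariance (the variability ellipse). The sketch below, using synthetic endpoints, is illustrative rather than the study's actual analysis.

```python
import numpy as np

def endpoint_variability_axes(endpoints):
    """Principal axes of 2-D endpoint scatter via eigendecomposition
    of the endpoint covariance matrix. Returns the standard deviation
    along each principal axis (ascending) and the axis directions."""
    cov = np.cov(np.asarray(endpoints), rowvar=False)
    evals, evecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return np.sqrt(evals), evecs

# Synthetic endpoints: more variability vertically than horizontally,
# as might occur for one arm configuration but not another
rng = np.random.default_rng(1)
pts = rng.normal([0.0, 0.0], [0.5, 2.0], size=(500, 2))
stds, axes = endpoint_variability_axes(pts)
```

Comparing the ellipse orientation and eccentricity across the "adducted" and "abducted" configurations, with and without vision, is how configuration-dependent variability would show up in such an analysis.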
ContributorsLakshmi Narayanan, Kishor (Author) / Buneo, Christopher (Thesis advisor) / Santello, Marco (Committee member) / Helms Tillery, Stephen (Committee member) / Arizona State University (Publisher)
Created2013
Description
As robotic systems are used in increasingly diverse applications, the interaction of humans and robots has become an important area of research. In many applications of physical human-robot interaction (pHRI), the robot and the human can be seen as cooperating to complete a task involving some object of interest. Often these applications are in unstructured environments where many paths can accomplish the goal. This creates a need for the ability to communicate a preferred direction of motion between both participants in order to move in a coordinated way. This communication method should be bidirectional so that the capabilities of both the robot and the human can be fully utilized. Moreover, in cooperative tasks between two humans, one human often operates as the leader of the task and the other as the follower, and these roles may switch during the task as needed. The need for communication extends to this leader-follower switching: there is a need not only to communicate the desire to switch roles but also to control the switching process. Impedance control has been used as a way of dealing with some of the complexities of pHRI. This investigation examined whether impedance control can be utilized as a way of communicating a preferred direction between humans and robots. The first set of experiments tested whether a human could detect a preferred direction of a robot by grasping and moving an object coupled to the robot. The second set tested the reverse case: whether the robot could detect the preferred direction of the human. The ability to detect the preferred direction was shown to be up to 99% effective. Using these results, a control method allowing a human and robot to switch leader and follower roles during a cooperative task was implemented and tested. This method proved successful 84% of the time. The control method was then refined using adaptive control, resulting in lower interaction forces and a success rate of 95%.
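One way impedance control can encode a preferred direction, in the spirit of the experiments above, is to make the stiffness anisotropic: low along the preferred direction, high across it, so the partner feels less resistance when moving the shared object "the way the robot wants to go". The law, gains, and function below are an illustrative sketch, not the dissertation's actual controller.

```python
import numpy as np

def impedance_force(x_err, v, k_along, k_across, preferred_dir, damping=5.0):
    """Anisotropic impedance law F = -K x - B v, with stiffness K
    lowered along the preferred direction so motion that way meets
    less resistance. preferred_dir is a (non-zero) direction vector."""
    d = np.asarray(preferred_dir, dtype=float)
    d /= np.linalg.norm(d)
    P = np.outer(d, d)                                # projector onto d
    K = k_along * P + k_across * (np.eye(len(d)) - P)  # anisotropic K
    return -K @ np.asarray(x_err) - damping * np.asarray(v)

# Equal-sized displacements along vs. across the preferred x-axis
f_along = impedance_force([0.01, 0.0], [0.0, 0.0], 50.0, 500.0, [1.0, 0.0])
f_across = impedance_force([0.0, 0.01], [0.0, 0.0], 50.0, 500.0, [1.0, 0.0])
```

The tenfold stiffness ratio makes the preferred axis haptically detectable; swapping which participant sets `preferred_dir` is one way to think about the leader-follower role switch.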
ContributorsWhitsell, Bryan (Author) / Artemiadis, Panagiotis (Thesis advisor) / Santello, Marco (Committee member) / Santos, Veronica (Committee member) / Arizona State University (Publisher)
Created2014
Description
Humans are capable of transferring learning for anticipatory control of dexterous object manipulation despite changes in degrees-of-freedom (DoF), i.e., switching from lifting an object with two fingers to lifting the same object with three fingers. However, the role that tactile information plays in this transfer of learning is unknown. In this study, subjects lifted an L-shaped object with two fingers (2-DoF), and then lifted the object with three fingers (3-DoF). The subjects were divided into two groups--one group performed the task wearing a glove (to reduce tactile sensibility) upon the switch to 3-DoF (glove group), while the other group did not wear the glove (control group). Compensatory moment (torque) was used as a measure to determine how well the subject could minimize the tilt of the object following the switch from 2-DoF to 3-DoF. Upon the switch to 3-DoF, subjects wearing the glove generated a compensatory moment (Mcom) that had a significantly higher error than the average of the last five trials at the end of the 3-DoF block (p = 0.012), while the control subjects did not demonstrate a significant difference in Mcom. Additional effects of the reduction in tactile sensibility were: (1) the grip force for the group of subjects wearing the glove was significantly higher in the 3-DoF trials compared to the 2-DoF trials (p = 0.014), while the grip force of the control subjects was not significantly different; (2) the difference in centers of pressure between the thumb and fingers (ΔCoP) significantly increased in the 3-DoF block for the group of subjects wearing the glove, while the ΔCoP of the control subjects was not significantly different; (3) lastly, the control subjects demonstrated a greater increase in lift force than the group of subjects wearing the glove (though results were not significant). 
Taken together, these results suggest that different force modulation strategies are used depending on the amount of tactile feedback available to the subject. Therefore, reduction of tactile sensibility has important effects on subjects' ability to transfer learned manipulation across different DoF contexts.
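The compensatory moment (Mcom) used as the performance measure above can be computed as the net moment of the digit contact forces about the object's center. The sketch below uses a planar 2-D cross product with made-up contact data; the positions, forces, and sign convention are illustrative, not values from the study.

```python
def compensatory_moment(contacts):
    """Net moment about the object center from (position, force)
    pairs, each a 2-D (x, y) vector in the grasp plane, using the
    planar cross product M = x*Fy - y*Fx summed over digits (N*m)."""
    return sum(x * fy - y * fx for (x, y), (fx, fy) in contacts)

# Illustrative two-digit grasp: opposing normal (horizontal) forces
# plus vertical load forces, with unequal CoP heights
contacts = [((-0.03, 0.01), (4.0, 1.5)),   # thumb contact
            ((0.03, -0.01), (-4.0, 1.5))]  # index finger contact
m_com = compensatory_moment(contacts)
```

Adding a third digit in the 3-DoF condition just adds another (position, force) pair to the sum; how well Mcom cancels the external torque determines object tilt at lift onset.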
ContributorsGaw, Nathan (Author) / Helms Tillery, Stephen (Thesis advisor) / Santello, Marco (Committee member) / Kleim, Jeffrey (Committee member) / Arizona State University (Publisher)
Created2014
Description
Learning by trial-and-error requires retrospective information about whether a past action resulted in a rewarded outcome. The previous outcome may in turn provide information to guide future behavioral adjustment. However, the specific contribution of this information to learning a task, and the neural representations that develop during the trial-and-error learning process, are not well understood. In this dissertation, such learning is analyzed by means of single unit neural recordings in the rats' agranular medial (AGm) and agranular lateral (AGl) motor cortices while the rats learned to perform a directional choice task. Multichannel chronic recordings using microelectrodes implanted in the rat's brain were essential to this study. Both for fundamental scientific investigations and for applications such as brain-machine interfaces, the recorded neural waveforms must first be analyzed to identify neural action potentials as basic computing units. Therefore, prior to analyzing and modeling the recorded neural signals, this dissertation proposes an advanced spike sorting system, the M-Sorter, to extract action potentials from raw neural waveforms. The M-Sorter shows better or comparable performance relative to two other popular spike sorters under automatic mode. With the sorted action potentials in place, neuronal activity in the AGm and AGl areas of rats during learning of a directional choice task is examined. Systematic analyses suggest that neural activity in AGm and AGl was modulated by previous trial outcomes during learning. Single-unit-based neural dynamics during task learning are described in detail in the dissertation. Furthermore, the differences in neural modulation between fast- and slow-learning rats were compared. The results show that the level of neural modulation by previous trial outcome differs between fast- and slow-learning rats, which in turn suggests an important role for previous-trial-outcome encoding in learning.
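The detection stage of a spike sorter can be sketched with a simple amplitude threshold that extracts candidate waveform snippets from the raw trace. This is a generic stand-in, not the M-Sorter's actual algorithm (which is considerably more sophisticated); the threshold, window, and synthetic trace below are illustrative.

```python
import numpy as np

def detect_spikes(signal, thresh, win=8):
    """Generic threshold-crossing spike detection: return waveform
    snippets of 2*win samples centered on each upward crossing of
    thresh. Snippets would then be sorted into units downstream."""
    crossings = np.flatnonzero((signal[:-1] <= thresh)
                               & (signal[1:] > thresh)) + 1
    return np.array([signal[i - win:i + win] for i in crossings
                     if win <= i <= len(signal) - win])

# Synthetic extracellular trace: Gaussian noise with two injected
# spike-like waveforms at samples 300 and 700
rng = np.random.default_rng(2)
trace = rng.normal(0.0, 0.1, 1000)
for t in (300, 700):
    trace[t:t + 4] += np.array([1.0, 2.0, 1.0, 0.3])
snippets = detect_spikes(trace, thresh=0.5)
```

After detection, features of each snippet (e.g., principal components) are clustered to assign snippets to putative single units, the "sorted action potentials" the dissertation then analyzes.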
ContributorsYuan, Yu'an (Author) / Si, Jennie (Thesis advisor) / Buneo, Christopher (Committee member) / Santello, Marco (Committee member) / Chae, Junseok (Committee member) / Arizona State University (Publisher)
Created2014
Description
Animals learn to choose a proper action among alternatives according to the circumstance. Through trial-and-error, animals improve their odds by making correct associations between their behavioral choices and external stimuli. While there is an extensive literature on the theory of learning, it is still unclear how individual neurons and neural networks adapt as learning progresses. In this dissertation, single units in the medial and lateral agranular (AGm and AGl) cortices were recorded as rats learned a directional choice task. The task required the rat to make a left/right side lever press if a light cue appeared on the left/right side of the interface panel. Behavioral analysis showed that the rats' movement parameters during performance of directional choices became stereotyped very quickly (2-3 days), while learning to solve the directional choice problem took weeks. The entire learning process was further broken down into three stages, each comprising a similar number of recording sessions (days). Single-unit firing rate analysis revealed that 1) directional rate modulation was observed in both cortices; 2) the averaged mean rate between left and right trials in the neural ensemble each day did not change significantly among the three learning stages; and 3) the rate difference between left and right trials of the ensemble did not change significantly either. Moreover, for both left and right trials, the trial-to-trial firing variability of single neurons did not change significantly over the three stages. To explore the spatiotemporal neural pattern of the recorded ensemble, support vector machines (SVMs) were constructed each day to decode the direction of choice in single trials. Improved classification accuracy indicated enhanced discriminability between neural patterns of left and right choices as learning progressed.
When a restricted Boltzmann machine (RBM) model was used to extract features from neural activity patterns, the results further supported the idea that neural firing patterns adapted during the three learning stages to facilitate the neural coding of directional choices. Taken together, these findings suggest a spatiotemporal neural coding scheme in the rat AGl and AGm neural ensemble that may be responsible for, and contribute to, learning the directional choice task.
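The single-trial decoding logic described above can be sketched with a minimal linear read-out of ensemble firing rates. The dissertation used SVMs (and RBM features); the nearest-centroid decoder and synthetic data below are a simpler illustrative stand-in for the same idea of classifying left vs. right choice from spatial activity patterns.

```python
import numpy as np

def decoding_accuracy(rates, labels, n_train):
    """Single-trial decoding of a binary choice from ensemble firing
    rates with a nearest-centroid linear read-out: classify each held-
    out trial by its closer class-mean activity pattern."""
    train_x, train_y = rates[:n_train], labels[:n_train]
    mu0 = train_x[train_y == 0].mean(axis=0)   # 'left' template
    mu1 = train_x[train_y == 1].mean(axis=0)   # 'right' template
    test_x, test_y = rates[n_train:], labels[n_train:]
    pred = (np.linalg.norm(test_x - mu1, axis=1)
            < np.linalg.norm(test_x - mu0, axis=1)).astype(int)
    return float((pred == test_y).mean())

# Synthetic ensemble: 20 units whose mean rates shift with the choice
rng = np.random.default_rng(3)
n_trials, n_units = 400, 20
labels = rng.integers(0, 2, n_trials)
tuning = rng.normal(0.0, 1.0, n_units)   # per-unit left/right rate shift
rates = rng.normal(5.0, 1.0, (n_trials, n_units)) + np.outer(labels, tuning)
acc = decoding_accuracy(rates, labels, n_train=200)
```

Tracking such a decoder's accuracy session by session, as done with the daily SVMs, is what reveals growing discriminability of the ensemble's choice-related patterns over learning.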
ContributorsMao, Hongwei (Author) / Si, Jennie (Thesis advisor) / Buneo, Christopher (Committee member) / Cao, Yu (Committee member) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created2014
Description
Dexterous manipulation is a representative task that involves sensorimotor integration underlying fine control of movements. Over the past 30 years, research has provided significant insight into, among other topics, the control mechanisms of force coordination during manipulation tasks. Successful dexterous manipulation is thought to rely on the ability to integrate the sense of digit position with motor commands responsible for generating digit forces and placement. However, the mechanisms underlying digit position-force coordination are not well understood. This dissertation addresses this question through three experiments based on psychophysics and object lifting tasks. It was found in the psychophysics tasks that sensed relative digit position was accurately reproduced when sensorimotor transformations occurred with larger vertical fingertip separations, within the same hand, and at the same hand posture. The results from a follow-up experiment, conducted in the same digit position-matching task while generating forces in different directions, reveal that sensed relative digit position was biased toward the direction of force production. Specifically, subjects reproduced the thumb CoP higher than the index finger CoP when vertical digit forces were directed upward and downward, respectively, and vice versa. It was also found in lifting tasks that the ability to discriminate the relative digit position prior to lifting an object, and to modulate digit forces as a function of digit position to minimize object roll, is robust regardless of whether motor commands for positioning the digits on the object are involved. These results indicate that the erroneous sensorimotor transformations of relative digit position reported here must be compensated for during dexterous manipulation by other mechanisms, e.g., visual feedback of fingertip position.
Furthermore, predicted sensory consequences derived from the efference copy of voluntary motor commands to generate vertical digit forces may override haptic sensory feedback for the estimation of relative digit position. Lastly, the sensorimotor transformations from haptic feedback to digit force modulation to position appear to be facilitated by motor commands for active digit placement in manipulation.
ContributorsShibata, Daisuke (Author) / Santello, Marco (Thesis advisor) / Dounskaia, Natalia (Committee member) / Kleim, Jeffrey (Committee member) / Helms Tillery, Stephen (Committee member) / McBeath, Michael (Committee member) / Arizona State University (Publisher)
Created2014
Description
Myoelectric control is filled with potential to significantly change human-robot interaction. Humans desire compliant robots to safely interact in dynamic environments associated with daily activities. As surface electromyography non-invasively measures limb motion intent and correlates with joint stiffness during co-contractions, it has been identified as a candidate for naturally controlling such robots. However, state-of-the-art myoelectric interfaces have struggled to achieve both enhanced functionality and long-term reliability. As demands in myoelectric interfaces trend toward simultaneous and proportional control of compliant robots, robust processing of multi-muscle coordinations, or synergies, plays a larger role in the success of the control scheme. This dissertation presents a framework enhancing the utility of myoelectric interfaces by exploiting motor skill learning and flexible muscle synergies for reliable long-term simultaneous and proportional control of multifunctional compliant robots. The interface is learned as a new motor skill specific to the controller, providing long-term performance enhancements without requiring any retraining or recalibration of the system. Moreover, the framework offers control of both motion and stiffness simultaneously for intuitive and compliant human-robot interaction. The framework is validated through a series of experiments characterizing motor learning properties and demonstrating control capabilities not seen previously in the literature. The results validate the approach as a viable option to remove the trade-off between functionality and reliability that has hindered state-of-the-art myoelectric interfaces. Thus, this research contributes to the expansion and enhancement of myoelectric-controlled applications beyond commonly perceived anthropomorphic and "intuitive control" constraints and into more advanced robotic systems designed for everyday tasks.
ContributorsIson, Mark (Author) / Artemiadis, Panagiotis (Thesis advisor) / Santello, Marco (Committee member) / Greger, Bradley (Committee member) / Berman, Spring (Committee member) / Sugar, Thomas (Committee member) / Fainekos, Georgios (Committee member) / Arizona State University (Publisher)
Created2015