Matching Items (35)
Description
In order to successfully implement a neural prosthetic system, it is necessary to understand the control of limb movements and the representation of body position in the nervous system. As this development process continues, it is becoming increasingly important to understand the way multiple sensory modalities are used in limb representation. In a previous study, Shi et al. (2013) examined the multimodal basis of limb position in the superior parietal lobule (SPL) as monkeys reached to and held their arm at various target locations in a frontal plane. Visual feedback was withheld in half the trials, though non-visual (i.e., somatic) feedback was available in all trials. Previous analysis showed that some of the neurons were tuned to limb position and that some neurons had their response modulated by the presence or absence of visual feedback. This modulation manifested as decreases in firing rate variability in the vision condition compared to the non-vision condition. The decreases in firing rate variability, as shown through decreases in both the Fano factor of spike counts and the coefficient of variation of the inter-spike intervals, suggested that changes were taking place in both trial-by-trial and intra-trial variability. I sought to further probe the source of the change in intra-trial variability through spectral analysis. It was hypothesized that the presence of temporal structure in the vision condition would account for a regularity in firing that would have decreased intra-trial variability. While no peaks were apparent in the spectra, differences in spectral power between visual conditions were found. These differences are suggestive of unique temporal spiking patterns at the individual neuron level that may be influential at the population level.
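The two variability measures named above have standard definitions: the Fano factor is the variance of per-trial spike counts divided by their mean (trial-by-trial variability), and the ISI coefficient of variation is the standard deviation of inter-spike intervals divided by their mean (intra-trial regularity). A minimal pure-Python sketch, with function and variable names that are illustrative rather than taken from the thesis:

```python
from statistics import mean, pstdev

def fano_factor(spike_counts):
    """Trial-by-trial variability: variance of spike counts over their mean."""
    m = mean(spike_counts)
    return pstdev(spike_counts) ** 2 / m

def isi_cv(spike_times):
    """Intra-trial variability: coefficient of variation of inter-spike intervals."""
    isis = [t2 - t1 for t1, t2 in zip(spike_times, spike_times[1:])]
    return pstdev(isis) / mean(isis)

# A perfectly regular spike train yields an ISI CV of (numerically) zero,
# while a Poisson-like train yields values near 1.
regular_cv = isi_cv([0.0, 0.1, 0.2, 0.3, 0.4])
```

A vision-related drop in both quantities, as reported above, is consistent with spiking becoming more regular within trials and more repeatable across trials.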
ContributorsDyson, Keith (Author) / Buneo, Christopher A (Thesis advisor) / Helms-Tillery, Stephen I (Committee member) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created2013
Description
Humans' ability to perform fine object and tool manipulation is a defining feature of their sensorimotor repertoire. How the central nervous system builds and maintains internal representations of such skilled hand-object interactions has attracted significant attention over the past three decades. Nevertheless, two major gaps exist: a) how digit positions and forces are coordinated during natural manipulation tasks, and b) what mechanisms underlie the formation and retention of internal representations of dexterous manipulation. This dissertation addresses these two questions through five experiments that are based on novel grip devices and experimental protocols. It was found that high-level representation of manipulation tasks can be learned in an effector-independent fashion. Specifically, when challenged by trial-to-trial variability in finger positions or using digits that were not previously engaged in learning the task, subjects could adjust finger forces to compensate for this variability, thus leading to consistent task performance. The results from a follow-up experiment conducted in a virtual reality environment indicate that haptic feedback is sufficient to implement the above coordination between digit position and forces. However, it was also found that the generalizability of a learned manipulation is limited across tasks. Specifically, when subjects learned to manipulate the same object across different contexts that require different motor output, interference was found at the time of switching contexts. Data from additional studies provide evidence for parallel learning processes, which are characterized by different rates of decay and learning. These experiments have provided important insight into the neural mechanisms underlying learning and control of object manipulation. The present findings have potential biomedical applications including brain-machine interfaces, rehabilitation of hand function, and prosthetics.
ContributorsFu, Qiushi (Author) / Santello, Marco (Thesis advisor) / Helms Tillery, Stephen (Committee member) / Buneo, Christopher (Committee member) / Santos, Veronica (Committee member) / Artemiadis, Panagiotis (Committee member) / Arizona State University (Publisher)
Created2013
Description
Reaching movements are subject to noise in both the planning and execution phases of movement production. Although the effects of these noise sources in estimating and/or controlling endpoint position have been examined in many studies, the independent effects of limb configuration on endpoint variability have been largely ignored. The present study investigated the effects of arm configuration on the interaction between planning noise and execution noise. Subjects performed reaching movements to three targets located in a frontal plane. At the starting position, subjects matched one of two desired arm configuration 'templates', namely "adducted" and "abducted". These arm configurations were obtained by rotations about the shoulder-hand axis, thereby maintaining endpoint position. Visual feedback of the hand was varied from trial to trial, thereby increasing uncertainty in movement planning and execution. It was hypothesized that 1) the pattern of endpoint variability would be dependent on arm configuration and 2) these differences would be most apparent in conditions without visual feedback. It was found that there were differences in endpoint variability between arm configurations in both visual conditions, but these differences were much larger when visual feedback was withheld. The overall results suggest that patterns of endpoint variability are highly dependent on arm configuration, particularly in the absence of visual feedback. This suggests that in the presence of vision, movement planning in 3D space is performed using coordinates that are largely arm configuration independent (i.e. extrinsic coordinates). In contrast, in the absence of vision, movement planning in 3D space reflects a substantial contribution of intrinsic coordinates.
ContributorsLakshmi Narayanan, Kishor (Author) / Buneo, Christopher (Thesis advisor) / Santello, Marco (Committee member) / Helms Tillery, Stephen (Committee member) / Arizona State University (Publisher)
Created2013
Description
Humans are capable of transferring learning for anticipatory control of dexterous object manipulation despite changes in degrees-of-freedom (DoF), i.e., switching from lifting an object with two fingers to lifting the same object with three fingers. However, the role that tactile information plays in this transfer of learning is unknown. In this study, subjects lifted an L-shaped object with two fingers (2-DoF), and then lifted the object with three fingers (3-DoF). The subjects were divided into two groups--one group performed the task wearing a glove (to reduce tactile sensibility) upon the switch to 3-DoF (glove group), while the other group did not wear the glove (control group). Compensatory moment (torque) was used as a measure to determine how well the subject could minimize the tilt of the object following the switch from 2-DoF to 3-DoF. Upon the switch to 3-DoF, subjects wearing the glove generated a compensatory moment (Mcom) that had a significantly higher error than the average of the last five trials at the end of the 3-DoF block (p = 0.012), while the control subjects did not demonstrate a significant difference in Mcom. Additional effects of the reduction in tactile sensibility were: (1) the grip force for the group of subjects wearing the glove was significantly higher in the 3-DoF trials compared to the 2-DoF trials (p = 0.014), while the grip force of the control subjects was not significantly different; (2) the difference in centers of pressure between the thumb and fingers (ΔCoP) significantly increased in the 3-DoF block for the group of subjects wearing the glove, while the ΔCoP of the control subjects was not significantly different; (3) lastly, the control subjects demonstrated a greater increase in lift force than the group of subjects wearing the glove (though results were not significant). 
Taken together, these results suggest that different force modulation strategies are used depending on the amount of tactile feedback that is available to the subject. Therefore, reduction of tactile sensibility has important effects on subjects' ability to transfer learned manipulation across different DoF contexts.
ContributorsGaw, Nathan (Author) / Helms Tillery, Stephen (Thesis advisor) / Santello, Marco (Committee member) / Kleim, Jeffrey (Committee member) / Arizona State University (Publisher)
Created2014
Description
Myoelectric control is filled with potential to significantly change human-robot interaction. Humans desire compliant robots to safely interact in dynamic environments associated with daily activities. As surface electromyography non-invasively measures limb motion intent and correlates with joint stiffness during co-contractions, it has been identified as a candidate for naturally controlling such robots. However, state-of-the-art myoelectric interfaces have struggled to achieve both enhanced functionality and long-term reliability. As demands in myoelectric interfaces trend toward simultaneous and proportional control of compliant robots, robust processing of multi-muscle coordinations, or synergies, plays a larger role in the success of the control scheme. This dissertation presents a framework enhancing the utility of myoelectric interfaces by exploiting motor skill learning and flexible muscle synergies for reliable long-term simultaneous and proportional control of multifunctional compliant robots. The interface is learned as a new motor skill specific to the controller, providing long-term performance enhancements without requiring any retraining or recalibration of the system. Moreover, the framework offers control of both motion and stiffness simultaneously for intuitive and compliant human-robot interaction. The framework is validated through a series of experiments characterizing motor learning properties and demonstrating control capabilities not seen previously in the literature. The results validate the approach as a viable option to remove the trade-off between functionality and reliability that has hindered state-of-the-art myoelectric interfaces. Thus, this research contributes to the expansion and enhancement of myoelectric controlled applications beyond commonly perceived anthropomorphic and "intuitive control" constraints and into more advanced robotic systems designed for everyday tasks.
ContributorsIson, Mark (Author) / Artemiadis, Panagiotis (Thesis advisor) / Santello, Marco (Committee member) / Greger, Bradley (Committee member) / Berman, Spring (Committee member) / Sugar, Thomas (Committee member) / Fainekos, Georgios (Committee member) / Arizona State University (Publisher)
Created2015
Description
Anticipatory planning of digit positions and forces is critical for successful dexterous object manipulation. Anticipatory (feedforward) planning bypasses the inherent delays in reflex responses and sensorimotor integration associated with reactive (feedback) control. It has been suggested that feedforward and feedback strategies can be distinguished based on the profile of grip and load force rates during the period between initial contact with the object and object lift. However, this has not been validated in tasks that do not constrain digit placement. The purposes of this thesis were (1) to validate the hypothesis that force rate profiles are indicative of the control strategy used for object manipulation and (2) to test this hypothesis by comparing manipulation tasks performed with and without digit placement constraints. The first objective comprised two studies. In the first study an additional light or heavy mass was added to the base of the object. In the second study a mass was added, altering the object's center of mass (CM) location. In each experiment digit force rates were calculated between the times of initial digit contact and object lift. Digit force rates were fit to a Gaussian bell curve and the goodness of fit was compared across predictable and unpredictable mass and CM conditions. For both experiments, a predictable object mass and CM elicited bell shaped force rate profiles, indicative of feedforward control. For the second objective, a comparison of performance between subjects who performed the grasp task with either constrained or unconstrained digit contact locations was conducted. When digit location was unconstrained and CM was predictable, force rates were well fit to a bell shaped curve. However, the goodness of fit of the force rate profiles to the bell shaped curve was weaker for the constrained than the unconstrained digit placement condition. 
These findings seem to indicate that the brain can generate an appropriate feedforward control strategy even when digit placement is unconstrained and an infinite combination of digit placement and force solutions exists to lift the object successfully. Future work is needed that investigates the roles that digit positioning and tactile feedback play in anticipatory control of object manipulation.
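The bell-shaped force-rate criterion described above can be quantified as a goodness of fit (R²) between the measured force-rate profile and a Gaussian bell. The sketch below uses moment-based parameter estimates rather than nonlinear optimization, and is an illustrative assumption about the analysis, not the thesis's actual code:

```python
import math

def bell_fit_r2(t, rate):
    """Fit a Gaussian bell to a force-rate profile via moment estimates
    (weighted mean and variance of time, peak amplitude) and return R^2.
    A value near 1 indicates a bell-shaped, feedforward-like profile."""
    total = sum(rate)
    mu = sum(ti * ri for ti, ri in zip(t, rate)) / total          # weighted mean time
    var = sum(ri * (ti - mu) ** 2 for ti, ri in zip(t, rate)) / total
    amp = max(rate)                                               # peak force rate
    fit = [amp * math.exp(-((ti - mu) ** 2) / (2 * var)) for ti in t]
    ss_res = sum((r - f) ** 2 for r, f in zip(rate, fit))
    mean_r = total / len(rate)
    ss_tot = sum((r - mean_r) ** 2 for r in rate)
    return 1.0 - ss_res / ss_tot
```

Under this reading, predictable mass/CM conditions would yield high R² (smooth, single-peaked force rates), while corrective feedback adjustments after contact would lower it.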
ContributorsCooperhouse, Michael A (Author) / Santello, Marco (Thesis advisor) / Helms Tillery, Stephen (Committee member) / Buneo, Christopher (Committee member) / Arizona State University (Publisher)
Created2011
Description
An accurate sense of upper limb position is crucial to reaching movements, where sensory information about upper limb position and target location is combined to specify critical features of the movement plan. This dissertation was dedicated to studying the mechanisms of how the brain estimates limb position in space and the consequences of misestimation of limb position on movements. Two independent but related studies were performed. The first involved characterizing the neural mechanisms of limb position estimation in the non-human primate brain. Single unit recordings were obtained in area 5 of the posterior parietal cortex in order to examine the role of this area in estimating limb position based on visual and somatic signals (proprioceptive, efference copy). When examined individually, many area 5 neurons were tuned to the position of the limb in the workspace but very few neurons were modulated by visual feedback. At the population level, however, decoding of limb position was somewhat more accurate when visual feedback was provided. These findings support a role for area 5 in limb position estimation but also suggest that visual signals regarding limb position are only weakly represented in this area, and only at the population level. The second part of this dissertation focused on the consequences of misestimation of limb position for movement production. It is well known that limb movements are inherently variable. This variability could be the result of noise arising at one or more stages of movement production. Here we used biomechanical modeling and simulation techniques to characterize movement variability resulting from noise in estimating limb position ('sensing noise') and in planning required movement vectors ('planning noise'), and compared that to the variability expected due to noise in movement execution.
We found that the effects of sensing and planning related noise on movement variability were dependent upon both the planned movement direction and the initial configuration of the arm and were different in many respects from the effects of execution noise.
ContributorsShi, Ying (Author) / Buneo, Christopher A (Thesis advisor) / Helms Tillery, Stephen (Committee member) / Santello, Marco (Committee member) / He, Jiping (Committee member) / Santos, Veronica (Committee member) / Arizona State University (Publisher)
Created2011
Description
The ability to plan, execute, and control goal oriented reaching and grasping movements is among the most essential functions of the brain. Yet, these movements are inherently variable; a result of the noise pervading the neural signals underlying sensorimotor processing. The specific influences and interactions of these noise processes remain unclear. Thus several studies have been performed to elucidate the role and influence of sensorimotor noise on movement variability. The first study focuses on sensory integration and movement planning across the reaching workspace. An experiment was designed to examine the relative contributions of vision and proprioception to movement planning by measuring the rotation of the initial movement direction induced by a perturbation of the visual feedback prior to movement onset. The results suggest that the contribution of vision was relatively consistent across the evaluated workspace depths; however, the influence of vision differed between the vertical and lateral axes, indicating that additional factors beyond vision and proprioception influence the planning of 3-dimensional movements. Whereas the first study investigated the role of noise in sensorimotor integration, the second and third studies investigated the relative influence of sensorimotor noise on reaching performance. Specifically, they evaluated how the characteristics of neural processing that underlie movement planning and execution manifest in movement variability during natural reaching. Subjects performed reaching movements with and without visual feedback throughout the movement and the patterns of endpoint variability were compared across movement directions. The results of these studies suggest a primary role of visual feedback noise in shaping patterns of variability and in determining the relative influence of planning and execution related noise sources.
The final work considers a computational approach to characterizing how sensorimotor processes interact to shape movement variability. A model of multi-modal feedback control was developed to simulate the interaction of planning and execution noise on reaching variability. The model predictions suggest that anisotropic properties of feedback noise significantly affect the relative influence of planning and execution noise on patterns of reaching variability.
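The distinction between planning and execution noise lends itself to a toy Monte Carlo: planning noise perturbs the intended movement direction (producing variability mostly perpendicular to the reach), while execution noise perturbs the endpoint directly (roughly isotropic). This is a minimal illustration of that interaction, not the multi-modal feedback-control model developed in the dissertation; all parameter values are arbitrary:

```python
import math
import random

def simulate_endpoints(target_dir, n=5000, planning_sd=0.05, execution_sd=0.005):
    """Simulate 2D endpoints of a unit-length reach toward target_dir (radians).
    Planning noise perturbs the planned direction; execution noise perturbs
    the final endpoint coordinates directly."""
    endpoints = []
    for _ in range(n):
        theta = target_dir + random.gauss(0.0, planning_sd)    # planning noise
        x = math.cos(theta) + random.gauss(0.0, execution_sd)  # execution noise
        y = math.sin(theta) + random.gauss(0.0, execution_sd)
        endpoints.append((x, y))
    return endpoints
```

Even in this stripped-down form, the endpoint cloud is elongated perpendicular to the movement direction when planning noise dominates, which is the kind of direction-dependent anisotropy the studies above probe.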
ContributorsApker, Gregory Allen (Author) / Buneo, Christopher A (Thesis advisor) / Helms Tillery, Stephen (Committee member) / Santello, Marco (Committee member) / Santos, Veronica (Committee member) / Si, Jennie (Committee member) / Arizona State University (Publisher)
Created2012
Description
Approximately 1.7 million people in the United States are living with limb loss and are in need of more sophisticated devices that better mimic human function. In the Human Machine Integration Laboratory, a powered, transtibial prosthetic ankle was designed and built that allows a person to regain ankle function with improved ankle kinematics and kinetics. The ankle allows a person to walk normally as well as up and down stairs, but volitional control is still an issue. This research tackled the problem of giving the user more control over the prosthetic ankle using a force/torque circuit. When the user presses against a force/torque sensor located inside the socket, the prosthetic foot plantar flexes or moves downward. This will help the user add additional push-off force when walking up slopes or stairs. It also gives the user a sense of control over the device.
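The force-to-motion mapping described above could be sketched as a simple threshold-plus-gain rule: below a press threshold the ankle ignores the sensor, above it the plantar-flexion command grows with force up to a mechanical limit. All names, thresholds, gains, and limits here are hypothetical, not the lab's actual controller:

```python
def plantarflex_command(sensor_force, threshold=5.0, gain=2.0, max_angle=20.0):
    """Map a socket force/torque sensor reading (N) to an added plantar-flexion
    angle command (degrees). Illustrative values only (hypothetical)."""
    if sensor_force <= threshold:
        return 0.0                       # below threshold: no volitional push-off
    angle = gain * (sensor_force - threshold)
    return min(angle, max_angle)         # saturate at the mechanical limit
```

A dead-band like this keeps incidental socket contact from triggering motion, while the saturation protects the actuator; real controllers would also filter the sensor signal and rate-limit the command.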
ContributorsFronczyk, Adam (Author) / Sugar, Thomas G. (Thesis advisor) / Helms-Tillery, Stephen (Thesis advisor) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created2012
Description
Situations of sensory overload are steadily becoming more frequent as the ubiquity of technology approaches reality--particularly with the advent of socio-communicative smartphone applications, and pervasive, high speed wireless networks. Although the ease of accessing information has improved our communication effectiveness and efficiency, our visual and auditory modalities--those modalities that today's computerized devices and displays largely engage--have become overloaded, creating possibilities for distractions, delays and high cognitive load, which in turn can lead to a loss of situational awareness, increasing the chances of life-threatening situations such as texting while driving. Surprisingly, alternative modalities for information delivery have seen little exploration. Touch, in particular, is a promising candidate given that the skin is our largest sensory organ, with impressive spatial and temporal acuity. Although some approaches have been proposed for touch-based information delivery, they are not without limitations, including high learning curves, limited applicability and/or limited expression. This is largely due to the lack of a versatile, comprehensive design theory--specifically, a theory that addresses the design of touch-based building blocks for expandable, efficient, rich and robust touch languages that are easy to learn and use. Moreover, beyond design, there is a lack of implementation and evaluation theories for such languages. To overcome these limitations, a unified, theoretical framework, inspired by natural, spoken language, is proposed called Somatic ABC's for Articulating (designing), Building (developing) and Confirming (evaluating) touch-based languages. To evaluate the usefulness of Somatic ABC's, its design, implementation and evaluation theories were applied to create communication languages for two very unique application areas: audio described movies and motor learning.
These applications were chosen as they presented opportunities for complementing communication by offloading information, typically conveyed visually and/or aurally, to the skin. For both studies, it was found that Somatic ABC's aided the design, development and evaluation of rich somatic languages with distinct and natural communication units.
ContributorsMcDaniel, Troy Lee (Author) / Panchanathan, Sethuraman (Thesis advisor) / Davulcu, Hasan (Committee member) / Li, Baoxin (Committee member) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created2012