Matching Items (11)

Description
Reaching movements are subject to noise in both the planning and execution phases of movement production. Although the effects of these noise sources on estimating and/or controlling endpoint position have been examined in many studies, the independent effects of limb configuration on endpoint variability have been largely ignored. The present study investigated the effects of arm configuration on the interaction between planning noise and execution noise. Subjects performed reaching movements to three targets located in a frontal plane. At the starting position, subjects matched one of two desired arm configuration 'templates', namely "adducted" and "abducted". These arm configurations were obtained by rotations about the shoulder-hand axis, thereby maintaining endpoint position. Visual feedback of the hand was varied from trial to trial, thereby increasing uncertainty in movement planning and execution. It was hypothesized that 1) the pattern of endpoint variability would depend on arm configuration and 2) these differences would be most apparent in conditions without visual feedback. It was found that there were differences in endpoint variability between arm configurations in both visual conditions, but these differences were much larger when visual feedback was withheld. The overall results suggest that patterns of endpoint variability are highly dependent on arm configuration, particularly in the absence of visual feedback. This suggests that in the presence of vision, movement planning in 3D space is performed using coordinates that are largely arm configuration independent (i.e. extrinsic coordinates). In contrast, in the absence of vision, movement planning in 3D space reflects a substantial contribution of intrinsic coordinates.
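As a purely illustrative aside, the "patterns of endpoint variability" described above are commonly summarized by the covariance of endpoint positions within each target, arm configuration, and vision condition. The minimal Python sketch below shows one such summary (95% tolerance ellipsoids); the function name, sample sizes, and data are hypothetical placeholders, not the study's actual analysis.

```python
import numpy as np

def endpoint_variability(endpoints):
    """Summarize 3D endpoint scatter for one target/arm-configuration/vision condition.

    endpoints : (n_trials, 3) array of final hand positions in extrinsic (x, y, z)
    coordinates. Returns the covariance matrix plus the semi-axis lengths and
    directions of the corresponding 95% tolerance ellipsoid.
    """
    cov = np.cov(endpoints, rowvar=False)        # 3x3 endpoint covariance
    eigvals, eigvecs = np.linalg.eigh(cov)       # principal variances and directions
    chi2_95 = 7.815                              # chi-square critical value, 3 dof, p = 0.05
    return cov, np.sqrt(chi2_95 * eigvals), eigvecs

# Hypothetical usage: compare an adducted and an abducted start without vision.
rng = np.random.default_rng(0)
for label, spread in [("adducted, no vision", 12.0), ("abducted, no vision", 20.0)]:
    pts = rng.normal(0.0, spread, size=(40, 3))  # placeholder endpoints (mm)
    _, semi_axes, _ = endpoint_variability(pts)
    print(label, "ellipsoid semi-axes (mm):", np.round(semi_axes, 1))
```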
Contributors: Lakshmi Narayanan, Kishor (Author) / Buneo, Christopher (Thesis advisor) / Santello, Marco (Committee member) / Helms Tillery, Stephen (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
An accurate sense of upper limb position is crucial to reaching movements, where sensory information about upper limb position and target location is combined to specify critical features of the movement plan. This dissertation was dedicated to studying the mechanisms by which the brain estimates limb position in space and the consequences of misestimating limb position for movements. Two independent but related studies were performed. The first involved characterizing the neural mechanisms of limb position estimation in the non-human primate brain. Single unit recordings were obtained in area 5 of the posterior parietal cortex in order to examine the role of this area in estimating limb position based on visual and somatic signals (proprioceptive, efference copy). When examined individually, many area 5 neurons were tuned to the position of the limb in the workspace, but very few neurons were modulated by visual feedback. At the population level, however, decoding of limb position was somewhat more accurate when visual feedback was provided. These findings support a role for area 5 in limb position estimation but also suggest that visual signals regarding limb position are only weakly represented in this area, and only at the population level. The second part of this dissertation focused on the consequences of misestimation of limb position for movement production. It is well known that limb movements are inherently variable. This variability could be the result of noise arising at one or more stages of movement production. Here we used biomechanical modeling and simulation techniques to characterize movement variability resulting from noise in estimating limb position ('sensing noise') and in planning required movement vectors ('planning noise'), and compared it to the variability expected due to noise in movement execution. We found that the effects of sensing- and planning-related noise on movement variability were dependent upon both the planned movement direction and the initial configuration of the arm, and were different in many respects from the effects of execution noise.
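The population-level decoding mentioned above can be illustrated, in spirit, as a cross-validated regression from trial firing rates to hand position. The sketch below assumes placeholder data shapes and a simple ridge decoder for illustration only; the dissertation's actual decoder and recordings may differ.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

# Hypothetical data shapes: trial firing rates for 60 area 5 units and the
# corresponding static hand positions. Values are random placeholders, so the
# reported error is meaningless; only the decoding recipe is illustrated.
rng = np.random.default_rng(1)
n_trials, n_units = 200, 60
rates = rng.poisson(5.0, size=(n_trials, n_units)).astype(float)   # spikes/s
positions = rng.uniform(-20.0, 20.0, size=(n_trials, 3))           # hand position (cm)

decoder = Ridge(alpha=1.0)                                 # simple linear decoder
pred = cross_val_predict(decoder, rates, positions, cv=5)  # cross-validated estimates
err = np.linalg.norm(pred - positions, axis=1)             # per-trial 3D error
print("median cross-validated decoding error (cm):", round(float(np.median(err)), 2))
```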
Contributors: Shi, Ying (Author) / Buneo, Christopher A (Thesis advisor) / Helms Tillery, Stephen (Committee member) / Santello, Marco (Committee member) / He, Jiping (Committee member) / Santos, Veronica (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
In the past decade, research on the motor control side of neuroprosthetics has steadily gained momentum. However, modern research in prosthetic development supplements a focus on motor control with a concentration on sensory feedback. Simulating sensation is a central issue because without sensory capabilities, the sophistication of the most advanced motor control system fails to reach its full potential. This research is an effort toward the development of sensory feedback specifically for neuroprosthetic hands. The present aim of this work is to understand the processing and representation of cutaneous sensation by evaluating performance and neural activity in somatosensory cortex (SI) during a grasp task. A non-human primate (Macaca mulatta) was trained to reach out and grasp textured instrumented objects with a precision grip. Two different textures for the objects were used, 100% cotton cloth and 60-grade sandpaper, and the target object was presented at two different orientations. Of the 167 cells that were isolated for this experiment, only 42 were recorded while the subject executed a few blocks of successful trials for both textures. These latter cells were used in this study's statistical analysis. Of these, 37 units (88%) exhibited statistically significant task related activity. Twenty-two units (52%) exhibited statistically significant tuning to texture, and 16 units (38%) exhibited statistically significant tuning to posture. Ten of the cells (24%) exhibited statistically significant tuning to both texture and posture. These data suggest that single units in somatosensory cortex can encode multiple phenomena such as texture and posture. However, if this information is to be used to provide sensory feedback for a prosthesis, scientists must learn to further parse cortical activity to discover how to induce specific modalities of sensation. Future experiments should therefore be developed that probe more variables and that more systematically and comprehensively scan somatosensory cortex. This will allow researchers to seek out the existence or non-existence of cortical pockets reserved for certain modalities of sensation, which will be valuable in learning how to later provide appropriate sensory feedback for a prosthesis through cortical stimulation.
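Tuning of single units to texture and posture, as quantified above, is the sort of effect that can be assessed with a two-factor analysis of trial firing rates. The following sketch is one hedged illustration using hypothetical trial labels and rates; it is not the study's actual statistical procedure.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical trial-by-trial firing rates for one SI unit, labeled by the
# texture of the grasped object and by the object orientation ("posture").
rng = np.random.default_rng(2)
n_trials = 80
df = pd.DataFrame({
    "rate": rng.gamma(shape=4.0, scale=5.0, size=n_trials),       # placeholder rates (Hz)
    "texture": rng.choice(["cloth", "sandpaper"], size=n_trials),
    "posture": rng.choice(["orientation_1", "orientation_2"], size=n_trials),
})

# Two-factor ANOVA: does the unit's firing rate depend on texture, posture, or both?
model = smf.ols("rate ~ C(texture) * C(posture)", data=df).fit()
print(anova_lm(model, typ=2)[["F", "PR(>F)"]])
```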
Contributors: Naufel, Stephanie (Author) / Helms Tillery, Stephen I (Thesis advisor) / Santos, Veronica J (Thesis advisor) / Buneo, Christopher A (Committee member) / Robert, Jason S (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Robust and stable decoding of neural signals is imperative for implementing a useful neuroprosthesis capable of carrying out dexterous tasks. A nonhuman primate (NHP) was trained to perform combined flexions of the thumb, index and middle fingers in addition to individual flexions and extensions of the same digits. An array of microelectrodes was implanted in the hand area of the motor cortex of the NHP and used to record action potentials during finger movements. A Support Vector Machine (SVM) was used to classify which finger movement the NHP was making based upon action potential firing rates. The effects of four feature selection techniques (Wilcoxon signed-rank test, Relative Importance, Principal Component Analysis, and Mutual Information Maximization) were compared on the basis of SVM classification performance. SVM classification was used to examine the functional parameters of (i) efficacy, (ii) endurance to simulated failure and (iii) longevity of classification. The effect of using isolated single-unit versus multi-unit firing rates as the feature vector supplied to the SVM was also compared. The best classification performance was obtained on post-implantation day 36. On that day, using multi-unit firing rates, the worst classification accuracy resulted from features selected with the Wilcoxon signed-rank test (51.12 ± 0.65%) and the best from Mutual Information Maximization (93.74 ± 0.32%). Using single-unit firing rates on the same day, classification accuracy was 88.85 ± 0.61% with the Wilcoxon signed-rank test and 95.60 ± 0.52% with Mutual Information Maximization (degrees of freedom = 10, chance level = 10%).
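A hedged sketch of the kind of decoding pipeline described above (mutual-information feature selection followed by SVM classification of firing-rate vectors) is shown below. The data shapes, channel counts, and parameter choices are assumptions for illustration, not the thesis's actual settings.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical data: trial firing-rate vectors from 100 multi-unit channels,
# labeled by which of 10 finger movements was performed (chance = 10%).
rng = np.random.default_rng(3)
n_trials, n_channels, n_movements = 300, 100, 10
X = rng.poisson(8.0, size=(n_trials, n_channels)).astype(float)
y = rng.integers(0, n_movements, size=n_trials)

# Select the 30 channels sharing the most mutual information with the movement
# label, then classify with a linear-kernel SVM; accuracy is cross-validated.
clf = make_pipeline(
    StandardScaler(),
    SelectKBest(mutual_info_classif, k=30),
    SVC(kernel="linear", C=1.0),
)
scores = cross_val_score(clf, X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2%} +/- {scores.std():.2%}")
```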
Contributors: Padmanaban, Subash (Author) / Greger, Bradley (Thesis advisor) / Santello, Marco (Thesis advisor) / Helms Tillery, Stephen (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Determining the characteristics of an object during a grasping task requires a combination of mechanoreceptors in the muscles and fingertips. The width of a person's finger aperture during the grasp may also affect the accuracy with which that person determines hardness. These experiments aim to investigate how an individual perceives hardness amongst a gradient of varying hardness levels. The trend in the responses is assumed to follow a general psychometric function. This provides information about subjects' abilities to differentiate between two very different objects, and their tendency toward chance-level guessing when two similar objects are presented. After obtaining these data, it is then important to test varying finger apertures in an object-grasping task. This allows insight into the effect of aperture on the obtained psychometric function, ultimately providing information about tactile and haptic feedback for further application in neuroprosthetic devices. Three separate experiments were performed in order to test the effect of finger aperture on object hardness differentiation. The first experiment tested a one-finger pressing motion among a hardness gradient of ballistic gelatin cubes. Subjects were asked to compare the hardness of one cube to another, which produced an S-curve that accurately followed the psychometric function. The second experiment utilized the Phantom haptic device in a similar setup, but with a precision-grip grasping motion instead. This showed a more linear curve: the percentage reported as harder increased as the hardness of the second presented cube increased, which was attributed to both limitations of the experimental setup and the scale of the overall hardness gradient. The third experiment then tested the effect of three finger apertures in the same experimental setup. By providing three separate testing scenarios in the precision-grip task, the experiment demonstrated that the level of finger aperture has no significant effect on an individual's ability to perceive hardness.
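The psychometric function referred to above is typically fit as a sigmoid relating the probability of a "harder" judgment to the hardness difference between the comparison and reference stimuli. The sketch below fits such a curve to made-up proportions purely for illustration; the actual stimulus levels and responses in the thesis differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, alpha, beta):
    """Logistic psychometric function: P('second cube judged harder') as a
    function of the hardness difference x between comparison and reference."""
    return 1.0 / (1.0 + np.exp(-(x - alpha) / beta))

# Made-up pooled responses: hardness difference (arbitrary gelatin-hardness
# steps) versus the fraction of trials on which the comparison was called harder.
diff = np.array([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0])
p_harder = np.array([0.05, 0.12, 0.30, 0.52, 0.74, 0.90, 0.97])

(alpha, beta), _ = curve_fit(psychometric, diff, p_harder, p0=[0.0, 1.0])
print(f"point of subjective equality: {alpha:.2f}, slope parameter: {beta:.2f}")
```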
Contributors: Maestas, Gabrielle Elise (Author) / Helms Tillery, Stephen (Thesis director) / Tanner, Justin (Committee member) / Barrett, The Honors College (Contributor) / Harrington Bioengineering Program (Contributor)
Created: 2015-05
Description
Motor behavior is prone to variable conditions and deviates further in disorders affecting the nervous system. A combination of environmental and neural factors affects the amount of uncertainty. Although the influence of these factors on estimating endpoint positions has been examined, the role of limb configuration in endpoint variability has been mostly ignored. Characterizing the influence of arm configuration (i.e. intrinsic factors) would allow greater comprehension of sensorimotor integration and assist in interpreting exaggerated movement variability in patients. In this study, subjects were placed in a 3-D virtual reality environment and were asked to move from a starting position to one of three targets in the frontal plane, with and without visual feedback of the moving limb. Alternating visual feedback across trials increased uncertainty between the planning and execution phases. The starting limb configurations, adducted and abducted, were varied in separate blocks. Arm configurations were set up by rotating the arm about the shoulder-hand axis so as to maintain endpoint position. It was hypothesized that 1) patterns of endpoint variability would depend on the starting arm configuration and 2) any observed differences would be more apparent in conditions without visual feedback. The results indicated that there were differences in endpoint variability between arm configurations in both visual conditions, but these differences increased when visual feedback was withheld. Overall, this suggests that in the presence of visual feedback, planning of movements in 3D space mostly uses coordinates that are arm configuration independent. Without visual feedback, on the other hand, planning of movements in 3D space relies substantially on intrinsic coordinates.
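The manipulation described above, rotating the arm about the shoulder-hand axis so that the endpoint stays fixed, can be illustrated with a simple Rodrigues rotation of the elbow position. The segment geometry below is hypothetical and serves only to show the idea.

```python
import numpy as np

def rotate_about_shoulder_hand_axis(elbow, shoulder, hand, angle_deg):
    """Rotate the elbow position about the shoulder-hand axis by angle_deg.

    Because the rotation axis passes through both the shoulder and the hand,
    the endpoint (hand) position is unchanged; only the arm configuration
    (e.g. adducted versus abducted elbow) changes.
    """
    k = (hand - shoulder) / np.linalg.norm(hand - shoulder)   # unit rotation axis
    v = elbow - shoulder                                      # elbow relative to shoulder
    theta = np.radians(angle_deg)
    # Rodrigues' rotation formula
    v_rot = (v * np.cos(theta)
             + np.cross(k, v) * np.sin(theta)
             + k * np.dot(k, v) * (1.0 - np.cos(theta)))
    return shoulder + v_rot

# Hypothetical segment geometry (metres): an adducted start rotated 45 degrees.
shoulder = np.array([0.00, 0.00, 0.00])
elbow = np.array([0.05, -0.25, 0.10])
hand = np.array([0.10, -0.30, 0.45])
print("rotated elbow:", np.round(rotate_about_shoulder_hand_axis(elbow, shoulder, hand, 45.0), 3))
print("hand (unchanged):", hand)
```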
Contributors: Rahman, Qasim (Author) / Buneo, Christopher (Thesis director) / Helms Tillery, Stephen (Committee member) / Barrett, The Honors College (Contributor) / Harrington Bioengineering Program (Contributor)
Created: 2014-05
Description
Brain-machine interfaces (BMIs) were first imagined as a technology that would allow subjects to have direct communication with prosthetics and external devices (e.g. control over a computer cursor or robotic arm movement). Operation of these devices was not automatic, and subjects needed calibration and training in order to master this control. In short, learning became a key component in controlling these systems. As a result, BMIs have become ideal tools to probe and explore brain activity, since they allow the isolation of neural inputs and systematic alteration of the relationships between neural signals and output. I have used BMIs to explore the process of brain adaptability in a motor-like task. To this end, I trained non-human primates to control a 3D cursor and adapt to two different perturbations: a visuomotor rotation, uniform across the neural ensemble, and a decorrelation task, which non-uniformly altered the relationship between the activity of particular neurons in an ensemble and movement output. I measured individual- and population-level changes in the neural ensemble as subjects honed their skills over the span of several days. I found some similarities in the adaptation process elicited by these two tasks. On one hand, individual neurons displayed tuning changes across the entire ensemble after task adaptation: most neurons displayed transient changes in their preferred directions, and most neuron pairs showed changes in their cross-correlations during the learning process. On the other hand, I also measured population-level adaptation in the neural ensemble: the underlying neural manifolds that control these neural signals also changed dynamically during adaptation. I found that the neural circuits seem to apply an exploratory strategy when adapting to new tasks. Our results suggest that information and trajectories in the neural space increase after the perturbations are initially introduced, before the subject settles into workable solutions. These results provide new insights into both the underlying population-level processes in motor learning and the changes in neural coding that are necessary for subjects to learn to control neuroprosthetics. Understanding these mechanisms can help us create better control algorithms and design training paradigms that take advantage of these processes.
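One common way to operationalize the "neural manifold" idea mentioned above is to estimate a low-dimensional subspace of ensemble activity with PCA and track how many components are needed to explain most of the variance. The sketch below does this on synthetic low-rank data purely as an illustration; it is not the dissertation's analysis, and the dimensionalities are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic ensembles: trial-by-unit firing rates with an embedded low-rank
# structure (8 latent dimensions "early", 14 "late") plus a little noise. The
# numbers are placeholders meant only to show the computation.
rng = np.random.default_rng(5)
early = rng.normal(size=(200, 8)) @ rng.normal(size=(8, 40)) + 0.1 * rng.normal(size=(200, 40))
late = rng.normal(size=(200, 14)) @ rng.normal(size=(14, 40)) + 0.1 * rng.normal(size=(200, 40))

for label, rates in [("early in adaptation", early), ("late in adaptation", late)]:
    pca = PCA().fit(rates)
    cum_var = np.cumsum(pca.explained_variance_ratio_)
    dim = int(np.searchsorted(cum_var, 0.90)) + 1   # components needed for 90% variance
    print(f"{label}: ~{dim} components capture 90% of ensemble variance")
```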
Contributors: Armenta Salas, Michelle (Author) / Helms Tillery, Stephen I (Thesis advisor) / Si, Jennie (Committee member) / Buneo, Christopher (Committee member) / Santello, Marco (Committee member) / Kleim, Jeffrey (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Humans have an inherent capability for performing highly dexterous and skillful tasks with their arms, involving maintaining posture, moving, and interacting with the environment. The latter requires them to control the dynamic characteristics of the upper limb musculoskeletal system. Inertia, damping and stiffness, which together constitute mechanical impedance, give a strong representation of these characteristics. Many previous studies have shown that arm posture is a dominant factor in determining endpoint impedance in the horizontal (transverse) plane. The objective of this thesis is to characterize endpoint impedance of the human arm in three-dimensional (3D) space. Moreover, it investigates and models the control of arm impedance with increasing levels of muscle co-contraction. The characterization is done through experimental trials in which human subjects maintained arm posture while being perturbed by a robot arm. Moreover, the subjects were asked to control the level of their arm muscles' co-contraction, using visual feedback of their muscles' activation, in order to investigate the effect of muscle co-contraction on arm impedance. The results of this study showed a very interesting, anisotropic increase in arm stiffness due to muscle co-contraction. This can lead to very useful conclusions about arm biomechanics, as well as implications for human motor control and, more specifically, the control of arm impedance through muscle co-contraction. The study also has implications for the EMG-based control of robots that physically interact with humans.
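Endpoint impedance of the kind characterized above is often identified by fitting a linear inertia-damping-stiffness model to the forces evoked by small robot-applied perturbations. The sketch below shows such a least-squares fit on synthetic data; the model structure and parameter values are assumptions for illustration only, not the thesis's identification procedure.

```python
import numpy as np

def estimate_endpoint_impedance(x, v, a, f):
    """Least-squares fit of the linear endpoint impedance model
    f(t) = M a(t) + B v(t) + K x(t) from small perturbations about a posture.

    x, v, a, f : (n_samples, 3) endpoint displacement, velocity, acceleration
    and restoring force. Returns 3x3 inertia (M), damping (B), stiffness (K).
    """
    regressors = np.hstack([a, v, x])                        # (n_samples, 9)
    coeffs, *_ = np.linalg.lstsq(regressors, f, rcond=None)  # (9, 3)
    return coeffs[0:3].T, coeffs[3:6].T, coeffs[6:9].T       # M, B, K

# Check on noise-free synthetic data with assumed (diagonal) arm parameters.
rng = np.random.default_rng(6)
n = 500
x = 0.01 * rng.normal(size=(n, 3))   # m
v = 0.1 * rng.normal(size=(n, 3))    # m/s
a = rng.normal(size=(n, 3))          # m/s^2
M_true = np.diag([1.5, 1.8, 1.2])
B_true = np.diag([8.0, 10.0, 6.0])
K_true = np.diag([300.0, 450.0, 200.0])
f = a @ M_true.T + v @ B_true.T + x @ K_true.T
M, B, K = estimate_endpoint_impedance(x, v, a, f)
print("recovered stiffness diagonal (N/m):", np.round(np.diag(K), 1))
```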
Contributors: Patel, Harshil Naresh (Author) / Artemiadis, Panagiotis (Thesis advisor) / Berman, Spring (Committee member) / Helms Tillery, Stephen (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The human hand relies on information from the surrounding environment to distinguish objects based on qualities like size, texture, weight, and compliance. The size of an object can be determined from tactile feedback, proprioception, and visual feedback. This experiment aims to determine the accuracy of size discrimination of physical and virtual objects using proprioceptive and tactile feedback. Using both senses will help determine how much proprioceptive and tactile feedback contribute to discriminating small size variations and whether replacing a missing sensation will increase the subject's accuracy. Ultimately, determining the specific contributions of tactile and proprioceptive feedback mechanisms during object manipulation is important in order to give prosthetic hand users the ability of stereognosis, among other manipulation capabilities. Two different experiments, using physical and virtual objects, were required to discover the roles of tactile and proprioceptive feedback. Subjects were asked to compare the size of one block to that of the previous one. The blocks increased in size in two-millimeter increments and were presented in randomized order to determine whether subjects could correctly identify whether a block was smaller, larger, or the same size as the previous block. In the proprioceptive experiment, subjects completed two sub-sets of trials, each with a different non-tactile cue. The experiment demonstrated that subjects performed better with physical objects than with virtual objects. This suggests that size discrimination is possible in the absence of tactile feedback, but that tactile input is necessary for accuracy in discriminating small size differences.
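The smaller/larger/same judgments described above can be scored as discrimination accuracy per size difference. The sketch below uses a handful of made-up trials only to illustrate the bookkeeping; the actual trial counts and responses in the thesis differ.

```python
import numpy as np

# Hypothetical trial records: true size difference in mm between the current and
# previous block (multiples of 2 mm) and the subject's response on each trial.
size_diff = np.array([-4, -2, 0, 2, 4, -2, 0, 2, -4, 4])
response = np.array(["smaller", "smaller", "same", "larger", "larger",
                     "same", "same", "larger", "smaller", "larger"])

truth = np.where(size_diff < 0, "smaller", np.where(size_diff > 0, "larger", "same"))
for d in np.unique(np.abs(size_diff)):
    mask = np.abs(size_diff) == d
    accuracy = np.mean(response[mask] == truth[mask])   # fraction of correct judgments
    print(f"|size difference| = {d} mm: {accuracy:.0%} correct")
```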
Contributors: Frear, Darcy Lynn (Author) / Helms Tillery, Stephen (Thesis director) / Buneo, Christopher (Committee member) / Overstreet, Cynthia (Committee member) / Barrett, The Honors College (Contributor) / Harrington Bioengineering Program (Contributor)
Created: 2013-05