Matching Items (16)
Description
This research is focused on two separate but related topics. The first uses an electroencephalographic (EEG) brain-computer interface (BCI) to explore the phenomenon of motor learning transfer. The second takes a closer look at the EEG-BCI itself and tests an alternate way of mapping EEG signals into machine commands. We test whether motor learning transfer is more related to the use of shared neural structures between imagery and motor execution or to more generalized cognitive factors. Using an EEG-BCI, we train one group of participants to control the movements of a cursor using embodied motor imagery. A second group is trained to control the cursor using abstract motor imagery. A third control group practices moving the cursor using an arm and finger on a touch screen. We hypothesized that if motor learning transfer is related to the use of shared neural structures, then the embodied motor imagery group would show more learning transfer than the abstract imagery group. If, on the other hand, motor learning transfer results from more general cognitive processes, then the abstract motor imagery group should also demonstrate motor learning transfer to the manual performance of the same task. Our findings support the hypothesis that motor learning transfer is due to the use of shared neural structures between imagery and motor execution of a task. The abstract group showed no motor learning transfer despite being better at EEG-BCI control than the embodied group. The fact that more participants were able to learn EEG-BCI control using abstract imagery suggests that abstract imagery may be more suitable for EEG-BCIs for some disabilities, while embodied imagery may be more suitable for others. In Part 2, EEG data collected in the above experiment were used to train an artificial neural network (ANN) to map EEG signals to machine commands.
We found that our open-source ANN using spectrograms generated from SFFTs is fundamentally different and in some ways superior to Emotiv's proprietary method. Our use of novel combinations of existing technologies along with abstract and embodied imagery facilitates adaptive customization of EEG-BCI control to meet needs of individual users.
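The spectrogram-to-ANN pipeline described above can be sketched in a few lines. This is a minimal illustration only: it assumes the SFFTs refer to short-time Fourier transforms, and the sampling rate, window length, network size, and synthetic trial data are all illustrative assumptions, not the thesis's values or the Emotiv interface.

```python
# Sketch: EEG trial -> log-power spectrogram -> small ANN -> discrete command.
import numpy as np
from scipy.signal import stft
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
fs = 128  # Hz, a typical consumer-headset sampling rate (assumption)

def eeg_to_spectrogram(trial):
    """Turn one single-channel EEG trial into a flattened log-power spectrogram."""
    _, _, Z = stft(trial, fs=fs, nperseg=64)
    return np.log1p(np.abs(Z)).ravel()

# Synthetic stand-in data: 2 s trials labeled with one of two imagined commands.
trials = rng.standard_normal((40, 2 * fs))
labels = rng.integers(0, 2, size=40)

X = np.array([eeg_to_spectrogram(t) for t in trials])
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X, labels)
commands = clf.predict(X)  # each prediction would drive a cursor command
```

In a real BCI the classifier would be trained per user, which is consistent with the adaptive-customization point made above.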
Contributors: da Silva, Flavio J. K (Author) / McBeath, Michael K (Thesis advisor) / Helms Tillery, Stephen (Committee member) / Presson, Clark (Committee member) / Sugar, Thomas (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Humans' ability to perform fine object and tool manipulation is a defining feature of their sensorimotor repertoire. How the central nervous system builds and maintains internal representations of such skilled hand-object interactions has attracted significant attention over the past three decades. Nevertheless, two major gaps exist: a) how digit positions and forces are coordinated during natural manipulation tasks, and b) what mechanisms underlie the formation and retention of internal representations of dexterous manipulation. This dissertation addresses these two questions through five experiments that are based on novel grip devices and experimental protocols. It was found that high-level representation of manipulation tasks can be learned in an effector-independent fashion. Specifically, when challenged by trial-to-trial variability in finger positions or using digits that were not previously engaged in learning the task, subjects could adjust finger forces to compensate for this variability, thus leading to consistent task performance. The results from a follow-up experiment conducted in a virtual reality environment indicate that haptic feedback is sufficient to implement the above coordination between digit position and forces. However, it was also found that the generalizability of a learned manipulation is limited across tasks. Specifically, when subjects learned to manipulate the same object across different contexts that require different motor output, interference was found at the time of switching contexts. Data from additional studies provide evidence for parallel learning processes, which are characterized by different rates of decay and learning. These experiments have provided important insight into the neural mechanisms underlying learning and control of object manipulation. The present findings have potential biomedical applications including brain-machine interfaces, rehabilitation of hand function, and prosthetics.
Contributors: Fu, Qiushi (Author) / Santello, Marco (Thesis advisor) / Helms Tillery, Stephen (Committee member) / Buneo, Christopher (Committee member) / Santos, Veronica (Committee member) / Artemiadis, Panagiotis (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Reaching movements are subject to noise in both the planning and execution phases of movement production. Although the effects of these noise sources in estimating and/or controlling endpoint position have been examined in many studies, the independent effects of limb configuration on endpoint variability have been largely ignored. The present study investigated the effects of arm configuration on the interaction between planning noise and execution noise. Subjects performed reaching movements to three targets located in a frontal plane. At the starting position, subjects matched one of two desired arm configuration 'templates', namely "adducted" and "abducted". These arm configurations were obtained by rotations about the shoulder-hand axis, thereby maintaining endpoint position. Visual feedback of the hand was varied from trial to trial, thereby increasing uncertainty in movement planning and execution. It was hypothesized that 1) the pattern of endpoint variability would be dependent on arm configuration and 2) these differences would be most apparent in conditions without visual feedback. It was found that there were differences in endpoint variability between arm configurations in both visual conditions, but these differences were much larger when visual feedback was withheld. The overall results suggest that patterns of endpoint variability are highly dependent on arm configuration, particularly in the absence of visual feedback. This suggests that in the presence of vision, movement planning in 3D space is performed using coordinates that are largely arm configuration independent (i.e., extrinsic coordinates). In contrast, in the absence of vision, movement planning in 3D space reflects a substantial contribution of intrinsic coordinates.
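The configuration manipulation described above has a simple geometric reading: rotating the elbow about the line from shoulder to hand changes arm posture (adducted vs. abducted) while leaving the endpoint fixed. A minimal sketch with illustrative joint coordinates (not the study's measurements):

```python
# Rodrigues rotation: points on the shoulder-hand axis are invariant,
# so rotating the elbow about that axis preserves endpoint position.
import numpy as np

def rotate_about_axis(p, origin, axis, theta):
    """Rotate point p by angle theta about the line through `origin` along `axis`."""
    k = axis / np.linalg.norm(axis)
    v = p - origin
    v_rot = (v * np.cos(theta) + np.cross(k, v) * np.sin(theta)
             + k * np.dot(k, v) * (1.0 - np.cos(theta)))
    return origin + v_rot

shoulder = np.array([0.0, 0.0, 0.0])
hand = np.array([0.4, 0.0, -0.2])        # endpoint held on a frontal-plane target
elbow = np.array([0.25, -0.1, -0.15])    # elbow in one starting configuration

axis = hand - shoulder
elbow_abducted = rotate_about_axis(elbow, shoulder, axis, np.deg2rad(40))
# The shoulder and hand lie on the rotation axis, so the endpoint is unchanged
# while the elbow, and hence the arm configuration, moves.
```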
Contributors: Lakshmi Narayanan, Kishor (Author) / Buneo, Christopher (Thesis advisor) / Santello, Marco (Committee member) / Helms Tillery, Stephen (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Humans are capable of transferring learning for anticipatory control of dexterous object manipulation despite changes in degrees-of-freedom (DoF), i.e., switching from lifting an object with two fingers to lifting the same object with three fingers. However, the role that tactile information plays in this transfer of learning is unknown. In this study, subjects lifted an L-shaped object with two fingers (2-DoF), and then lifted the object with three fingers (3-DoF). The subjects were divided into two groups--one group performed the task wearing a glove (to reduce tactile sensibility) upon the switch to 3-DoF (glove group), while the other group did not wear the glove (control group). Compensatory moment (torque) was used as a measure to determine how well the subject could minimize the tilt of the object following the switch from 2-DoF to 3-DoF. Upon the switch to 3-DoF, subjects wearing the glove generated a compensatory moment (Mcom) that had a significantly higher error than the average of the last five trials at the end of the 3-DoF block (p = 0.012), while the control subjects did not demonstrate a significant difference in Mcom. Additional effects of the reduction in tactile sensibility were: (1) the grip force for the group of subjects wearing the glove was significantly higher in the 3-DoF trials compared to the 2-DoF trials (p = 0.014), while the grip force of the control subjects was not significantly different; (2) the difference in centers of pressure between the thumb and fingers (ΔCoP) significantly increased in the 3-DoF block for the group of subjects wearing the glove, while the ΔCoP of the control subjects was not significantly different; (3) lastly, the control subjects demonstrated a greater increase in lift force than the group of subjects wearing the glove (though results were not significant). 
Taken together, these results suggest that different force modulation strategies are used depending on the amount of tactile feedback available to the subject. Therefore, reduction of tactile sensibility has important effects on subjects' ability to transfer learned manipulation across different DoF contexts.
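The compensatory moment measure used above can be sketched as a planar moment balance: each digit's force, acting at its center of pressure, contributes a moment about the object's center, and a nonzero net moment tilts the object. A hedged toy computation with made-up forces and contact geometry (not the study's apparatus or values):

```python
# Planar moment of grip and lift forces about the object's center.
import numpy as np

def compensatory_moment(forces, cops):
    """Net moment about the object's center (z-axis out of the grasp plane).
    forces: per-digit [Fx, Fy], with Fx the grip (normal) force and Fy the
    tangential lift force; cops: per-digit contact point [x, y] relative to
    the object's center (x across the grip, y vertical)."""
    return sum(r[0] * f[1] - r[1] * f[0] for r, f in zip(cops, forces))

# Thumb on the left face, index and middle fingers on the right face.
forces = [np.array([+2.0, 1.5]),    # thumb: grip force into the object, lift up
          np.array([-1.0, 0.5]),    # index
          np.array([-1.0, 0.5])]    # middle
cops = [np.array([-0.02, 0.01]),    # 2 cm half-width; thumb CoP 1 cm higher
        np.array([+0.02, 0.00]),
        np.array([+0.02, 0.00])]
m_com = compensatory_moment(forces, cops)  # N*m about the object's center
```

Under this toy geometry, shifting a CoP or rescaling a digit's forces changes `m_com`, which is the kind of position-force covariation the study measured.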
Contributors: Gaw, Nathan (Author) / Helms Tillery, Stephen (Thesis advisor) / Santello, Marco (Committee member) / Kleim, Jeffrey (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Dexterous manipulation is a representative task that involves sensorimotor integration underlying a fine control of movements. Over the past 30 years, research has provided significant insight, including the control mechanisms of force coordination during manipulation tasks. Successful dexterous manipulation is thought to rely on the ability to integrate the sense of digit position with motor commands responsible for generating digit forces and placement. However, the mechanisms underlying the phenomenon of digit position-force coordination are not well understood. This dissertation addresses this question through three experiments that are based on psychophysics and object lifting tasks. It was found in psychophysics tasks that sensed relative digit position was accurately reproduced when sensorimotor transformations occurred with larger vertical fingertip separations, within the same hand, and at the same hand posture. The results from a follow-up experiment conducted in the same digit position-matching task while generating forces in different directions reveal a biased relative digit position toward the direction of force production. Specifically, subjects reproduced the thumb CoP higher than the index finger CoP when vertical digit forces were directed upward and downward, respectively, and vice versa. It was also found in lifting tasks that the ability to discriminate the relative digit position prior to lifting an object and modulate digit forces to minimize object roll as a function of digit position are robust regardless of whether motor commands for positioning the digits on the object are involved. These results indicate that the erroneous sensorimotor transformations of relative digit position reported here must be compensated during dexterous manipulation by other mechanisms, e.g., visual feedback of fingertip position. 
Furthermore, predicted sensory consequences derived from the efference copy of voluntary motor commands to generate vertical digit forces may override haptic sensory feedback for the estimation of relative digit position. Lastly, the sensorimotor transformations from haptic feedback to digit force modulation to position appear to be facilitated by motor commands for active digit placement in manipulation.
Contributors: Shibata, Daisuke (Author) / Santello, Marco (Thesis advisor) / Dounskaia, Natalia (Committee member) / Kleim, Jeffrey (Committee member) / Helms Tillery, Stephen (Committee member) / McBeath, Michael (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
An accurate sense of upper limb position is crucial to reaching movements, where sensory information about upper limb position and target location is combined to specify critical features of the movement plan. This dissertation was dedicated to studying the mechanisms of how the brain estimates limb position in space and the consequences of misestimation of limb position on movements. Two independent but related studies were performed. The first involved characterizing the neural mechanisms of limb position estimation in the non-human primate brain. Single unit recordings were obtained in area 5 of the posterior parietal cortex in order to examine the role of this area in estimating limb position based on visual and somatic signals (proprioceptive, efference copy). When examined individually, many area 5 neurons were tuned to the position of the limb in the workspace, but very few neurons were modulated by visual feedback. At the population level, however, decoding of limb position was somewhat more accurate when visual feedback was provided. These findings support a role for area 5 in limb position estimation but also suggest that visual signals regarding limb position are only weakly represented in this area, and only at the population level. The second part of this dissertation focused on the consequences of misestimation of limb position for movement production. It is well known that limb movements are inherently variable. This variability could be the result of noise arising at one or more stages of movement production. Here we used biomechanical modeling and simulation techniques to characterize movement variability resulting from noise in estimating limb position ('sensing noise') and in planning required movement vectors ('planning noise'), and compared that to the variability expected due to noise in movement execution.
We found that the effects of sensing and planning related noise on movement variability were dependent upon both the planned movement direction and the initial configuration of the arm and were different in many respects from the effects of execution noise.
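The simulation logic described above can be sketched as a Monte Carlo over the three noise stages: sensing the initial hand position, planning the movement vector, and executing the movement. The 2-D geometry, noise magnitudes, and target location are illustrative assumptions, not the dissertation's biomechanical model.

```python
# Monte Carlo over sensing, planning, and execution noise in a 2-D reach.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
start = np.array([0.0, 0.0])
target = np.array([10.0, 5.0])

# Stage 1: sensing noise biases the estimate of the initial hand position.
sensing = rng.normal(0, 0.5, (n, 2))
planned = target - (start + sensing)            # vector planned from the estimate

# Stage 2: planning noise perturbs the planned movement direction.
angle = rng.normal(0, np.deg2rad(2), n)
c, s = np.cos(angle), np.sin(angle)
rotated = np.stack([c * planned[:, 0] - s * planned[:, 1],
                    s * planned[:, 0] + c * planned[:, 1]], axis=1)

# Stage 3: execution noise perturbs the realized movement.
execution = rng.normal(0, 0.3, (n, 2))
endpoints = start + rotated + execution

cov = np.cov(endpoints.T)   # endpoint scatter; its shape depends on direction
```

Because the directional planning noise scales with movement amplitude and direction while execution noise does not, the resulting endpoint covariance varies with the planned movement, in the spirit of the direction- and configuration-dependence reported above.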
Contributors: Shi, Ying (Author) / Buneo, Christopher A (Thesis advisor) / Helms Tillery, Stephen (Committee member) / Santello, Marco (Committee member) / He, Jiping (Committee member) / Santos, Veronica (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The ability to plan, execute, and control goal-oriented reaching and grasping movements is among the most essential functions of the brain. Yet these movements are inherently variable, a result of the noise pervading the neural signals underlying sensorimotor processing. The specific influences and interactions of these noise processes remain unclear. Thus, several studies were performed to elucidate the role and influence of sensorimotor noise on movement variability. The first study focuses on sensory integration and movement planning across the reaching workspace. An experiment was designed to examine the relative contributions of vision and proprioception to movement planning by measuring the rotation of the initial movement direction induced by a perturbation of the visual feedback prior to movement onset. The results suggest that the contribution of vision was relatively consistent across the evaluated workspace depths; however, the influence of vision differed between the vertical and lateral axes, indicating that additional factors beyond vision and proprioception influence the planning of 3-dimensional movements. Whereas the first study investigated the role of noise in sensorimotor integration, the second and third studies investigate the relative influence of sensorimotor noise on reaching performance. Specifically, they evaluate how the characteristics of neural processing that underlie movement planning and execution manifest in movement variability during natural reaching. Subjects performed reaching movements with and without visual feedback throughout the movement, and the patterns of endpoint variability were compared across movement directions. The results of these studies suggest a primary role of visual feedback noise in shaping patterns of variability and in determining the relative influence of planning and execution related noise sources.
The final work considers a computational approach to characterizing how sensorimotor processes interact to shape movement variability. A model of multi-modal feedback control was developed to simulate the interaction of planning and execution noise on reaching variability. The model predictions suggest that anisotropic properties of feedback noise significantly affect the relative influence of planning and execution noise on patterns of reaching variability.
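A toy version of such a feedback-control simulation, with a single proportional gain standing in for the controller and illustrative noise scales (the actual model's structure and parameters are not reproduced here):

```python
# Each step, the controller corrects toward the target using a position
# estimate corrupted by feedback noise, while execution noise perturbs the
# movement itself; endpoint spread reflects both noise sources.
import numpy as np

rng = np.random.default_rng(2)

def simulate_reach(feedback_sd, execution_sd, steps=50, gain=0.2):
    """One reach under a proportional controller driven by noisy feedback."""
    target = np.array([10.0, 0.0])
    pos = np.zeros(2)
    for _ in range(steps):
        estimate = pos + rng.normal(0, feedback_sd, 2)   # noisy sensory feedback
        pos = pos + gain * (target - estimate) + rng.normal(0, execution_sd, 2)
    return pos

endpoints = np.array([simulate_reach(feedback_sd=0.5, execution_sd=0.1)
                      for _ in range(2000)])
spread = endpoints.std(axis=0)   # endpoint variability from both noise sources
```

Making `feedback_sd` different along the two axes, i.e. anisotropic feedback noise, changes the shape of the endpoint scatter, which is the kind of effect the model predictions above describe.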
Contributors: Apker, Gregory Allen (Author) / Buneo, Christopher A (Thesis advisor) / Helms Tillery, Stephen (Committee member) / Santello, Marco (Committee member) / Santos, Veronica (Committee member) / Si, Jennie (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
A direct Magnetic Resonance (MR)-based neural activity mapping technique with high spatial and temporal resolution may accelerate studies of brain functional organization.

The most widely used technique for brain functional imaging is functional Magnetic Resonance Imaging (fMRI). The spatial resolution of fMRI is high. However, fMRI signals are highly influenced by the vasculature in each voxel and can be affected by capillary orientation and vessel size. Functional MRI analysis may, therefore, produce misleading results when voxels are near large vessels. Another problem in fMRI is that hemodynamic responses are slower than the underlying neuronal activity, so its temporal resolution is limited. Furthermore, the correlation between neural activity and the hemodynamic response is not fully understood. fMRI can therefore only be considered an indirect method of functional brain imaging.

Another MR-based method of functional brain mapping is neuronal current magnetic resonance imaging (ncMRI), which has been studied for several years. However, the amplitude of these neuronal current signals is an order of magnitude smaller than the physiological noise. Work on ncMRI includes simulations, phantom experiments, and studies in tissue, including isolated ganglia, optic nerves, and human brains. Nevertheless, ncMRI development has been hampered by the extremely small signal amplitude, as well as by the presence of confounding signals from hemodynamic changes and other physiological noise.

Magnetic Resonance Electrical Impedance Tomography (MREIT) methods may have the potential to detect neuronal activity. In this technique, small external currents are applied to the body during MR scans. This current flow produces a magnetic field as well as an electric field. The altered magnetic flux density along the main magnetic field direction caused by this current flow can be obtained from phase images. When there is neural activity, the conductivity of the neural cell membrane changes, and the current paths around the neurons consequently change. Neural spiking activity during external current injection therefore causes differential phase accumulation in MR data. Statistical analysis methods can be used to identify neuronal-current-induced magnetic field changes.
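The phase relation described above admits a simple numeric sketch: the current-induced field component Bz accumulates phase at the proton gyromagnetic ratio over the injection time. This assumes the common MREIT convention of injecting the current with two opposite polarities and subtracting the resulting phase images to cancel systematic phase; the injection time and field value below are illustrative.

```python
# Recovering the current-induced Bz from MREIT phase images (toy numbers).
import numpy as np

GAMMA = 2 * np.pi * 42.577e6   # proton gyromagnetic ratio (rad/s/T)
Tc = 0.030                     # current injection time per excitation (s), illustrative

def bz_from_phase(phase_pos, phase_neg):
    """Recover the current-induced Bz (tesla) from phase images acquired
    with positive and negative current injections."""
    return (phase_pos - phase_neg) / (2 * GAMMA * Tc)

# A 10 nT current-induced field accumulates under 0.1 rad of phase,
# which is why physiological phase noise is such a challenge.
bz_true = 10e-9
phase = GAMMA * bz_true * Tc            # accumulated phase for the +I injection
bz_est = bz_from_phase(+phase, -phase)  # recovers bz_true (about 1e-8 T)
```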
Contributors: Fu, Fanrui (Author) / Sadleir, Rosalind (Thesis advisor) / Kodibagkar, Vikram (Committee member) / Kleim, Jeffrey (Committee member) / Muthuswamy, Jitendran (Committee member) / Helms Tillery, Stephen (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Motor behavior is prone to variable conditions and deviates further in disorders affecting the nervous system. A combination of environmental and neural factors impacts the amount of uncertainty. Although the influence of these factors on estimating endpoint positions has been examined, the role of limb configuration in endpoint variability has been mostly ignored. Characterizing the influence of arm configuration (i.e., intrinsic factors) would allow greater comprehension of sensorimotor integration and assist in interpreting exaggerated movement variability in patients. In this study, subjects were placed in a 3-D virtual reality environment and were asked to move from a starting position to one of three targets in the frontal plane, with and without visual feedback of the moving limb. Alternating visual feedback across trials increased uncertainty between the planning and execution phases. The starting limb configurations, adducted and abducted, were varied in separate blocks. Arm configurations were set up by rotating about the shoulder-hand axis to maintain endpoint position. The investigation hypothesized that 1) patterns of endpoint variability would be dependent upon the starting arm configuration and 2) any differences observed would be more apparent in conditions that withheld visual feedback. The results indicated that there were differences in endpoint variability between arm configurations in both visual conditions, but differences in variability increased when visual feedback was withheld. Overall, this suggests that in the presence of visual feedback, planning of movements in 3D space mostly uses coordinates that are arm configuration independent. On the other hand, without visual feedback, planning of movements in 3D space relies substantially on intrinsic coordinates.
Contributors: Rahman, Qasim (Author) / Buneo, Christopher (Thesis director) / Helms Tillery, Stephen (Committee member) / Barrett, The Honors College (Contributor) / Harrington Bioengineering Program (Contributor)
Created: 2014-05