Matching Items (12)
Description
Learning by trial and error requires retrospective information about whether a past action resulted in a rewarded outcome; the previous outcome may in turn guide future behavioral adjustment. However, the specific contribution of this information to learning a task, and its neural representations during trial-and-error learning, are not well understood. In this dissertation, such learning is analyzed by means of single-unit neural recordings in the rats' agranular medial (AGm) and agranular lateral (AGl) motor cortices while the rats learned to perform a directional choice task. Multichannel chronic recordings from microelectrodes implanted in the rat brain were essential to this study. For fundamental scientific investigations in general, and for applications such as brain-machine interfaces, the recorded neural waveforms must first be analyzed to identify neural action potentials as basic computing units. Accordingly, before analyzing and modeling the recorded neural signals, this dissertation proposes an advanced spike-sorting system, the M-Sorter, to extract action potentials from raw neural waveforms. The M-Sorter shows better or comparable performance relative to two other popular spike sorters in automatic mode. With the sorted action potentials in place, neuronal activity in the AGm and AGl areas during learning of the directional choice task is examined. Systematic analyses suggest that neural activity in AGm and AGl was modulated by previous trial outcomes during learning. Single-unit neural dynamics during task learning are described in detail in the dissertation. Furthermore, differences in neural modulation between fast- and slow-learning rats were compared: the level of neural modulation by previous trial outcome differed between fast and slow learners, which may in turn suggest an important role for the encoding of previous trial outcomes in learning.
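The spike-sorting stage described above can be illustrated with a minimal sketch. This is not the M-Sorter itself (its algorithm is not detailed in this abstract): it only shows the common first step of sorting, detecting putative spikes by amplitude thresholding against a robust noise estimate and extracting waveform snippets for later clustering. The threshold convention, window length, and synthetic signal are all illustrative assumptions.

```python
import numpy as np

def detect_spikes(signal, thresh_sd=4.0, window=32):
    """Detect putative spikes by amplitude thresholding.

    The threshold is a multiple of a robust noise estimate
    (median absolute deviation / 0.6745), a common convention.
    """
    noise = np.median(np.abs(signal)) / 0.6745
    thresh = thresh_sd * noise
    crossings = np.where(signal < -thresh)[0]            # negative-going spikes
    snippets, last = [], -window
    for idx in crossings:
        if idx - last > window and idx + window < len(signal):
            snippets.append(signal[idx:idx + window])    # waveform for clustering
            last = idx
    return np.array(snippets)

# Synthetic 1 s recording: Gaussian noise plus three injected deflections
rng = np.random.default_rng(0)
sig = rng.normal(0.0, 1.0, 30_000)                       # 30 kHz sampling assumed
for t in (5_000, 12_000, 20_000):
    sig[t:t + 10] -= 8.0                                 # crude spike-like events
waveforms = detect_spikes(sig)
print(waveforms.shape)
```

A real sorter would follow this step with feature extraction (e.g., principal components of the snippets) and clustering to assign snippets to individual neurons.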
ContributorsYuan, Yu'an (Author) / Si, Jennie (Thesis advisor) / Buneo, Christopher (Committee member) / Santello, Marco (Committee member) / Chae, Junseok (Committee member) / Arizona State University (Publisher)
Created2014
Description
Animals learn to choose a proper action among alternatives according to circumstance. Through trial and error, animals improve their odds by making correct associations between their behavioral choices and external stimuli. While there is an extensive literature on the theory of learning, it remains unclear how individual neurons and neural networks adapt as learning progresses. In this dissertation, single units in the medial and lateral agranular (AGm and AGl) cortices were recorded as rats learned a directional choice task. The task required the rat to make a left/right lever press if a light cue appeared on the left/right side of the interface panel. Behavioral analysis showed that the rats' movement parameters during performance of directional choices became stereotyped very quickly (2-3 days), while learning to solve the directional choice problem took weeks. The entire learning process was further broken down into three stages, each comprising a similar number of recording sessions (days). Single-unit firing-rate analysis revealed that 1) directional rate modulation was observed in both cortices; 2) the average mean rate between left and right trials in the neural ensemble each day did not change significantly across the three learning stages; and 3) the rate difference between left and right trials of the ensemble did not change significantly either. Moreover, for either left or right trials, the trial-to-trial firing variability of single neurons did not change significantly over the three stages. To explore the spatiotemporal neural pattern of the recorded ensemble, support vector machines (SVMs) were constructed each day to decode the direction of choice in single trials. Improved classification accuracy indicated enhanced discriminability between the neural patterns of left and right choices as learning progressed.
When a restricted Boltzmann machine (RBM) model was used to extract features from neural activity patterns, the results further supported the idea that neural firing patterns adapted over the three learning stages to facilitate the neural coding of directional choices. Taken together, these findings suggest a spatiotemporal neural coding scheme in the rat AGl and AGm neural ensemble that may be responsible for, and contribute to, learning the directional choice task.
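The SVM-based single-trial decoding described above can be sketched as follows. The firing-rate vectors here are simulated stand-ins for the recorded ensemble; the trial counts, neuron counts, and magnitude of directional modulation are hypothetical, not values from the dissertation.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_trials, n_neurons = 120, 20                      # hypothetical ensemble size

# Simulated firing-rate vectors: left and right trials differ in a few units
rates_left = rng.poisson(10, size=(n_trials // 2, n_neurons)).astype(float)
rates_right = rng.poisson(10, size=(n_trials // 2, n_neurons)).astype(float)
rates_right[:, :5] += 4.0                          # directional modulation in 5 units

X = np.vstack([rates_left, rates_right])
y = np.array([0] * (n_trials // 2) + [1] * (n_trials // 2))   # 0 = left, 1 = right

# Linear SVM, 5-fold cross-validated single-trial decoding accuracy
acc = cross_val_score(SVC(kernel="linear", C=1.0), X, y, cv=5).mean()
print(f"single-trial decoding accuracy: {acc:.2f}")
```

Tracking this cross-validated accuracy session by session, as the dissertation does, is what reveals whether left/right neural patterns become more discriminable with learning.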
ContributorsMao, Hongwei (Author) / Si, Jennie (Thesis advisor) / Buneo, Christopher (Committee member) / Cao, Yu (Committee member) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created2014
Description
Dexterous manipulation is a representative task involving the sensorimotor integration that underlies fine control of movement. Over the past 30 years, research has provided significant insight, including into the control mechanisms of force coordination during manipulation tasks. Successful dexterous manipulation is thought to rely on the ability to integrate the sense of digit position with the motor commands responsible for generating digit forces and placement. However, the mechanisms underlying digit position-force coordination are not well understood. This dissertation addresses this question through three experiments based on psychophysics and object-lifting tasks. In the psychophysics tasks, it was found that sensed relative digit position was accurately reproduced when sensorimotor transformations occurred with larger vertical fingertip separations, within the same hand, and at the same hand posture. A follow-up experiment, in which subjects performed the same digit position-matching task while generating forces in different directions, revealed that sensed relative digit position was biased toward the direction of force production: subjects reproduced the thumb center of pressure (CoP) higher or lower than the index finger CoP when vertical digit forces were directed upward or downward, respectively. In the lifting tasks, it was found that the ability to discriminate relative digit position prior to lifting an object, and to modulate digit forces as a function of digit position to minimize object roll, is robust regardless of whether motor commands for positioning the digits on the object are involved. These results indicate that the erroneous sensorimotor transformations of relative digit position reported here must be compensated for during dexterous manipulation by other mechanisms, e.g., visual feedback of fingertip position.
Furthermore, the predicted sensory consequences derived from the efference copy of voluntary motor commands to generate vertical digit forces may override haptic feedback in the estimation of relative digit position. Lastly, the sensorimotor transformations from haptic feedback to the modulation of digit forces as a function of position appear to be facilitated by motor commands for active digit placement in manipulation.
ContributorsShibata, Daisuke (Author) / Santello, Marco (Thesis advisor) / Dounskaia, Natalia (Committee member) / Kleim, Jeffrey (Committee member) / Helms Tillery, Stephen (Committee member) / McBeath, Michael (Committee member) / Arizona State University (Publisher)
Created2014
Description
A previous study demonstrated that learning to lift an object is context-based: in the presence of both memory and visual cues, the sensorimotor memory acquired while manipulating an object in one context interferes with performance of the same task when visual information indicates a different context (Fu et al., 2012).
The purpose of this study was to determine whether the primary motor cortex (M1) plays a role in sensorimotor memory. It was hypothesized that temporary disruption of M1 after subjects learned to minimize tilt of an 'L'-shaped object would negatively affect retention of the sensorimotor memory and thus reduce interference between the memory acquired in one context and the visual cues used to perform the same task in a different context.
Significant findings were observed in blocks 1, 2, and 4. In block 3, subjects displayed an insignificant amount of learning, although full interference in block 3 cannot be concluded. Three effects were therefore examined in the statistical analysis: the main effect of block, the main effect of trial, and the block-by-trial interaction. The main effect of block yielded p = 0.001 and the main effect of trial p < 0.001, both indicating that learning occurred. The block-by-trial interaction (p = 0.002 < 0.05) indicates significant interaction between sensorimotor memories. Based on these results, interference is present in all blocks, but not strongly enough to justify the use of TMS to reduce it, because only a partial reduction of interference relative to the control experiment was observed. The time delay between context switches may be the issue; by reducing the delay between blocks 2 and 3 from 10 minutes to 5 minutes, significant learning may be observed from the first trial to the second.
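The main effects and interaction discussed above correspond to a two-way ANOVA with a block-by-trial interaction term. As a minimal sketch of how such an interaction F-test is computed for a balanced design (on simulated data with made-up values, not the study's measurements):

```python
import numpy as np
from scipy import stats

def two_way_anova_interaction(data):
    """Balanced two-way ANOVA; returns F and p for the A x B interaction.

    data: array of shape (a, b, n) -- levels of factor A (blocks),
    factor B (trials), and n replicates per cell.
    """
    a, b, n = data.shape
    grand = data.mean()
    cell = data.mean(axis=2)                      # (a, b) cell means
    mean_a = data.mean(axis=(1, 2))               # block-level means
    mean_b = data.mean(axis=(0, 2))               # trial-level means
    ss_inter = n * np.sum((cell - mean_a[:, None] - mean_b[None, :] + grand) ** 2)
    ss_error = np.sum((data - cell[:, :, None]) ** 2)
    df_inter = (a - 1) * (b - 1)
    df_error = a * b * (n - 1)
    F = (ss_inter / df_inter) / (ss_error / df_error)
    p = stats.f.sf(F, df_inter, df_error)
    return F, p

# Simulated performance data with a block-dependent learning slope
rng = np.random.default_rng(2)
blocks, trials, reps = 4, 5, 8
noise = rng.normal(0.0, 1.0, size=(blocks, trials, reps))
slope = np.array([1.0, 1.0, 0.2, 1.0])            # block 3 learns more slowly
signal = slope[:, None, None] * np.arange(trials)[None, :, None]
F, p = two_way_anova_interaction(noise + signal)
print(F, p)
```

Because block 3's simulated learning slope differs from the others, the interaction term captures exactly the kind of block-dependent learning the study's blocks-by-trials analysis tests for.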
ContributorsHasan, Salman Bashir (Author) / Santello, Marco (Thesis director) / Kleim, Jeffrey (Committee member) / Helms Tillery, Stephen (Committee member) / Barrett, The Honors College (Contributor) / W. P. Carey School of Business (Contributor) / Harrington Bioengineering Program (Contributor)
Created2014-05
Description
Brain-computer interface (BCI) technology establishes communication between the brain and a computer, allowing users to control devices, machines, or virtual objects using their thoughts. This study investigates the conditions that best facilitate learning to operate such an interface. It compares two biofeedback methods, which dictate the relationship between brain activity and the movement of a virtual ball in a target-hitting task. Preliminary results indicate that a method in which the position of the virtual object relates directly to the amplitude of brain signals is most conducive to success. In addition, this research explores learning in the context of neural signals during training with a BCI task: specifically, whether subjects can adapt to the parameters of the interface without guidance. The experiment prompts subjects to modulate brain signals spectrally, spatially, and temporally, as well as differentially to discriminate between two different targets. However, subjects are given neither knowledge of these desired changes nor instruction on how to move the virtual ball. Preliminary analysis of signal trends suggests that some successful participants are able to adapt brain-wave activity in certain pre-specified locations and frequency bands over time in order to achieve control. Future studies will further explore these phenomena, and future BCI projects will be informed by these methods, offering insight into the creation of more intuitive and reliable BCI technology.
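The amplitude-to-position biofeedback method described above can be sketched as a band-power mapping. The mu-band (8-12 Hz) choice, gain, sampling rate, and signals below are illustrative assumptions, not the study's actual parameters.

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Average spectral power of `signal` in the [low, high] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= low) & (freqs <= high)
    return psd[mask].mean()

def ball_position(signal, fs, gain=1e-3):
    """Hypothetical mapping: virtual-ball position scales with mu-band power."""
    return gain * band_power(signal, fs, 8.0, 12.0)

# Synthetic comparison: baseline noise vs. an added 10 Hz rhythm
rng = np.random.default_rng(3)
fs, dur = 256, 2.0
t = np.arange(int(fs * dur)) / fs
rest = rng.normal(0.0, 1.0, t.size)                   # EEG-like baseline noise
active = rest + 3.0 * np.sin(2 * np.pi * 10.0 * t)    # modulated mu rhythm
print(ball_position(rest, fs), ball_position(active, fs))
```

Under this mapping, increasing rhythm amplitude moves the ball further, which is the direct amplitude-position relationship the preliminary results favor.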
ContributorsLancaster, Jenessa Mae (Co-author) / Appavu, Brian (Co-author) / Wahnoun, Remy (Co-author, Committee member) / Helms Tillery, Stephen (Thesis director) / Barrett, The Honors College (Contributor) / Harrington Bioengineering Program (Contributor) / Department of Psychology (Contributor)
Created2014-05
Description
Motor behavior is prone to variability, and deviates further in disorders affecting the nervous system. A combination of environmental and neural factors determines the amount of uncertainty. Although the influence of these factors on estimating endpoint position has been examined, the role of limb configuration in endpoint variability has been mostly ignored. Characterizing the influence of arm configuration (i.e., intrinsic factors) would allow greater comprehension of sensorimotor integration and assist in interpreting exaggerated movement variability in patients. In this study, subjects were placed in a 3-D virtual reality environment and asked to move from a starting position to one of three targets in the frontal plane, with and without visual feedback of the moving limb. Alternating visual feedback across trials increased uncertainty between the planning and execution phases. The starting limb configurations, adducted and abducted, were varied in separate blocks; arm configurations were set up by rotating about the shoulder-hand axis so as to maintain endpoint position. The investigation hypothesized that 1) patterns of endpoint variability would depend on the starting arm configuration, and 2) any differences observed would be more apparent in conditions without visual feedback. The results indicated differences in endpoint variability between arm configurations under both visual conditions, but the differences increased when visual feedback was withheld. Overall, this suggests that in the presence of visual feedback, planning of movements in 3-D space mostly uses coordinates that are independent of arm configuration, whereas without visual feedback it relies substantially on intrinsic coordinates.
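The endpoint-variability comparison described above can be sketched as follows. The condition labels match the study's design, but the endpoint samples and their spreads are simulated for illustration only.

```python
import numpy as np

def endpoint_variability(endpoints):
    """Total variance of 3-D endpoints: trace of the covariance matrix."""
    return np.trace(np.cov(endpoints.T))

# Simulated reach endpoints (n trials x 3-D) for two starting arm
# configurations, with and without visual feedback (illustrative spreads)
rng = np.random.default_rng(4)
n = 50
conds = {
    ("adducted", "vision"):    rng.normal(0, 0.5, (n, 3)),
    ("abducted", "vision"):    rng.normal(0, 0.6, (n, 3)),
    ("adducted", "no_vision"): rng.normal(0, 0.9, (n, 3)),
    ("abducted", "no_vision"): rng.normal(0, 1.4, (n, 3)),
}
var = {k: endpoint_variability(v) for k, v in conds.items()}

# Between-configuration difference, computed separately per visual condition
diff_vision = abs(var[("abducted", "vision")] - var[("adducted", "vision")])
diff_no_vision = abs(var[("abducted", "no_vision")] - var[("adducted", "no_vision")])
print(diff_vision, diff_no_vision)
```

A larger between-configuration difference without vision, as simulated here, is the signature of planning in intrinsic (arm-configuration-dependent) coordinates.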
ContributorsRahman, Qasim (Author) / Buneo, Christopher (Thesis director) / Helms Tillery, Stephen (Committee member) / Barrett, The Honors College (Contributor) / Harrington Bioengineering Program (Contributor)
Created2014-05
Description
The goal of this project was to use the sense of touch to investigate tactile cues during multidigit rotational manipulations of objects. A robotic arm and hand equipped with three multimodal tactile sensors were used to gather data about skin deformation during rotation of a haptic knob. Three different rotation speeds and two levels of rotation resistance were used to investigate tactile cues during knob rotation. In the future, this multidigit task can be generalized to similar rotational tasks, such as opening a bottle or turning a doorknob.
ContributorsChalla, Santhi Priya (Author) / Santos, Veronica (Thesis director) / Helms Tillery, Stephen (Committee member) / Barrett, The Honors College (Contributor) / Mechanical and Aerospace Engineering Program (Contributor) / School of Earth and Space Exploration (Contributor)
Created2014-05
Description
Humans constantly rely on a complex interplay of sensory modalities to complete even the simplest daily tasks. For reaching and grasping to interact with objects, the visual, tactile, and proprioceptive senses provide the majority of the information used. While vision is often relied on for many tasks, most people can accomplish common daily rituals without constant visual attention, relying mainly on tactile and proprioceptive cues. However, amputees using prosthetic arms do not have access to these cues, making such tasks impossible without vision. Even tasks performed with vision can be very difficult, as prosthesis users are unable to modulate grip force using touch and thus tend to grip objects excessively hard to keep them from slipping.

Methods such as vibratory sensory substitution have shown promise for providing prosthesis users with a sense of contact and have proved helpful in completing motor tasks. In this thesis, two experiments were conducted to determine whether vibratory cues can be useful in discriminating between object sizes. In the first experiment, subjects were asked to grasp a series of hidden virtual blocks of varying sizes, with vibration on the fingertips indicating contact, and to compare the sizes of consecutive blocks. Vibratory haptic feedback significantly increased the accuracy of size discrimination over objects with only visual indication of contact, though accuracy was not as high as in typical grasping tasks with physical blocks. In the second experiment, subjects were asked to adjust their virtual finger position around a series of virtual boxes, with vibratory feedback on the fingertips, using either finger movement or EMG. EMG control yielded significantly lower accuracy in size discrimination, implying that, while proprioceptive feedback alone is not enough to determine size, direct kinesthetic information about finger position is still needed.
ContributorsOlson, Markey (Author) / Helms-Tillery, Stephen (Thesis advisor) / Buneo, Christopher (Committee member) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created2016
Description
The interaction between visual fixations during planning and performance in a dexterous task was analyzed. An eye-tracking device was affixed to subjects during sequences of null (salient center of mass) and weighted (non-salient center of mass) trials with unconstrained precision grasp. Subjects experienced both expected and unexpected perturbations, with the task of minimizing object roll. Unexpected perturbations were controlled by switching weights between trials; expected perturbations were controlled by asking subjects to rotate the object themselves. In all cases subjects were able to minimize the roll of the object within three trials. Eye fixations were correlated with object weight for the initial context and for known shifts in center of mass. In subsequent trials with unexpected weight shifts, subjects appeared to scan areas of interest from both contexts even after learning the present orientation.
ContributorsSmith, Michael David (Author) / Santello, Marco (Thesis advisor) / Buneo, Christopher (Committee member) / Schaefer, Sydney (Committee member) / Arizona State University (Publisher)
Created2017