This collection includes both ASU theses and dissertations, submitted by graduate students, and Barrett, The Honors College theses, submitted by undergraduate students.


Description
Intracortical microstimulation (ICMS) within somatosensory cortex can produce artificial sensations including touch, pressure, and vibration. There is significant interest in using ICMS to provide sensory feedback for a prosthetic limb. In such a system, information recorded from sensors on the prosthetic would be translated into electrical stimulation and delivered directly to the brain, providing feedback about features of objects in contact with the prosthetic. To achieve this goal, multiple simultaneous streams of information will need to be encoded by ICMS in a manner that produces robust, reliable, and discriminable sensations. The first segment of this work focuses on the discriminability of sensations elicited by ICMS within somatosensory cortex. Stimulation on multiple single electrodes and near-simultaneous stimulation across multiple electrodes, driven by a multimodal tactile sensor, were both used in these experiments. A SynTouch BioTac sensor was moved across a flat surface in several directions, and a subset of the sensor's electrode impedance channels were used to drive multichannel ICMS in the somatosensory cortex of a non-human primate. The animal performed a behavioral task during this stimulation to indicate the discriminability of sensations evoked by the electrical stimulation. The animal's responses to ICMS were somewhat inconsistent across experimental sessions but indicated that discriminable sensations were evoked by both single and multichannel ICMS. The factors that affect the discriminability of stimulation-induced sensations are not well understood, in part because the relationship between ICMS and the neural activity it induces is poorly defined. The second component of this work was to develop computational models that describe the populations of neurons likely to be activated by ICMS. Models of several neurons were constructed, and their responses to ICMS were calculated. 
A three-dimensional cortical model was constructed using these cell models and used to identify the populations of neurons likely to be recruited by ICMS. Stimulation activated neurons in a sparse and discontinuous fashion; additionally, the type, number, and location of neurons likely to be activated by stimulation varied with electrode depth.
Contributors: Overstreet, Cynthia K (Author) / Helms Tillery, Stephen I (Thesis advisor) / Santos, Veronica (Committee member) / Buneo, Christopher (Committee member) / Otto, Kevin (Committee member) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Reaching movements are subject to noise in both the planning and execution phases of movement production. Although the effects of these noise sources on estimating and/or controlling endpoint position have been examined in many studies, the independent effects of limb configuration on endpoint variability have been largely ignored. The present study investigated the effects of arm configuration on the interaction between planning noise and execution noise. Subjects performed reaching movements to three targets located in a frontal plane. At the starting position, subjects matched one of two desired arm configuration 'templates', "adducted" or "abducted". These arm configurations were obtained by rotations about the shoulder-hand axis, thereby maintaining endpoint position. Visual feedback of the hand was varied from trial to trial, thereby increasing uncertainty in movement planning and execution. It was hypothesized that 1) the pattern of endpoint variability would depend on arm configuration and 2) these differences would be most apparent in conditions without visual feedback. There were differences in endpoint variability between arm configurations in both visual conditions, but these differences were much larger when visual feedback was withheld. The overall results suggest that patterns of endpoint variability are highly dependent on arm configuration, particularly in the absence of visual feedback. This suggests that in the presence of vision, movement planning in 3D space is performed using coordinates that are largely independent of arm configuration (i.e., extrinsic coordinates). In contrast, in the absence of vision, movement planning in 3D space reflects a substantial contribution of intrinsic coordinates.
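The endpoint-variability comparison described above can be sketched numerically. The following is a minimal illustration (not the study's analysis code, and the function name is my own) that summarizes a cloud of reach endpoints by the generalized variance of its scatter:

```python
import numpy as np

def endpoint_variability(endpoints):
    """Summarize 2D or 3D endpoint scatter as the determinant of the
    covariance matrix (a generalized variance). Larger values mean
    more variable endpoints; comparing this value between the two arm
    configurations would quantify the effect described above."""
    endpoints = np.asarray(endpoints, dtype=float)
    cov = np.cov(endpoints, rowvar=False)  # one row per trial
    return float(np.linalg.det(cov))
```

In practice one would compute this separately for each arm configuration and visual condition, then compare the resulting values.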
Contributors: Lakshmi Narayanan, Kishor (Author) / Buneo, Christopher (Thesis advisor) / Santello, Marco (Committee member) / Helms Tillery, Stephen (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Learning by trial and error requires retrospective information about whether a past action resulted in a rewarded outcome. The previous outcome, in turn, may provide information to guide future behavioral adjustment. However, the specific contribution of this information to learning a task, and its neural representation during the trial-and-error learning process, is not well understood. In this dissertation, such learning is analyzed by means of single-unit neural recordings in the rat agranular medial (AGm) and agranular lateral (AGl) motor areas while the rats learned to perform a directional choice task. Multichannel chronic recordings using microelectrodes implanted in the rat's brain were essential to this study. For fundamental scientific investigations in general, and for applications such as brain-machine interfaces, the recorded neural waveforms must first be analyzed to identify neural action potentials as basic computing units. Prior to analyzing and modeling the recorded neural signals, this dissertation therefore proposes an advanced spike sorting system, the M-Sorter, to extract action potentials from raw neural waveforms. The M-Sorter shows better or comparable performance relative to two other popular spike sorters under automatic mode. With the sorted action potentials in place, neuronal activity in the AGm and AGl areas of rats during learning of a directional choice task is examined. Systematic analyses suggest that neural activity in AGm and AGl was modulated by previous trial outcomes during learning. Single-unit-based neural dynamics during task learning are described in detail in the dissertation. Furthermore, differences in neural modulation between fast- and slow-learning rats were compared. The results show that the level of neural modulation by previous trial outcome differs between fast- and slow-learning rats, which in turn may suggest an important role of previous-trial-outcome encoding in learning.
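As a rough illustration of the kind of first stage a spike sorter builds on (this is a generic robust threshold detector, not the M-Sorter algorithm itself), putative action potentials can be pulled from a raw waveform by detecting negative threshold crossings:

```python
import numpy as np

def detect_spikes(signal, fs, thresh_sd=4.0, refractory_ms=1.0):
    """Detect putative action potentials as negative threshold crossings.
    The threshold is a multiple of a noise-SD estimate that is robust to
    the spikes themselves (median absolute deviation / 0.6745). This is
    a common pre-sorting detection step, not the M-Sorter method."""
    sigma = np.median(np.abs(signal)) / 0.6745
    thresh = -thresh_sd * sigma
    crossings = np.flatnonzero(
        (signal[1:] < thresh) & (signal[:-1] >= thresh)) + 1
    # enforce a refractory period so one spike is counted only once
    min_gap = int(refractory_ms * 1e-3 * fs)
    spikes, last = [], -min_gap
    for idx in crossings:
        if idx - last >= min_gap:
            spikes.append(idx)
            last = idx
    return np.array(spikes, dtype=int)
```

Detected spike times would then be clustered by waveform shape to assign them to single units.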
Contributors: Yuan, Yu'an (Author) / Si, Jennie (Thesis advisor) / Buneo, Christopher (Committee member) / Santello, Marco (Committee member) / Chae, Junseok (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Animals learn to choose a proper action among alternatives according to the circumstance. Through trial and error, animals improve their odds by making correct associations between their behavioral choices and external stimuli. While there is an extensive literature on the theory of learning, it is still unclear how individual neurons and neural networks adapt as learning progresses. In this dissertation, single units in the medial and lateral agranular (AGm and AGl) cortices were recorded as rats learned a directional choice task. The task required the rat to make a left/right side lever press if a light cue appeared on the left/right side of the interface panel. Behavioral analysis showed that the rats' movement parameters during performance of directional choices became stereotyped very quickly (2-3 days), while learning to solve the directional choice problem took weeks. The entire learning process was further broken down into three stages, each containing a similar number of recording sessions (days). Single-unit firing rate analysis revealed that 1) directional rate modulation was observed in both cortices; 2) the averaged mean rate between left and right trials in the neural ensemble each day did not change significantly among the three learning stages; and 3) the rate difference between left and right trials of the ensemble did not change significantly either. In addition, for either left or right trials, the trial-to-trial firing variability of single neurons did not change significantly over the three stages. To explore the spatiotemporal neural patterns of the recorded ensemble, support vector machines (SVMs) were constructed each day to decode the direction of choice in single trials. Improved classification accuracy indicated enhanced discriminability between the neural patterns of left and right choices as learning progressed.
When a restricted Boltzmann machine (RBM) model was used to extract features from the neural activity patterns, the results further supported the idea that neural firing patterns adapted over the three learning stages to facilitate the neural coding of directional choices. Taken together, these findings suggest a spatiotemporal neural coding scheme in the rat AGl and AGm neural ensemble that may contribute to learning the directional choice task.
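The single-trial decoding idea can be sketched with a simple cross-validated classifier. The sketch below uses a nearest-centroid decoder as a lightweight stand-in for the SVMs in the abstract (all names are illustrative); rising accuracy across sessions would indicate increasingly separable left/right neural patterns:

```python
import numpy as np

def decode_accuracy(rates, labels, n_folds=5):
    """Cross-validated accuracy of a nearest-class-centroid decoder on
    trial firing-rate vectors (trials x neurons). A stand-in for the
    linear SVMs described above, not the dissertation's code."""
    rates, labels = np.asarray(rates, dtype=float), np.asarray(labels)
    idx = np.arange(len(labels))
    correct = 0
    for test in np.array_split(idx, n_folds):
        train = np.setdiff1d(idx, test)
        # mean firing-rate vector (centroid) per class, from training trials
        cents = {c: rates[train][labels[train] == c].mean(axis=0)
                 for c in np.unique(labels[train])}
        for i in test:
            pred = min(cents, key=lambda c: np.linalg.norm(rates[i] - cents[c]))
            correct += (pred == labels[i])
    return correct / len(labels)
```

Running this once per recording session, as the abstract describes for the SVMs, yields a per-day discriminability curve.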
Contributors: Mao, Hongwei (Author) / Si, Jennie (Thesis advisor) / Buneo, Christopher (Committee member) / Cao, Yu (Committee member) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Lower-limb prosthesis users have commonly recognized deficits in gait and posture control. However, existing methods of balance and mobility analysis are not sensitive enough to detect changes in prosthesis users' postural control and mobility in response to clinical interventions or experimental manipulations, and they often fail to detect differences between prosthesis users and non-amputee control subjects. This lack of sensitivity limits the ability of clinicians to make informed clinical decisions and presents challenges for insurance reimbursement of comprehensive clinical care and advanced prosthetic devices. These issues have directly impacted clinical care by restricting device options, increasing the financial burden on clinics, and limiting support for research and development. This work aims to establish experimental methods and outcome measures that are more sensitive to balance and mobility changes in prosthesis users than traditional methods. Methods and analysis techniques were developed to probe aspects of balance and mobility control that may be specifically impacted by use of a prosthesis, and to present challenges similar to those experienced in daily life that could improve the detection of balance and mobility changes. Using the framework of cognitive resource allocation and dual-tasking, this work identified unique characteristics of prosthesis users' postural control and developed sensitive measures of gait variability. The results also provide broader insight into dual-task analysis and the motor-cognitive response to demanding conditions. Specifically, this work identified altered motor behavior in prosthesis users and a high cognitive demand of using a prosthesis. The residual standard deviation method was developed and demonstrated to be more effective than traditional gait variability measures at detecting the impact of dual-tasking.
Additionally, spectral analysis of the center of pressure while standing identified altered somatosensory control in prosthesis users. These findings provide a new understanding of prosthetic use and new, highly sensitive techniques to assess balance and mobility in prosthesis users.
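One plausible reading of the residual-standard-deviation idea (a sketch of the general approach, not the dissertation's exact formulation) is to remove a slow trend from a gait series before computing its variability, so that drift across the walking bout does not inflate the estimate:

```python
import numpy as np

def residual_sd(series):
    """Variability of a gait series (e.g., step times across a bout)
    after removing a linear trend. Slow drift then no longer inflates
    the variability estimate. Illustrative only; the dissertation's
    'residual standard deviation' method may differ in detail."""
    series = np.asarray(series, dtype=float)
    t = np.arange(len(series))
    slope, intercept = np.polyfit(t, series, 1)  # least-squares trend
    residuals = series - (slope * t + intercept)
    return float(residuals.std(ddof=1))
```

On a drifting series, the plain standard deviation is dominated by the drift while the residual measure reflects only step-to-step fluctuation.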
Contributors: Howard, Charla Lindley (Author) / Abbas, James (Thesis advisor) / Buneo, Christopher (Committee member) / Lynskey, Jim (Committee member) / Santello, Marco (Committee member) / Artemiadis, Panagiotis (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Proprioception is the sense of body position, movement, force, and effort. Loss of proprioception can affect planning and control of limb and body movements, negatively impacting activities of daily living and quality of life. Assessments employing planar robots have shown that proprioceptive sensitivity is directionally dependent within the horizontal plane; however, few studies have examined proprioceptive sensitivity in 3D space. In addition, the extent to which proprioceptive sensitivity is modifiable by factors such as exogenous neuromodulation is unclear. To investigate proprioceptive sensitivity in 3D, we developed a novel experimental paradigm employing a 7-DoF robot arm, which enables reliable testing of arm proprioception along arbitrary paths in 3D space, including vertical motion, which has previously been neglected. A participant's right arm was coupled to a trough held by the robot that stabilized the wrist and forearm, allowing for changes in configuration only at the elbow and shoulder. Sensitivity to imposed displacements of the endpoint of the arm was evaluated using a "same/different" task, in which participants' hands were moved 1-4 cm from a previously visited reference position. A measure of sensitivity (d') was compared across six movement directions and between two postures. For all directions, sensitivity increased monotonically as the distance from the reference location increased. Sensitivity was also shown to be anisotropic (directionally dependent), which has implications for our understanding of the planning and control of reaching movements in 3D space.
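The sensitivity measure d' used here is standard in signal detection theory and can be computed directly from hit and false-alarm rates; a minimal sketch:

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate), as used
    in same/different tasks. Rates of exactly 0 or 1 would make the
    z-transform infinite, so they are clamped slightly inward (one
    common correction among several)."""
    clamp = lambda p: min(max(p, 0.01), 0.99)
    z = NormalDist().inv_cdf
    return z(clamp(hit_rate)) - z(clamp(false_alarm_rate))
```

Computing d' separately for each movement direction and displacement distance gives the sensitivity profiles the abstract compares.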

The effect of neuromodulation on proprioceptive sensitivity was assessed using transcutaneous electrical nerve stimulation (TENS), which has been shown to have beneficial effects on human cognitive and sensorimotor performance in other contexts. In this pilot study, the effects of two frequencies (30 Hz and 300 Hz) and three electrode configurations were examined. No effect of electrode configuration was found; however, sensitivity with 30 Hz stimulation was significantly lower than with 300 Hz stimulation (which was similar to sensitivity without stimulation). Although TENS was shown to modulate proprioceptive sensitivity, additional experiments are required to determine whether TENS can produce enhancement rather than depression of sensitivity, which would have positive implications for rehabilitation of proprioceptive deficits arising from stroke and other disorders.
Contributors: Klein, Joshua (Author) / Buneo, Christopher (Thesis advisor) / Helms-Tillery, Stephen (Committee member) / Kleim, Jeffrey (Committee member) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Neural interfacing applications have advanced in complexity, with needs for increasingly high degrees of freedom in prosthetic device control, sharper discrimination of sensory percepts in bidirectional interfaces, and more precise localization of functional connectivity in the brain. As such, there is a growing need for reliable neurophysiological recordings at a fine spatial scale matching that of cortical columnar processing. Penetrating microelectrodes provide localization sufficient to isolate action potential (AP) waveforms, but often suffer from recorded-signal deterioration linked to the foreign body response. Micro-electrocorticography (μECoG) surface electrodes elicit a lower foreign body response and show greater chronic stability of recorded signals, though they typically lack the signal localization necessary to isolate individual APs. This dissertation validates the recording capacity of a novel, flexible, large-area μECoG array with bilayer routing in a feline implant, and explores the ability of conventional μECoG arrays to detect features of neuronal activity in a very high frequency band associated with AP waveforms.

Recordings from both layers of the flexible μECoG array showed frequency features typical of cortical local field potentials (LFPs) and were shown to be stable in amplitude over time. Recordings from both layers also showed consistent, frequency-dependent modulation after induction of general anesthesia, with large increases in the beta and gamma bands and decreases in the theta band observed over three experiments. Recordings from conventional μECoG arrays over human cortex showed robust modulation in a high-frequency (250-2000 Hz) band upon production of spoken words. Modulation in this band was used to predict spoken words with over 90% accuracy. Basal ganglia neuronal AP firing was also shown to correlate significantly with various cortical μECoG recordings in this frequency band. These results indicate that μECoG surface electrodes may detect high-frequency neuronal activity potentially associated with AP firing, a source of information previously unutilized by these devices.
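Extracting power in the 250-2000 Hz band that the abstract associates with AP-related activity can be sketched with a plain FFT power spectrum (an illustrative computation, not the study's pipeline):

```python
import numpy as np

def band_power(signal, fs, f_lo=250.0, f_hi=2000.0):
    """Total power of a recording within [f_lo, f_hi] Hz, computed from
    the one-sided FFT power spectrum. Per-channel band power like this
    could serve as a feature for classification, as in the word
    prediction described above."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return float(power[mask].sum())
```

A signal with energy inside the band scores high; one with energy below the band contributes essentially nothing.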
Contributors: Barton, Cody David (Author) / Greger, Bradley (Thesis advisor, Committee member) / Santello, Marco (Committee member) / Buneo, Christopher (Committee member) / Graudejus, Oliver (Committee member) / Artemiadis, Panagiotis (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Humans constantly rely on a complex interaction of a variety of sensory modalities in order to complete even the simplest of daily tasks. For reaching and grasping to interact with objects, the visual, tactile, and proprioceptive senses provide the majority of the information used. While vision is often relied on for many tasks, most people are able to accomplish common daily rituals without constant visual attention, relying mainly on tactile and proprioceptive cues instead. However, amputees using prosthetic arms do not have access to these cues, making such tasks impossible without vision. Even tasks with vision can be incredibly difficult, as prosthesis users are unable to modify grip force using touch and thus tend to grip objects excessively hard to make sure they do not slip.

Methods such as vibratory sensory substitution have shown promise for providing prosthesis users with a sense of contact and have proved helpful in completing motor tasks. In this thesis, two experiments were conducted to determine whether vibratory cues could be useful in discriminating between object sizes. In the first experiment, subjects were asked to grasp a series of hidden virtual blocks of varying sizes, with vibrations on the fingertips as the indication of contact, and to compare the sizes of consecutive boxes. Vibratory haptic feedback significantly increased the accuracy of size discrimination over objects with only visual indication of contact, though accuracy was not as great as for typical grasping tasks with physical blocks. In the second experiment, subjects were asked to adjust their virtual finger position around a series of virtual boxes with vibratory feedback on the fingertips, using either finger movement or EMG. EMG control resulted in significantly lower accuracy in size discrimination, implying that, while proprioceptive feedback alone is not enough to determine size, direct kinesthetic information about finger position is still needed.
Contributors: Olson, Markey (Author) / Helms-Tillery, Stephen (Thesis advisor) / Buneo, Christopher (Committee member) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
Brain-machine interfaces (BMIs) were first imagined as a technology that would allow subjects to have direct communication with prosthetics and external devices (e.g. control over a computer cursor or robotic arm movement). Operation of these devices was not automatic, and subjects needed calibration and training in order to master this control. In short, learning became a key component in controlling these systems. As a result, BMIs have become ideal tools to probe and explore brain activity, since they allow the isolation of neural inputs and systematic altering of the relationships between the neural signals and output. I have used BMIs to explore the process of brain adaptability in a motor-like task. To this end, I trained non-human primates to control a 3D cursor and adapt to two different perturbations: a visuomotor rotation, uniform across the neural ensemble, and a decorrelation task, which non-uniformly altered the relationship between the activity of particular neurons in an ensemble and movement output. I measured individual and population level changes in the neural ensemble as subjects honed their skills over the span of several days. I found some similarities in the adaptation process elicited by these two tasks. On one hand, individual neurons displayed tuning changes across the entire ensemble after task adaptation: most neurons displayed transient changes in their preferred directions, and most neuron pairs showed changes in their cross-correlations during the learning process. On the other hand, I also measured population level adaptation in the neural ensemble: the underlying neural manifolds that control these neural signals also had dynamic changes during adaptation. I have found that the neural circuits seem to apply an exploratory strategy when adapting to new tasks. 
Our results suggest that information and trajectories in the neural space increase after initially introducing the perturbations, and before the subject settles into workable solutions. These results provide new insights into both the underlying population level processes in motor learning, and the changes in neural coding which are necessary for subjects to learn to control neuroprosthetics. Understanding of these mechanisms can help us create better control algorithms, and design training paradigms that will take advantage of these processes.
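A visuomotor rotation of the kind described above can be written as a fixed rotation applied to the decoded cursor velocity. A minimal sketch follows, shown in 2D for simplicity even though the experiments used a 3D cursor:

```python
import numpy as np

def visuomotor_rotation(velocity, angle_deg):
    """Apply a planar visuomotor rotation: the cursor moves in a
    direction rotated by a fixed angle relative to the decoded intent,
    uniformly across the neural ensemble, as in the perturbation
    described above. Illustrative, not the experiment's code."""
    a = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(a), -np.sin(a)],
                    [np.sin(a),  np.cos(a)]])
    return rot @ np.asarray(velocity, dtype=float)
```

Under this perturbation, a decoded rightward intent produces upward cursor motion at a 90-degree rotation, and the subject must learn to compensate.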
Contributors: Armenta Salas, Michelle (Author) / Helms Tillery, Stephen I (Thesis advisor) / Si, Jennie (Committee member) / Buneo, Christopher (Committee member) / Santello, Marco (Committee member) / Kleim, Jeffrey (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Understanding human-human interactions during the performance of joint motor tasks is critical for developing rehabilitation robots that could aid therapists in providing effective treatments for motor problems. However, there is a lack of understanding of the strategies (cooperative or competitive) adopted by humans when interacting with other individuals. Previous studies have investigated the cues (auditory, visual, and haptic) that support these interactions, but how these unconscious interactions occur even without those cues has yet to be explained. To address this issue, this study employed a paradigm that tests the parallel efforts of pairs of individuals (dyads) to complete a jointly performed virtual reaching task without any auditory or visual information exchange. Motion was tracked with an NDI OptoTrak 3D motion tracking system that captured each subject's movement kinematics, through which the level of synchronization between the two subjects in space and time could be measured. For the spatial analyses, the movement amplitudes and direction errors at peak velocity and at the endpoints were analyzed. Significant differences in movement amplitude were found for subjects in 4 out of 6 dyads, which were expected given the lack of feedback between the subjects. Interestingly, subjects in this study also planned their movements in different directions in order to counteract the visuomotor rotation applied in the test blocks, which suggests a difference in strategies between the subjects in each dyad. The level of de-adaptation in the control blocks, in which no visuomotor rotation was applied, was also measured. To further validate the results obtained through the spatial analyses, a temporal analysis was performed in which the movement times of the two subjects were compared. With these results, numerous interaction scenarios that are possible in human joint action without feedback were analyzed.
Contributors: Agrawal, Ankit (Author) / Buneo, Christopher (Thesis advisor) / Santello, Marco (Committee member) / Tillery, Stephen Helms (Committee member) / Arizona State University (Publisher)
Created: 2016