Matching Items (35)
Filtering by
- All Subjects: Biomedical Engineering
- Genre: Academic theses
- Creators: Santello, Marco
Description
Humans constantly rely on a complex interplay of sensory modalities to complete even the simplest daily tasks. For reaching and grasping to interact with objects, the visual, tactile, and proprioceptive senses provide most of the information used. While vision is relied on for many tasks, most people are able to accomplish common daily rituals without constant visual attention, relying mainly on tactile and proprioceptive cues. However, amputees using prosthetic arms do not have access to these cues, making such tasks impossible without vision. Even tasks performed with vision can be extremely difficult, as prosthesis users are unable to modulate grip force using touch and thus tend to grip objects excessively hard to ensure they do not slip.
Methods such as vibratory sensory substitution have shown promise for providing prosthesis users with a sense of contact and have proved helpful in completing motor tasks. In this thesis, two experiments were conducted to determine whether vibratory cues could be useful in discriminating between object sizes. In the first experiment, subjects were asked to grasp a series of hidden virtual blocks of varying sizes, with vibrations on the fingertips as the indication of contact, and to compare the sizes of consecutive blocks. Vibratory haptic feedback significantly increased the accuracy of size discrimination over objects with only visual indication of contact, though accuracy was not as high as for typical grasping tasks with physical blocks. In the second experiment, subjects were asked to adjust their virtual finger position around a series of virtual blocks with vibratory feedback on the fingertips, using either finger movement or EMG for control. EMG control resulted in significantly lower accuracy in size discrimination, implying that, while proprioceptive feedback alone is not enough to determine size, direct kinesthetic information about finger position is still needed.
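The fingertip vibration cue described above can be sketched in a few lines. This is a hypothetical 1-D grip-aperture model; the function name, tolerance, and units are illustrative and not taken from the thesis:

```python
def vibration_contact(aperture_mm, box_size_mm, tolerance_mm=2.0):
    """Trigger a fingertip vibration when the virtual grip aperture
    closes to within a tolerance of the hidden block's size."""
    return abs(aperture_mm - box_size_mm) <= tolerance_mm

# Closing the hand around a hidden 50 mm virtual block:
apertures = [80, 70, 60, 52, 50, 49]
cues = [vibration_contact(a, 50) for a in apertures]
```

The cue switches on only once the fingers reach the block's surface, which is what lets subjects infer size without seeing the block.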
Contributors: Olson, Markey (Author) / Helms-Tillery, Stephen (Thesis advisor) / Buneo, Christopher (Committee member) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
Brain-machine interfaces (BMIs) were first imagined as a technology that would allow subjects to have direct communication with prosthetics and external devices (e.g. control over a computer cursor or robotic arm movement). Operation of these devices was not automatic, and subjects needed calibration and training in order to master this control. In short, learning became a key component in controlling these systems. As a result, BMIs have become ideal tools to probe and explore brain activity, since they allow the isolation of neural inputs and systematic alteration of the relationships between the neural signals and output. I used BMIs to explore the process of brain adaptability in a motor-like task. To this end, I trained non-human primates to control a 3D cursor and adapt to two different perturbations: a visuomotor rotation, uniform across the neural ensemble, and a decorrelation task, which non-uniformly altered the relationship between the activity of particular neurons in an ensemble and movement output. I measured individual and population level changes in the neural ensemble as subjects honed their skills over the span of several days. I found some similarities in the adaptation process elicited by these two tasks. On one hand, individual neurons displayed tuning changes across the entire ensemble after task adaptation: most neurons displayed transient changes in their preferred directions, and most neuron pairs showed changes in their cross-correlations during the learning process. On the other hand, I also measured population level adaptation in the neural ensemble: the underlying neural manifolds that constrain these signals also changed dynamically during adaptation. These findings suggest that neural circuits apply an exploratory strategy when adapting to new tasks.
These results suggest that information and trajectories in the neural space increase after the perturbations are first introduced and before the subject settles into workable solutions. They provide new insights into the population-level processes underlying motor learning and into the changes in neural coding necessary for subjects to learn to control neuroprosthetics. Understanding these mechanisms can help us create better control algorithms and design training paradigms that take advantage of these processes.
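The uniform visuomotor rotation used as one of the perturbations can be illustrated with a minimal 2-D sketch (the thesis used a 3-D cursor; the 90° angle and the function name here are illustrative only):

```python
import numpy as np

def apply_visuomotor_rotation(velocity, angle_deg):
    """Rotate a decoded 2D cursor velocity by a fixed angle,
    as in a uniform visuomotor-rotation perturbation."""
    theta = np.deg2rad(angle_deg)
    rotation = np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])
    return rotation @ np.asarray(velocity, dtype=float)

# An intended rightward movement is deflected by the perturbation:
intended = np.array([1.0, 0.0])
perturbed = apply_visuomotor_rotation(intended, 90.0)
# To hit the target, the subject must learn to aim in the
# counter-rotated direction:
compensated = apply_visuomotor_rotation(intended, -90.0)
```

Because the same rotation applies to the whole ensemble, adaptation can in principle be achieved by a uniform re-aiming strategy, unlike the decorrelation task, which alters neuron-specific contributions.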
Contributors: Armenta Salas, Michelle (Author) / Helms Tillery, Stephen I (Thesis advisor) / Si, Jennie (Committee member) / Buneo, Christopher (Committee member) / Santello, Marco (Committee member) / Kleim, Jeffrey (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Understanding human-human interactions during the performance of joint motor tasks is critical for developing rehabilitation robots that could aid therapists in providing effective treatments for motor problems. However, the strategies (cooperative or competitive) that humans adopt when interacting with other individuals are poorly understood. Previous studies have investigated the cues (auditory, visual, and haptic) that support these interactions, but how such unconscious interactions occur even without those cues remains unexplained. To address this issue, this study employed a paradigm that tests the parallel efforts of pairs of individuals (dyads) to complete a jointly performed virtual reaching task without any auditory or visual information exchange. Motion was tracked with an NDI OptoTrak 3D motion tracking system that captured each subject's movement kinematics, from which the level of synchronization between the two subjects in space and time was measured. For the spatial analyses, movement amplitudes and direction errors at peak velocity and at the endpoint were analyzed. Significant differences in movement amplitudes were found for subjects in 4 out of 6 dyads, as expected given the lack of feedback between the subjects. Interestingly, subjects also planned their movements in different directions to counteract the visuomotor rotation applied in the test blocks, suggesting that the subjects in each dyad used different strategies. The level of de-adaptation was also measured in the control blocks, in which no visuomotor rotation was applied. To further validate the results of the spatial analyses, a temporal analysis was performed comparing the movement times of the two subjects. With these results, numerous interaction scenarios possible in human joint action without feedback were analyzed.
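The two spatial measures mentioned above (movement amplitude and direction error at peak velocity) can be sketched as follows; the trajectory, units, and function name are hypothetical, not taken from the study:

```python
import numpy as np

def amplitude_and_direction_error(trajectory, target, start):
    """Spatial measures from one reach: movement amplitude (distance
    from start point to endpoint) and direction error at peak velocity
    (angle between the instantaneous heading and the start-to-target line)."""
    trajectory = np.asarray(trajectory, dtype=float)
    velocity = np.gradient(trajectory, axis=0)          # frame-to-frame velocity
    speed = np.linalg.norm(velocity, axis=1)
    heading = velocity[int(np.argmax(speed))]           # heading at peak speed
    to_target = np.asarray(target, float) - np.asarray(start, float)
    cos_a = np.dot(heading, to_target) / (
        np.linalg.norm(heading) * np.linalg.norm(to_target))
    direction_error = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    amplitude = np.linalg.norm(trajectory[-1] - trajectory[0])
    return amplitude, direction_error

# Hypothetical straight reach from (0, 0) toward a target at (10, 0):
traj = [[0, 0], [1, 0], [3, 0], [6, 0], [8, 0], [9, 0], [10, 0]]
amp, err = amplitude_and_direction_error(traj, target=[10, 0], start=[0, 0])
```

A perturbed or differently planned reach would show a nonzero direction error at peak velocity, which is what distinguishes the strategies adopted by the two subjects in a dyad.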
Contributors: Agrawal, Ankit (Author) / Buneo, Christopher (Thesis advisor) / Santello, Marco (Committee member) / Tillery, Stephen Helms (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
In the last 15 years, there has been a significant increase in the number of motor neural prostheses used for restoring limb function lost to neurological disorders or accidents. The aim of this technology is to enable patients to control a motor prosthesis using their residual neural pathways (central or peripheral). Recent studies in non-human primates and humans have shown the possibility of controlling a prosthesis to accomplish varied tasks such as self-feeding, typing, reaching, grasping, and performing fine dexterous movements. A neural decoding system comprises three main components: (i) sensors to record neural signals, (ii) an algorithm to map neural recordings to upper limb kinematics, and (iii) a prosthetic arm actuated by control signals generated by the algorithm. Machine learning algorithms that map input neural activity to output kinematics (such as finger trajectory) form the core of the neural decoding system; the choice of algorithm is thus mainly imposed by the neural signal of interest and the output parameter being decoded. Designing such a system involves decisions about the neural data, feature extraction, feature selection, and the machine learning algorithm. Although there have been significant advances in the field of neural prosthetic applications, challenges remain in translating a neural prosthesis from a laboratory setting to a clinical environment, and these factors must be addressed to achieve a fully functional prosthetic device with maximum user compliance and acceptance. Three challenges in developing robust neural decoding systems were addressed by exploring neural variability in the peripheral nervous system during dexterous finger movements, feature selection methods based on clinically relevant metrics, and a novel method for decoding dexterous finger movements based on ensemble methods.
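As a concrete illustration of the feature-extraction stage of such a pipeline, binned spike counts are a common choice of input feature; the unit count, bin width, and simulated spike times below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def bin_spike_counts(spike_times, t_end, bin_width):
    """Feature extraction: binned spike counts for one recorded unit."""
    edges = np.arange(0.0, t_end + bin_width, bin_width)
    counts, _ = np.histogram(spike_times, bins=edges)
    return counts

# Hypothetical data: two units, 1 s of recording, 100 ms bins
unit_a = rng.uniform(0, 1.0, size=40)   # spike times (s) for unit A
unit_b = rng.uniform(0, 1.0, size=15)   # spike times (s) for unit B
features = np.stack([bin_spike_counts(u, 1.0, 0.1)
                     for u in (unit_a, unit_b)])  # shape: (units, bins)
```

The resulting (units × bins) matrix is the kind of feature vector a decoding algorithm would then map to kinematics such as finger trajectory.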
Contributors: Padmanaban, Subash (Author) / Greger, Bradley (Thesis advisor) / Santello, Marco (Committee member) / Helms Tillery, Stephen (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Crook, Sharon (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Prosthesis users abandon devices due to difficulties performing tasks without properly graded or interpretable feedback. The inability to adequately detect and correct errors of the device leads to failure and frustration. In advanced prostheses, peripheral nerve stimulation can be used to deliver sensations, but the standard schemes used in sensorized prosthetic systems induce percepts inconsistent with natural sensations, providing limited benefit. Recent uses of time-varying stimulation strategies appear to produce more practical sensations, but without a clear path to pursue improvements. This dissertation examines the use of physiologically based stimulation strategies to elicit sensations that are more readily interpretable. A psychophysical experiment designed to investigate sensitivity to the discrimination of perturbation direction within precision grip suggests that perception is biomechanically referenced: increased sensitivity along the ulnar-radial axis aligns with potential anisotropic deformation of the finger pad, indicating that somatosensation uses internal rather than environmental information. Contact-site- and direction-dependent deformation of the finger pad activates complementary fast adapting and slow adapting mechanoreceptors, exhibiting parallel activity of the two associated temporal patterns: static and dynamic. The spectrum of temporal activity seen in somatosensory cortex can be explained by a combined representation of these distinct response dynamics, a phenomenon referred to in this dissertation as "biphasic representation." In a reach-to-precision-grasp task, neurons in somatosensory cortex were found to possess biphasic firing patterns in their responses to texture, orientation, and movement.
Sensitivities seem to align with variable deformation and mechanoreceptor activity: movement and smooth texture responses align with potential fast adapting activation, non-movement and coarse texture responses align with potential increased slow adapting activation, and responses to orientation are conceptually consistent with coding of tangential load. Using evidence of biphasic representations’ association with perceptual priorities, gamma band phase locking is used to compare responses to peripheral nerve stimulation patterns and mechanical stimulation. Vibrotactile and punctate mechanical stimuli are used to represent the practical and impractical percepts commonly observed in peripheral nerve stimulation feedback. Standard patterns of constant parameters closely mimic impractical vibrotactile stimulation while biphasic patterns better mimic punctate stimulation and provide a platform to investigate intragrip dynamics representing contextual activation.
Contributors: Tanner, Justin Cody (Author) / Helms Tillery, Stephen I (Thesis advisor) / Santos, Veronica J (Committee member) / Santello, Marco (Committee member) / Greger, Bradley (Committee member) / Buneo, Christopher A (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Information processing in the brain is mediated by network interactions between anatomically distant (centimeters apart) regions of cortex, and this network action is fundamental to human behavior. Disruption of these networks may allow a variety of diseases to develop. Degradation or loss of network function in the brain can affect many aspects of the human experience: motor disorders, language difficulties, memory loss, mood swings, and more. The cortico-basal ganglia loop is a system of networks running from the cortex through the basal ganglia and the thalamus and back to the cortex. It is not one singular circuit, but rather a series of parallel circuits relevant to motor output, motor planning, and motivation and reward. Studying the relationship between basal ganglia neurons and cortical local field potentials may lead to insights about neurodegenerative diseases and how these diseases change the cortico-basal ganglia circuit. Speech and language are uniquely human and require the coactivation of several brain regions. The various aspects of language are spread over the temporal lobe and parts of the occipital, parietal, and frontal lobes. However, the core network for speech production involves collaboration between phonologic retrieval (encoding ideas into syllabic representations) in Wernicke's area and phonemic encoding (translating syllables into motor articulations) in Broca's area. Studying the coactivation of these brain regions during a repetitive speech production task may lead to a greater understanding of their electrophysiological functional connectivity. The primary purpose of the work presented in this document is to validate the use of subdural microelectrodes in electrophysiological functional connectivity research, as these devices best match the spatial and temporal scales of brain activity. Neuron populations in the cortex are organized into functional units called cortical columns.
These cortical columns operate on the sub-millisecond temporal and millimeter spatial scale. The study of brain networks, both in healthy and unwell individuals, may reveal new methodologies of treatment or management for disease and injury, as well as contribute to our scientific understanding of how the brain works.
Contributors: O'Neill, Kevin John (Author) / Greger, Bradley (Thesis advisor) / Santello, Marco (Committee member) / Helms Tillery, Stephen (Committee member) / Papandreou-Suppapola, Antonia (Committee member) / Kleim, Jeffery (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
A current thrust in neurorehabilitation research involves exogenous neuromodulation of peripheral nerves to enhance neuroplasticity and maximize recovery of function. This dissertation presents the results of four experiments aimed at assessing the effects of trigeminal nerve stimulation (TNS) and occipital nerve stimulation (ONS) on motor learning, which was behaviorally characterized using an upper extremity visuomotor adaptation paradigm. In Aim 1a, the effects of offline TNS using clinically tested frequencies (120 and 60 Hz) were characterized. Sixty-three participants (22.75±4.6 y/o) performed a visuomotor rotation task and received TNS before encountering rotation of hand visual feedback. In Aim 1b, TNS at 3 kHz, which has been shown to be more tolerable at higher current intensities, was evaluated in 42 additional subjects (23.4±4.6 y/o). Results indicated that 3 kHz stimulation accelerated learning while 60 Hz stimulation slowed it, suggesting a frequency-dependent effect on learning. In Aim 2, the effects of online TNS at 120 and 60 Hz were characterized to determine whether this protocol would deliver better outcomes. Sixty-three participants (23.2±3.9 y/o) received either TNS or sham stimulation concurrently with perturbed visual feedback. Results showed no significant differences among groups. However, a cross-study comparison with the results obtained with 60 Hz offline TNS showed a statistically significant improvement in learning rates with online stimulation relative to offline, suggesting a timing-dependent effect on learning. In Aim 3, TNS and ONS were compared using the best protocol from the previous aims (offline 3 kHz). Additionally, concurrent stimulation of both nerves was explored to look for potential synergistic effects. Eighty-four participants (22.9±3.2 y/o) were assigned to one of four groups: TNS, ONS, TNS+ONS, and sham. Visual inspection of learning curves revealed that the ONS group demonstrated the fastest learning among groups.
However, statistical analyses did not confirm this observation. In addition, the TNS+ONS group appeared to learn faster than the sham and TNS groups but slower than the ONS-only group, suggesting that this protocol produced no synergistic effects of the kind initially hypothesized.
The results provide new information on the potential use of TNS and ONS in neurorehabilitation and performance enhancement in the motor domain.
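Learning rates of the kind compared across these aims are often summarized by fitting a single-exponential curve to trial-by-trial error; a minimal sketch follows (the rates, initial error, and error criterion are illustrative values, not results from the study):

```python
import numpy as np

def exponential_learning_curve(trial, initial_error, rate, asymptote=0.0):
    """Single-exponential model of adaptation: error decays toward an
    asymptote with a per-trial learning rate."""
    return asymptote + (initial_error - asymptote) * np.exp(-rate * trial)

trials = np.arange(100)
fast = exponential_learning_curve(trials, 30.0, 0.10)  # faster learning
slow = exponential_learning_curve(trials, 30.0, 0.02)  # slower learning

# A faster learning rate reaches a low-error criterion in fewer trials:
fast_crit = int(np.argmax(fast < 5.0))
slow_crit = int(np.argmax(slow < 5.0))
```

Comparing fitted rate parameters across stimulation groups is one standard way to quantify the frequency- and timing-dependent effects described above.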
Contributors: Arias, Diego (Author) / Buneo, Christopher (Thesis advisor) / Schaefer, Sydney (Committee member) / Helms-Tillery, Stephen (Committee member) / Santello, Marco (Committee member) / Kleim, Jeffrey (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Multisensory integration is the process by which information from different sensory modalities is integrated by the nervous system. This process is important not only from a basic science perspective but also for translational reasons, e.g., for the development of closed-loop neural prosthetic systems. A mixed virtual reality platform was developed to study the neural mechanisms of multisensory integration for the upper limb during motor planning. The platform allows for selection of different arms and manipulation of the locations of physical and virtual target cues in the environment. The system was tested with two non-human primates (NHP) trained to reach to multiple virtual targets. Arm kinematic data as well as neural spiking data from primary motor (M1) and dorsal premotor cortex (PMd) were collected. The task involved manipulating visual information about initial arm position by rendering the virtual avatar arm in either its actual position (veridical (V) condition) or in a different shifted (e.g., small vs large shifts) position (perturbed (P) condition) prior to movement. Tactile feedback was modulated in blocks by placing or removing the physical start cue on the table (tactile (T), and no-tactile (NT) conditions, respectively). Behaviorally, errors in initial movement direction were larger when the physical start cue was absent. Slightly larger directional errors were found in the P condition compared to the V condition for some movement directions. Both effects were consistent with the idea that erroneous or reduced information about initial hand location led to movement direction-dependent reach planning errors. Neural correlates of these behavioral effects were probed using population decoding techniques. For small shifts in the visual position of the arm, no differences in decoding accuracy between the T and NT conditions were observed in either M1 or PMd. 
However, for larger visual shifts, decoding accuracy decreased in the NT condition, but only in PMd. Thus, activity in PMd, but not M1, may reflect the uncertainty in reach planning that results when sensory cues regarding initial hand position are erroneous or absent.
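Population decoding of the kind used to probe these conditions can be sketched with a minimal nearest-centroid classifier over simulated firing-rate vectors; the ensemble size, trial counts, and noise level below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def nearest_centroid_accuracy(train_x, train_y, test_x, test_y):
    """Minimal population decoder: classify each test trial by the
    nearest class centroid of the training firing-rate vectors."""
    classes = np.unique(train_y)
    centroids = np.stack([train_x[train_y == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(test_x[:, None, :] - centroids[None], axis=2)
    predicted = classes[np.argmin(dists, axis=1)]
    return float(np.mean(predicted == test_y))

# Hypothetical ensemble: 20 units, 2 reach directions, 40 trials each
n_units, n_trials = 20, 40
tuning = rng.normal(0, 1, size=(2, n_units))   # class-specific mean rates
x = np.concatenate([tuning[c] + rng.normal(0, 0.5, size=(n_trials, n_units))
                    for c in (0, 1)])
y = np.repeat([0, 1], n_trials)

# Split alternate trials into train/test and decode:
acc = nearest_centroid_accuracy(x[::2], y[::2], x[1::2], y[1::2])
```

A drop in such cross-validated accuracy between conditions (e.g., tactile vs. no-tactile) is the kind of signature used to argue that PMd activity reflects planning uncertainty.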
Contributors: Phataraphruk, Preyaporn Kris (Author) / Buneo, Christopher A (Thesis advisor) / Zhou, Yi (Committee member) / Helms Tillery, Steve (Committee member) / Greger, Bradley (Committee member) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
The human hand comprises complex sensorimotor functions that can be impaired by neurological diseases and traumatic injuries. Effective rehabilitation can bring the impaired hand back to a functional state because the plasticity of the central nervous system allows it to relearn and remodel lost synapses in the brain. Current rehabilitation therapies focus on strengthening motor skills, such as grasping, and employ multiple objects of varying stiffness as well as devices that are bulky, costly, and limited in stiffness range due to the rigid mechanisms in their variable stiffness actuators. This research project presents a portable, cost-effective soft robotic haptic device with a broad, adjustable stiffness range that can be utilized in both clinical and home settings. The device eliminates the need for multiple objects by employing a pneumatic soft structure, made of highly compliant materials, that acts as both the actuator and the structure of the haptic interface. It is made with interchangeable soft elastomeric sleeves that can be customized with materials of varying stiffness to increase or decrease the stiffness range. The device is fabricated using existing 3D printing technologies and polymer molding and casting techniques, keeping the cost low and throughput high. The haptic interface is linked to either an open-loop system that allows for increased pressure during usage or a closed-loop system that regulates pressure in accordance with the stiffness the user specifies. A preliminary evaluation was performed to characterize the effective controllable region of stiffness variation. Results indicate that the region of controllable stiffness was in the center of the device, where the stiffness appeared to plateau with each increase in pressure.
The two control systems were tested to derive relationships between internal pressure, grasping force exerted on the surface, and displacement, using multiple probing points on the haptic device. An additional quantitative evaluation was performed with study participants and compared with a qualitative analysis to ensure adequate perception of compliance variation. Finally, the qualitative evaluation showed that more than 60% of trials resulted in correct perception of the stiffness of the haptic device.
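The closed-loop pressure regulation can be sketched as a simple proportional step controller; the gain, step limit, and pressure values are illustrative assumptions, not the device's actual control parameters:

```python
def regulate_pressure(target_kpa, current_kpa, gain=0.5, max_step=5.0):
    """One step of a minimal closed-loop regulator: move the internal
    pressure toward the stiffness-derived target pressure, with a
    per-step limit standing in for valve flow limits."""
    error = target_kpa - current_kpa
    step = max(-max_step, min(max_step, gain * error))
    return current_kpa + step

# Converge from 0 kPa toward a 20 kPa set point chosen by the user:
p = 0.0
history = []
for _ in range(10):
    p = regulate_pressure(20.0, p)
    history.append(round(p, 3))
```

An open-loop system, by contrast, would simply ramp the pressure without comparing it to a set point, which is why only the closed-loop mode can hold a user-specified stiffness.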
Contributors: Sebastian, Frederick (Author) / Polygerinos, Panagiotis (Thesis advisor) / Santello, Marco (Committee member) / Fu, Qiushi (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Our ability to estimate the position of our body parts in space, a fundamentally proprioceptive process, is crucial for interacting with the environment and for movement control. For proprioception to support these actions, the central nervous system has to rely on a stored internal representation of the body parts in space. However, relatively little is known about this internal representation of arm position. To this end, I developed a method to map proprioceptive estimates of hand location across a 2-D workspace. In this task, I moved each subject's hand to a target location while the subject's eyes were closed. After the hand was returned, subjects opened their eyes and verbally reported where their fingertip had been. I then reconstructed and analyzed the spatial structure of the pattern of estimation errors. In the first two experiments, I probed the structure and stability of the pattern of errors by manipulating the hand used and the tactile feedback provided when the hand was at each target location. I found that the resulting pattern of errors was systematically stable across conditions for each subject, subject-specific, and not uniform across the workspace. These findings suggest that the observed pattern of errors has been constructed through experience, resulting in a systematically stable internal representation of arm location that is continuously calibrated across the workspace. In the next two experiments, I aimed to probe the calibration of this representation. To this end, I used two different perturbation paradigms: 1) a virtual reality visuomotor adaptation to induce a local perturbation, and 2) a standard prism adaptation paradigm to induce a global perturbation. I found that the magnitude of the errors significantly increased to a similar extent after each perturbation.
This small effect indicates that proprioception is recalibrated to a similar extent regardless of how the perturbation is introduced, suggesting that sensory and motor changes may be two independent processes arising from the perturbation. Moreover, I propose that the internal representation of arm location might be constructed with a global solution and not capable of local changes.
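Mapping the pattern of estimation errors amounts to comparing reported and actual fingertip locations at each target; a minimal sketch with hypothetical workspace coordinates (cm; the data and function name are illustrative):

```python
import numpy as np

def estimation_errors(actual, reported):
    """Per-target proprioceptive estimation errors: the vector from the
    actual fingertip location to the verbally reported location, plus
    the error magnitude, in the same 2-D workspace units."""
    actual = np.asarray(actual, dtype=float)
    reported = np.asarray(reported, dtype=float)
    vectors = reported - actual
    magnitudes = np.linalg.norm(vectors, axis=1)
    return vectors, magnitudes

# Hypothetical data: three target locations and the reported positions
actual = [[10, 20], [25, 20], [40, 20]]
reported = [[12, 19], [25, 23], [37, 24]]
vectors, magnitudes = estimation_errors(actual, reported)
```

Plotting the error vectors over the target grid yields the kind of subject-specific, non-uniform spatial map of proprioceptive estimates analyzed in these experiments.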
Contributors: Rincon Gonzalez, Liliana (Author) / Helms Tillery, Stephen I (Thesis advisor) / Buneo, Christopher A (Thesis advisor) / Santello, Marco (Committee member) / Santos, Veronica (Committee member) / Kleim, Jeffrey (Committee member) / Arizona State University (Publisher)
Created: 2012