This collection includes both ASU Theses and Dissertations, submitted by graduate students, and the Barrett, Honors College theses submitted by undergraduate students. 

Description
In order to successfully implement a neural prosthetic system, it is necessary to understand the control of limb movements and the representation of body position in the nervous system. As this development process continues, it is becoming increasingly important to understand the way multiple sensory modalities are used in limb representation. In a previous study, Shi et al. (2013) examined the multimodal basis of limb position in the superior parietal lobule (SPL) as monkeys reached to and held their arm at various target locations in a frontal plane. Visual feedback was withheld in half the trials, though non-visual (i.e. somatic) feedback was available in all trials. Previous analysis showed that some of the neurons were tuned to limb position and that some neurons had their response modulated by the presence or absence of visual feedback. This modulation manifested in decreases in firing rate variability in the vision condition as compared to nonvision. The decreases in firing rate variability, as shown through decreases in both the Fano factor of spike counts and the coefficient of variation of the inter-spike intervals, suggested that changes were taking place in both trial-by-trial and intra-trial variability. I sought to further probe the source of the change in intra-trial variability through spectral analysis. It was hypothesized that the presence of temporal structure in the vision condition would account for a regularity in firing that would have decreased intra-trial variability. While no peaks were apparent in the spectra, differences in spectral power between visual conditions were found. These differences are suggestive of unique temporal spiking patterns at the individual neuron level that may be influential at the population level.
Contributors: Dyson, Keith (Author) / Buneo, Christopher A (Thesis advisor) / Helms-Tillery, Stephen I (Committee member) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created: 2013
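The variability measures named in the abstract above (Fano factor of spike counts, coefficient of variation of inter-spike intervals, and spectral power) can be computed along the following lines. This is a minimal illustrative sketch, not the study's analysis code; the function names, the 1 ms bin size, and the synthetic Poisson data are assumptions.

```python
# Minimal sketch: trial-by-trial and intra-trial variability measures for a
# single neuron, plus a simple power spectrum of the binned spike train.
import numpy as np
from scipy.signal import periodogram

def fano_factor(spike_counts):
    """Trial-by-trial variability: variance / mean of per-trial spike counts."""
    counts = np.asarray(spike_counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

def isi_cv(spike_times):
    """Intra-trial variability: coefficient of variation of inter-spike intervals."""
    isis = np.diff(np.sort(spike_times))
    return isis.std(ddof=1) / isis.mean()

def spike_spectrum(spike_times, window=1.0, bin_size=0.001):
    """Power spectrum of a binned spike train (1 ms bins by default)."""
    edges = np.arange(0.0, window + bin_size, bin_size)
    binned, _ = np.histogram(spike_times, bins=edges)
    freqs, power = periodogram(binned - binned.mean(), fs=1.0 / bin_size)
    return freqs, power

# Example with synthetic Poisson spike counts for two hypothetical visual conditions.
rng = np.random.default_rng(0)
vision_counts = rng.poisson(20, size=50)      # 50 trials, ~20 spikes per trial
novision_counts = rng.poisson(20, size=50)
print(fano_factor(vision_counts), fano_factor(novision_counts))
```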
Description
Reaching movements are subject to noise in both the planning and execution phases of movement production. Although the effects of these noise sources in estimating and/or controlling endpoint position have been examined in many studies, the independent effects of limb configuration on endpoint variability have been largely ignored. The present study investigated the effects of arm configuration on the interaction between planning noise and execution noise. Subjects performed reaching movements to three targets located in a frontal plane. At the starting position, subjects matched one of two desired arm configuration 'templates', namely "adducted" and "abducted". These arm configurations were obtained by rotations about the shoulder-hand axis, thereby maintaining endpoint position. Visual feedback of the hand was varied from trial to trial, thereby increasing uncertainty in movement planning and execution. It was hypothesized that 1) the pattern of endpoint variability would depend on arm configuration and 2) these differences would be most apparent in conditions without visual feedback. Differences in endpoint variability between arm configurations were found in both visual conditions, but these differences were much larger when visual feedback was withheld. The overall results suggest that patterns of endpoint variability are highly dependent on arm configuration, particularly in the absence of visual feedback. This suggests that in the presence of vision, movement planning in 3D space is performed using coordinates that are largely independent of arm configuration (i.e. extrinsic coordinates). In contrast, in the absence of vision, movement planning in 3D space reflects a substantial contribution of intrinsic coordinates.
Contributors: Lakshmi Narayanan, Kishor (Author) / Buneo, Christopher (Thesis advisor) / Santello, Marco (Committee member) / Helms Tillery, Stephen (Committee member) / Arizona State University (Publisher)
Created: 2013
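One common way to quantify the endpoint variability patterns discussed above is the covariance of final hand positions for each arm configuration and vision condition, summarized by its principal axes. The sketch below is illustrative only; the function name and the synthetic data are assumptions, not the study's pipeline.

```python
# Minimal sketch: summarize 3-D endpoint scatter by its covariance and principal axes.
import numpy as np

def endpoint_variability(endpoints):
    """endpoints: (n_trials, 3) array of final hand positions (x, y, z)."""
    pts = np.asarray(endpoints, dtype=float)
    cov = np.cov(pts, rowvar=False)               # 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)        # principal variances and axes
    return {
        "total_variance": float(np.trace(cov)),   # overall spread
        "principal_variances": eigvals[::-1],     # largest first
        "principal_axes": eigvecs[:, ::-1].T,     # matching directions as rows
    }

# Hypothetical comparison: adducted vs. abducted configuration, no-vision trials (mm).
rng = np.random.default_rng(1)
adducted = rng.normal(0, [5, 2, 2], size=(40, 3))   # synthetic scatter
abducted = rng.normal(0, [2, 5, 2], size=(40, 3))
print(endpoint_variability(adducted)["principal_variances"])
print(endpoint_variability(abducted)["principal_variances"])
```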
Description
As robotic systems are used in increasingly diverse applications, the interaction of humans and robots has become an important area of research. In many applications of physical human-robot interaction (pHRI), the robot and the human can be seen as cooperating to complete a task with some object of interest. Often these applications are in unstructured environments where many paths can accomplish the goal. This creates a need for the ability to communicate a preferred direction of motion between both participants in order to move in a coordinated way. This communication method should be bidirectional in order to fully utilize both the robot's and the human's capabilities. Moreover, in cooperative tasks between two humans, one human will often act as the leader of the task and the other as the follower, and these roles may switch during the task as needed. The need for communication extends to this leader-follower switching: there is a need to communicate not only the desire to switch roles but also to control the switching process. Impedance control has been used as a way of dealing with some of the complexities of pHRI. This investigation examined whether impedance control can be used to communicate a preferred direction between humans and robots. The first set of experiments tested whether a human could detect the preferred direction of a robot by grasping and moving an object coupled to the robot. The second set tested the reverse case: whether the robot could detect the preferred direction of the human. The ability to detect the preferred direction was shown to be up to 99% effective. Using these results, a control method allowing a human and robot to switch leader and follower roles during a cooperative task was implemented and tested. This method proved successful 84% of the time. The control method was then refined using adaptive control, resulting in lower interaction forces and a success rate of 95%.
Contributors: Whitsell, Bryan (Author) / Artemiadis, Panagiotis (Thesis advisor) / Santello, Marco (Committee member) / Santos, Veronica (Committee member) / Arizona State University (Publisher)
Created: 2014
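The abstract does not specify the controller, but one plausible way impedance control can encode a preferred direction is an anisotropic stiffness that is softer along that direction, so the human feels less resistance when moving the shared object the "preferred" way. The sketch below is a hedged illustration of that idea; the gains and function name are assumptions, not the thesis's implementation.

```python
# Minimal sketch: Cartesian impedance with lower stiffness along a preferred direction.
import numpy as np

def directional_impedance_force(x, x_ref, v, preferred_dir,
                                k_soft=100.0, k_stiff=800.0, damping=40.0):
    """Return the robot's restoring force (N) for end-effector position x and velocity v."""
    d = np.asarray(preferred_dir, dtype=float)
    d /= np.linalg.norm(d)
    P = np.outer(d, d)                               # projector onto preferred direction
    K = k_soft * P + k_stiff * (np.eye(3) - P)       # anisotropic stiffness matrix
    return -K @ (np.asarray(x) - np.asarray(x_ref)) - damping * np.asarray(v)

# Example: preferred direction along +x; displacement along y is resisted much harder.
f = directional_impedance_force(x=[0.05, 0.05, 0.0], x_ref=[0.0, 0.0, 0.0],
                                v=[0.0, 0.0, 0.0], preferred_dir=[1, 0, 0])
print(f)   # small restoring force in x, large in y
```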
Description
Humans are capable of transferring learning for anticipatory control of dexterous object manipulation despite changes in degrees-of-freedom (DoF), i.e., switching from lifting an object with two fingers to lifting the same object with three fingers. However, the role that tactile information plays in this transfer of learning is unknown. In this study, subjects lifted an L-shaped object with two fingers (2-DoF), and then lifted the object with three fingers (3-DoF). The subjects were divided into two groups--one group performed the task wearing a glove (to reduce tactile sensibility) upon the switch to 3-DoF (glove group), while the other group did not wear the glove (control group). Compensatory moment (torque) was used as a measure to determine how well the subject could minimize the tilt of the object following the switch from 2-DoF to 3-DoF. Upon the switch to 3-DoF, subjects wearing the glove generated a compensatory moment (Mcom) that had a significantly higher error than the average of the last five trials at the end of the 3-DoF block (p = 0.012), while the control subjects did not demonstrate a significant difference in Mcom. Additional effects of the reduction in tactile sensibility were: (1) the grip force for the group of subjects wearing the glove was significantly higher in the 3-DoF trials compared to the 2-DoF trials (p = 0.014), while the grip force of the control subjects was not significantly different; (2) the difference in centers of pressure between the thumb and fingers (ΔCoP) significantly increased in the 3-DoF block for the group of subjects wearing the glove, while the ΔCoP of the control subjects was not significantly different; (3) lastly, the control subjects demonstrated a greater increase in lift force than the group of subjects wearing the glove (though results were not significant). Combined together, these results suggest different force modulation strategies are used depending on the amount of tactile feedback that is available to the subject. Therefore, reduction of tactile sensibility has important effects on subjects' ability to transfer learned manipulation across different DoF contexts.
Contributors: Gaw, Nathan (Author) / Helms Tillery, Stephen (Thesis advisor) / Santello, Marco (Committee member) / Kleim, Jeffrey (Committee member) / Arizona State University (Publisher)
Created: 2014
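The compensatory moment (Mcom) measure above is, in essence, the net moment the digits exert about the object, computed from each digit's force and center of pressure (CoP). A simplified sketch follows; the coordinate conventions and example numbers are assumptions, not the study's exact formulation.

```python
# Minimal sketch: net moment on a grasped object from per-digit CoPs and force vectors.
import numpy as np

def compensatory_moment(cops, forces, object_center):
    """cops, forces: (n_digits, 3) arrays in object coordinates (m, N)."""
    r = np.asarray(cops, dtype=float) - np.asarray(object_center, dtype=float)
    m = np.cross(r, np.asarray(forces, dtype=float))   # per-digit moments (N*m)
    return m.sum(axis=0)                               # net moment on the object

# Hypothetical two-digit (thumb + index) grasp with offset centers of pressure.
cops   = [[-0.02,  0.01, 0.0],   # thumb CoP
          [ 0.02, -0.01, 0.0]]   # index CoP (vertical offset = delta-CoP)
forces = [[ 3.0, 0.0, 1.5],      # thumb grip + load force
          [-3.0, 0.0, 1.5]]      # index grip + load force
print(compensatory_moment(cops, forces, object_center=[0.0, 0.0, 0.0]))
```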
Description
Peripheral Vascular Disease (PVD) is a debilitating chronic disease of the lower extremities, particularly affecting older adults and diabetics. It reduces blood flow to peripheral tissue and can cause tissue damage, such that PVD patients suffer from pain in the lower legs, thighs, and buttocks after activity. Electrical neurostimulation based on the "Gate Theory of Pain" is a known way to reduce pain, but current devices are bulky and not well suited to implantation in peripheral tissues, and the risk associated with implantation surgery further limits their use. This research designed and constructed wireless, ultrasound-powered microstimulators that are much smaller and injectable, and so involve less implantation trauma. These devices are small enough to fit through an 18 gauge syringe needle, increasing their potential for clinical use. The piezoelectric microdevices convert mechanical energy into electrical energy that is then used to block pain. The design and performance of the miniaturized devices were modeled by computer, while constructed devices were evaluated in animal experiments. The devices are capable of producing 500 ms pulses with an intensity of 2 mA into a 2 kilo-ohm load. Using the rat as an animal model, a series of experiments was conducted to evaluate the in vivo performance of the devices.
Contributors: Zong, Xi (Author) / Towe, Bruce (Thesis advisor) / Kleim, Jeffrey (Committee member) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created: 2014
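As a quick sanity check on the stated output (2 mA into a 2 kilo-ohm load), Ohm's law gives the required load voltage and delivered power; this arithmetic is illustrative and not taken from the thesis.

```python
# Ohm's law check of the stated stimulator output.
I = 2e-3          # stimulation current, A
R = 2e3           # load resistance, ohm
V = I * R         # required compliance voltage across the load = 4 V
P = I**2 * R      # power delivered to the load = 8 mW
print(V, P)
```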
Description
Approximately 1.7 million people in the United States are living with limb loss and are in need of more sophisticated devices that better mimic human function. In the Human Machine Integration Laboratory, a powered, transtibial prosthetic ankle was designed and built that allows a person to regain ankle function with improved ankle kinematics and kinetics. The ankle allows a person to walk normally and to go up and down stairs, but volitional control is still an issue. This research tackled the problem of giving the user more control over the prosthetic ankle using a force/torque circuit. When the user presses against a force/torque sensor located inside the socket, the prosthetic foot plantar flexes, or moves downward. This helps the user add push-off force when walking up slopes or stairs. It also gives the user a sense of control over the device.
Contributors: Fronczyk, Adam (Author) / Sugar, Thomas G. (Thesis advisor) / Helms-Tillery, Stephen (Thesis advisor) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created: 2012
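The thesis does not detail the force/torque circuit's control law; a hedged sketch of one plausible mapping, from socket sensor force to an added plantarflexion torque command with a deadband and saturation, is shown below. The gains, deadband, and torque limit are assumptions.

```python
# Minimal sketch: map socket force/torque sensor readings to extra plantarflexion torque.
def plantarflexion_command(sensor_force_n, deadband_n=5.0, gain_nm_per_n=0.8,
                           max_torque_nm=40.0):
    """Return additional ankle plantarflexion torque (N*m) for a socket force (N)."""
    if sensor_force_n <= deadband_n:          # ignore incidental socket contact
        return 0.0
    torque = gain_nm_per_n * (sensor_force_n - deadband_n)
    return min(torque, max_torque_nm)         # saturate the command for safety

print(plantarflexion_command(3.0))    # below deadband -> 0.0
print(plantarflexion_command(30.0))   # -> 20.0 N*m of extra push-off
```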
Description
Effective tactile sensing in prosthetic and robotic hands is crucial for improving the functionality of such hands and enhancing the user's experience. Thus, improving the range of tactile sensing capabilities is essential for developing versatile artificial hands. Multimodal tactile sensors called BioTacs, which include a hydrophone and a force electrode array, were used to understand how grip force, contact angle, object texture, and slip direction may be encoded in the sensor data. Findings show that slip induced under conditions of high contact angles and grip forces resulted in significant changes in both AC and DC pressure magnitude and rate of change in pressure. Slip induced under conditions of low contact angles and grip forces resulted in significant changes in the rate of change in electrode impedance. Slip in the distal direction of a precision grip caused significant changes in pressure magnitude and rate of change in pressure, while slip in the radial direction of the wrist caused significant changes in the rate of change in electrode impedance. A strong relationship was established between slip direction and the rate of change in ratios of electrode impedance for radial and ulnar slip relative to the wrist. Consequently, establishing multiple thresholds or establishing a multivariate model may be a useful method for detecting and characterizing slip. Detecting slip for low contact angles could be done by monitoring electrode data, while detecting slip for high contact angles could be done by monitoring pressure data. Predicting slip in the distal direction could be done by monitoring pressure data, while predicting slip in the radial and ulnar directions could be done by monitoring electrode data.
Contributors: Hsia, Albert (Author) / Santos, Veronica J (Thesis advisor) / Santello, Marco (Committee member) / Helms Tillery, Stephen I (Committee member) / Arizona State University (Publisher)
Created: 2012
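The findings above suggest a threshold-based slip detector that monitors pressure rates at high contact angles and grip forces, and electrode-impedance rates at low contact angles and grip forces. The sketch below is a hedged illustration of that decision rule; the thresholds, angle cutoff, and variable names are assumptions and do not reflect the BioTac interface.

```python
# Minimal sketch: choose which BioTac-style signal to threshold based on contact angle.
import numpy as np

def detect_slip(dPdt, dZdt, contact_angle_deg, angle_cutoff=30.0,
                pressure_rate_thresh=50.0, impedance_rate_thresh=5.0):
    """dPdt: rate of change of DC pressure; dZdt: rate of change of electrode impedance."""
    if contact_angle_deg >= angle_cutoff:
        return abs(dPdt) > pressure_rate_thresh     # high angle/force: pressure-based
    return abs(dZdt) > impedance_rate_thresh        # low angle/force: electrode-based

# Example: estimate the pressure rate numerically from a synthetic slip-onset signal.
t = np.linspace(0.0, 0.1, 101)
pressure = np.where(t > 0.05, 2000.0 * (t - 0.05), 0.0)
dPdt = np.gradient(pressure, t)
print(detect_slip(dPdt[-1], dZdt=0.0, contact_angle_deg=45.0))   # True -> slip flagged
```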
Description
Recently, it was demonstrated that startle-evoked movements (SEMs) are present during individuated finger movements (index finger abduction), but only following intense training. This demonstrates that changes in motor planning, which occur through training (motor learning, a characteristic which can provide researchers and clinicians with information about overall rehabilitative effectiveness), can be analyzed with SEM. The objective here was to determine whether SEM is a sensitive enough tool for differentiating expertise (task solidification) in a common everyday task (typing). If so, SEM may be useful during rehabilitation for time-stamping when task-specific expertise has occurred, and possibly even when a sufficient dosage of motor training (although not tested here) has been delivered following impairment. It was hypothesized that SEM would be present for all fingers of an expert population, but no fingers of a non-expert population. A total of 9 expert typists (75.2 ± 9.8 WPM) and 8 non-expert typists (41.6 ± 8.2 WPM), all right-hand dominant and with no previous neurological or current upper extremity impairment, were evaluated. SEM was robustly present (all p < 0.05) in all fingers of the experts except the middle finger, and absent in all fingers of non-experts except the little finger (where it was less robust). Taken together, these results indicate that SEM is a measurable behavioral indicator of motor learning and that it is sensitive to task expertise, opening the door to potential clinical utility.
Contributors: Bartels, Brandon Michael (Author) / Honeycutt, Claire F (Thesis advisor) / Schaefer, Sydney (Committee member) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created: 2018
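For context, SEM trials are commonly identified by the presence of a startle indicator (e.g., sternocleidomastoid activity) together with a shortened movement onset latency. The sketch below illustrates that kind of classification rule; the 120 ms cutoff and the function itself are assumptions, not the thesis's criteria.

```python
# Minimal sketch: classify a trial as startle-evoked based on SCM activity and onset latency.
def classify_sem(scm_active, onset_latency_ms, latency_cutoff_ms=120.0):
    """Return True if the trial looks startle-evoked (SEM+)."""
    return bool(scm_active) and onset_latency_ms < latency_cutoff_ms

print(classify_sem(scm_active=True, onset_latency_ms=95.0))    # True  -> SEM+
print(classify_sem(scm_active=False, onset_latency_ms=95.0))   # False -> voluntary
```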
Description
Humans constantly rely on a complex interaction of a variety of sensory modalities in order to complete even the simplest of daily tasks. For reaching and grasping to interact with objects, the visual, tactile, and proprioceptive senses provide the majority of the information used. While vision is often relied on for many tasks, most people are able to accomplish common daily rituals without constant visual attention, instead relying mainly on tactile and proprioceptive cues. However, amputees using prosthetic arms do not have access to these cues, making tasks impossible without vision. Even tasks with vision can be incredibly difficult as prosthesis users are unable to modify grip force using touch, and thus tend to grip objects excessively hard to make sure they don’t slip.

Methods such as vibratory sensory substitution have shown promise for providing prosthesis users with a sense of contact and have proved helpful in completing motor tasks. In this thesis, two experiments were conducted to determine whether vibratory cues could be useful in discriminating between sizes. In the first experiment, subjects were asked to grasp a series of hidden virtual blocks of varying sizes with vibrations on the fingertips as indication of contact and compare the size of consecutive boxes. Vibratory haptic feedback significantly increased the accuracy of size discrimination over objects with only visual indication of contact, though accuracy was not as great as for typical grasping tasks with physical blocks. In the second, subjects were asked to adjust their virtual finger position around a series of virtual boxes with vibratory feedback on the fingertips using either finger movement or EMG. It was found that EMG control allowed for significantly less accuracy in size discrimination, implying that, while proprioceptive feedback alone is not enough to determine size, direct kinesthetic information about finger position is still needed.
Contributors: Olson, Markey (Author) / Helms-Tillery, Stephen (Thesis advisor) / Buneo, Christopher (Committee member) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created: 2016
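The vibratory contact cue described above amounts to triggering a fingertip vibration whenever the virtual finger meets a box face. A minimal one-dimensional sketch of that contact test follows; the function name and geometry are illustrative assumptions, not the experiment code.

```python
# Minimal sketch: turn on a fingertip vibration cue when the virtual finger contacts a box face.
def contact_vibration(finger_x, box_center_x, box_half_width, amplitude=1.0):
    """Return vibration amplitude (0 = off) for a 1-D finger-vs-box contact test."""
    penetration = box_half_width - abs(finger_x - box_center_x)
    return amplitude if penetration >= 0.0 else 0.0

print(contact_vibration(finger_x=0.049, box_center_x=0.0, box_half_width=0.05))  # 1.0 -> on
print(contact_vibration(finger_x=0.080, box_center_x=0.0, box_half_width=0.05))  # 0.0 -> off
```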
Description
Understanding human-human interactions during the performance of joint motor tasks is critical for developing rehabilitation robots that could aid therapists in providing effective treatments for motor problems. However, there is a lack of understanding of the strategies (cooperative or competitive) adopted by humans when interacting with other individuals. Previous studies have investigated the cues (auditory, visual, and haptic) that support these interactions, but how these unconscious interactions occur even without those cues has yet to be explained. To address this issue, this study employed a paradigm in which pairs of individuals (dyads) made parallel efforts to complete a jointly performed virtual reaching task without any auditory or visual information exchange. Motion was tracked with an NDI OptoTrak 3D motion tracking system that captured each subject's movement kinematics, from which the level of synchronization between the two subjects in space and time could be measured. For the spatial analyses, the movement amplitudes and direction errors at peak velocities and at endpoints were analyzed. Significant differences in movement amplitudes were found for subjects in 4 out of 6 dyads, which was expected given the lack of feedback between the subjects. Interestingly, subjects also planned their movements in different directions in order to counteract the visuomotor rotation applied in the test blocks, suggesting differences in strategy between the subjects in each dyad. The level of de-adaptation was also measured in the control blocks, in which no visuomotor rotation was applied. To further validate the results obtained through the spatial analyses, a temporal analysis was performed in which the movement times of the two subjects were compared. With the help of these results, numerous interaction scenarios possible in human joint action without feedback were analyzed.
Contributors: Agrawal, Ankit (Author) / Buneo, Christopher (Thesis advisor) / Santello, Marco (Committee member) / Tillery, Stephen Helms (Committee member) / Arizona State University (Publisher)
Created: 2016
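The spatial measures described above, movement amplitude and direction error at peak velocity, can be extracted from each trial's hand trajectory roughly as follows. This is an illustrative sketch under assumed data formats, not the study's analysis pipeline.

```python
# Minimal sketch: per-trial amplitude and direction error at peak velocity from a sampled trajectory.
import numpy as np

def amplitude_and_direction_error(traj, target_dir, dt):
    """traj: (n_samples, D) hand positions; target_dir: ideal movement direction; dt: sample period (s)."""
    traj = np.asarray(traj, dtype=float)
    vel = np.gradient(traj, dt, axis=0)                # finite-difference velocity
    speed = np.linalg.norm(vel, axis=1)
    i_pk = int(np.argmax(speed))                       # sample index of peak velocity
    amplitude = np.linalg.norm(traj[-1] - traj[0])     # start-to-endpoint distance
    move_dir = traj[i_pk] - traj[0]                    # direction travelled at peak velocity
    cosang = np.dot(move_dir, target_dir) / (
        np.linalg.norm(move_dir) * np.linalg.norm(target_dir))
    direction_error_deg = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return amplitude, direction_error_deg

# Example: a 10 cm minimum-jerk reach along +x, scored against a target direction 30 deg away.
t = np.linspace(0.0, 1.0, 101)
s = 10 * t**3 - 15 * t**4 + 6 * t**5                   # smooth 0 -> 1 position profile
traj = np.column_stack([0.1 * s, np.zeros_like(t)])
target_dir = [np.cos(np.pi / 6), np.sin(np.pi / 6)]
print(amplitude_and_direction_error(traj, target_dir, dt=0.01))   # ~ (0.1, 30.0)
```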