Matching Items (108)
Description
Effective tactile sensing in prosthetic and robotic hands is crucial for improving the functionality of such hands and enhancing the user's experience. Thus, improving the range of tactile sensing capabilities is essential for developing versatile artificial hands. Multimodal tactile sensors called BioTacs, which include a hydrophone and a force electrode array, were used to understand how grip force, contact angle, object texture, and slip direction may be encoded in the sensor data. Findings show that slip induced under conditions of high contact angles and grip forces resulted in significant changes in both AC and DC pressure magnitude and in the rate of change in pressure. Slip induced under conditions of low contact angles and grip forces resulted in significant changes in the rate of change in electrode impedance. Slip in the distal direction of a precision grip caused significant changes in pressure magnitude and rate of change in pressure, while slip in the radial direction of the wrist caused significant changes in the rate of change in electrode impedance. A strong relationship was established between slip direction and the rate of change in ratios of electrode impedance for radial and ulnar slip relative to the wrist. Consequently, establishing multiple thresholds or building a multivariate model may be a useful method for detecting and characterizing slip. Slip at low contact angles could be detected by monitoring electrode data, while slip at high contact angles could be detected by monitoring pressure data. Similarly, slip in the distal direction could be predicted from pressure data, while slip in the radial and ulnar directions could be predicted from electrode data.
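As a rough illustration of the multiple-threshold idea described above, the following Python sketch switches between pressure-based and electrode-based monitoring depending on contact angle. The signal names, threshold values, and angle split are hypothetical placeholders rather than quantities taken from the thesis.

```python
import numpy as np

def detect_slip(p_dc, p_ac, electrodes, dt, contact_angle_deg,
                angle_split_deg=30.0, dp_thresh=5.0, pac_thresh=0.5,
                de_thresh=2.0):
    """Flag candidate slip samples from BioTac-like signals (illustrative only).

    At high contact angles/grip forces, monitor AC pressure and the rate of
    change of DC pressure; at low contact angles/grip forces, monitor the
    rate of change of electrode impedance.
    """
    dp_dc = np.gradient(p_dc, dt)               # rate of change of DC pressure
    de = np.gradient(electrodes, dt, axis=0)    # rate of change per electrode
    if contact_angle_deg >= angle_split_deg:
        # High contact angle: pressure signals carry the slip signature
        return (np.abs(dp_dc) > dp_thresh) | (np.abs(p_ac) > pac_thresh)
    # Low contact angle: electrode impedance is the more informative channel
    return np.any(np.abs(de) > de_thresh, axis=1)
```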
Contributors: Hsia, Albert (Author) / Santos, Veronica J (Thesis advisor) / Santello, Marco (Committee member) / Helms Tillery, Stephen I (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Humans moving in the environment must frequently change walking speed and direction to negotiate obstacles and maintain balance. Maneuverability and stability requirements account for a significant part of daily life. While constant-average-velocity (CAV) human locomotion in walking and running has been studied extensively, unsteady locomotion has received far less attention. Although some studies have described the biomechanics and neurophysiology of maneuvers, the underlying mechanisms that humans employ to control unsteady running are still not clear. My dissertation research investigated some of the biomechanical and behavioral strategies used for stable unsteady locomotion. First, I studied the behavioral-level control of human sagittal-plane running. I tested whether humans could control running using strategies consistent with simple and independent control laws that have been successfully used to control monopod robots. I found that humans use strategies that are consistent with the distributed feedback control strategies used by bouncing robots. Humans changed leg force rather than stance duration to control center of mass (COM) height. Humans adjusted foot placement relative to a "neutral point" to change the running speed increment between consecutive flight phases; i.e., a "pogo-stick" rather than a "unicycle" strategy was adopted to change running speed. Body pitch angle was correlated with hip moments if a proportional-derivative relationship with time lags corresponding to pre-programmed reactions (87 ± 19 ms) was assumed. To better understand the mechanisms of performing successful maneuvers, I studied the functions of joints in the lower extremities in controlling COM speed and height. I found that during stance, the hip functioned as a power generator to change speed. The ankle switched between roles as a damper and a torsional spring, contributing to both speed and elevation changes. The knee facilitated both speed and elevation control by absorbing mechanical energy, although its contribution was smaller than that of the hip or ankle. Finally, I studied human turning in the horizontal plane. I used a morphological perturbation (increased body rotational inertia) to elicit compensatory strategies used to control sidestep cutting turns. Humans used changes in initial body angular speed and body pre-rotation to prevent changes in braking forces.
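The proportional-derivative relationship with a pre-programmed time lag mentioned above can be illustrated with a small fitting routine. The Python sketch below scans candidate lags and keeps the least-squares PD fit of hip moment to body pitch; the variable names and lag range are assumptions for illustration, not the dissertation's analysis code.

```python
import numpy as np

def fit_pd_with_lag(pitch, pitch_rate, hip_moment, dt, max_lag_s=0.2):
    """Fit hip_moment(t) ~ Kp*pitch(t - lag) + Kd*pitch_rate(t - lag).

    Illustrative sketch: scan candidate lags and keep the least-squares
    proportional-derivative fit with the smallest residual error.
    """
    n = len(hip_moment)
    best_sse, best_fit = np.inf, None
    for lag in range(int(max_lag_s / dt) + 1):
        # Pair the moment at time t with kinematics at time t - lag
        X = np.column_stack([pitch[:n - lag], pitch_rate[:n - lag]])
        y = hip_moment[lag:]
        gains, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = np.sum((y - X @ gains) ** 2)
        if sse < best_sse:
            best_sse = sse
            best_fit = {"Kp": gains[0], "Kd": gains[1], "lag_s": lag * dt}
    return best_fit
```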
Contributors: Qiao, Mu, 1981- (Author) / Jindrich, Devin L (Thesis advisor) / Dounskaia, Natalia (Committee member) / Abbas, James (Committee member) / Hinrichs, Richard (Committee member) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The current work investigated the emergence of leader-follower roles during social motor coordination. Previous research has presumed that a leader during coordination assumes a spatiotemporally advanced position (e.g., a relative phase lead). While intuitive, this definition discounts what role-taking implies. Leading and following are defined here as one person (or limb) having a larger influence on the motor state changes of the other; that is, the coupling is asymmetric. Three experiments demonstrated that asymmetric coupling effects emerge when task or biomechanical asymmetries are introduced between actors. Participants coordinated in-phase (Φ = 0°) swinging of handheld pendulums, which differed in their uncoupled eigenfrequencies (frequency detuning). Coupling effects were recovered through phase-amplitude modeling. Experiment 1 examined leader-follower coupling during a bidirectional task. Experiment 2 employed an additional coupling asymmetry by assigning an explicit leader and follower. Both experiments 1 and 2 demonstrated asymmetric coupling effects with increased detuning. In experiment 2, though, the explicit follower exhibited a phase lead in nearly all conditions. These results confirm that coupling direction was not determined strictly by relative phasing. A third experiment examined the question raised by the previous two: how could someone follow from ahead (i.e., exhibit a phase lead, as in experiment 2)? This was tested using a combination of frequency detuning and amplitude asymmetry requirements (e.g., 1:1 or 1:2 & 2:1). Results demonstrated that larger-amplitude movements drove the coupling toward the person with the smaller amplitude; the small-amplitude movements exhibited a phase lead despite that person being a follower in coupling terms. These results suggest that leader-follower coupling is a general property of social motor coordination. The importance of predicting when such coupling effects occur is underscored by the stability-reducing effects of coordinating asymmetric components. Generally, the implication is that role-taking is an emergent strategy for dividing coordination-stabilizing effort unequally between actors (or limbs).
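For readers unfamiliar with relative phase, the sketch below shows a standard Hilbert-transform estimate of continuous relative phase between two pendulum angle signals. It is offered only as an illustration of the phase measure discussed above; it is not the phase-amplitude model used in the dissertation, and the sign convention is an assumption.

```python
import numpy as np
from scipy.signal import hilbert

def relative_phase_deg(x1, x2):
    """Continuous relative phase (degrees) between two oscillatory signals.

    Positive values indicate that x1 leads x2 under this (assumed) convention.
    """
    # Remove the mean so the analytic signal's phase is well defined
    a1 = hilbert(x1 - np.mean(x1))
    a2 = hilbert(x2 - np.mean(x2))
    dphi = np.angle(a1) - np.angle(a2)
    # Wrap to (-180, 180] degrees
    return np.rad2deg((dphi + np.pi) % (2 * np.pi) - np.pi)
```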
Contributors: Fine, Justin (Author) / Amazeen, Eric L. (Thesis advisor) / Amazeen, Polemnia G. (Committee member) / Brewer, Gene (Committee member) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Robust and stable decoding of neural signals is imperative for implementing a useful neuroprosthesis capable of carrying out dexterous tasks. A nonhuman primate (NHP) was trained to perform combined flexions of the thumb, index, and middle fingers in addition to individual flexions and extensions of the same digits. An array of microelectrodes was implanted in the hand area of the motor cortex of the NHP and used to record action potentials during finger movements. A Support Vector Machine (SVM) was used to classify which finger movement the NHP was making based upon action potential firing rates. The effects of four feature selection techniques (the Wilcoxon signed-rank test, Relative Importance, Principal Component Analysis, and Mutual Information Maximization) were compared based on SVM classification performance. SVM classification was used to examine the functional parameters of (i) efficacy, (ii) endurance to simulated failure, and (iii) longevity of classification. The effects of using isolated-neuron versus multi-unit firing rates as the feature vector supplied to the SVM were also compared. The best classification performance occurred on post-implantation day 36. On that day, using multi-unit firing rates, the worst classification accuracy resulted from features selected with the Wilcoxon signed-rank test (51.12 ± 0.65%) and the best from Mutual Information Maximization (93.74 ± 0.32%). Using single-unit firing rates on the same day, the classification accuracy was 88.85 ± 0.61% with the Wilcoxon signed-rank test and 95.60 ± 0.52% with Mutual Information Maximization (degrees of freedom = 10, level of chance = 10%).
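A minimal sketch of the kind of decoding pipeline described above, using scikit-learn: firing-rate features are selected by mutual information and classified with an SVM under cross-validation. The feature count, kernel, and number of folds are placeholder choices, not the parameters used in the thesis.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def decode_finger_movements(firing_rates, movement_labels, n_features=20):
    """Cross-validated SVM decoding with mutual-information feature selection.

    firing_rates: (n_trials, n_units) array of firing rates per trial window.
    movement_labels: (n_trials,) finger-movement class per trial.
    """
    clf = make_pipeline(
        StandardScaler(),
        SelectKBest(mutual_info_classif,
                    k=min(n_features, firing_rates.shape[1])),
        SVC(kernel="linear"),
    )
    scores = cross_val_score(clf, firing_rates, movement_labels, cv=10)
    return scores.mean(), scores.std()
```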
Contributors: Padmanaban, Subash (Author) / Greger, Bradley (Thesis advisor) / Santello, Marco (Thesis advisor) / Helms Tillery, Stephen (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
The human hand is a complex biological system. Humans have evolved a unique ability to use the hand for a wide range of tasks, including activities of daily living such as successfully grasping and manipulating objects, e.g., lifting a cup of coffee without spilling. Despite the ubiquitous nature of hand use in everyday activities involving object manipulation, there is currently an incomplete understanding of the cortical sensorimotor mechanisms underlying this important behavior. One critical aspect of natural object grasping is the coordination of where the fingers make contact with an object and how much force is applied following contact. Such force-to-position modulation is critical for successful manipulation. However, the neural mechanisms underlying these motor processes remain less understood, as previous experiments have utilized protocols with fixed contact points, which likely rely on different neural mechanisms from those involved in grasping at unconstrained contacts. To address this gap in the motor neuroscience field, transcranial magnetic stimulation (TMS) and electroencephalography (EEG) were used to investigate the role of primary motor cortex (M1), as well as other important cortical regions in the grasping network, during the planning and execution of object grasping and manipulation. Virtual lesions induced by TMS, together with EEG recordings, revealed grasp context-specific cortical mechanisms underlying digit force-to-position coordination, as well as the spatial and temporal dynamics of cortical activity during planning and execution. Together, the present findings provide the foundation for a novel framework accounting for how the central nervous system controls dexterous manipulation. This new knowledge can potentially benefit research in neuroprosthetics and improve the efficacy of neurorehabilitation techniques for patients affected by sensorimotor impairments.
Contributors: McGurrin, Patrick M (Author) / Santello, Marco (Thesis advisor) / Helms-Tillery, Steve (Committee member) / Kleim, Jeff (Committee member) / Davare, Marco (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Lower-limb prosthesis users have commonly recognized deficits in gait and posture control. However, existing methods in balance and mobility analysis fail to provide sufficient sensitivity to detect changes in prosthesis users' postural control and mobility in response to clinical intervention or experimental manipulations, and often fail to detect differences between prosthesis users and non-amputee control subjects. This lack of sensitivity limits the ability of clinicians to make informed clinical decisions and presents challenges with insurance reimbursement for comprehensive clinical care and advanced prosthetic devices. These issues have directly impacted clinical care by restricting device options, increasing financial burden on clinics, and limiting support for research and development. This work aims to establish experimental methods and outcome measures that are more sensitive than traditional methods to balance and mobility changes in prosthesis users. Methods and analysis techniques were developed to probe aspects of balance and mobility control that may be specifically impacted by use of a prosthesis and that present challenges similar to those experienced in daily life, with the goal of improving the detection of balance and mobility changes. Using the framework of cognitive resource allocation and dual-tasking, this work identified unique characteristics of prosthesis users' postural control and developed sensitive measures of gait variability. The results also provide broader insight into dual-task analysis and the motor-cognitive response to demanding conditions. Specifically, this work identified altered motor behavior in prosthesis users and the high cognitive demand of using a prosthesis. The residual standard deviation method was developed and demonstrated to be more effective than traditional gait variability measures at detecting the impact of dual-tasking. Additionally, spectral analysis of the center of pressure while standing identified altered somatosensory control in prosthesis users. These findings provide a new understanding of prosthetic use and new, highly sensitive techniques to assess balance and mobility in prosthesis users.
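As a hedged illustration of two of the measures named above, the Python sketch below computes (1) one plausible reading of a residual-standard-deviation gait variability measure (the SD of residuals after regressing a stride parameter on a covariate such as walking speed) and (2) band power from a spectral analysis of the center of pressure. The regression covariate, frequency band, and all parameters are assumptions; the dissertation's exact definitions may differ.

```python
import numpy as np
from scipy.signal import welch

def residual_sd(stride_values, covariate):
    """SD of residuals after a linear fit of stride_values on a covariate
    (e.g., stride time regressed on walking speed); illustrative only."""
    slope, intercept = np.polyfit(covariate, stride_values, 1)
    residuals = stride_values - (slope * covariate + intercept)
    return np.std(residuals, ddof=1)

def cop_band_power(cop, fs, band=(0.4, 3.0)):
    """Center-of-pressure power in a chosen frequency band (placeholder band)."""
    f, pxx = welch(cop, fs=fs, nperseg=min(len(cop), 1024))
    mask = (f >= band[0]) & (f <= band[1])
    return np.trapz(pxx[mask], f[mask])
```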
Contributors: Howard, Charla Lindley (Author) / Abbas, James (Thesis advisor) / Buneo, Christopher (Committee member) / Lynskey, Jim (Committee member) / Santello, Marco (Committee member) / Artemiadis, Panagiotis (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Understanding where our bodies are in space is imperative for motor control, particularly for actions such as goal-directed reaching. Multisensory integration is crucial for reducing uncertainty in arm position estimates. This dissertation examines time- and frequency-domain correlates of visual-proprioceptive integration during an arm-position maintenance task. Neural recordings were obtained from two different cortical areas as non-human primates performed a center-out reaching task in a virtual reality environment. Following a reach, animals maintained the end-point position of their arm under unimodal (proprioception only) and bimodal (proprioception and vision) conditions. In both areas, time-domain and multi-taper spectral analysis methods were used to quantify changes in spiking, local field potentials (LFP), and spike-field coherence during arm-position maintenance.

In both areas, individual neurons were classified based on the spectrum of their spiking patterns. A large proportion of cells in the superior parietal lobule (SPL) exhibited sensory condition-specific oscillatory spiking in the beta (13-30 Hz) frequency band. Cells in the inferior parietal lobule (IPL) typically had a more diverse mix of oscillatory and refractory spiking patterns during the task in response to changing sensory conditions. Contrary to the assumptions made in many modelling studies, none of the cells in the SPL or IPL exhibited Poisson spiking statistics.

Evoked LFPs in both areas exhibited greater effects of target location than of visual condition, though the evoked responses in the preferred reach direction were generally suppressed in the bimodal condition relative to the unimodal condition. Significant effects of target location on evoked responses were also observed during the movement period of the task.

In the frequency domain, LFP power in both cortical areas was enhanced in the beta band during the position estimation epoch of the task, indicating that LFP beta oscillations may be important for maintaining the ongoing state. This was particularly evident at the population level, with a clear increase in alpha and beta power. Differences in spectral power between conditions also became apparent at the population level, with power during bimodal trials being suppressed relative to unimodal trials. The spike-field coherence showed confounding results in both the SPL and IPL, with no clear correlation between the incidence of beta oscillations and significant beta coherence.
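The multi-taper spectral estimate referred to in this abstract can be sketched in a few lines using Slepian (DPSS) tapers. The time-bandwidth product, taper count, and beta-band limits below are placeholder choices for illustration, not the dissertation's analysis settings.

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(lfp, fs, nw=3.0, n_tapers=5):
    """Multi-taper power spectral density of a single LFP segment (sketch)."""
    n = len(lfp)
    tapers = dpss(n, nw, n_tapers)          # (n_tapers, n) Slepian tapers
    # Average the periodograms of the tapered copies of the signal
    spectra = np.abs(np.fft.rfft(tapers * lfp, axis=1)) ** 2
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, spectra.mean(axis=0) / fs

# Example: beta-band (13-30 Hz) power during the position-maintenance epoch
# freqs, psd = multitaper_psd(lfp_segment, fs=1000.0)
# beta_power = psd[(freqs >= 13) & (freqs <= 30)].sum()
```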
Contributors: VanGilder, Paul (Author) / Buneo, Christopher A (Thesis advisor) / Helms-Tillery, Stephen (Committee member) / Santello, Marco (Committee member) / Muthuswamy, Jit (Committee member) / Foldes, Stephen (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
The interaction between humans and robots has become an important area of research as the diversity of robotic applications has grown. The cooperation of a human and robot to achieve a goal is an important area within the physical human-robot interaction (pHRI) field. This field is expanding toward robotic applications in unstructured environments. When humans cooperate with each other, there are often leader and follower roles, and these roles may change during the task. This creates a need for the robotic system to be able to exchange roles with the human during a cooperative task. The unstructured nature of the new applications in the field creates a need for robotic systems to be able to interact in six degrees of freedom (DOF). Moreover, in these unstructured environments, the robotic system will have incomplete information. This means that it will sometimes perform an incorrect action, and control methods need to be able to correct for this. However, the most compelling applications for robotics are those where the robot has capabilities that the human does not, which also creates the need for robotic systems to correct human action when an error is detected. Activity in the brain precedes human action, and this activity can be used to classify the type of interaction desired by the human. In this dissertation, the cooperation between humans and robots is improved in two main areas. First, the ability of electroencephalography (EEG) to determine the cooperation role desired by the human is demonstrated, with a correct classification rate of 65%. Second, a robotic controller is developed to allow the human and robot to cooperate in six DOF with asymmetric role exchange. This system allowed the human and robot to complete a cooperative task with a 100% success rate. High, medium, and low levels of robotic automation are shown to affect performance, with the human making the greatest number of errors when the robotic system has a medium level of automation.
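A hedged sketch of how EEG might be used to classify the desired cooperation role, as described above: band-power features are extracted per channel and fed to a linear classifier under cross-validation. The frequency bands, classifier, and epoch layout are assumptions for illustration, not the pipeline used in the dissertation.

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def band_power_features(epochs, fs, bands=((8, 13), (13, 30))):
    """Per-channel band power for each EEG epoch.

    epochs: (n_epochs, n_channels, n_samples) array.
    """
    f, pxx = welch(epochs, fs=fs, nperseg=min(epochs.shape[-1], 256), axis=-1)
    feats = [pxx[..., (f >= lo) & (f <= hi)].mean(axis=-1) for lo, hi in bands]
    return np.concatenate(feats, axis=1)    # (n_epochs, n_channels * n_bands)

def classify_role(epochs, roles, fs):
    """Cross-validated accuracy for decoding the desired cooperation role."""
    X = band_power_features(epochs, fs)
    return cross_val_score(LinearDiscriminantAnalysis(), X, roles, cv=5).mean()
```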
Contributors: Whitsell, Bryan Douglas (Author) / Artemiadis, Panagiotis (Thesis advisor) / Santello, Marco (Committee member) / Berman, Spring (Committee member) / Lee, Hyunglae (Committee member) / Polygerinos, Panagiotis (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Recently, it was demonstrated that startle-evoked movements (SEMs) are present during individuated finger movements (index finger abduction), but only following intense training. This demonstrates that changes in motor planning, which occur through training (motor learning, a characteristic that can provide researchers and clinicians with information about overall rehabilitative effectiveness), can be analyzed with SEM. The objective here was to determine whether SEM is a sensitive enough tool for differentiating expertise (task solidification) in a common everyday task (typing). If so, SEM may be useful during rehabilitation for time-stamping when task-specific expertise has been reached, and possibly even when a sufficient dosage of motor training has been delivered following impairment (although dosage was not tested here). It was hypothesized that SEM would be present for all fingers of an expert population but no fingers of a non-expert population. A total of 9 expert (75.2 ± 9.8 WPM) and 8 non-expert (41.6 ± 8.2 WPM) typists, all right-hand dominant with no previous neurological impairment or current upper extremity impairment, were evaluated. SEM was robustly present (all p < 0.05) in all fingers of the experts except the middle finger, and absent in all fingers of the non-experts except the little finger (where the effect was less robust). Taken together, these results indicate that SEM is a measurable behavioral indicator of motor learning and that it is sensitive to task expertise, opening the door to potential clinical utility.
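One way SEM presence per finger could be assessed, sketched below, is to compare movement-onset latencies between startle and control trials. The statistical test and one-sided criterion are assumptions for illustration and are not necessarily the analysis reported in the thesis.

```python
from scipy.stats import mannwhitneyu

def sem_present(startle_onsets_ms, control_onsets_ms, alpha=0.05):
    """Test whether startle trials show earlier onsets than control trials.

    Returns (is_present, p_value) for a one-sided Mann-Whitney U test.
    """
    _, p = mannwhitneyu(startle_onsets_ms, control_onsets_ms,
                        alternative="less")
    return p < alpha, p
```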
Contributors: Bartels, Brandon Michael (Author) / Honeycutt, Claire F (Thesis advisor) / Schaefer, Sydney (Committee member) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Object manipulation is a common sensorimotor task that humans perform to interact with the physical world. The first aim of this dissertation was to characterize and identify the role of feedback and feedforward mechanisms for force control in object manipulation by introducing a new feature, based on force trajectories, to quantify the interaction between feedback and feedforward control. This feature was applied to two grasp contexts: grasping the object at either (1) predetermined or (2) self-selected grasp locations (“constrained” and “unconstrained”, respectively), where unconstrained grasping is thought to involve feedback-driven force corrections to a greater extent than constrained grasping. This proposition was confirmed by the force feature analysis. The second aim of this dissertation was to quantify whether force control mechanisms differ between the dominant and non-dominant hands. The force feature analysis demonstrated that manipulation by the dominant hand relies on feedforward control more than the non-dominant hand does. The third aim was to quantify the coordination mechanisms underlying physical interaction by dyads in object manipulation. The results revealed that only individuals with worse solo performance benefit from interpersonal coordination through physical coupling, whereas the better individuals do not. This work also revealed naturally emerging leader-follower roles, whereby the leader in dyadic manipulation exhibited significantly greater force changes than the follower. Furthermore, brain activity measured through electroencephalography (EEG) could discriminate leader and follower roles, as indicated by power modulation in the alpha frequency band over centro-parietal areas. Lastly, this dissertation suggested that the relation between force and motion (arm impedance) could be an important means for communicating intended movement direction between biological agents.
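To illustrate one way feedback- versus feedforward-dominated force control can be read from force trajectories, the sketch below counts peaks in the grip-force rate during loading, a proxy commonly used in the grasping literature (a single bell-shaped rate profile is consistent with feedforward control, whereas multiple peaks suggest feedback-driven corrections). This is not necessarily the force-trajectory feature developed in this dissertation, and the smoothing and peak-detection parameters are placeholders.

```python
import numpy as np
from scipy.signal import find_peaks, savgol_filter

def force_rate_peaks(grip_force, fs):
    """Count peaks in the smoothed grip-force rate (illustrative proxy)."""
    rate = savgol_filter(np.gradient(grip_force, 1.0 / fs),
                         window_length=31, polyorder=3)
    peaks, _ = find_peaks(rate, height=0.05 * np.max(rate),
                          distance=max(1, int(0.05 * fs)))
    return len(peaks)
```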
Contributors: Mojtahedi, Keivan (Author) / Santello, Marco (Thesis advisor) / Greger, Bradley (Committee member) / Artemiadis, Panagiotis (Committee member) / Helms Tillery, Stephen (Committee member) / Buneo, Christopher (Committee member) / Arizona State University (Publisher)
Created: 2017