Matching Items (40)
Description
An accurate sense of upper limb position is crucial to reaching movements, where sensory information about upper limb position and target location is combined to specify critical features of the movement plan. This dissertation was dedicated to studying how the brain estimates limb position in space and the consequences of misestimating limb position for movement. Two independent but related studies were performed. The first involved characterizing the neural mechanisms of limb position estimation in the non-human primate brain. Single-unit recordings were obtained in area 5 of the posterior parietal cortex in order to examine the role of this area in estimating limb position based on visual and somatic signals (proprioception, efference copy). When examined individually, many area 5 neurons were tuned to the position of the limb in the workspace, but very few were modulated by visual feedback. At the population level, however, decoding of limb position was somewhat more accurate when visual feedback was provided. These findings support a role for area 5 in limb position estimation but also suggest that visual signals regarding limb position are only weakly represented in this area, and only at the population level. The second part of this dissertation focused on the consequences of misestimating limb position for movement production. It is well known that limb movements are inherently variable. This variability could be the result of noise arising at one or more stages of movement production. Here we used biomechanical modeling and simulation techniques to characterize the movement variability resulting from noise in estimating limb position ('sensing noise') and in planning required movement vectors ('planning noise'), and compared it to the variability expected from noise in movement execution. We found that the effects of sensing- and planning-related noise on movement variability depended on both the planned movement direction and the initial configuration of the arm, and differed in many respects from the effects of execution noise.
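The configuration- and direction-dependence of sensing noise described above can be illustrated with a minimal simulation: proprioceptive (joint-angle) noise corrupts the estimated start position of a planar two-link arm, the movement vector is planned from that faulty estimate, and the resulting endpoint scatter is compared with isotropic execution noise. The link lengths, arm configuration, and noise magnitudes below are illustrative assumptions, not values from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)
L1, L2 = 0.30, 0.33  # assumed upper-arm and forearm lengths (m)

def fk(q):
    """Planar two-link forward kinematics: joint angles -> hand (x, y)."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

q = np.array([0.8, 1.2])                 # hypothetical initial arm posture (rad)
start = fk(q)
target = start + np.array([0.10, 0.05])  # planned reach endpoint

# 'Sensing noise': joint-angle noise corrupts the estimate of where the hand
# starts; the movement vector is planned from that estimate but executed from
# the true start, so the error maps through the arm's geometry.
n, sd_q, sd_exec = 5000, np.deg2rad(1.0), 0.004
est = np.array([fk(q + rng.normal(0.0, sd_q, 2)) for _ in range(n)])
end_sense = start + (target - est)                    # sensing noise alone
end_exec = target + rng.normal(0.0, sd_exec, (n, 2))  # isotropic execution noise

for name, ends in (("sensing", end_sense), ("execution", end_exec)):
    w, _ = np.linalg.eigh(np.cov(ends.T))
    print(f"{name:9s} endpoint ellipse axes (mm): {np.round(np.sqrt(w) * 1e3, 2)}")
```

Because the sensing error passes through the arm's geometry, its endpoint ellipse changes with arm configuration while the execution ellipse does not, a qualitative contrast consistent with the findings summarized above.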
Contributors: Shi, Ying (Author) / Buneo, Christopher A (Thesis advisor) / Helms Tillery, Stephen (Committee member) / Santello, Marco (Committee member) / He, Jiping (Committee member) / Santos, Veronica (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The ability to plan, execute, and control goal-oriented reaching and grasping movements is among the most essential functions of the brain. Yet these movements are inherently variable, a result of the noise pervading the neural signals that underlie sensorimotor processing. The specific influences and interactions of these noise processes remain unclear. Thus, several studies were performed to elucidate the role and influence of sensorimotor noise on movement variability. The first study focuses on sensory integration and movement planning across the reaching workspace. An experiment was designed to examine the relative contributions of vision and proprioception to movement planning by measuring the rotation of the initial movement direction induced by a perturbation of the visual feedback prior to movement onset. The results suggest that the contribution of vision was relatively consistent across the evaluated workspace depths; however, the influence of vision differed between the vertical and lateral axes, indicating that factors beyond vision and proprioception influence the planning of 3-dimensional movements. Whereas the first study investigated the role of noise in sensorimotor integration, the second and third studies investigated the relative influence of sensorimotor noise on reaching performance. Specifically, they evaluated how the characteristics of neural processing that underlie movement planning and execution manifest in movement variability during natural reaching. Subjects performed reaching movements with and without visual feedback throughout the movement, and the patterns of endpoint variability were compared across movement directions. The results of these studies suggest a primary role of visual feedback noise in shaping patterns of variability and in determining the relative influence of planning- and execution-related noise sources. The final work considers a computational approach to characterizing how sensorimotor processes interact to shape movement variability. A model of multi-modal feedback control was developed to simulate the interaction of planning and execution noise on reaching variability. The model predictions suggest that anisotropic properties of feedback noise significantly affect the relative influence of planning and execution noise on patterns of reaching variability.
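The logic of estimating vision's contribution from the perturbation-induced rotation follows standard minimum-variance cue combination: the initial movement direction shifts by a fraction of the imposed visual displacement, and that fraction estimates the visual weight. The cue reliabilities below are hypothetical, not fitted values from these experiments.

```python
# Minimum-variance cue combination: each cue is weighted inversely to its
# variance. All numbers are hypothetical illustrations.
sigma_vis, sigma_prop = 0.5, 1.0   # assumed visual / proprioceptive SDs (cm)
w_vis = sigma_prop**2 / (sigma_vis**2 + sigma_prop**2)

x_prop = 0.0                       # proprioceptive estimate of hand position (cm)
x_vis = 2.0                        # visual feedback displaced 2 cm before onset
fused = w_vis * x_vis + (1.0 - w_vis) * x_prop
print(f"visual weight = {w_vis:.2f}; fused hand estimate = {fused:.2f} cm")
# The fraction of the imposed shift expressed in the initial movement
# direction estimates w_vis at that workspace location.
```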
Contributors: Apker, Gregory Allen (Author) / Buneo, Christopher A (Thesis advisor) / Helms Tillery, Stephen (Committee member) / Santello, Marco (Committee member) / Santos, Veronica (Committee member) / Si, Jennie (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Situations of sensory overload are steadily becoming more frequent as the ubiquity of technology approaches reality--particularly with the advent of socio-communicative smartphone applications and pervasive, high-speed wireless networks. Although the ease of accessing information has improved our communication effectiveness and efficiency, our visual and auditory modalities--the modalities that today's computerized devices and displays largely engage--have become overloaded, creating possibilities for distraction, delay, and high cognitive load, which in turn can lead to a loss of situational awareness and increase the chances of life-threatening situations such as texting while driving. Surprisingly, alternative modalities for information delivery have seen little exploration. Touch, in particular, is a promising candidate given that the skin is our largest sensory organ, with impressive spatial and temporal acuity. Although some approaches have been proposed for touch-based information delivery, they are not without limitations, including high learning curves, limited applicability, and/or limited expressiveness. This is largely due to the lack of a versatile, comprehensive design theory--specifically, a theory that addresses the design of touch-based building blocks for expandable, efficient, rich, and robust touch languages that are easy to learn and use. Moreover, beyond design, there is a lack of implementation and evaluation theories for such languages. To overcome these limitations, a unified theoretical framework, inspired by natural spoken language, is proposed, called Somatic ABC's: Articulating (designing), Building (developing), and Confirming (evaluating) touch-based languages. To evaluate the usefulness of Somatic ABC's, its design, implementation, and evaluation theories were applied to create communication languages for two very different application areas: audio-described movies and motor learning. These applications were chosen because they presented opportunities to complement communication by offloading information, typically conveyed visually and/or aurally, to the skin. For both studies, it was found that Somatic ABC's aided the design, development, and evaluation of rich somatic languages with distinct and natural communication units.
Contributors: McDaniel, Troy Lee (Author) / Panchanathan, Sethuraman (Thesis advisor) / Davulcu, Hasan (Committee member) / Li, Baoxin (Committee member) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Humans moving through the environment must frequently change walking speed and direction to negotiate obstacles and maintain balance. Maneuverability and stability requirements account for a significant part of daily life. While constant-average-velocity (CAV) locomotion in walking and running has been studied extensively, unsteady locomotion has received far less attention. Although some studies have described the biomechanics and neurophysiology of maneuvers, the underlying mechanisms that humans employ to control unsteady running are still not clear. My dissertation research investigated some of the biomechanical and behavioral strategies used for stable, unsteady locomotion. First, I studied the behavioral-level control of human sagittal-plane running. I tested whether humans could control running using strategies consistent with the simple, independent control laws that have been successfully used to control monopod robots. I found that humans use strategies consistent with the distributed feedback control strategies used by bouncing robots. Humans changed leg force rather than stance duration to control center of mass (COM) height. Humans adjusted foot placement relative to a "neutral point" to change running speed between consecutive flight phases, i.e., they adopted a "pogo-stick" rather than a "unicycle" strategy to change running speed. Body pitch angle was consistent with control by hip moments through a proportional-derivative relationship, assuming time lags corresponding to a pre-programmed reaction (87 ± 19 ms). To better understand the mechanisms of performing successful maneuvers, I studied the functions of the lower-extremity joints in controlling COM speed and height. I found that during stance, the hip functioned as a power generator to change speed. The ankle switched between roles as a damper and as a torsional spring, contributing to both speed and elevation changes. The knee facilitated both speed and elevation control by absorbing mechanical energy, although its contribution was smaller than that of the hip or ankle. Finally, I studied human turning in the horizontal plane. I used a morphological perturbation (increased body rotational inertia) to elicit the compensatory strategies used to control sidestep cutting turns. Humans used changes in initial body angular speed and body pre-rotation to prevent changes in braking forces.
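The pitch-control result can be sketched as a delayed proportional-derivative (PD) controller acting on a simple pitch-inertia model. Only the 87 ms lag comes from the abstract; the inertia and gains are illustrative assumptions.

```python
import numpy as np

I = 10.0               # assumed body pitch inertia about the hip (kg m^2)
kp, kd = 150.0, 40.0   # hypothetical PD gains (N m/rad, N m s/rad)
delay, dt, T = 0.087, 0.001, 2.0   # reaction delay (s), step (s), horizon (s)

n, lag = int(T / dt), int(delay / dt)
theta, omega = np.zeros(n), np.zeros(n)
theta[0] = np.deg2rad(5.0)         # initial pitch perturbation

for k in range(n - 1):
    j = max(k - lag, 0)            # hip moment acts on 87 ms old state
    moment = -kp * theta[j] - kd * omega[j]
    omega[k + 1] = omega[k] + (moment / I) * dt
    theta[k + 1] = theta[k] + omega[k] * dt

print(f"pitch after {T:.0f} s: {np.rad2deg(theta[-1]):.2f} deg")
```

With these gains the perturbation decays despite the feedback delay; making the delay or the proportional gain much larger destabilizes the loop, which is why the estimated lag matters for the interpretation above.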
Contributors: Qiao, Mu (Author) / Jindrich, Devin L (Thesis advisor) / Dounskaia, Natalia (Committee member) / Abbas, James (Committee member) / Hinrichs, Richard (Committee member) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The current work investigated the emergence of leader-follower roles during social motor coordination. Previous research has presumed that a leader during coordination assumes a spatiotemporally advanced position (e.g., a relative phase lead). While intuitive, this definition discounts what role-taking implies. Here, leading and following are defined as one person (or limb) having a larger influence on the motor state changes of another; that is, the coupling is asymmetric. Three experiments demonstrated that asymmetric coupling effects emerge when task or biomechanical asymmetries are introduced between actors. Participants coordinated in-phase (Φ = 0°) swinging of handheld pendulums that differed in their uncoupled eigenfrequencies (frequency detuning). Coupling effects were recovered through phase-amplitude modeling. Experiment 1 examined leader-follower coupling during a bidirectional task. Experiment 2 imposed an additional coupling asymmetry by assigning an explicit leader and follower. Both experiments demonstrated asymmetric coupling effects with increased detuning. In Experiment 2, though, the explicit follower exhibited a phase lead in nearly all conditions. These results confirm that coupling direction was not determined strictly by relative phasing. A third experiment examined the question raised by the previous two: how could someone follow from ahead (i.e., the phase lead in Experiment 2)? This was tested using a combination of frequency detuning and amplitude asymmetry requirements (e.g., 1:1, or 1:2 and 2:1). The results demonstrated that larger-amplitude movements drove the coupling toward the person with the smaller amplitude; small-amplitude movements exhibited a phase lead despite being, in coupling terms, the follower. These results suggest that leader-follower coupling is a general property of social motor coordination. Predicting when such coupling effects occur is informed by the stability-reducing effects of coordinating asymmetric components. Generally, the implication is that role-taking is an emergent strategy for dividing coordination-stabilizing efforts unequally between actors (or limbs).
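How someone can "follow from ahead" is easy to see in a generic pair of coupled phase oscillators with frequency detuning and asymmetric coupling gains; this is a minimal stand-in, not the phase-amplitude model used in the experiments, and all parameter values are illustrative.

```python
import numpy as np

w1, w2 = 2 * np.pi * 1.00, 2 * np.pi * 1.15  # natural frequencies (rad/s)
c1, c2 = 0.5, 2.0    # oscillator 2 adapts more strongly: the coupling follower
dt, T = 0.001, 30.0
th1, th2 = 0.0, 0.0

for _ in range(int(T / dt)):
    d1 = w1 + c1 * np.sin(th2 - th1)   # weakly coupled actor ("leader")
    d2 = w2 + c2 * np.sin(th1 - th2)   # strongly coupled actor ("follower")
    th1, th2 = th1 + d1 * dt, th2 + d2 * dt

phi = np.degrees(np.angle(np.exp(1j * (th1 - th2))))
print(f"locked relative phase (theta1 - theta2): {phi:.1f} deg")
# Negative value: the faster, strongly coupled oscillator leads in phase even
# though, in coupling terms, it is the follower -- 'following from ahead'.
```

At lock the relative phase settles where detuning balances the summed coupling, sin(phi) = (w1 - w2) / (c1 + c2), so the phase lead is set by detuning, not by who drives whom.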
Contributors: Fine, Justin (Author) / Amazeen, Eric L. (Thesis advisor) / Amazeen, Polemnia G. (Committee member) / Brewer, Gene (Committee member) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
The human hand is a complex biological system. Humans have evolved a unique ability to use the hand for a wide range of tasks, including activities of daily living such as grasping and manipulating objects, e.g., lifting a cup of coffee without spilling it. Despite the ubiquitous role of the hand in everyday object manipulation, our understanding of the cortical sensorimotor mechanisms underlying this important behavior remains incomplete. One critical aspect of natural object grasping is the coordination of where the fingers make contact with an object and how much force is applied following contact. Such force-to-position modulation is critical for successful manipulation. However, the neural mechanisms underlying these motor processes remain poorly understood, as previous experiments have used protocols with fixed contact points, which likely rely on neural mechanisms different from those involved in grasping at unconstrained contacts. To address this gap, transcranial magnetic stimulation (TMS) and electroencephalography (EEG) were used to investigate the role of primary motor cortex (M1), as well as other important cortical regions in the grasping network, during the planning and execution of object grasping and manipulation. Virtual lesions induced by TMS, together with EEG recordings, revealed grasp context-specific cortical mechanisms underlying digit force-to-position coordination, as well as the spatial and temporal dynamics of cortical activity during planning and execution. Together, the present findings provide the foundation for a novel framework for how the central nervous system controls dexterous manipulation. This new knowledge can potentially benefit research in neuroprosthetics and improve the efficacy of neurorehabilitation techniques for patients affected by sensorimotor impairments.
Contributors: McGurrin, Patrick M (Author) / Santello, Marco (Thesis advisor) / Helms-Tillery, Steve (Committee member) / Kleim, Jeff (Committee member) / Davare, Marco (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Lower-limb prosthesis users have commonly recognized deficits in gait and posture control. However, existing methods of balance and mobility analysis lack the sensitivity to detect changes in prosthesis users' postural control and mobility in response to clinical interventions or experimental manipulations, and they often fail to detect differences between prosthesis users and non-amputee control subjects. This lack of sensitivity limits the ability of clinicians to make informed clinical decisions and presents challenges for insurance reimbursement of comprehensive clinical care and advanced prosthetic devices. These issues have directly impacted clinical care by restricting device options, increasing the financial burden on clinics, and limiting support for research and development. This work aims to establish experimental methods and outcome measures that are more sensitive than traditional methods to balance and mobility changes in prosthesis users. Methods and analysis techniques were developed to probe aspects of balance and mobility control that may be specifically impacted by use of a prosthesis, and to present challenges similar to those experienced in daily life that could improve the detection of balance and mobility changes. Using the framework of cognitive resource allocation and dual-tasking, this work identified unique characteristics of prosthesis users' postural control and developed sensitive measures of gait variability. The results also provide broader insight into dual-task analysis and the motor-cognitive response to demanding conditions. Specifically, this work identified altered motor behavior in prosthesis users and the high cognitive demand of using a prosthesis. The residual standard deviation method was developed and shown to be more effective than traditional gait variability measures at detecting the impact of dual-tasking. Additionally, spectral analysis of the center of pressure during standing identified altered somatosensory control in prosthesis users. These findings provide a new understanding of prosthesis use and new, highly sensitive techniques for assessing balance and mobility in prosthesis users.
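As a hedged illustration of a residual-based variability measure (the abstract does not spell out the exact formulation), one can remove the slow trend from a stride-parameter series and take the standard deviation of the residuals, so that drift in walking speed does not inflate the variability estimate.

```python
import numpy as np

rng = np.random.default_rng(1)
strides = np.arange(100)
# Synthetic stride times (s) with a slow drift plus stride-to-stride noise.
stride_time = 1.05 + 0.001 * strides + rng.normal(0.0, 0.02, 100)

raw_sd = stride_time.std(ddof=1)
trend = np.polyval(np.polyfit(strides, stride_time, 1), strides)  # fit drift
residual_sd = (stride_time - trend).std(ddof=1)
print(f"raw SD = {raw_sd * 1e3:.1f} ms; residual SD = {residual_sd * 1e3:.1f} ms")
```

The raw SD mixes drift with genuine stride-to-stride variability, while the residual SD isolates the latter; under a dual task, an increase in residual SD would then reflect variability rather than pacing changes.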
Contributors: Howard, Charla Lindley (Author) / Abbas, James (Thesis advisor) / Buneo, Christopher (Committee member) / Lynskey, Jim (Committee member) / Santello, Marco (Committee member) / Artemiadis, Panagiotis (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Understanding where our bodies are in space is imperative for motor control, particularly for actions such as goal-directed reaching. Multisensory integration is crucial for reducing uncertainty in arm position estimates. This dissertation examines time- and frequency-domain correlates of visual-proprioceptive integration during an arm-position maintenance task. Neural recordings were obtained from two parietal cortical areas as non-human primates performed a center-out reaching task in a virtual reality environment. Following a reach, animals maintained the end-point position of their arm under unimodal (proprioception only) and bimodal (proprioception and vision) conditions. In both areas, time-domain and multi-taper spectral analysis methods were used to quantify changes in spiking, local field potentials (LFPs), and spike-field coherence during arm-position maintenance.

In both areas, individual neurons were classified based on the spectra of their spiking patterns. A large proportion of cells in the superior parietal lobule (SPL) exhibited sensory condition-specific oscillatory spiking in the beta (13-30 Hz) frequency band, while cells in the inferior parietal lobule (IPL) typically showed a more diverse mix of oscillatory and refractory spiking patterns in response to the changing sensory conditions. Contrary to the assumptions made in many modeling studies, no cells in either the SPL or the IPL exhibited Poisson spiking statistics.

Evoked LFPs in both areas exhibited greater effects of target location than of visual condition, though the evoked responses in the preferred reach direction were generally suppressed in the bimodal condition relative to the unimodal condition. Significant effects of target location on evoked responses were observed during the movement period of the task as well.

In the frequency domain, LFP power in both cortical areas was enhanced in the beta band during the position-estimation epoch of the task, indicating that LFP beta oscillations may be important for maintaining the ongoing limb state. This was particularly evident at the population level, with a clear increase in alpha and beta power. Differences in spectral power between conditions also became apparent at the population level, with power during bimodal trials suppressed relative to unimodal trials. The spike-field coherence results were inconclusive in both the SPL and IPL, with no clear correlation between the incidence of beta oscillations and significant beta coherence.
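A minimal multi-taper power estimate in the spirit of the analyses above can be sketched with Slepian (DPSS) tapers: periodograms from orthogonal tapers are averaged to stabilize the spectrum of a short segment. The synthetic 20 Hz "beta" signal, sampling rate, and taper parameters are illustrative assumptions.

```python
import numpy as np
from scipy.signal import windows

fs, dur = 1000, 2.0                       # sampling rate (Hz), duration (s)
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(2)
lfp = np.sin(2 * np.pi * 20 * t) + rng.normal(0.0, 1.0, t.size)

NW, K = 4, 7                              # time-bandwidth product, taper count
tapers = windows.dpss(t.size, NW, K)      # shape (K, n_samples)
psd = sum(np.abs(np.fft.rfft(lfp * tp))**2 for tp in tapers) / K
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
beta = (freqs >= 13) & (freqs <= 30)
print(f"beta-band (13-30 Hz) share of power: {psd[beta].sum() / psd.sum():.2f}")
```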
Contributors: VanGilder, Paul (Author) / Buneo, Christopher A (Thesis advisor) / Helms-Tillery, Stephen (Committee member) / Santello, Marco (Committee member) / Muthuswamy, Jit (Committee member) / Foldes, Stephen (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
The interaction between humans and robots has become an important area of research as the diversity of robotic applications has grown. The cooperation of a human and a robot to achieve a goal is an important area within the field of physical human-robot interaction (pHRI). The field is expanding toward robotic applications in unstructured environments. When humans cooperate with each other, there are often leader and follower roles, and these roles may change during the task. This creates a need for a robotic system to be able to exchange roles with the human during a cooperative task. The unstructured nature of the new applications also requires robotic systems to interact in six degrees of freedom (DOF). Moreover, in these unstructured environments, a robotic system will have incomplete information, meaning that it will sometimes perform an incorrect action; control methods need to be able to correct for this. At the same time, the most compelling applications for robotics are those in which the robot has capabilities the human does not, which creates a further need for robotic systems to correct human actions when they detect an error. Activity in the brain precedes human action, and this activity can be used to classify the type of interaction desired by the human. In this dissertation, the cooperation between humans and robots is improved in two main areas. First, the ability of electroencephalography (EEG) to determine the desired cooperation role of a human is demonstrated, with a correct classification rate of 65%. Second, a robotic controller is developed that allows the human and robot to cooperate in six DOF with asymmetric role exchange. This system allowed human-robot pairs to perform a cooperative task with a 100% success rate. High, medium, and low levels of robotic automation were shown to affect performance, with the human making the greatest number of errors when the robotic system had a medium level of automation.
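The role-classification step can be sketched as band-power features feeding a linear classifier; the feature construction, classifier choice, and synthetic data below are assumptions for illustration, not the dissertation's actual EEG pipeline.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_epochs, n_features = 200, 16     # e.g., band power x channels (hypothetical)
X = rng.normal(0.0, 1.0, (n_epochs, n_features))
y = rng.integers(0, 2, n_epochs)   # 0 = desire to follow, 1 = desire to lead
X[y == 1, :4] += 0.4               # weak class-dependent power shift

acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")  # modest, like the reported 65%
```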
Contributors: Whitsell, Bryan Douglas (Author) / Artemiadis, Panagiotis (Thesis advisor) / Santello, Marco (Committee member) / Berman, Spring (Committee member) / Lee, Hyunglae (Committee member) / Polygerinos, Panagiotis (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Object manipulation is a common sensorimotor task that humans perform to interact with the physical world. The first aim of this dissertation was to characterize and identify the roles of feedback and feedforward mechanisms for force control in object manipulation by introducing a new feature, based on force trajectories, that quantifies the interaction between feedback and feedforward control. This feature was applied in two grasp contexts: grasping the object at either (1) predetermined or (2) self-selected grasp locations (“constrained” and “unconstrained”, respectively), where unconstrained grasping is thought to involve feedback-driven force corrections to a greater extent than constrained grasping. This proposition was confirmed by the force-feature analysis. The second aim of this dissertation was to quantify whether force control mechanisms differ between the dominant and non-dominant hands. The force-feature analysis demonstrated that manipulation by the dominant hand relies on feedforward control more than manipulation by the non-dominant hand. The third aim was to quantify the coordination mechanisms underlying physical interaction between dyads in object manipulation. The results revealed that only individuals with worse solo performance benefit from interpersonal coordination through physical coupling, whereas better-performing individuals do not. This work also revealed naturally emerging leader-follower roles, whereby the leader in dyadic manipulation exhibits significantly greater force changes than the follower. Furthermore, brain activity measured through electroencephalography (EEG) could discriminate leader and follower roles, as indicated by power modulation in the alpha frequency band over centro-parietal areas. Lastly, this dissertation suggests that the relation between force and motion (arm impedance) could be an important means of communicating intended movement direction between biological agents.
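As a hedged illustration of a force-trajectory feature that separates feedforward from feedback control (not the dissertation's actual feature), one can count secondary peaks in the grip-force rate as a proxy for online corrections: a single smooth force-rate pulse suggests feedforward control, while several late peaks suggest feedback-driven adjustments.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 500)
force = 5.0 / (1.0 + np.exp(-12.0 * (t - 0.4)))       # smooth grip ramp (N)
force += 0.3 * np.sin(2 * np.pi * 6 * t) * (t > 0.5)  # late corrective ripple

rate = np.gradient(force, t)
mid = rate[1:-1]
peaks = np.flatnonzero((mid > rate[:-2]) & (mid > rate[2:]) & (mid > 1.0)) + 1
print(f"force-rate peaks above 1 N/s: {peaks.size} "
      f"(1 ~ feedforward; >1 ~ feedback corrections)")
```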
Contributors: Mojtahedi, Keivan (Author) / Santello, Marco (Thesis advisor) / Greger, Bradley (Committee member) / Artemiadis, Panagiotis (Committee member) / Helms Tillery, Stephen (Committee member) / Buneo, Christopher (Committee member) / Arizona State University (Publisher)
Created: 2017