Matching Items (11)
Description

Incidental learning of sequential information occurs in the visual, auditory, and tactile domains. It occurs throughout our lifetime and even in nonhuman species. It is likely to be one of the most important foundations for the development of normal learning. To date, there is no agreement as to how incidental learning occurs. The goal of the present set of experiments is to determine whether visual sequential information is learned in terms of abstract rules or stimulus-specific details. Two experiments test the extent to which interaction with the stimuli can influence the information that is encoded by the learner. The results of both experiments support the claim that stimulus- and domain-specific details directly shape what is learned, through a process of tuning the neuromuscular systems involved in the interaction between the learner and the materials.
Contributors: Marsh, Elizabeth R (Author) / Glenberg, Arthur M. (Thesis advisor) / Amazeen, Eric (Committee member) / Brewer, Gene (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Humans' ability to perform fine object and tool manipulation is a defining feature of their sensorimotor repertoire. How the central nervous system builds and maintains internal representations of such skilled hand-object interactions has attracted significant attention over the past three decades. Nevertheless, two major gaps exist: a) how digit positions and forces are coordinated during natural manipulation tasks, and b) what mechanisms underlie the formation and retention of internal representations of dexterous manipulation. This dissertation addresses these two questions through five experiments that are based on novel grip devices and experimental protocols. It was found that a high-level representation of manipulation tasks can be learned in an effector-independent fashion. Specifically, when challenged by trial-to-trial variability in finger positions or when using digits that were not previously engaged in learning the task, subjects could adjust finger forces to compensate for this variability, thus leading to consistent task performance. The results from a follow-up experiment conducted in a virtual reality environment indicate that haptic feedback is sufficient to implement the above coordination between digit positions and forces. However, it was also found that the generalizability of a learned manipulation is limited across tasks. Specifically, when subjects learned to manipulate the same object across different contexts that required different motor output, interference was found at the time of switching contexts. Data from additional studies provide evidence for parallel learning processes, which are characterized by different rates of decay and learning. These experiments have provided important insight into the neural mechanisms underlying the learning and control of object manipulation. The present findings have potential biomedical applications including brain-machine interfaces, rehabilitation of hand function, and prosthetics.
Contributors: Fu, Qiushi (Author) / Santello, Marco (Thesis advisor) / Helms Tillery, Stephen (Committee member) / Buneo, Christopher (Committee member) / Santos, Veronica (Committee member) / Artemiadis, Panagiotis (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Reaching movements are subject to noise in both the planning and execution phases of movement production. Although the effects of these noise sources on estimating and/or controlling endpoint position have been examined in many studies, the independent effects of limb configuration on endpoint variability have been largely ignored. The present study investigated the effects of arm configuration on the interaction between planning noise and execution noise. Subjects performed reaching movements to three targets located in a frontal plane. At the starting position, subjects matched one of two desired arm configuration 'templates', namely 'adducted' and 'abducted'. These arm configurations were obtained by rotations about the shoulder-hand axis, thereby maintaining endpoint position. Visual feedback of the hand was varied from trial to trial, thereby increasing uncertainty in movement planning and execution. It was hypothesized that 1) the pattern of endpoint variability would depend on arm configuration and 2) these differences would be most apparent in conditions without visual feedback. It was found that there were differences in endpoint variability between arm configurations in both visual conditions, but these differences were much larger when visual feedback was withheld. The overall results suggest that patterns of endpoint variability are highly dependent on arm configuration, particularly in the absence of visual feedback. This suggests that in the presence of vision, movement planning in 3D space is performed using coordinates that are largely independent of arm configuration (i.e., extrinsic coordinates). In contrast, in the absence of vision, movement planning in 3D space reflects a substantial contribution of intrinsic coordinates.
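A minimal sketch of how such patterns of endpoint variability can be quantified is shown below: reach endpoints are summarized by their covariance, and the eigendecomposition of that covariance gives the size and orientation of the variability ellipse. The endpoint data and numbers are simulated for illustration only and are not taken from the study.

```python
# Illustrative sketch: quantifying endpoint variability from reach endpoints.
# The endpoints here are simulated; the study's actual analysis pipeline is
# not specified in the abstract.
import numpy as np

rng = np.random.default_rng(0)

# Simulated 2D endpoints (frontal plane) for one target/arm-configuration condition
endpoints = rng.multivariate_normal(mean=[0.0, 0.0],
                                    cov=[[4.0, 1.5], [1.5, 1.0]],  # mm^2
                                    size=50)

# Covariance of the endpoints captures the "pattern" of variability
cov = np.cov(endpoints, rowvar=False)

# Eigendecomposition gives the orientation and extent of the variability ellipse
eigvals, eigvecs = np.linalg.eigh(cov)
major_sd = np.sqrt(eigvals[-1])   # SD along the major axis (mm)
minor_sd = np.sqrt(eigvals[0])    # SD along the minor axis (mm)
orientation = np.degrees(np.arctan2(eigvecs[1, -1], eigvecs[0, -1]))

print(f"major SD = {major_sd:.2f} mm, minor SD = {minor_sd:.2f} mm, "
      f"orientation = {orientation:.1f} deg")
```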
Contributors: Lakshmi Narayanan, Kishor (Author) / Buneo, Christopher (Thesis advisor) / Santello, Marco (Committee member) / Helms Tillery, Stephen (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Dexterous manipulation is a representative task that involves sensorimotor integration underlying fine control of movements. Over the past 30 years, research has provided significant insight, including the control mechanisms of force coordination during manipulation tasks. Successful dexterous manipulation is thought to rely on the ability to integrate the sense of digit position with the motor commands responsible for generating digit forces and placement. However, the mechanisms underlying digit position-force coordination are not well understood. This dissertation addresses this question through three experiments based on psychophysics and object lifting tasks. It was found in the psychophysics tasks that sensed relative digit position was accurately reproduced when sensorimotor transformations occurred with larger vertical fingertip separations, within the same hand, and at the same hand posture. The results from a follow-up experiment, conducted in the same digit position-matching task while generating forces in different directions, reveal that sensed relative digit position was biased toward the direction of force production. Specifically, subjects reproduced the thumb center of pressure (CoP) higher than the index finger CoP when vertical digit forces were directed upward and downward, respectively, and vice versa. It was also found in the lifting tasks that the ability to discriminate relative digit position prior to lifting an object, and to modulate digit forces as a function of digit position to minimize object roll, is robust regardless of whether motor commands for positioning the digits on the object are involved. These results indicate that the erroneous sensorimotor transformations of relative digit position reported here must be compensated for during dexterous manipulation by other mechanisms, e.g., visual feedback of fingertip position. Furthermore, predicted sensory consequences derived from the efference copy of voluntary motor commands to generate vertical digit forces may override haptic sensory feedback for the estimation of relative digit position. Lastly, the sensorimotor transformation from haptic feedback to the modulation of digit forces to position appears to be facilitated by motor commands for active digit placement in manipulation.
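To make the position-force relationship concrete, here is a minimal 2D statics sketch of the compensation described above: given a vertical offset between the thumb and index finger CoPs, the digit load forces must differ by a specific amount for the net torque on the object (and hence object roll) to be zero. The grip geometry and force values are hypothetical and are not taken from the dissertation.

```python
# Minimal 2D statics sketch: how much must thumb and index load forces differ
# to cancel the torque created by a vertical offset between their CoPs?
# All numbers below are hypothetical.
w = 0.06           # horizontal grip width between the two contact surfaces (m)
grip_force = 8.0   # normal force applied by each digit (N), equal and opposite
dy = 0.01          # index CoP placed 1 cm above the thumb CoP (m)
weight = 4.0       # object weight (N); the load forces must sum to this to lift

# Net torque about the object's center (z-component, 2D statics):
#   tau = (w / 2) * (L_index - L_thumb) + grip_force * dy
# Setting tau = 0 gives the required load-force difference:
load_diff = 2.0 * grip_force * dy / w   # L_thumb - L_index (N)

L_thumb = (weight + load_diff) / 2.0
L_index = (weight - load_diff) / 2.0
print(f"thumb load = {L_thumb:.2f} N, index load = {L_index:.2f} N")
# The digit placed lower (here the thumb) must take on more of the load.
```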
Contributors: Shibata, Daisuke (Author) / Santello, Marco (Thesis advisor) / Dounskaia, Natalia (Committee member) / Kleim, Jeffrey (Committee member) / Helms Tillery, Stephen (Committee member) / McBeath, Michael (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

Parkinson's disease (PD) is a neurodegenerative disorder that produces a characteristic set of neuromotor deficits that sometimes includes reduced amplitude and velocity of movement. Several studies have shown that people with PD improved their motor performance when presented with external cues. Other work has demonstrated that high-velocity and large-amplitude exercises can increase the amplitude and velocity of movement in simple carryover tasks in the upper and lower extremities. Although the cause of these effects is not known, improvements due to cueing suggest that part of the neuromotor deficit in PD lies in the integration of sensory feedback to produce motor commands. Previous studies have documented some somatosensory deficits, but only limited information is available regarding the nature and magnitude of sensorimotor deficits in the shoulder of people with PD. The goals of this research were to characterize the sensorimotor impairment in the shoulder joint of people with PD and to investigate the use of visual feedback and large-amplitude/high-velocity exercises to target PD-related motor deficits. Two systems were designed and developed to use visual feedback to assess the ability of participants to accurately adjust limb placement or limb movement velocity and to encourage improvements in performance of these tasks. Each system was tested on participants with PD, age-matched control subjects, and young control subjects to characterize and compare limb placement and velocity control capabilities. Results demonstrated that participants with PD were less accurate at placing their limbs than age-matched or young control subjects, but that their performance improved over the course of the test session such that, by the end, the participants with PD performed as well as controls. For the limb velocity feedback task, participants with PD and age-matched control subjects were initially less accurate than young control subjects, but by the end of the session they were as accurate as the young control subjects. This study demonstrates that people with PD were able to improve their movement patterns based on visual feedback of performance and suggests that this feedback paradigm may be useful in exercise programs for people with PD.
Contributors: Smith, Catherine (Author) / Abbas, James J (Thesis advisor) / Ingalls, Todd (Thesis advisor) / Krishnamurthi, Narayanan (Committee member) / Buneo, Christopher (Committee member) / Rikakis, Thanassis (Committee member) / Arizona State University (Publisher)
Created: 2015
Description

Anticipatory planning of digit positions and forces is critical for successful dexterous object manipulation. Anticipatory (feedforward) planning bypasses the inherent delays in reflex responses and sensorimotor integration associated with reactive (feedback) control. It has been suggested that feedforward and feedback strategies can be distinguished based on the profile of grip and load force rates during the period between initial contact with the object and object lift. However, this has not been validated in tasks that do not constrain digit placement. The purposes of this thesis were (1) to validate the hypothesis that force rate profiles are indicative of the control strategy used for object manipulation and (2) to test this hypothesis by comparing manipulation tasks performed with and without digit placement constraints. The first objective comprised two studies. In the first study, an additional light or heavy mass was added to the base of the object. In the second study, a mass was added that altered the object's center of mass (CM) location. In each experiment, digit force rates were calculated between the times of initial digit contact and object lift. Digit force rates were fit to a Gaussian bell curve, and the goodness of fit was compared across predictable and unpredictable mass and CM conditions. For both experiments, a predictable object mass and CM elicited bell-shaped force rate profiles, indicative of feedforward control. For the second objective, performance was compared between subjects who performed the grasp task with either constrained or unconstrained digit contact locations. When digit location was unconstrained and CM was predictable, force rates were well fit by a bell-shaped curve. However, the goodness of fit of the force rate profiles to the bell-shaped curve was weaker for the constrained than for the unconstrained digit placement condition. These findings seem to indicate that the brain can generate an appropriate feedforward control strategy even when digit placement is unconstrained and an infinite combination of digit placements and forces exists to lift the object successfully. Future work is needed to investigate the roles that digit positioning and tactile feedback play in the anticipatory control of object manipulation.
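As an illustration of the analysis described above, the sketch below fits a digit force-rate trace to a Gaussian bell curve and scores the fit with R². The force-rate trace is synthetic, and the sampling rate and curve parameters are assumptions rather than values from the thesis.

```python
# Sketch of the analysis type described above: fit a digit force-rate profile
# to a Gaussian bell curve and score the goodness of fit with R^2. The trace
# and its parameters are synthetic assumptions.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, amplitude, peak_time, width):
    return amplitude * np.exp(-((t - peak_time) ** 2) / (2.0 * width ** 2))

fs = 1000.0                            # assumed sampling rate (Hz)
t = np.arange(0.0, 0.5, 1.0 / fs)      # contact-to-lift window (s)

# Synthetic grip-force-rate trace: bell-shaped with additive noise
rng = np.random.default_rng(1)
force_rate = gaussian(t, 40.0, 0.25, 0.06) + rng.normal(0.0, 1.0, t.size)

popt, _ = curve_fit(gaussian, t, force_rate, p0=[force_rate.max(), 0.25, 0.05])
fit = gaussian(t, *popt)

ss_res = np.sum((force_rate - fit) ** 2)
ss_tot = np.sum((force_rate - force_rate.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(f"R^2 of bell-curve fit: {r_squared:.3f}")
```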
Contributors: Cooperhouse, Michael A (Author) / Santello, Marco (Thesis advisor) / Helms Tillery, Stephen (Committee member) / Buneo, Christopher (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

The ability to plan, execute, and control goal-oriented reaching and grasping movements is among the most essential functions of the brain. Yet these movements are inherently variable, a result of the noise pervading the neural signals underlying sensorimotor processing. The specific influences and interactions of these noise processes remain unclear. Thus, several studies have been performed to elucidate the role and influence of sensorimotor noise on movement variability. The first study focuses on sensory integration and movement planning across the reaching workspace. An experiment was designed to examine the relative contributions of vision and proprioception to movement planning by measuring the rotation of the initial movement direction induced by a perturbation of the visual feedback prior to movement onset. The results suggest that the contribution of vision was relatively consistent across the evaluated workspace depths; however, the influence of vision differed between the vertical and lateral axes, indicating that additional factors beyond vision and proprioception influence the planning of 3-dimensional movements. Whereas the first study investigated the role of noise in sensorimotor integration, the second and third studies investigate the relative influence of sensorimotor noise on reaching performance. Specifically, they evaluate how the characteristics of neural processing that underlie movement planning and execution manifest in movement variability during natural reaching. Subjects performed reaching movements with and without visual feedback throughout the movement, and the patterns of endpoint variability were compared across movement directions. The results of these studies suggest a primary role of visual feedback noise in shaping patterns of variability and in determining the relative influence of planning- and execution-related noise sources. The final work takes a computational approach to characterizing how sensorimotor processes interact to shape movement variability. A model of multi-modal feedback control was developed to simulate the combined effect of planning and execution noise on reaching variability. The model predictions suggest that anisotropic properties of feedback noise significantly affect the relative influence of planning and execution noise on patterns of reaching variability.
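The sketch below is a deliberately simplified toy version of the planning-versus-execution-noise idea: planning noise perturbs the movement goal once per trial, execution noise perturbs the outcome, and the two variances add when the sources are independent. It is not the multi-modal feedback control model developed in the dissertation, and all parameter values are assumptions.

```python
# Toy simulation of how planning noise and execution noise combine to shape
# endpoint variability. Illustrative only; parameters are assumptions.
import numpy as np

rng = np.random.default_rng(2)
n_trials = 1000
target = np.array([0.15, 0.0])    # planned endpoint for a 15 cm reach (m)

sigma_plan = 0.005                # SD of planning noise (m), applied once per trial
sigma_exec = 0.003                # SD of execution noise (m), accrued during movement

# Planning noise perturbs the movement goal; execution noise perturbs the outcome
planned = target + rng.normal(0.0, sigma_plan, size=(n_trials, 2))
endpoints = planned + rng.normal(0.0, sigma_exec, size=(n_trials, 2))

# With independent sources, endpoint variance is the sum of the two variances
print("observed endpoint SD:", endpoints.std(axis=0))
print("predicted endpoint SD:", np.sqrt(sigma_plan**2 + sigma_exec**2))
```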
Contributors: Apker, Gregory Allen (Author) / Buneo, Christopher A (Thesis advisor) / Helms Tillery, Stephen (Committee member) / Santello, Marco (Committee member) / Santos, Veronica (Committee member) / Si, Jennie (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

Understanding where our bodies are in space is imperative for motor control, particularly for actions such as goal-directed reaching. Multisensory integration is crucial for reducing uncertainty in arm position estimates. This dissertation examines time- and frequency-domain correlates of visual-proprioceptive integration during an arm-position maintenance task. Neural recordings were obtained from two different cortical areas as non-human primates performed a center-out reaching task in a virtual reality environment. Following a reach, animals maintained the end-point position of their arm under unimodal (proprioception only) and bimodal (proprioception and vision) conditions. In both areas, time-domain and multi-taper spectral analysis methods were used to quantify changes in the spiking, local field potential (LFP), and spike-field coherence during arm-position maintenance.

In both areas, individual neurons were classified based on the spectrum of their spiking patterns. A large proportion of cells in the superior parietal lobule (SPL) exhibited sensory condition-specific oscillatory spiking in the beta (13-30 Hz) frequency band. Cells in the inferior parietal lobule (IPL) typically showed a more diverse mix of oscillatory and refractory spiking patterns during the task in response to changing sensory conditions. Contrary to the assumptions made in many modeling studies, none of the cells in either the SPL or IPL exhibited Poisson spiking statistics.

Evoked LFPs in both areas exhibited greater effects of target location than of visual condition, though the evoked responses in the preferred reach direction were generally suppressed in the bimodal condition relative to the unimodal condition. Significant effects of target location on evoked responses were also observed during the movement period of the task.

In the frequency domain, LFP power in both cortical areas was enhanced in the beta band during the position estimation epoch of the task, indicating that LFP beta oscillations may be important for maintaining the ongoing state. This was particularly evident at the population level, with a clear increase in alpha and beta power. Differences in spectral power between conditions also became apparent at the population level, with power during bimodal trials being suppressed relative to unimodal trials. The spike-field coherence results were mixed in both the SPL and IPL, with no clear correlation between the incidence of beta oscillations and significant beta coherence.
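The sketch below illustrates the kind of band-limited power measure discussed above: estimate an LFP power spectrum and integrate it over the beta band (13-30 Hz). Welch's method is used here as a simpler stand-in for the multi-taper estimator named in the abstract, the LFP trace is synthetic, and the sampling rate and epoch length are assumptions.

```python
# Sketch of extracting beta-band (13-30 Hz) LFP power from a single epoch.
# Welch's method stands in for the multi-taper estimator; the signal and
# parameters are synthetic assumptions.
import numpy as np
from scipy.signal import welch

fs = 1000.0                            # assumed sampling rate (Hz)
t = np.arange(0.0, 2.0, 1.0 / fs)      # 2 s of the position-hold epoch

# Synthetic LFP: slow random-walk background plus a 20 Hz (beta) oscillation
rng = np.random.default_rng(3)
lfp = np.cumsum(rng.normal(0.0, 1.0, t.size)) * 0.01 \
      + 0.5 * np.sin(2 * np.pi * 20.0 * t)

freqs, psd = welch(lfp, fs=fs, nperseg=512)

beta = (freqs >= 13.0) & (freqs <= 30.0)
beta_power = np.sum(psd[beta]) * (freqs[1] - freqs[0])  # integrate PSD over the band
print(f"beta-band power: {beta_power:.4f} (a.u.)")
```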
Contributors: VanGilder, Paul (Author) / Buneo, Christopher A (Thesis advisor) / Helms-Tillery, Stephen (Committee member) / Santello, Marco (Committee member) / Muthuswamy, Jit (Committee member) / Foldes, Stephen (Committee member) / Arizona State University (Publisher)
Created: 2017
Description

Unmanned aerial vehicles have received increased attention in the last decade due to their versatility, as well as the availability of inexpensive sensors (e.g., GPS, IMU) for their navigation and control. Multirotor vehicles, specifically quadrotors, have formed a fast-growing field in robotics, with the range of applications spanning from surveillance and reconnaissance to agriculture and large-area mapping. Although in most applications single quadrotors are used, there is increasing interest in architectures controlling multiple quadrotors executing a collaborative task. This thesis introduces a new concept of control involving more than one quadrotor, according to which two quadrotors can be physically coupled in mid-flight. This concept equips the quadrotors with new capabilities, e.g., increased payload or the pursuit and capture of other quadrotors. A comprehensive simulation of coupled quadrotors was built to evaluate the approach. The dynamics and modeling of the coupled system are presented, together with a discussion of the coupling mechanism, impact modeling, and additional considerations that have been investigated. Simulation results are presented for cases of static coupling as well as enemy quadrotor pursuit and capture, together with an analysis of the control methodology and gain tuning. Practical implementations are introduced, as the results show the feasibility of this design.
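A back-of-the-envelope sketch of the increased-payload capability is given below: once two quadrotors are rigidly coupled, the pair can, to first order, be treated as a single vehicle whose mass and available thrust are the sums of the individual vehicles'. The masses, thrust limits, and control margin used here are hypothetical and are not taken from the thesis.

```python
# First-order payload estimate for one quadrotor versus a rigidly coupled pair.
# All numbers below are hypothetical.
G = 9.81                   # gravitational acceleration (m/s^2)

mass_single = 1.2          # mass of one quadrotor (kg)
max_thrust_single = 24.0   # maximum total thrust of one quadrotor (N)

def max_payload(n_vehicles: int, thrust_margin: float = 0.8) -> float:
    """Payload (kg) liftable while keeping a thrust margin for control."""
    available_thrust = n_vehicles * max_thrust_single * thrust_margin
    return available_thrust / G - n_vehicles * mass_single

print(f"one quadrotor: {max_payload(1):.2f} kg payload")
print(f"coupled pair:  {max_payload(2):.2f} kg payload")
```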
Contributors: Larsson, Daniel (Author) / Artemiadis, Panagiotis (Thesis advisor) / Marvi, Hamidreza (Committee member) / Berman, Spring (Committee member) / Arizona State University (Publisher)
Created: 2016
Description

In baseball, the difference between a win and a loss can come down to a single call, such as when an umpire judges a force out at first base, typically by comparing the competing auditory and visual inputs of the ball-mitt sound and the foot-on-base sight. Yet, because sound in air travels at only about 1,100 feet per second, fans observing from several hundred feet away will receive auditory cues that are delayed by a significant portion of a second, and thus conceivably could systematically differ in their judgments compared to the nearby umpire. The current research examines two questions. 1. How reliably, and with what biases, do observers judge the order of visual versus auditory events? 2. Do observers making such order judgments from far away systematically compensate for delays due to the slow speed of sound? It is hypothesized that if any temporal bias occurs, it is in the direction consistent with observers not accounting for the sound delay, such that increasing viewing distance will increase the bias to assume the sound occurred later. It was found that nearby observers are relatively accurate at judging whether a sound occurred before or after a simple visual event (a flash), but exhibit a systematic bias to favor visual stimuli occurring first (by about 30 msec). In contrast, distant observers did not compensate for the sound delay, such that they systematically favored the visual cue occurring earlier as a function of viewing distance. When observers judged simple visual stimuli in motion relative to the same sound burst, the distance effect occurred as a function of the visual clarity of the ball arriving. In the baseball setting, using a large-screen projection of a baserunner, a diminished distance effect occurred due to the additional visual cues. In summary, observers generally do not account for the delay of sound due to distance.
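The arithmetic behind the sound-delay argument is simple enough to spell out: at roughly 1,100 feet per second, an observer's distance from the play translates directly into an auditory lag, as in the short sketch below. The distances are illustrative, not values from the study.

```python
# Auditory delay as a function of viewing distance, using the approximate
# speed of sound in air cited above. Distances are illustrative.
SPEED_OF_SOUND_FT_PER_S = 1100.0

def sound_delay_ms(distance_ft: float) -> float:
    """Auditory delay (ms) for an observer at the given distance."""
    return distance_ft / SPEED_OF_SOUND_FT_PER_S * 1000.0

for distance in (10, 90, 300, 450):   # umpire vs. increasingly distant fans
    print(f"{distance:>4} ft -> {sound_delay_ms(distance):6.1f} ms delay")
```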
Contributors: Krynen, R. Chandler (Author) / McBeath, Michael (Thesis advisor) / Homa, Donald (Committee member) / Gray, Robert (Committee member) / Arizona State University (Publisher)
Created: 2017