Matching Items (5)

Description
Human fingertips contain thousands of specialized mechanoreceptors that enable effortless physical interactions with the environment. Haptic perception capabilities enable grasp and manipulation in the absence of visual feedback, as when reaching into one's pocket or wrapping a belt around oneself. Unfortunately, state-of-the-art artificial tactile sensors and processing algorithms are no match for their biological counterparts. Tactile sensors must not only meet stringent practical specifications for everyday use; their signals must also be processed and interpreted within hundreds of milliseconds. Control of artificial manipulators, ranging from prosthetic hands to bomb-defusal robots, currently requires constant reliance on visual feedback, which is not entirely practical. To address this, we conducted three studies aimed at advancing artificial haptic intelligence. First, we developed a novel, robust, microfluidic tactile sensor skin capable of measuring normal forces on flat or curved surfaces, such as a fingertip. The sensor consists of microchannels in an elastomer filled with a liquid metal alloy. The fluid serves both as electrical interconnects and as tunable capacitive sensing units, and enables functionality despite substantial deformation. The second study investigated the use of a commercially available, multimodal tactile sensor (BioTac sensor, SynTouch) to characterize edge orientation with respect to a body-fixed reference frame, such as that of a fingertip. Trained on data from a robot testbed, a support vector regression model was developed to relate haptic exploration actions to perception of edge orientation. The model performed comparably to humans in estimating edge orientation. Finally, the robot testbed was used to perceive small, finger-sized geometric features. The efficiency and accuracy of different haptic exploratory procedures and supervised learning models were assessed for estimating feature properties such as type (bump, pit), order of curvature (flat, conical, spherical), and size. This study highlights the importance of tactile sensing in situations where other modalities fail, such as when the finger itself blocks the line of sight. Insights from this work could be used to advance tactile sensor technology and haptic intelligence for artificial manipulators that improve quality of life, such as prosthetic hands and wheelchair-mounted robotic hands.
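As a rough illustration of the edge-orientation step, the sketch below fits a support vector regression model that maps multimodal tactile features to an edge angle. The feature layout, hyperparameters, and data are placeholders for illustration, not the study's actual pipeline.

```python
# Hedged sketch: support vector regression from simulated tactile features to
# edge orientation. Feature count and hyperparameters are assumptions.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Simulated training set: each row holds tactile features from one exploratory
# contact (e.g., fluid pressure, vibration power, electrode impedances).
n_trials, n_features = 500, 19               # 19 impedance-like channels assumed
X = rng.normal(size=(n_trials, n_features))
y = rng.uniform(-90.0, 90.0, size=n_trials)  # edge orientation in degrees

# RBF-kernel SVR with feature standardization; in practice the hyperparameters
# would be chosen by cross-validation rather than fixed here.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=1.0))
model.fit(X, y)

angle_estimate = model.predict(X[:1])        # orientation estimate for one contact
print(f"estimated edge orientation: {angle_estimate[0]:.1f} deg")
```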
Contributors: Ponce Wong, Ruben Dario (Author) / Santos, Veronica J (Thesis advisor) / Artemiadis, Panagiotis K (Committee member) / Helms Tillery, Stephen I (Committee member) / Posner, Jonathan D (Committee member) / Runger, George C (Committee member) / Arizona State University (Publisher)
Created: 2013

Description
In the past decade, research on the motor control side of neuroprosthetics has steadily gained momentum. However, modern research in prosthetic development supplements a focus on motor control with a concentration on sensory feedback. Simulating sensation is a central issue because, without sensory capabilities, even the most advanced motor control system fails to reach its full potential. This research is an effort toward the development of sensory feedback specifically for neuroprosthetic hands. The present aim of this work is to understand the processing and representation of cutaneous sensation by evaluating performance and neural activity in somatosensory cortex (SI) during a grasp task. A non-human primate (Macaca mulatta) was trained to reach out and grasp textured, instrumented objects with a precision grip. Two different textures were used for the objects, 100% cotton cloth and 60-grade sandpaper, and the target object was presented at two different orientations. Of the 167 cells isolated for this experiment, only 42 were recorded while the subject executed several blocks of successful trials for both textures; these latter cells were used in the statistical analysis. Of these, 37 units (88%) exhibited statistically significant task-related activity, 22 units (52%) exhibited statistically significant tuning to texture, and 16 units (38%) exhibited statistically significant tuning to posture. Ten of the cells (24%) exhibited statistically significant tuning to both texture and posture. These data suggest that single units in somatosensory cortex can encode multiple phenomena, such as texture and posture. However, if this information is to be used to provide sensory feedback for a prosthesis, scientists must learn to further parse cortical activity to discover how to induce specific modalities of sensation. Future experiments should therefore be developed that probe more variables and that more systematically and comprehensively scan somatosensory cortex. This will allow researchers to seek out the existence or non-existence of cortical pockets reserved for certain modalities of sensation, which will be valuable in learning how to later provide appropriate sensory feedback for a prosthesis through cortical stimulation.
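For intuition, one plausible way per-unit tuning could be tested is a two-way ANOVA on trial firing rates. The sketch below illustrates this on simulated data; the factor levels, effect sizes, and choice of ANOVA are assumptions, and the dissertation's actual statistical procedure may differ.

```python
# Hedged sketch: testing one unit's tuning to texture and posture with a
# two-way ANOVA on trial firing rates. Data are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)

# Simulated trials for one unit: firing rate depends weakly on texture.
trials = pd.DataFrame({
    "texture": rng.choice(["cotton", "sandpaper"], size=120),
    "posture": rng.choice(["orient_A", "orient_B"], size=120),
})
trials["rate"] = (
    20.0
    + 5.0 * (trials["texture"] == "sandpaper")   # texture effect (spikes/s)
    + rng.normal(scale=4.0, size=len(trials))    # trial-to-trial noise
)

# Two-way ANOVA: main effects of texture and posture on firing rate.
fit = ols("rate ~ C(texture) + C(posture)", data=trials).fit()
table = sm.stats.anova_lm(fit, typ=2)
print(table)  # a unit is "tuned" to a factor if its p-value falls below alpha
```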
Contributors: Naufel, Stephanie (Author) / Helms Tillery, Stephen I (Thesis advisor) / Santos, Veronica J (Thesis advisor) / Buneo, Christopher A (Committee member) / Robert, Jason S (Committee member) / Arizona State University (Publisher)
Created: 2011

Description
Brain-machine interfaces (BMIs) were first imagined as a technology that would allow subjects to communicate directly with prosthetics and external devices (e.g., control of a computer cursor or robotic arm movement). Operation of these devices was not automatic, and subjects needed calibration and training in order to master this control. In short, learning became a key component of controlling these systems. As a result, BMIs have become ideal tools for probing and exploring brain activity, since they allow the isolation of neural inputs and the systematic alteration of the relationships between neural signals and output. I used BMIs to explore the process of brain adaptability in a motor-like task. To this end, I trained non-human primates to control a 3-D cursor and adapt to two different perturbations: a visuomotor rotation, uniform across the neural ensemble, and a decorrelation task, which non-uniformly altered the relationship between the activity of particular neurons in an ensemble and movement output. I measured individual- and population-level changes in the neural ensemble as subjects honed their skills over the span of several days. The two tasks elicited similar adaptation processes. At the level of individual neurons, tuning changes appeared across the entire ensemble after task adaptation: most neurons displayed transient changes in their preferred directions, and most neuron pairs showed changes in their cross-correlations during the learning process. At the population level, the underlying neural manifolds that constrain these signals also changed dynamically during adaptation. I found that the neural circuits appear to apply an exploratory strategy when adapting to new tasks: information and trajectories in the neural space increase after the perturbations are first introduced, before the subject settles into workable solutions. These results provide new insights into both the underlying population-level processes in motor learning and the changes in neural coding that are necessary for subjects to learn to control neuroprosthetics. Understanding these mechanisms can help us create better control algorithms and design training paradigms that take advantage of these processes.
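For intuition, the sketch below shows how a visuomotor rotation perturbation is typically imposed in a BMI task: the decoded cursor velocity is rotated by a fixed angle before it drives the display. It is reduced to 2-D for clarity and the angle is illustrative; the study itself used a 3-D cursor, and the decoder details here are assumptions.

```python
# Illustrative sketch of a visuomotor rotation perturbation: decoded cursor
# velocity is rotated by a fixed angle, uniformly for the whole ensemble,
# so the subject must adapt. Angle and decoder output are placeholders.
import numpy as np

def rotate(velocity_xy: np.ndarray, angle_deg: float) -> np.ndarray:
    """Rotate a 2-D decoded velocity counterclockwise by angle_deg."""
    t = np.radians(angle_deg)
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    return R @ velocity_xy

decoded = np.array([1.0, 0.0])      # decoder output: intended rightward movement
perturbed = rotate(decoded, 30.0)   # velocity the subject actually sees
print(perturbed)                    # -> [0.866, 0.5]
```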
Contributors: Armenta Salas, Michelle (Author) / Helms Tillery, Stephen I (Thesis advisor) / Si, Jennie (Committee member) / Buneo, Christopher (Committee member) / Santello, Marco (Committee member) / Kleim, Jeffrey (Committee member) / Arizona State University (Publisher)
Created: 2015

Description
Robotic systems are outmatched by the abilities of the human hand to perceive and manipulate the world. Human hands are able to physically interact with the world to perceive, learn, and act to accomplish tasks. The limited ability of robotic systems to interact with and manipulate the world diminishes their usefulness. In order to advance robot end effectors, specifically artificial hands, rich multimodal tactile sensing is needed. In this work, a multi-articulating, anthropomorphic robot testbed was developed for investigating tactile sensory stimuli during finger-object interactions. The artificial finger is controlled by a tendon-driven remote actuation system that allows for modular control of any tendon-driven end effector and provides both speed and strength. The artificial proprioception system enables direct measurement of joint angles and tendon tensions, while temperature, vibration, and skin deformation are provided by a multimodal tactile sensor. Next, attention was focused on real-time artificial perception for decision-making. A robotic system needs to perceive its environment in order to make decisions. Specific actions, such as "exploratory procedures," can be employed to classify and characterize object features. Prior work on offline perception was extended to develop an anytime predictive model that returns the probability of having touched a specific feature of an object, based on minimally processed sensor data. Developing models for anytime classification of features facilitates real-time action-perception loops. Finally, by combining real-time action-perception with reinforcement learning, a policy was learned to complete a functional contour-following task: closing a deformable ziplock bag. The approach relies only on proprioceptive and localized tactile data. A Contextual Multi-Armed Bandit (C-MAB) reinforcement learning algorithm was implemented to maximize cumulative rewards within a finite time period by balancing exploration and exploitation of the action space. Performance of the C-MAB learner was compared to that of a benchmark Q-learner that eventually returns the optimal policy. To assess robustness and generalizability, the learned policy was tested on variations of the original contour-following task. The work presented here contributes to the full range of tools necessary to advance the abilities of artificial hands with respect to dexterity, perception, decision-making, and learning.
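The sketch below gives a minimal epsilon-greedy contextual bandit of the kind the C-MAB learner exemplifies; the contexts, actions, and reward function are toy stand-ins for the tactile states and finger motions used in the actual task.

```python
# Minimal epsilon-greedy contextual multi-armed bandit sketch. Contexts,
# actions, and rewards are illustrative, not the dissertation's actual task.
import random
from collections import defaultdict

class EpsilonGreedyCMAB:
    def __init__(self, actions, epsilon=0.1):
        self.actions = actions
        self.epsilon = epsilon
        self.value = defaultdict(float)   # (context, action) -> mean reward
        self.count = defaultdict(int)

    def choose(self, context):
        # Explore with probability epsilon, otherwise exploit the best estimate.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.value[(context, a)])

    def update(self, context, action, reward):
        # Incremental mean update of the action-value estimate.
        key = (context, action)
        self.count[key] += 1
        self.value[key] += (reward - self.value[key]) / self.count[key]

# Toy usage: context = coarse contact state, actions = finger motions.
bandit = EpsilonGreedyCMAB(actions=["slide", "pinch", "lift"], epsilon=0.1)
for _ in range(1000):
    ctx = random.choice(["edge_left", "edge_right", "no_contact"])
    act = bandit.choose(ctx)
    reward = 1.0 if (ctx.startswith("edge") and act == "slide") else 0.0
    bandit.update(ctx, act, reward)
print(bandit.choose("edge_left"))  # likely "slide" after learning
```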
Contributors: Hellman, Randall Blake (Author) / Santos, Veronica J (Thesis advisor) / Artemiadis, Panagiotis K (Committee member) / Berman, Spring (Committee member) / Helms Tillery, Stephen I (Committee member) / Fainekos, Georgios (Committee member) / Arizona State University (Publisher)
Created: 2016

Description
Our ability to estimate the position of our body parts in space, a fundamentally proprioceptive process, is crucial for interacting with the environment and for movement control. For proprioception to support these actions, the central nervous system has to rely on a stored internal representation of the body parts in space. However, relatively little is known about this internal representation of arm position. To this end, I developed a method to map proprioceptive estimates of hand location across a 2-D workspace. In this task, I moved each subject's hand to a target location while the subject's eyes were closed. After returning the hand, subjects opened their eyes and verbally reported the location where their fingertip had been. I then reconstructed and analyzed the spatial structure of the pattern of estimation errors. In the first two experiments, I probed the structure and stability of the pattern of errors by manipulating which hand was used and the tactile feedback provided when the hand was at each target location. I found that the resulting pattern of errors was systematically stable across conditions for each subject, subject-specific, and not uniform across the workspace. These findings suggest that the observed structure of the pattern of errors has been constructed through experience, resulting in a systematically stable internal representation of arm location. Moreover, this representation is continuously being calibrated across the workspace. In the next two experiments, I aimed to probe the calibration of this structure using two different perturbation paradigms: 1) a virtual-reality visuomotor adaptation to induce a local perturbation, and 2) a standard prism adaptation paradigm to induce a global perturbation. I found that the magnitude of the errors significantly increased, to a similar extent, after each perturbation. This small effect indicates that proprioception is recalibrated to a similar extent regardless of how the perturbation is introduced, suggesting that sensory and motor changes may be two independent processes arising from the perturbation. Moreover, I propose that the internal representation of arm location may be constructed as a global solution that is not capable of local changes.
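As a minimal illustration of the error-mapping method, the sketch below computes error vectors between reported and actual fingertip locations over a grid of targets; the workspace dimensions, bias field, and noise level are invented for the example, not the study's data.

```python
# Hedged sketch: mapping proprioceptive estimation errors across a 2-D
# workspace. For each target, the error vector is the reported fingertip
# location minus the actual one. Data are simulated placeholders.
import numpy as np

rng = np.random.default_rng(2)

# Grid of target locations (cm) spanning a workspace in front of the subject.
xs, ys = np.meshgrid(np.linspace(-20, 20, 5), np.linspace(20, 50, 4))
targets = np.column_stack([xs.ravel(), ys.ravel()])

# Simulated verbal reports: a subject-specific bias field plus report noise.
bias = np.array([1.5, -2.0])                       # systematic offset (cm)
reports = targets + bias + rng.normal(scale=0.8, size=targets.shape)

errors = reports - targets                         # one error vector per target
magnitudes = np.linalg.norm(errors, axis=1)

print("mean error vector (cm):", errors.mean(axis=0).round(2))
print("mean error magnitude (cm):", magnitudes.mean().round(2))
```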
Contributors: Rincon Gonzalez, Liliana (Author) / Helms Tillery, Stephen I (Thesis advisor) / Buneo, Christopher A (Thesis advisor) / Santello, Marco (Committee member) / Santos, Veronica (Committee member) / Kleim, Jeffrey (Committee member) / Arizona State University (Publisher)
Created: 2012