Description

Tactile perception is typically considered the result of cortical interpretation of afferent signals from a network of mechanical sensors underneath the skin. Yet, tactile illusion studies suggest that tactile perception can be elicited without afferent signals from mechanoreceptors. Therefore, the extent to which tactile perception arises from isomorphic mapping of tactile afferents onto the somatosensory cortex remains controversial. We tested whether isomorphic mapping of tactile afferent fibers onto the cortex leads directly to tactile perception, i.e., whether such perception is independent of proprioceptive input, by evaluating the impact of different hand postures on the perception of a tactile illusion across fingertips. Using the Cutaneous Rabbit Effect, a well-studied illusion evoking the perception that a stimulus occurs at a location where none has been delivered, we found that hand posture has a significant effect on the perception of the illusion across the fingertips. This finding emphasizes that tactile perception arises from integration of perceived mechanical and proprioceptive input and not purely from tactile interaction with the external environment.

Contributors: Warren, Jay (Author) / Santello, Marco (Author) / Helms Tillery, Stephen (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2011-03-25
Description

Anticipatory force planning during grasping is based on visual cues about the object’s physical properties and sensorimotor memories of previous actions with grasped objects. Vision can be used to estimate object mass based on the object size to identify and recall sensorimotor memories of previously manipulated objects. It is not known whether subjects can use density cues to identify the object’s center of mass (CM) and create compensatory moments in an anticipatory fashion during initial object lifts to prevent tilt. We asked subjects (n = 8) to estimate CM location of visually symmetric objects of uniform densities (plastic or brass, symmetric CM) and non-uniform densities (mixture of plastic and brass, asymmetric CM). We then asked whether subjects can use density cues to scale fingertip forces when lifting the visually symmetric objects of uniform and non-uniform densities. Subjects were able to accurately estimate an object’s center of mass based on visual density cues. When the mass distribution was uniform, subjects could scale their fingertip forces in an anticipatory fashion based on the estimation. However, despite their ability to explicitly estimate CM location when object density was non-uniform, subjects were unable to scale their fingertip forces to create a compensatory moment and prevent tilt on initial lifts. Hefting object parts in the hand before the experiment did not affect this ability. This suggests a dichotomy between the ability to accurately identify the object’s CM location for objects with non-uniform density cues and the ability to utilize this information to correctly scale their fingertip forces. These results are discussed in the context of possible neural mechanisms underlying sensorimotor integration linking visual cues and anticipatory control of grasping.
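To make the mechanics concrete, the sketch below computes the compensatory moment a grasp must generate at lift onset to keep an object with a laterally offset CM from tilting. The function name and the example mass and offset are illustrative assumptions, not values from the study.

```python
# Minimal sketch (illustrative values, not the study's data): the compensatory
# moment required to prevent roll of an object whose center of mass (CM) is
# offset horizontally from the grip axis.

G = 9.81  # gravitational acceleration (m/s^2)

def required_compensatory_moment(object_mass_kg: float, cm_offset_m: float) -> float:
    """Moment (N*m) about the grip axis needed to cancel the torque produced by
    gravity acting at a CM displaced horizontally from the grip axis."""
    return object_mass_kg * G * cm_offset_m

# Example: a 0.4 kg object whose CM sits 3 cm to one side of the grip axis,
# as with a plastic/brass object of non-uniform density.
print(f"{required_compensatory_moment(0.4, 0.03):.3f} N*m")
```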

Contributors: Craje, Celine (Author) / Santello, Marco (Author) / Gordon, Andrew M. (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2013-10-16
Description

Pure coconut oil, lanolin, and acetaminophen were vaporized at rates of 1–50 mg/min, using a porous network exhibiting a temperature gradient from 5000 to 5500 K/mm, without incurring noticeable chemical changes due to combustion, oxidation, or other thermally-induced changes in chemical structure. The newly coined term “ereptiospiration” is used here to describe this form of thermal transpiration at high temperature gradients, since the process can force the creation of thermal aerosols by rapid heating in a localized zone. Experimental data were generated for these materials using two different supports for metering the materials to the battery-powered coil: namely, a stainless steel fiber bundle and a 3-D printed steel cartridge. Heating coconut oil, lanolin, or acetaminophen in a beaker to temperatures lower than those achieved at the surface of the coil produced noticeable and rapid degradation of the samples. In contrast, visual and olfactory observations during ereptiospiration showed no noticeable degradation of lanolin and coconut oil, while HPLC chromatograms, along with visual observation, confirmed that, within the limit of detection, acetaminophen remained chemically unaltered by ereptiospiration.

Contributors: Woolley, Christine (Author) / Garcia, Antonio (Author) / Santello, Marco (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2017-04-12
Description

Sensorimotor control theories propose that the central nervous system exploits expected sensory consequences generated by motor commands for movement planning, as well as online sensory feedback, which is compared with expected sensory feedback to monitor and, if needed, correct ongoing motor output. In our study, we tested this theoretical framework by quantifying the functional role of expected vs. actual proprioceptive feedback for the planning and regulation of gait in humans. We addressed this question by using a novel methodological approach to deliver fast perturbations of the walking surface stiffness, in conjunction with a virtual reality system that provided visual feedback of upcoming changes of surface stiffness. In the “predictable” experimental condition, we asked subjects to learn to associate visual feedback (a sand patch) with changes in floor stiffness during locomotion, and we quantified kinematic and kinetic changes in gait prior to and during the gait cycle. In the “unpredictable” experimental condition, we perturbed floor stiffness at unpredictable instants of the gait cycle to characterize gait phase-dependent strategies for recovering the locomotor cycle. In this condition, visual feedback of changes in floor stiffness was absent or inconsistent with tactile and proprioceptive feedback. The investigation of these perturbation-induced effects on contralateral leg kinematics revealed that visual feedback of upcoming changes in floor stiffness allows for both early (preparatory) and late (post-perturbation) changes in leg kinematics. However, when visual feedback of the upcoming perturbation is not available, the early responses in leg kinematics do not occur, while the late responses are preserved, although in a slightly attenuated form. The methods proposed in this study and the preliminary results on the kinematic response of the contralateral leg open new directions for the investigation of the relative roles of visual, tactile, and proprioceptive feedback in gait control, with potential implications for designing novel robot-assisted gait rehabilitation approaches.

Contributors: Frost, Ryan (Author) / Skidmore, Jeffrey (Author) / Santello, Marco (Author) / Artemiadis, Panagiotis (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2015-02-09
Description

Recent studies about sensorimotor control of the human hand have focused on how dexterous manipulation is learned and generalized. Here we address this question by testing the extent to which learned manipulation can be transferred when the contralateral hand is used and/or object orientation is reversed. We asked subjects to use a precision grip to lift a grip device with an asymmetrical mass distribution while minimizing object roll during lifting by generating a compensatory torque. Subjects were allowed to grasp anywhere on the object’s vertical surfaces, and were therefore able to modulate both digit positions and forces. After every block of eight trials performed in one manipulation context (i.e., using the right hand and at a given object orientation), subjects had to lift the same object in the second context for one trial (transfer trial).

Context changes were made by asking subjects to switch the hand used to lift the object and/or rotate the object 180° about a vertical axis. Therefore, three transfer conditions, hand switch (HS), object rotation (OR), and both hand switch and object rotation (HS+OR), were tested and compared with hand-matched control groups who did not experience context changes. We found that subjects in all transfer conditions adapted digit positions across multiple transfer trials similarly to the learning shown by the control groups, regardless of the change of context. Moreover, subjects in both the HS and HS+OR groups also adapted digit forces similarly to the control group, suggesting independent learning by the left hand. In contrast, the OR group showed significant negative transfer of the compensatory torque due to an inability to adapt digit forces. Our results indicate that internal representations of dexterous manipulation tasks may be built primarily through the hand used for learning and cannot be transferred across hands.
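As a rough illustration of the torque bookkeeping behind this task, the sketch below computes the compensatory torque produced by a thumb-index precision grip on an object with parallel vertical grasp surfaces, split into a term from digit positions (vertical separation of the contacts) and a term from the load-force difference. The function and the numerical values are illustrative assumptions rather than the study's data.

```python
# Hedged sketch: compensatory torque of a two-digit (thumb-index) grasp on an
# object with parallel vertical surfaces. Names and numbers are illustrative,
# not the study's data.

def compensatory_torque(grip_width_m: float,
                        normal_force_n: float,   # shared grip (normal) force at lift onset
                        thumb_height_m: float,   # vertical position of the thumb contact
                        index_height_m: float,   # vertical position of the index contact
                        thumb_load_n: float,     # vertical (load) force exerted by the thumb
                        index_load_n: float) -> float:
    """Torque about the roll axis through the grip center: normal force times the
    digits' vertical separation, plus the load-force difference acting over half
    the grip width."""
    position_term = normal_force_n * (thumb_height_m - index_height_m)
    force_term = 0.5 * grip_width_m * (thumb_load_n - index_load_n)
    return position_term + force_term

# Example: thumb placed 1.5 cm higher than the index, 8 N of grip force, and a
# 1 N load-force difference on a 6 cm wide object.
print(f"{compensatory_torque(0.06, 8.0, 0.015, 0.0, 2.5, 1.5):.3f} N*m")
```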

Contributors: Fu, Qiushi (Author) / Choi, Jason (Author) / Gordon, Andrew M. (Author) / Jesunathadas, Mark (Author) / Santello, Marco (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2014-09-18
Description

Humans are able to intuitively exploit the shape of an object and environmental constraints to achieve stable grasps and perform dexterous manipulations. In doing so, a vast range of kinematic strategies can be observed. However, in this work we formulate the hypothesis that this ability can be described in terms of a synergistic behavior in the generation of hand postures, i.e., the use of a reduced set of commonly occurring kinematic patterns. This is analogous to previous studies showing the presence of such behavior in different tasks, such as grasping. We investigated this hypothesis in experiments performed by six subjects, who were asked to grasp objects from a flat surface. We quantitatively characterized hand posture behavior from a kinematic perspective, i.e., the hand joint angles, both in pre-shaping and during the interaction with the environment. To determine the role of tactile feedback, we repeated the same experiments with subjects wearing a rigid shell on the fingertips to reduce cutaneous afferent input. Results show the persistence of at least two postural synergies in all the considered experimental conditions and phases. Tactile impairment does not significantly alter the first two synergies, and contact with the environment generates a change only for higher-order principal components. A good match also arises between the first synergy found in our analysis and the first synergy of grasping as quantified by previous work. The present study is motivated by the interest in learning from the human example, extracting lessons that can be applied to robot design and control. Thus, we conclude with a discussion of the implications of our findings for robotics.
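The kind of analysis described here, extracting postural synergies from hand joint-angle data, can be sketched as a principal component analysis. The array shapes, the synthetic data, and the use of NumPy/scikit-learn below are illustrative assumptions, not the authors' actual pipeline.

```python
# Illustrative sketch: postural synergies as principal components of hand
# joint-angle data. Shapes, data, and library choice are assumptions for
# illustration, not the authors' pipeline.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Fake dataset: 200 grasp trials x 15 hand joint angles (degrees).
joint_angles = rng.normal(loc=30.0, scale=10.0, size=(200, 15))

pca = PCA()
scores = pca.fit_transform(joint_angles)   # per-trial synergy activations
synergies = pca.components_                # each row is one postural synergy

explained = np.cumsum(pca.explained_variance_ratio_)
print(f"Variance explained by the first two synergies: {explained[1]:.1%}")
```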

Contributors: Della Santina, Cosimo (Author) / Bianchi, Matteo (Author) / Averta, Giuseppe (Author) / Ciotti, Simone (Author) / Arapi, Visar (Author) / Fani, Simone (Author) / Battaglia, Edoardo (Author) / Giuseppe Catalano, Manuel (Author) / Santello, Marco (Author) / Bicchi, Antonio (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2017-08-29
Description

Theoretical perspectives on anticipatory planning of object manipulation have traditionally been informed by studies that have investigated kinematics (hand shaping and digit position) and kinetics (forces) in isolation. This poses limitations on our understanding of the integration of these domains, which have recently been shown to be strongly interdependent. Specifically, recent studies revealed strong covariation of digit position and load force during the loading phase of two-digit grasping. Here, we determined whether such digit force-position covariation is a general feature of grasping. We investigated the coordination of digit positions and forces during five-digit whole-hand manipulation of an object with a variable mass distribution. Subjects were instructed to prevent object roll during the lift. As found in precision grasping, there was strong trial-to-trial covariation of digit position and force. This suggests that the natural variation of digit position that is compensated for by trial-to-trial variation in digit forces is a fundamental feature of grasp control, not one specific to precision grasp. However, a main difference from precision grasping was that modulation of digit position to the object's mass distribution was driven predominantly by the thumb, with little to no modulation of finger position. Modulation of thumb position rather than finger position is likely due to the thumb's greater range of motion and therefore greater adaptability to object properties. Our results underscore the flexibility of the central nervous system in implementing a range of solutions along the digit force-to-position continuum for dexterous manipulation.
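The force-position covariation reported here can be pictured as solving the grasp's torque requirement for force once the digit contact positions are known. The sketch below, with purely illustrative names and numbers, shows how a different thumb placement implies a different load-force sharing for the same compensatory torque.

```python
# Hedged sketch of digit force-position covariation: for a fixed compensatory
# torque requirement, trial-to-trial variation in digit placement implies a
# corresponding change in load-force sharing. Names and values are illustrative.

def load_force_difference(target_torque_nm: float,
                          grip_width_m: float,
                          normal_force_n: float,
                          digit_height_gap_m: float) -> float:
    """Load-force difference needed so that
    normal_force * height_gap + 0.5 * grip_width * load_difference == target torque."""
    return (target_torque_nm - normal_force_n * digit_height_gap_m) / (0.5 * grip_width_m)

# Two trials with different thumb placements but the same torque requirement:
for gap_m in (0.005, 0.020):  # 0.5 cm vs. 2.0 cm vertical thumb-finger gap
    diff = load_force_difference(0.15, 0.06, 8.0, gap_m)
    print(f"height gap {gap_m * 100:.1f} cm -> load-force difference {diff:+.2f} N")
```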

Contributors: Marneweck, Michelle (Author) / Lee-Miller, Trevor (Author) / Santello, Marco (Author) / Gordon, Andrew M. (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2016-09-15
Description

Of particular interest to the neuroscience and robotics communities is the understanding of how two humans can physically collaborate to perform motor tasks such as holding a tool or moving it across locations. When two humans physically interact with each other, sensory consequences and motor outcomes are not entirely predictable, as they also depend on the other agent's actions. The sensory mechanisms involved in physical interactions are not well understood. The present study was designed (1) to quantify human–human physical interactions where one agent (“follower”) has to infer the intended or imagined—but not executed—direction of motion of another agent (“leader”) and (2) to reveal the underlying strategies used by the dyad. This study also aimed at verifying the extent to which visual feedback (VF) is necessary for communicating intended movement direction. We found that the leader's control of the relationship between force and motion was a critical factor in conveying his/her intended movement direction to the follower, regardless of VF of the grasped handle or the arms. Interestingly, the dyad's ability to communicate and infer movement direction with significant accuracy improved (>83%) after a relatively short amount of practice. These results indicate that the relationship between force and motion (interpreted as arm impedance modulation) may represent an important means for communicating intended movement direction between biological agents, as indicated by the modulation of this relationship with intended direction. Ongoing work is investigating the application of the present findings to optimize communication of high-level movement goals during physical interactions between biological and non-biological agents.
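As a rough, hedged illustration of what interpreting the force-motion relationship as arm impedance modulation can mean computationally, the sketch below fits a simple stiffness-damping model to synthetic force and handle-motion records via least squares. The model, signals, and parameter values are assumptions for illustration, not the study's analysis.

```python
# Illustrative sketch (not the study's analysis): estimate a simple arm impedance
# model F = K*x + B*v from interaction force and handle motion by least squares.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 2.0, 500)
x = 0.02 * np.sin(2 * np.pi * 1.5 * t)                        # handle displacement (m)
v = np.gradient(x, t)                                         # handle velocity (m/s)
force = 300.0 * x + 12.0 * v + rng.normal(0.0, 0.05, t.size)  # measured force (N)

A = np.column_stack([x, v])
(k_hat, b_hat), *_ = np.linalg.lstsq(A, force, rcond=None)
print(f"Estimated stiffness K = {k_hat:.1f} N/m, damping B = {b_hat:.1f} N*s/m")
```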

Contributors: Mojtahedi, Keivan (Author) / Whitsell, Bryan (Author) / Artemiadis, Panagiotis (Author) / Santello, Marco (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2017-04-13
Description

The human hand has so many degrees of freedom that it may seem impossible to control. A potential solution to this problem is “synergy control,” which combines dimensionality reduction with great flexibility. With applicability to a wide range of tasks, this has become a very popular concept. In this review, we describe the evolution of the modern concept using studies of kinematic and force synergies in human hand control, neurophysiology of cortical and spinal neurons, and electromyographic (EMG) activity of hand muscles. We go beyond the often purely descriptive usage of synergy by reviewing the organization of the underlying neuronal circuitry in order to propose mechanistic explanations for various observed synergy phenomena. Finally, we propose a theoretical framework to reconcile important and still debated concepts such as the definitions of “fixed” vs. “flexible” synergies and the mechanisms underlying the combination of synergies for hand control.
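As a minimal sketch of what "synergy control" means in practice, the code below reconstructs a full hand posture as a mean posture plus a weighted sum of a few fixed synergies; the dimensions, data, and function name are illustrative assumptions, not drawn from the review.

```python
# Minimal sketch of synergy control: a full joint-angle vector is generated from
# a low-dimensional set of synergy weights. Dimensions and values are made up.
import numpy as np

n_joints, n_synergies = 15, 2
mean_posture = np.full(n_joints, 30.0)                  # average joint angles (deg)
rng = np.random.default_rng(2)
synergies = rng.normal(size=(n_synergies, n_joints))    # fixed postural synergies

def posture_from_synergies(weights: np.ndarray) -> np.ndarray:
    """Reconstruct a hand posture from a low-dimensional weight vector."""
    return mean_posture + weights @ synergies

print(posture_from_synergies(np.array([1.2, -0.5])))
```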

Contributors: Santello, Marco (Author) / Baud-Bovy, Gabriel (Author) / Jorntell, Henrik (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2013-04-08