Matching Items (125)
Description
The development of advanced, anthropomorphic artificial hands aims to provide upper extremity amputees with improved functionality for activities of daily living. However, many state-of-the-art hands have a large number of degrees of freedom that can be challenging to control in an intuitive manner. Automated grip responses could be built into artificial hands in order to enhance grasp stability and reduce the cognitive burden on the user. To this end, three studies were conducted to understand how human hands respond, passively and actively, to unexpected perturbations of a grasped object along and about different axes relative to the hand. The first study investigated the effect of magnitude, direction, and axis of rotation on precision grip responses to unexpected rotational perturbations of a grasped object. A robust "catch-up response" (a rapid, pulse-like increase in grip force rate previously reported only for translational perturbations) was observed whose strength scaled with the axis of rotation. Using two haptic robots, we then investigated the effects of grip surface friction, axis, and direction of perturbation on precision grip responses for unexpected translational and rotational perturbations for three different hand-centric axes. A robust catch-up response was observed for all axes and directions for both translational and rotational perturbations. Grip surface friction had no effect on the stereotypical catch-up response. Finally, we characterized the passive properties of the precision grip-object system via robot-imposed impulse perturbations. The hand-centric axis associated with the greatest translational stiffness differed from that associated with the greatest rotational stiffness. This work expands our understanding of the passive and active features of precision grip, a hallmark of human dexterous manipulation. Biological insights such as these could be used to enhance the functionality of artificial hands and the quality of life for upper extremity amputees.
ContributorsDe Gregorio, Michael (Author) / Santos, Veronica J. (Thesis advisor) / Artemiadis, Panagiotis K. (Committee member) / Santello, Marco (Committee member) / Sugar, Thomas (Committee member) / Helms Tillery, Stephen I. (Committee member) / Arizona State University (Publisher)
Created2013
Description
This research is focused on two separate but related topics. The first uses an electroencephalographic (EEG) brain-computer interface (BCI) to explore the phenomenon of motor learning transfer. The second takes a closer look at the EEG-BCI itself and tests an alternate way of mapping EEG signals into machine commands. We test whether motor learning transfer is more related to use of shared neural structures between imagery and motor execution or to more generalized cognitive factors. Using an EEG-BCI, we train one group of participants to control the movements of a cursor using embodied motor imagery. A second group is trained to control the cursor using abstract motor imagery. A third control group practices moving the cursor using an arm and finger on a touch screen. We hypothesized that if motor learning transfer is related to the use of shared neural structures then the embodied motor imagery group would show more learning transfer than the abstract imagery group. If, on the other hand, motor learning transfer results from more general cognitive processes, then the abstract motor imagery group should also demonstrate motor learning transfer to the manual performance of the same task. Our findings support that motor learning transfer is due to the use of shared neural structures between imaging and motor execution of a task. The abstract group showed no motor learning transfer despite being better at EEG-BCI control than the embodied group. The fact that more participants were able to learn EEG-BCI control using abstract imagery suggests that abstract imagery may be more suitable for EEG-BCIs for some disabilities, while embodied imagery may be more suitable for others. In Part 2, EEG data collected in the above experiment was used to train an artificial neural network (ANN) to map EEG signals to machine commands.
We found that our open-source ANN using spectrograms generated from SFFTs is fundamentally different from, and in some ways superior to, Emotiv's proprietary method. Our use of novel combinations of existing technologies, along with abstract and embodied imagery, facilitates adaptive customization of EEG-BCI control to meet the needs of individual users.
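As a rough illustration of the signal pipeline this abstract describes, a spectrogram can be produced by taking FFT magnitudes over sliding windows of a single EEG channel. This is a minimal sketch only; the window length, hop size, and Hann taper are illustrative assumptions, not the parameters used in the thesis:

```python
import numpy as np

def spectrogram(x, win=64, hop=32):
    """Magnitude spectrogram from sliding FFTs over a 1-D signal.
    win and hop are illustrative; real EEG work would tune them to the
    sampling rate and frequency bands of interest."""
    frames = [x[i:i + win] * np.hanning(win)          # tapered window per frame
              for i in range(0, len(x) - win + 1, hop)]
    # One FFT-magnitude column per frame: shape (frequencies, time)
    return np.abs(np.fft.rfft(np.asarray(frames), axis=1)).T
```

Columns of the result could then be fed to an ANN as feature vectors, one per time step.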
Contributorsda Silva, Flavio J. K (Author) / Mcbeath, Michael K (Thesis advisor) / Helms Tillery, Stephen (Committee member) / Presson, Clark (Committee member) / Sugar, Thomas (Committee member) / Arizona State University (Publisher)
Created2013
Description
Electromyogram (EMG)-based control interfaces are increasingly used in robot teleoperation, prosthetic device control, and the control of robotic exoskeletons. Over the last two decades, researchers have proposed a plethora of decoding functions to map myoelectric signals to robot motions. However, these require large training and validation data sets, and the parameters of the decoding function are specific to each subject. In this thesis we propose a new methodology that requires no training and is not user-specific. The main idea is to supplement the decoding function's error with the human ability to learn the inverse model of an arbitrary mapping function. We showed that the subjects gradually learned the control strategy and that their learning rates improved. We also worked on identifying an optimized control scheme that would be even more effective and easier for subjects to learn. The optimization took into account that muscles act in synergies while performing a motion task. The low-dimensional representation of the neural activity was used to control a two-dimensional task. Results showed that with the reduced-dimensionality mapping, subjects learned to control the device at a slower pace, but they were able to reach and retain the same level of controllability. In summary, we built an EMG-based controller for robotic devices that works for any subject, without training or a subject-specific decoding function, suggesting the feasibility of human-embedded controllers for robotic devices.
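The low-dimensional synergy mapping mentioned above can be sketched as projecting multi-muscle EMG envelopes onto a small synergy basis, yielding a compact control signal. The basis `W` and the non-negativity clipping are illustrative assumptions (in practice a basis might be extracted with non-negative matrix factorization; in the thesis the user learns the mapping rather than the machine decoding it):

```python
import numpy as np

def synergy_activations(emg, W):
    """Project an EMG envelope vector (one value per muscle) onto a synergy
    basis W of shape (muscles, synergies), returning synergy activations."""
    # Least-squares activations, clipped to stay non-negative like muscle activity.
    h, *_ = np.linalg.lstsq(W, emg, rcond=None)
    return np.clip(h, 0.0, None)
```

Two or three such activations could then drive a two-dimensional cursor or robot task directly.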
ContributorsAntuvan, Chris Wilson (Author) / Artemiadis, Panagiotis (Thesis advisor) / Muthuswamy, Jitendran (Committee member) / Santos, Veronica J (Committee member) / Arizona State University (Publisher)
Created2013
Description
Humans' ability to perform fine object and tool manipulation is a defining feature of their sensorimotor repertoire. How the central nervous system builds and maintains internal representations of such skilled hand-object interactions has attracted significant attention over the past three decades. Nevertheless, two major gaps exist: a) how digit positions and forces are coordinated during natural manipulation tasks, and b) what mechanisms underlie the formation and retention of internal representations of dexterous manipulation. This dissertation addresses these two questions through five experiments that are based on novel grip devices and experimental protocols. It was found that high-level representation of manipulation tasks can be learned in an effector-independent fashion. Specifically, when challenged by trial-to-trial variability in finger positions or using digits that were not previously engaged in learning the task, subjects could adjust finger forces to compensate for this variability, thus leading to consistent task performance. The results from a follow-up experiment conducted in a virtual reality environment indicate that haptic feedback is sufficient to implement the above coordination between digit position and forces. However, it was also found that the generalizability of a learned manipulation is limited across tasks. Specifically, when subjects learned to manipulate the same object across different contexts that require different motor output, interference was found at the time of switching contexts. Data from additional studies provide evidence for parallel learning processes, which are characterized by different rates of decay and learning. These experiments have provided important insight into the neural mechanisms underlying learning and control of object manipulation. The present findings have potential biomedical applications including brain-machine interfaces, rehabilitation of hand function, and prosthetics.
ContributorsFu, Qiushi (Author) / Santello, Marco (Thesis advisor) / Helms Tillery, Stephen (Committee member) / Buneo, Christopher (Committee member) / Santos, Veronica (Committee member) / Artemiadis, Panagiotis (Committee member) / Arizona State University (Publisher)
Created2013
Description
As robotic systems are used in increasingly diverse applications, the interaction of humans and robots has become an important area of research. In many applications of physical human-robot interaction (pHRI), the robot and the human can be seen as cooperating to complete a task with some object of interest. Often these applications are in unstructured environments where many paths can accomplish the goal. This creates a need for the ability to communicate a preferred direction of motion between both participants in order to move in a coordinated way. This communication method should be bidirectional in order to fully utilize the capabilities of both the robot and the human. Moreover, in cooperative tasks between two humans, one human will often operate as the leader of the task and the other as the follower, and these roles may switch during the task as needed. The need for communication extends into this leader-follower switching: not only must the desire to switch roles be communicated, but the switching process itself must be controlled. Impedance control has been used as a way of dealing with some of the complexities of pHRI. This investigation examined whether impedance control can be utilized as a way of communicating a preferred direction between humans and robots. The first set of experiments tested whether a human could detect a preferred direction of a robot by grasping and moving an object coupled to the robot. The second set tested the reverse case: whether the robot could detect the preferred direction of the human. The ability to detect the preferred direction was shown to be up to 99% effective. Using these results, a control method allowing a human and robot to switch leader and follower roles during a cooperative task was implemented and tested. This method proved successful 84% of the time. The control method was then refined using adaptive control, resulting in lower interaction forces and a success rate of 95%.
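One plausible way to realize the preferred-direction communication described above is an impedance controller whose stiffness is low along the preferred axis and high orthogonal to it, so the partner feels less resistance in the intended direction. This is a sketch under that assumption; the gain values are illustrative, not the ones used in the study:

```python
import numpy as np

def anisotropic_impedance_force(x_err, v, preferred,
                                k_soft=50.0, k_stiff=800.0, b=20.0):
    """Restoring force of an impedance controller that is compliant along a
    preferred direction and stiff elsewhere (gains are illustrative)."""
    p = preferred / np.linalg.norm(preferred)
    # Stiffness matrix: k_soft along p, k_stiff in the orthogonal complement.
    K = k_stiff * np.eye(3) + (k_soft - k_stiff) * np.outer(p, p)
    return -K @ x_err - b * v   # spring term plus uniform damping
```

A human probing the coupled object would feel the direction of least resistance, which is the channel through which the robot "states" its preference.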
ContributorsWhitsell, Bryan (Author) / Artemiadis, Panagiotis (Thesis advisor) / Santello, Marco (Committee member) / Santos, Veronica (Committee member) / Arizona State University (Publisher)
Created2014
Description
As robots increasingly migrate out of factories and research laboratories and into our everyday lives, they must move and act in environments designed for humans. For this reason, the need for anthropomorphic movements is of utmost importance. The objective of this thesis is to solve the inverse kinematics problem of redundant robot arms in a way that results in anthropomorphic configurations. The swivel angle of the elbow was used as a human arm motion parameter for the robot arm to mimic. The swivel angle is defined as the rotation angle of the plane defined by the upper and lower arm around a virtual axis that connects the shoulder and wrist joints. Using kinematic data recorded from human subjects during everyday tasks, the linear sensorimotor transformation model was validated and used to estimate the swivel angle, given the desired end-effector position. Defining the desired swivel angle resolves the kinematic redundancy of the robot arm. The proposed method was tested with an anthropomorphic redundant robot arm, and the computed motion profiles were compared to those of the human subjects. This thesis shows that the method computes anthropomorphic configurations for the robot arm even when the robot arm has different link lengths than the human arm and starts its motion from random configurations.
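The swivel angle defined above can be computed directly from the three joint positions: project the elbow onto the plane normal to the shoulder-wrist axis and measure its signed rotation from a reference direction. The choice of gravity as the reference direction here is an assumption for illustration:

```python
import numpy as np

def swivel_angle(shoulder, elbow, wrist, ref=np.array([0.0, 0.0, -1.0])):
    """Signed rotation of the upper/lower-arm plane about the shoulder-wrist
    axis, measured against a reference direction (gravity here, an assumption)."""
    axis = wrist - shoulder
    axis = axis / np.linalg.norm(axis)
    # Project elbow direction and reference onto the plane normal to the axis.
    u = (elbow - shoulder) - np.dot(elbow - shoulder, axis) * axis
    v = ref - np.dot(ref, axis) * axis
    u = u / np.linalg.norm(u)
    v = v / np.linalg.norm(v)
    # Signed angle between the two projections, about the shoulder-wrist axis.
    return np.arctan2(np.dot(np.cross(v, u), axis), np.dot(v, u))
```

With the end-effector position and this one scalar fixed, the redundant arm's elbow placement is fully determined.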
ContributorsWang, Yuting (Author) / Artemiadis, Panagiotis (Thesis advisor) / Mignolet, Marc (Committee member) / Santos, Veronica J (Committee member) / Arizona State University (Publisher)
Created2013
Description
Myoelectric control is filled with potential to significantly change human-robot interaction. Humans desire compliant robots to safely interact in dynamic environments associated with daily activities. As surface electromyography non-invasively measures limb motion intent and correlates with joint stiffness during co-contractions, it has been identified as a candidate for naturally controlling such robots. However, state-of-the-art myoelectric interfaces have struggled to achieve both enhanced functionality and long-term reliability. As demands in myoelectric interfaces trend toward simultaneous and proportional control of compliant robots, robust processing of multi-muscle coordinations, or synergies, plays a larger role in the success of the control scheme. This dissertation presents a framework enhancing the utility of myoelectric interfaces by exploiting motor skill learning and flexible muscle synergies for reliable long-term simultaneous and proportional control of multifunctional compliant robots. The interface is learned as a new motor skill specific to the controller, providing long-term performance enhancements without requiring any retraining or recalibration of the system. Moreover, the framework offers control of both motion and stiffness simultaneously for intuitive and compliant human-robot interaction. The framework is validated through a series of experiments characterizing motor learning properties and demonstrating control capabilities not seen previously in the literature. The results validate the approach as a viable option to remove the trade-off between functionality and reliability that has hindered state-of-the-art myoelectric interfaces. Thus, this research contributes to the expansion and enhancement of myoelectric controlled applications beyond commonly perceived anthropomorphic and "intuitive control" constraints and into more advanced robotic systems designed for everyday tasks.
ContributorsIson, Mark (Author) / Artemiadis, Panagiotis (Thesis advisor) / Santello, Marco (Committee member) / Greger, Bradley (Committee member) / Berman, Spring (Committee member) / Sugar, Thomas (Committee member) / Fainekos, Georgios (Committee member) / Arizona State University (Publisher)
Created2015
Description
As the robotic industry becomes increasingly present in some of the more extreme environments, such as the battlefield, disaster sites, or extraplanetary exploration, it will be necessary to provide locomotive niche strategies that are optimal for each terrain. The hopping gait has been well studied in robotics and proven to be a potential fit for some of these niches. There have been difficulties, however, in producing terrain-following controllers that maintain a robust, disturbance-resistant steady state.

The following thesis discusses a controller which has shown the ability to produce these desired properties. A phase angle oscillator controller is shown to work remarkably well, both in simulation and on a one-degree-of-freedom robotic test stand.

Work was also done with an experimental quadruped, with less successful results, though it did show potential for stability. Additional work on the quadruped is suggested.
ContributorsNew, Philip Wesley (Author) / Sugar, Thomas G. (Thesis advisor) / Artemiadis, Panagiotis (Committee member) / Redkar, Sangram (Committee member) / Arizona State University (Publisher)
Created2015
Description
Wearable robots, including exoskeletons, powered prosthetics, and powered orthotics, must add energy to the person at the appropriate time to enhance, augment, or supplement human performance. Adding energy out of sync with the user can dramatically hurt performance, making correct timing essential. Many human tasks, such as walking, running, and hopping, are repeating, or cyclic, tasks, and a robot can add energy in sync with the repeating pattern for assistance. A method has been developed to add energy at the appropriate time to the repeating limit cycle based on a phase oscillator. The phase oscillator eliminates time from the forcing function, which is based purely on the motion of the user. This approach has been simulated, implemented, and tested in a robotic backpack which facilitates carrying heavy loads. The device oscillates the load of the backpack, based on the motion of the user, in order to add energy at the correct time and thus reduce the amount of energy required for walking with a heavy load. Models were developed in Working Model 2-D, a dynamics simulation package, in conjunction with MATLAB to verify the theory and test control methods. The control system developed is robust and has successfully operated on a range of different users, each with their own distinct gait. The results of experimental testing validated the corresponding models.
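A minimal sketch of the phase-oscillator idea described above: the phase is estimated from the limb's position and velocity alone, so the forcing function contains no explicit clock and automatically stays in sync with any user's gait. The frequency estimate and gain below are illustrative assumptions, not the parameters of the backpack device:

```python
import numpy as np

def phase_oscillator_force(theta, theta_dot, omega=2 * np.pi, gain=10.0):
    """Assistive forcing derived purely from the user's motion state: the phase
    of the limit cycle replaces time in the forcing function."""
    phi = np.arctan2(-theta_dot / omega, theta)  # instantaneous phase estimate
    return gain * np.sin(phi)                    # forcing locked to the motion
```

Because `phi` is recomputed from the measured state at every instant, the forcing speeds up or slows down with the user instead of drifting like a fixed-frequency clock would.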
ContributorsWheeler, Chase (Author) / Sugar, Thomas G. (Thesis advisor) / Redkar, Sangram (Thesis advisor) / Artemiadis, Panagiotis (Committee member) / Arizona State University (Publisher)
Created2014
Description
Human running requires extensive training and conditioning for an individual to maintain high speeds (greater than 10 mph) for an extended duration of time. Studies have shown that running at peak speeds generates a high metabolic cost due to the use of the large leg muscle groups associated with the human gait cycle. Applying supplemental external and internal forces to the human body during the gait cycle has been shown to decrease the metabolic cost of walking, allowing individuals to carry additional weight and walk farther. Significant research has been conducted on reducing the metabolic cost of walking; however, there are few, if any, documented studies that focus specifically on reducing the metabolic cost associated with high-speed running. Three mechanical systems were designed to work in concert with the human user to decrease metabolic cost and increase the range and speeds at which a human can run.

The design methods focus on mathematical modeling, simulation, and metabolic cost. Mathematical modeling and simulations aid the design process of the robotic systems, and metabolic testing serves as the final analysis to determine the true effectiveness of the robotic prototypes. Metabolic data (VO2) measure the volumetric consumption of oxygen per minute per unit body mass (ml/min/kg). Metabolic testing consists of analyzing the oxygen consumption of a test subject while performing a task naturally and then comparing those data with the analyzed oxygen consumption of the same task performed while using an assistive device.
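The natural-versus-assisted comparison described above reduces to a simple percent-change computation on the measured VO2 values. A sketch, with hypothetical numbers in the usage comment:

```python
def vo2_reduction(vo2_natural, vo2_assisted):
    """Percent reduction in metabolic cost (VO2, ml/min/kg) when using an
    assistive device, relative to performing the same task naturally."""
    return 100.0 * (vo2_natural - vo2_assisted) / vo2_natural

# Hypothetical example: 45 ml/min/kg running naturally vs. 40 ml/min/kg
# assisted gives roughly an 11% reduction in metabolic cost.
```

A positive result indicates the device helps; a negative one indicates the device's added mass or mistimed forcing costs more energy than it returns.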

Three devices were designed and tested to augment high-speed running. The first device, AirLegs V1, is a mostly aluminum exoskeleton with two pneumatic linear actuators connecting the lower back directly to the user's thighs, allowing the device to induce a torque on the leg by pushing and pulling on the user's thigh during running. The device also makes use of two smaller pneumatic linear actuators which drive cables connecting to small lever arms at the back of the heel, inducing a torque at the ankles. The second device, AirLegs V2, is also pneumatically powered but is considered a soft-suit version of the first device. It uses cables to transmit the forces created by actuators mounted vertically on the user's back. These cables then connect to the backs of the user's knees, resulting in greater flexibility and range of motion of the legs. The third device, a jet pack, produces an external force against the user's torso to propel the user forward and upward, making it easier to run. Third-party testing, pilot demonstrations, and timed trials have demonstrated that all three devices effectively reduce the metabolic cost of running below that of natural running with no device.
ContributorsKerestes, Jason (Author) / Sugar, Thomas (Thesis advisor) / Redkar, Sangram (Committee member) / Rogers, Bradley (Committee member) / Arizona State University (Publisher)
Created2014