Matching Items (6)
Description
As robotic systems are used in increasingly diverse applications, the interaction of humans and robots has become an important area of research. In many applications of physical human-robot interaction (pHRI), the robot and the human can be seen as cooperating to complete a task involving some object of interest. Often these applications take place in unstructured environments where many paths can accomplish the goal. This creates a need to communicate a preferred direction of motion between both participants in order to move in a coordinated way. This communication should be bidirectional so that the capabilities of both the robot and the human can be fully utilized. Moreover, in cooperative tasks between two humans, one often operates as the leader of the task and the other as the follower, and these roles may switch during the task as needed. The need for communication extends to this leader-follower switching: participants must communicate not only the desire to switch roles but also how the switching process is controlled. Impedance control has been used as a way of dealing with some of the complexities of pHRI. This investigation examined whether impedance control can be used to communicate a preferred direction between humans and robots. The first set of experiments tested whether a human could detect a preferred direction of a robot by grasping and moving an object coupled to the robot. The second set tested the reverse case: whether the robot could detect the preferred direction of the human. Detection of the preferred direction was shown to be up to 99% effective. Using these results, a control method allowing a human and robot to switch leader and follower roles during a cooperative task was implemented and tested. This method proved successful 84% of the time. The method was then refined using adaptive control, resulting in lower interaction forces and a success rate of 95%.
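The idea of using impedance control to signal a preferred direction can be illustrated with a toy controller. The sketch below is a minimal illustrative example, not the thesis's actual controller: it assumes a damping-only impedance law whose damping is low along an assumed preferred axis and high orthogonal to it, so the human feels less resistance when moving the way the robot "prefers". The gain values are arbitrary assumptions.

```python
import numpy as np

def impedance_force(x_dot, preferred_dir, b_low=5.0, b_high=50.0):
    """Interaction force of a damping-only 2-D impedance controller that
    resists motion off the preferred axis more strongly.

    x_dot         : end-effector velocity (2-vector)
    preferred_dir : preferred motion direction (need not be unit length)
    b_low, b_high : damping along / orthogonal to the preferred direction
    """
    d = np.asarray(preferred_dir, dtype=float)
    d = d / np.linalg.norm(d)
    P = np.outer(d, d)                       # projector onto the preferred axis
    B = b_low * P + b_high * (np.eye(2) - P) # anisotropic damping matrix
    return -B @ np.asarray(x_dot, dtype=float)

# Moving at 0.1 m/s along the preferred axis meets little resistance...
f_along = impedance_force([0.1, 0.0], [1, 0])
# ...while the same speed orthogonal to it is resisted ten times harder.
f_across = impedance_force([0.0, 0.1], [1, 0])
```

A human gently probing the object in different directions could, in principle, infer the preferred axis from this asymmetry in resistance, which is the intuition behind the detection experiments described above.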
ContributorsWhitsell, Bryan (Author) / Artemiadis, Panagiotis (Thesis advisor) / Santello, Marco (Committee member) / Santos, Veronica (Committee member) / Arizona State University (Publisher)
Created2014
Description
As robots increasingly migrate out of factories and research laboratories and into our everyday lives, they must move and act in environments designed for humans. For this reason, the need for anthropomorphic movements is of utmost importance. The objective of this thesis is to solve the inverse kinematics problem of redundant robot arms in a way that results in anthropomorphic configurations. The swivel angle of the elbow was used as a human arm motion parameter for the robot arm to mimic. The swivel angle is defined as the rotation angle of the plane defined by the upper and lower arm around a virtual axis that connects the shoulder and wrist joints. Using kinematic data recorded from human subjects during everyday tasks, a linear sensorimotor transformation model was validated and used to estimate the swivel angle given the desired end-effector position. Specifying the desired swivel angle resolves the kinematic redundancy of the robot arm. The proposed method was tested with an anthropomorphic redundant robot arm, and the computed motion profiles were compared to those of the human subjects. This thesis shows that the method computes anthropomorphic configurations for the robot arm even if the robot arm has different link lengths than the human arm and starts its motion from random configurations.
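The swivel angle defined above can be computed directly from the shoulder, elbow, and wrist positions. A minimal numpy sketch follows; the choice of gravity as the zero-swivel reference direction is a common convention but an assumption here, not necessarily the thesis's.

```python
import numpy as np

def swivel_angle(shoulder, elbow, wrist, ref=(0.0, 0.0, -1.0)):
    """Signed rotation of the arm plane about the shoulder-wrist axis,
    measured from a reference direction (here downward gravity)."""
    s, e, w = (np.asarray(p, dtype=float) for p in (shoulder, elbow, wrist))
    n = (w - s) / np.linalg.norm(w - s)      # unit shoulder->wrist axis
    # Project the elbow offset and the reference vector onto the plane
    # orthogonal to the shoulder-wrist axis.
    u = (e - s) - np.dot(e - s, n) * n
    v = np.asarray(ref, dtype=float) - np.dot(ref, n) * n
    u /= np.linalg.norm(u)
    v /= np.linalg.norm(v)
    # Signed angle from v to u about n.
    return np.arctan2(np.dot(np.cross(v, u), n), np.dot(v, u))

# Elbow hanging straight down from the shoulder-wrist line: zero swivel.
a0 = swivel_angle([0, 0, 0], [0.5, 0, -0.3], [1, 0, 0])
```

Driving the redundant arm so that its elbow matches the estimated swivel angle is what reduces the inverse kinematics of a 7-DOF arm to a well-posed problem.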
ContributorsWang, Yuting (Author) / Artemiadis, Panagiotis (Thesis advisor) / Mignolet, Marc (Committee member) / Santos, Veronica J (Committee member) / Arizona State University (Publisher)
Created2013
Description
Myoelectric control is filled with potential to significantly change human-robot interaction. Humans desire compliant robots that can safely interact in the dynamic environments associated with daily activities. Because surface electromyography non-invasively measures limb motion intent and correlates with joint stiffness during co-contractions, it has been identified as a candidate for naturally controlling such robots. However, state-of-the-art myoelectric interfaces have struggled to achieve both enhanced functionality and long-term reliability. As demands on myoelectric interfaces trend toward simultaneous and proportional control of compliant robots, robust processing of multi-muscle coordinations, or synergies, plays a larger role in the success of the control scheme. This dissertation presents a framework enhancing the utility of myoelectric interfaces by exploiting motor skill learning and flexible muscle synergies for reliable long-term simultaneous and proportional control of multifunctional compliant robots. The interface is learned as a new motor skill specific to the controller, providing long-term performance enhancements without requiring any retraining or recalibration of the system. Moreover, the framework offers control of both motion and stiffness simultaneously for intuitive and compliant human-robot interaction. The framework is validated through a series of experiments characterizing motor learning properties and demonstrating control capabilities not previously seen in the literature. The results validate the approach as a viable option to remove the trade-off between functionality and reliability that has hindered state-of-the-art myoelectric interfaces. Thus, this research contributes to the expansion and enhancement of myoelectric-controlled applications beyond commonly perceived anthropomorphic and "intuitive control" constraints and into more advanced robotic systems designed for everyday tasks.
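Muscle synergies of the kind discussed above are commonly extracted from EMG envelopes with non-negative matrix factorization (NMF). The sketch below uses generic multiplicative updates on synthetic data; it illustrates the standard technique, not the dissertation's actual processing pipeline.

```python
import numpy as np

def extract_synergies(emg, n_syn, iters=500, seed=0):
    """Factor a non-negative EMG-envelope matrix (muscles x samples) into
    n_syn synergy vectors W (muscles x n_syn) and activation signals
    H (n_syn x samples) via multiplicative-update NMF, so emg ~= W @ H."""
    rng = np.random.default_rng(seed)
    m, t = emg.shape
    W = rng.random((m, n_syn)) + 1e-6
    H = rng.random((n_syn, t)) + 1e-6
    for _ in range(iters):
        # Lee-Seung multiplicative updates preserve non-negativity.
        H *= (W.T @ emg) / (W.T @ W @ H + 1e-12)
        W *= (emg @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

# Synthetic example: 4 muscles driven by 2 underlying synergies.
rng = np.random.default_rng(1)
emg = rng.random((4, 2)) @ rng.random((2, 50))
W, H = extract_synergies(emg, 2)
```

In a simultaneous-and-proportional scheme, the recovered activations H (rather than per-muscle amplitudes) would drive the robot's degrees of freedom.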
ContributorsIson, Mark (Author) / Artemiadis, Panagiotis (Thesis advisor) / Santello, Marco (Committee member) / Greger, Bradley (Committee member) / Berman, Spring (Committee member) / Sugar, Thomas (Committee member) / Fainekos, Georgios (Committee member) / Arizona State University (Publisher)
Created2015
Description
For the past two decades, advanced Limb Gait Simulators and Exoskeletons have been developed to improve walking rehabilitation. A Limb Gait Simulator is used to analyze the human step cycle and/or assist a user walking on a treadmill. Most modern limb gait simulators, such as ALEX, have proven effective and reliable through their use of motors, springs, cables, elastics, pneumatics, and reaction loads. These mechanisms apply internal forces and reaction loads to the body. External forces, by contrast, are those caused by an agent outside the system, such as air, water, or magnets. A design for an exoskeleton using external forces has seldom been attempted by researchers. This thesis project focuses on the development of a Limb Gait Simulator based on a purely external force, which proved effective in generating torque on the human leg. The external force is generated through air propulsion using an Electric Ducted Fan (EDF) motor. Such motors are typically used for remote-control airplanes, but their applications can go beyond this. The objective of this research is to generate torque on the human leg through control of the EDF engine's thrust and the opening/closing of the reverse-thruster flaps. The device qualifies as "assist as needed": the user is entirely in control of how much assistance he or she may want. Static thrust values for the EDF engine were recorded using a thrust test stand. The product of the thrust (N) and the distance along the thigh (m) is the resulting torque. With the motor running at maximum RPM, the highest torque value reached was 3.93 Nm. The EDF motor is powered by a 6S 5000 mAh LiPo battery. The torque could be increased by connecting a second battery in series, but this comes at a price. The designed limb gait simulator demonstrates that external forces, such as air, have potential in the development of future rehabilitation devices.
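The torque computation above is simply thrust times moment arm. A trivial sketch follows; the 14 N thrust and 0.28 m arm are assumed values chosen only to illustrate a peak near the reported 3.93 Nm, not measurements from the thesis.

```python
def leg_torque(thrust_n, arm_m):
    """Torque (Nm) from a thrust (N) applied at a distance (m) from the hip joint."""
    return thrust_n * arm_m

# With an assumed ~14 N peak static thrust acting 0.28 m down the thigh:
peak = leg_torque(14.0, 0.28)  # ~3.92 Nm, on the order of the reported peak
```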
ContributorsToulouse, Tanguy Nathan (Author) / Sugar, Thomas (Thesis director) / Artemiadis, Panagiotis (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2016-12
Description
A robotic swarm can be defined as a large group of inexpensive, interchangeable robots with limited sensing and/or actuating capabilities that cooperate (explicitly or implicitly) based on local communications and sensing in order to complete a mission. Its inherent redundancy provides flexibility and robustness to failures and environmental disturbances, which guarantees proper completion of the required task. At the same time, human intuition and cognition can prove very useful in extreme situations where a fast and reliable solution is needed. This idea led to the creation of the field of Human-Swarm Interfaces (HSI), which attempts to incorporate the human element into the control of robotic swarms for increased robustness and reliability. The aim of the present work is to extend the current state of the art in HSI by applying ideas and principles from the field of Brain-Computer Interfaces (BCI), which has proven very useful for people with motor disabilities. First, a preliminary investigation of the connection between brain activity and the observation of swarm collective behaviors is conducted. After showing that such a connection may exist, a hybrid BCI system is presented for the control of a swarm of quadrotors. The system is based on the combination of motor imagery and input from a game controller, and its feasibility is proven through an extensive experimental process. Finally, speech imagery is proposed as an alternative mental task for BCI applications, supported by a series of rigorous experiments and appropriate data analysis. This work suggests that the integration of BCI principles in HSI applications can be successful and can potentially lead to systems that are more intuitive for users than the current state of the art. At the same time, it motivates further research in the area and sets the stepping stones for the potential development of the field of Brain-Swarm Interfaces (BSI).
ContributorsKaravas, Georgios Konstantinos (Author) / Artemiadis, Panagiotis (Thesis advisor) / Berman, Spring M. (Committee member) / Lee, Hyunglae (Committee member) / Arizona State University (Publisher)
Created2017
Description
As robots become more prevalent, the need is growing for efficient yet stable control systems for applications with humans in the loop. As such, it is a challenge for scientists and engineers to develop robust and agile systems capable of detecting instability in teleoperated systems. Despite how much research has been done to characterize the spatiotemporal parameters of human arm motions for reaching and grasping, little has been done to characterize the behavior of human arm motion in response to control errors in a system. This investigation examines human corrective actions in response to error in an anthropomorphic teleoperated robot limb. Characterizing human corrective actions contributes to the development of control strategies capable of mitigating potential instabilities inherent in human-machine control interfaces. This characterization requires the simulation of a teleoperated anthropomorphic armature and the comparison of a human subject's arm kinematics in response to error against the same subject's kinematics without error. This was achieved using OpenGL software to simulate a teleoperated robot arm and an NDI motion tracking system to acquire the subject's arm position and orientation. Error was intermittently and programmatically introduced to the virtual robot's joints as the subject attempted to reach for several targets located around the arm. Comparing error-free human arm kinematics to error-prone kinematics revealed the addition of a bell-shaped velocity peak in the subject's tangential velocity profile. The size, extent, and location of the additional peak depended on target location and joint angle error. Some joint angle and target location combinations did not produce an additional peak but simply maintained the end-effector velocity at a low value until the target was reached. Additional joint angle error parameters and degrees of freedom are needed to continue this investigation.
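Bell-shaped tangential velocity peaks of the kind described above are often modeled with a minimum-jerk profile. The sketch below shows that standard model; using it here is an illustrative assumption, since the thesis does not specify a particular profile model.

```python
import numpy as np

def min_jerk_speed(t, T=1.0, d=0.3):
    """Tangential speed (m/s) at time t of a minimum-jerk reach of amplitude
    d (m) and duration T (s): the classic bell-shaped velocity profile.
    Derived from the minimum-jerk position x(s) = d*(10 s^3 - 15 s^4 + 6 s^5),
    with s = t/T clipped to [0, 1]."""
    s = np.clip(np.asarray(t, dtype=float) / T, 0.0, 1.0)
    return (d / T) * (30 * s**2 - 60 * s**3 + 30 * s**4)

# Speed is zero at the endpoints and peaks mid-movement.
v_mid = min_jerk_speed(0.5)
```

A corrective submovement would then appear as a second, smaller bell superimposed on this baseline profile, which is one way the observed "additional velocity peak" could be parameterized.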
ContributorsBevilacqua, Vincent Frank (Author) / Artemiadis, Panagiotis (Thesis director) / Santello, Marco (Committee member) / Trimble, Steven (Committee member) / Barrett, The Honors College (Contributor) / Mechanical and Aerospace Engineering Program (Contributor)
Created2013-05