Matching Items (12)
Description
As robotic systems are used in increasingly diverse applications, the interaction of humans and robots has become an important area of research. In many applications of physical human-robot interaction (pHRI), the robot and the human can be seen as cooperating to complete a task with some object of interest. Often these applications are in unstructured environments where many paths can accomplish the goal. This creates a need to communicate a preferred direction of motion between both participants in order to move in a coordinated way. This communication should be bidirectional in order to fully utilize the capabilities of both the robot and the human. Moreover, in cooperative tasks between two humans, one human will often operate as the leader of the task and the other as the follower, and these roles may switch during the task as needed. The need for communication extends to this leader-follower switching: there is a need not only to communicate the desire to switch roles but also to control the switching process. Impedance control has been used as a way of dealing with some of the complexities of pHRI. This investigation examined whether impedance control can be used to communicate a preferred direction between humans and robots. The first set of experiments tested whether a human could detect a preferred direction of a robot by grasping and moving an object coupled to the robot. The second set tested the reverse case: whether the robot could detect the preferred direction of the human. The ability to detect the preferred direction was shown to be up to 99% effective. Using these results, a control method allowing a human and robot to switch leader and follower roles during a cooperative task was implemented and tested. This method proved successful 84% of the time. The control method was then refined using adaptive control, resulting in lower interaction forces and a success rate of 95%.
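As a minimal illustration of the underlying idea (not the controller implemented in the thesis), the sketch below renders an anisotropic Cartesian impedance whose compliant axis encodes a preferred direction of motion: displacements along that axis meet little resistance, while orthogonal displacements are opposed. The 2-D setup, stiffness values, and damping ratio are illustrative assumptions.

```python
# Hedged sketch: a direction-dependent (anisotropic) impedance, soft along a
# preferred axis and stiff orthogonal to it. Parameters are illustrative only.
import numpy as np

def directional_stiffness(preferred_dir, k_soft=50.0, k_stiff=500.0):
    """2-D stiffness matrix that is compliant along `preferred_dir` and stiff
    orthogonal to it, so the human feels which direction is 'easy' to move."""
    d = np.asarray(preferred_dir, dtype=float)
    d /= np.linalg.norm(d)                       # unit vector of the preferred axis
    n = np.array([-d[1], d[0]])                  # orthogonal axis
    return k_soft * np.outer(d, d) + k_stiff * np.outer(n, n)

def impedance_force(K, D, x, x_ref, v):
    """Spring-damper force rendered by the impedance controller."""
    return -K @ (x - x_ref) - D @ v

K = directional_stiffness([1.0, 0.0])            # robot "prefers" motion along +x
D = 0.1 * K                                      # simple proportional damping (assumed)
f = impedance_force(K, D, x=np.array([0.02, 0.02]),
                    x_ref=np.zeros(2), v=np.zeros(2))
print(f)    # restoring force is ten times larger along y than along x
```

Switching leader and follower roles could, in principle, correspond to raising or lowering such gains, though the specific switching logic of the thesis is not reproduced here.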
Contributors: Whitsell, Bryan (Author) / Artemiadis, Panagiotis (Thesis advisor) / Santello, Marco (Committee member) / Santos, Veronica (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
As robots increasingly migrate out of factories and research laboratories and into our everyday lives, they need to move and act in environments designed for humans. For this reason, the need for anthropomorphic movements is of utmost importance. The objective of this thesis is to solve the inverse kinematics problem of redundant robot arms in a way that results in anthropomorphic configurations. The swivel angle of the elbow was used as a human arm motion parameter for the robot arm to mimic. The swivel angle is defined as the rotation angle of the plane defined by the upper and lower arm around a virtual axis that connects the shoulder and wrist joints. Using kinematic data recorded from human subjects during everyday tasks, the linear sensorimotor transformation model was validated and used to estimate the swivel angle, given the desired end-effector position. Defining the desired swivel angle resolves the kinematic redundancy of the robot arm. The proposed method was tested with an anthropomorphic redundant robot arm, and the computed motion profiles were compared to those of the human subjects. This thesis shows that the method computes anthropomorphic configurations for the robot arm, even if the robot arm has different link lengths than the human arm and starts its motion from random configurations.
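The swivel-angle definition above translates directly into a short computation. The sketch below estimates the angle from shoulder, elbow, and wrist positions; using gravity as the reference direction for the zero angle is an assumption, not something specified in the abstract.

```python
# Hedged sketch of the swivel angle: rotation of the arm plane about the
# shoulder-wrist axis, measured against a reference direction (assumed: gravity).
import numpy as np

def swivel_angle(shoulder, elbow, wrist, reference=(0.0, 0.0, -1.0)):
    """Signed swivel angle in radians from 3-D joint positions."""
    s, e, w = (np.asarray(p, dtype=float) for p in (shoulder, elbow, wrist))
    axis = w - s
    axis /= np.linalg.norm(axis)                     # shoulder-wrist (swivel) axis
    # Elbow position relative to its projection onto the swivel axis
    p = (e - s) - np.dot(e - s, axis) * axis
    # Reference direction projected into the plane normal to the axis
    r = np.asarray(reference, dtype=float)
    u = r - np.dot(r, axis) * axis
    p /= np.linalg.norm(p)
    u /= np.linalg.norm(u)
    return np.arctan2(np.dot(axis, np.cross(u, p)), np.dot(u, p))

# Example with made-up joint positions (metres): elbow below the shoulder-wrist line
print(np.degrees(swivel_angle([0, 0, 0], [0.15, -0.2, -0.1], [0.5, 0, 0])))
```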
Contributors: Wang, Yuting (Author) / Artemiadis, Panagiotis (Thesis advisor) / Mignolet, Marc (Committee member) / Santos, Veronica J (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Myoelectric control is filled with potential to significantly change human-robot interaction. Humans desire compliant robots that can safely interact in the dynamic environments associated with daily activities. As surface electromyography non-invasively measures limb motion intent and correlates with joint stiffness during co-contractions, it has been identified as a candidate for naturally controlling such robots. However, state-of-the-art myoelectric interfaces have struggled to achieve both enhanced functionality and long-term reliability. As demands on myoelectric interfaces trend toward simultaneous and proportional control of compliant robots, robust processing of multi-muscle coordinations, or synergies, plays a larger role in the success of the control scheme. This dissertation presents a framework enhancing the utility of myoelectric interfaces by exploiting motor skill learning and flexible muscle synergies for reliable long-term simultaneous and proportional control of multifunctional compliant robots. The interface is learned as a new motor skill specific to the controller, providing long-term performance enhancements without requiring any retraining or recalibration of the system. Moreover, the framework offers control of both motion and stiffness simultaneously for intuitive and compliant human-robot interaction. The framework is validated through a series of experiments characterizing motor learning properties and demonstrating control capabilities not seen previously in the literature. The results validate the approach as a viable option to remove the trade-off between functionality and reliability that has hindered state-of-the-art myoelectric interfaces. Thus, this research contributes to the expansion and enhancement of myoelectric-controlled applications beyond commonly perceived anthropomorphic and "intuitive control" constraints and into more advanced robotic systems designed for everyday tasks.
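For readers unfamiliar with muscle synergies, the sketch below shows one common, generic way to extract them from multi-channel EMG envelopes with non-negative matrix factorization. It is not the dissertation's algorithm; the data, channel counts, and number of synergies are synthetic placeholders.

```python
# Generic illustration (not this dissertation's method): muscle synergy extraction
# from EMG envelopes via non-negative matrix factorization (NMF).
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_samples, n_muscles, n_synergies = 2000, 8, 2

# Synthetic EMG envelopes: two underlying activation profiles mixed across 8 muscles
t = np.linspace(0, 20, n_samples)
true_H = np.column_stack([np.abs(np.sin(t)), np.abs(np.cos(0.5 * t))])  # activations
true_W = rng.random((n_synergies, n_muscles))                           # muscle weights
emg = true_H @ true_W + 0.01 * rng.random((n_samples, n_muscles))

# Factor emg ~ H @ W: H are time-varying synergy activations, W the muscle weightings
model = NMF(n_components=n_synergies, init="nndsvda", max_iter=500, random_state=0)
H = model.fit_transform(emg)        # (samples x synergies) activation signals
W = model.components_               # (synergies x muscles) synergy vectors
print(H.shape, W.shape)

# In a simultaneous/proportional scheme, signals like H could be mapped to
# continuous motion and stiffness commands for the robot.
```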
Contributors: Ison, Mark (Author) / Artemiadis, Panagiotis (Thesis advisor) / Santello, Marco (Committee member) / Greger, Bradley (Committee member) / Berman, Spring (Committee member) / Sugar, Thomas (Committee member) / Fainekos, Georgios (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Lower-limb prosthesis users have commonly recognized deficits in gait and posture control. However, existing methods of balance and mobility analysis are not sensitive enough to detect changes in prosthesis users' postural control and mobility in response to clinical interventions or experimental manipulations, and they often fail to detect differences between prosthesis users and non-amputee control subjects. This lack of sensitivity limits the ability of clinicians to make informed clinical decisions and presents challenges for insurance reimbursement of comprehensive clinical care and advanced prosthetic devices. These issues have directly impacted clinical care by restricting device options, increasing the financial burden on clinics, and limiting support for research and development. This work aims to establish experimental methods and outcome measures that are more sensitive than traditional methods to changes in balance and mobility in prosthesis users. Methods and analysis techniques were developed to probe aspects of balance and mobility control that may be specifically impacted by use of a prosthesis, and to present challenges similar to those experienced in daily life, in order to improve the detection of balance and mobility changes. Using the framework of cognitive resource allocation and dual-tasking, this work identified unique characteristics of prosthesis users' postural control and developed sensitive measures of gait variability. The results also provide broader insight into dual-task analysis and the motor-cognitive response to demanding conditions. Specifically, this work identified altered motor behavior in prosthesis users and the high cognitive demand of using a prosthesis. The residual standard deviation method was developed and demonstrated to be more effective than traditional gait variability measures at detecting the impact of dual-tasking. Additionally, spectral analysis of the center of pressure during standing identified altered somatosensory control in prosthesis users. These findings provide a new understanding of prosthesis use and new, highly sensitive techniques to assess balance and mobility in prosthesis users.
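As a generic illustration of the center-of-pressure spectral analysis mentioned above (not the dissertation's specific pipeline), the sketch below computes a Welch power spectral density of a synthetic standing COP signal and summarizes the relative power in two frequency bands; the sampling rate, signal model, and band limits are assumptions.

```python
# Generic sketch: spectral analysis of a standing center-of-pressure (COP) signal.
import numpy as np
from scipy.signal import welch

fs = 100.0                                   # assumed force-plate sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)                 # 60 s of quiet standing
rng = np.random.default_rng(1)
# Synthetic anterior-posterior COP: slow sway plus higher-frequency corrections
cop_ap = 5e-3 * np.sin(2 * np.pi * 0.3 * t) + 1e-3 * rng.standard_normal(t.size)

f, psd = welch(cop_ap, fs=fs, nperseg=1024)  # power spectral density estimate

def band_power(f, psd, lo, hi):
    """Integrated PSD between lo and hi Hz (rectangle rule)."""
    mask = (f >= lo) & (f < hi)
    return psd[mask].sum() * (f[1] - f[0])

total = band_power(f, psd, 0.0, 10.0)
print("relative power < 0.5 Hz :", band_power(f, psd, 0.0, 0.5) / total)
print("relative power 0.5-2 Hz :", band_power(f, psd, 0.5, 2.0) / total)
```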
Contributors: Howard, Charla Lindley (Author) / Abbas, James (Thesis advisor) / Buneo, Christopher (Committee member) / Lynskey, Jim (Committee member) / Santello, Marco (Committee member) / Artemiadis, Panagiotis (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
For a conventional quadcopter with four planar rotors, flight times range between 10 and 20 minutes, depending on the weight of the quadcopter and the size of the battery used. To increase flight time, either the weight of the quadcopter must be reduced or the battery capacity increased. Another approach is to increase the efficiency of the propellers. Previous research shows that ducting a propeller can increase the thrust produced by the rotor-duct system by up to 94%. This research focused on developing and testing a quadcopter with a central ducted rotor, which produces 60% of the total system thrust, and three peripheral rotors. This quadcopter will provide longer flight times while maintaining the same maneuvering flexibility in planar movements.
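As a back-of-the-envelope check of the stated thrust split, the sketch below computes per-rotor hover thrust when the central ducted rotor carries 60% of the load and the three peripheral rotors share the remainder; the vehicle mass is an assumed placeholder, not a figure from the thesis.

```python
# Rough hover-thrust budget for the described layout (mass value is assumed).
m = 1.5                               # assumed vehicle mass (kg)
g = 9.81                              # gravitational acceleration (m/s^2)

hover_thrust = m * g                  # total thrust required to hover (N)
center_thrust = 0.60 * hover_thrust   # central ducted rotor (per the abstract)
peripheral_each = 0.40 * hover_thrust / 3.0

print(f"total: {hover_thrust:.2f} N, center: {center_thrust:.2f} N, "
      f"each peripheral: {peripheral_each:.2f} N")
```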
Contributors: Lal, Harsh (Author) / Artemiadis, Panagiotis (Thesis advisor) / Lee, Hyunglae (Committee member) / Zhang, Wenlong (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Lower-limb wearable assistive robots could alter the user's gait kinematics by inputting external power, which can be interpreted as a mechanical perturbation to the subject's normal gait. This change in kinematics may affect dynamic stability. This work attempts to understand the effects of different kinds of physical assistance from these robots on gait dynamic stability.

A knee exoskeleton and an ankle assistive device (robotic shoe) are developed and used to provide walking assistance. The knee exoskeleton provides personalized knee-joint assistive torque during the stance phase. The robotic shoe is a lightweight mechanism that stores potential energy at heel strike and releases it through an active locking mechanism at terminal stance to provide push-up ankle torque and assist toe-off. Lower-limb kinematic time-series data are collected from subjects wearing these devices in both passive and active modes. The changes in lower-limb kinematics with and without these devices are first studied. Orbital stability, a commonly used measure that quantifies gait stability by calculating Floquet multipliers (FM), is employed to assess the effects of these wearable devices on gait stability. It is shown that wearing the passive knee exoskeleton makes the users' gait less orbitally stable, while active knee-joint assistance improves orbital stability compared to the passive mode. The robotic shoe affects only the targeted joint (the right ankle), and wearing the passive mechanism significantly increases the ankle-joint FM values, indicating reduced orbital stability during walking. Additional analysis is performed on a public data set of mechanically perturbed walking to show that orbital stability can quantify the effects of external mechanical perturbation on gait dynamic stability. This method can further be used as a control-design tool to ensure gait stability for users of lower-limb assistive devices.
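The Floquet-multiplier analysis referenced above follows a standard recipe: sample the gait state once per stride (a Poincaré section), fit a linearized return map about the mean cycle, and take the magnitudes of its eigenvalues, where values below one indicate orbital stability. The sketch below implements that recipe on synthetic data in place of the recorded kinematics.

```python
# Hedged sketch of the standard orbital-stability computation (synthetic data).
import numpy as np

def floquet_multipliers(states):
    """states: (n_strides x n_dims) gait state sampled once per stride, e.g. joint
    angles/velocities at heel strike. Returns |eigenvalues| of the least-squares
    linearized return map about the mean cycle."""
    x = np.asarray(states, dtype=float)
    x_star = x.mean(axis=0)                        # fixed point of the limit cycle
    d_now, d_next = x[:-1] - x_star, x[1:] - x_star
    # Solve d_next ~ d_now @ J.T in the least-squares sense
    J_T, *_ = np.linalg.lstsq(d_now, d_next, rcond=None)
    return np.abs(np.linalg.eigvals(J_T.T))

# Synthetic example: a stable 4-D return map perturbed by stride-to-stride noise
rng = np.random.default_rng(2)
A = np.diag([0.6, 0.4, 0.3, 0.2])
x = np.zeros((80, 4))
for k in range(79):
    x[k + 1] = A @ x[k] + 0.01 * rng.standard_normal(4)

fm = floquet_multipliers(x)
print("maximum Floquet multiplier magnitude:", round(float(fm.max()), 3))  # < 1 expected
```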
Contributors: Rezayat Sorkhabadi, Seyed Mostafa (Author) / Zhang, Wenlong (Thesis advisor) / Lee, Hyunglae (Committee member) / Artemiadis, Panagiotis (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
As robotics technology advances, robots are being created for situations where they collaborate with humans on complex tasks. For this collaboration to be safe and successful, it is important to understand what causes humans to trust robots more or less during a collaborative task. This research project aims to investigate human-robot trust through a collaborative game of logic played by a human and a robot together. This thesis details the development of a game of logic that could be used for this purpose. The game is based on a popular game in AI research called 'Wumpus World'. The original Wumpus World game was a low-interactivity game to be played by humans alone. In this project, the Wumpus World game is modified for a high degree of interactivity with a human player, while also allowing the game to be played simultaneously by an AI algorithm.
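For context, the sketch below encodes the percept model of the classic Wumpus World formulation (breeze next to pits, stench next to the wumpus, glitter on the gold square). The board layout is made up, and the thesis's modified, interactive version is not reproduced here.

```python
# Classic Wumpus World percepts on a 4x4 grid (illustrative layout, not the thesis's game).
WIDTH, HEIGHT = 4, 4
pits = {(3, 1), (3, 3), (4, 4)}        # assumed board layout
wumpus = (1, 3)
gold = (2, 3)

def neighbors(cell):
    x, y = cell
    return {(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 1 <= x + dx <= WIDTH and 1 <= y + dy <= HEIGHT}

def percepts(cell):
    """Percepts a player (human or AI) would receive on entering `cell`."""
    return {
        "breeze": bool(neighbors(cell) & pits),    # a pit is adjacent
        "stench": wumpus in neighbors(cell),       # the wumpus is adjacent
        "glitter": cell == gold,                   # gold is in this square
    }

print(percepts((2, 1)))   # breeze expected: the pit at (3, 1) is adjacent
```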
Contributors: Boateng, Andrew Owusu (Author) / Sodemann, Angela (Thesis director) / Martin, Thomas (Committee member) / Software Engineering (Contributor) / Engineering Programs (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
Human walking has been a highly studied topic in research communities because of its extreme importance to human functionality and mobility. A complex system of interconnected gait mechanisms in humans is responsible for generating robust and consistent walking motion over unpredictable ground and through challenging obstacles. One interesting aspect of human gait is the ability to adjust in order to accommodate varying surface grades. Typical approaches to investigating this gait function focus on incline and decline surface angles, but most experiments fail to address the effects of surface grades that cause ankle inversion and eversion. There have been several studies of ankle angle perturbation over wider ranges of grade orientations in static conditions; however, these studies do not account for effects during the gait cycle. Furthermore, contemporary studies on this topic neglect critical sources of unnatural stimulus in the design of investigative technology. It is hypothesized that investigating ankle angle perturbations in the frontal plane, particularly in the context of inter-leg coordination mechanisms, results in a more complete characterization of the effects of surface grade on human gait mechanisms. This greater understanding could potentially lead to significant applications in gait rehabilitation, especially for individuals who suffer impairment as a result of stroke. A wearable pneumatic device was designed to impose inversion and eversion perturbations on the ankle through simulated surface grade changes. This prototype device was fabricated, characterized, and tested to assess its effectiveness, and then used in a series of experiments on human subjects while data were gathered on muscular activation and gait kinematics. The characterization results show success in imposing inversion and eversion angle perturbations of approximately 9° with a response time of 0.5 s. Preliminary experiments focusing on inter-leg coordination with healthy human subjects show that one-sided inversion and eversion perturbations have virtually no effect on gait kinematics. However, changes in muscular activation from one-sided perturbations show statistical significance in key lower-limb muscles. Thus, the prototype device demonstrates novelty in the context of human gait research, with potential applications in rehabilitation.
Contributors: Barkan, Andrew (Author) / Artemiadis, Panagiotis (Thesis advisor) / Lee, Hyunglae (Committee member) / Marvi, Hamidreza (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
A robotic swarm can be defined as a large group of inexpensive, interchangeable robots with limited sensing and/or actuating capabilities that cooperate (explicitly or implicitly) based on local communication and sensing in order to complete a mission. Its inherent redundancy provides flexibility and robustness to failures and environmental disturbances, which guarantee the proper completion of the required task. At the same time, human intuition and cognition can prove very useful in extreme situations where a fast and reliable solution is needed. This idea led to the creation of the field of Human-Swarm Interfaces (HSI), which attempts to incorporate the human element into the control of robotic swarms for increased robustness and reliability. The aim of the present work is to extend the current state of the art in HSI by applying ideas and principles from the field of Brain-Computer Interfaces (BCI), which has proven very useful for people with motor disabilities. First, a preliminary investigation of the connection between brain activity and the observation of swarm collective behaviors is conducted. After showing that such a connection may exist, a hybrid BCI system is presented for the control of a swarm of quadrotors. The system is based on the combination of motor imagery and input from a game controller, and its feasibility is demonstrated through an extensive experimental process. Finally, speech imagery is proposed as an alternative mental task for BCI applications, supported by a series of rigorous experiments and appropriate data analysis. This work suggests that the integration of BCI principles in HSI applications can be successful and can potentially lead to systems that are more intuitive for users than the current state of the art. At the same time, it motivates further research in the area and lays the stepping stones for the potential development of the field of Brain-Swarm Interfaces (BSI).
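As a hedged illustration of the motor-imagery side of such a hybrid BCI (not the system built in this work), the sketch below trains a simple two-class decoder on synthetic band-passed EEG using per-channel log-variance features and linear discriminant analysis, a much-simplified stand-in for a typical motor-imagery pipeline.

```python
# Simplified, generic motor-imagery decoder on synthetic EEG (illustrative only).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_trials, n_channels, n_samples = 100, 8, 500     # assumed epoch dimensions

# Synthetic band-passed EEG: class 1 has higher variance on half of the channels,
# loosely mimicking event-related (de)synchronization differences between
# imagined movements.
y = rng.integers(0, 2, n_trials)
scale = np.ones((n_trials, n_channels))
scale[y == 1, : n_channels // 2] = 1.5
X = rng.standard_normal((n_trials, n_channels, n_samples)) * scale[:, :, None]

features = np.log(X.var(axis=2))                  # per-channel log band power
clf = LinearDiscriminantAnalysis()
print("cross-validated accuracy:", cross_val_score(clf, features, y, cv=5).mean())
```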
Contributors: Karavas, Georgios Konstantinos (Author) / Artemiadis, Panagiotis (Thesis advisor) / Berman, Spring M. (Committee member) / Lee, Hyunglae (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
As robots become more prevalent, the need is growing for efficient yet stable control systems for applications with humans in the loop. As such, it is a challenge for scientists and engineers to develop robust and agile systems that are capable of detecting instability in teleoperated systems. Despite how much research has been done to characterize the spatiotemporal parameters of human arm motion for reaching and grasping, little has been done to characterize the behavior of human arm motion in response to control errors in a system. The scope of this investigation is to study human corrective actions in response to error in an anthropomorphic teleoperated robot limb. Characterizing human corrective actions contributes to the development of control strategies that can mitigate potential instabilities inherent in human-machine control interfaces. Characterization of human corrective actions requires the simulation of a teleoperated anthropomorphic armature and the comparison of a human subject's arm kinematics in response to error against the arm kinematics without error. This was achieved using OpenGL software to simulate a teleoperated robot arm and an NDI motion tracking system to acquire the subject's arm position and orientation. Error was intermittently and programmatically introduced to the virtual robot's joints as the subject attempted to reach for several targets located around the arm. The comparison of error-free and error-prone human arm kinematics revealed the addition of a bell-shaped velocity peak in the human subject's tangential velocity profile. The size, extent, and location of the additional velocity peak depended on target location and joint angle error. Some combinations of joint angle and target location did not produce an additional peak but simply kept the end-effector velocity at a low value until the target was reached. Additional joint angle error parameters and degrees of freedom are needed to continue this investigation.
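The tangential-velocity analysis described above can be sketched as follows: differentiate the sampled 3-D end-effector positions, take the norm of the velocity to get the speed profile, and count its local peaks, where an extra peak corresponds to a corrective submovement. The data, sampling rate, and peak threshold below are synthetic assumptions.

```python
# Hedged sketch: tangential velocity profile and peak count from sampled positions.
import numpy as np
from scipy.signal import find_peaks

fs = 100.0                                   # assumed motion-capture rate (Hz)
t = np.arange(0, 1.5, 1 / fs)

# Synthetic reach: one main bell-shaped speed profile plus a smaller corrective bump
speed_main = np.exp(-((t - 0.5) / 0.12) ** 2)
speed_corr = 0.4 * np.exp(-((t - 1.1) / 0.08) ** 2)
position = np.cumsum((speed_main + speed_corr)[:, None]
                     * np.array([1.0, 0.2, 0.0]) / fs, axis=0)

velocity = np.gradient(position, 1 / fs, axis=0)   # numerical differentiation
tangential = np.linalg.norm(velocity, axis=1)      # speed along the path

peaks, _ = find_peaks(tangential, height=0.1 * tangential.max())
print("number of velocity peaks:", len(peaks))     # 2 here: main reach + correction
```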
Contributors: Bevilacqua, Vincent Frank (Author) / Artemiadis, Panagiotis (Thesis director) / Santello, Marco (Committee member) / Trimble, Steven (Committee member) / Barrett, The Honors College (Contributor) / Mechanical and Aerospace Engineering Program (Contributor)
Created: 2013-05