Matching Items (26)
Description
In a collaborative environment where multiple robots and human beings are expected to collaborate on a task, it becomes essential for a robot to be aware of the multiple agents working in its environment. A robot must also learn to adapt to different agents in the workspace and conduct its interactions based on the presence of these agents. Interaction Primitives is a theoretical framework that performs interaction learning from demonstrations in a two-agent work environment.

This document is an in-depth description of a new state-of-the-art Python framework for Interaction Primitives between two agents in single-task as well as multiple-task work environments, and an extension of the original framework to a work environment with multiple agents performing a single task. The original theory of Interaction Primitives has been extended to create a framework that captures correlations between more than two agents while performing a single task. The new framework is an intuitive, generic, easy-to-install, and easy-to-use Python library for applying Interaction Primitives in a work environment. The library was tested in simulated environments and in a controlled laboratory environment. The results and benchmarks of this library are available in the related sections of this document.
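The core inference step of Interaction Primitives, predicting one agent's trajectory parameters from an observed partner's, can be sketched as conditioning a joint Gaussian over basis-function weights. The dimensions, synthetic data, and coupling below are illustrative and not taken from the library's actual API:

```python
import numpy as np

# Sketch of the core Interaction Primitives inference step: treat the
# trajectory parameters (e.g., basis-function weights) of both agents as
# one joint Gaussian learned from demonstrations, then condition on the
# observed agent's weights to predict the partner's. All dimensions and
# data here are synthetic and illustrative.

rng = np.random.default_rng(0)
demos = rng.normal(size=(30, 8))       # 30 demos, 4 weights per agent
demos[:, 4:] += 0.9 * demos[:, :4]     # couple agent B's motion to agent A's

mu = demos.mean(axis=0)                # joint mean over both agents
cov = np.cov(demos, rowvar=False)      # 8x8 joint covariance

# Condition the joint Gaussian on observed weights of agent A.
obs_a = demos[0, :4]
S_aa, S_ba = cov[:4, :4], cov[4:, :4]
mu_b_given_a = mu[4:] + S_ba @ np.linalg.solve(S_aa, obs_a - mu[:4])
```

The same conditioning generalizes to more than two agents by enlarging the joint weight vector, which is the multi-agent extension the abstract describes.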
Contributors: Kumar, Ashish, M.S. (Author) / Amor, Hani Ben (Thesis advisor) / Zhang, Yu (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Reinforcement learning (RL) is a powerful methodology for teaching autonomous agents complex behaviors and skills. A critical component in most RL algorithms is the reward function -- a mathematical function that provides numerical estimates for desirable and undesirable states. Typically, the reward function must be hand-designed by a human expert and, as a result, the scope of a robot's autonomy and ability to safely explore and learn in new and unforeseen environments is constrained by the specifics of the designed reward function. In this thesis, I design and implement a stateful collision anticipation model with powerful predictive capability based upon my research of sequential data modeling and modern recurrent neural networks. I also develop deep reinforcement learning methods whose rewards are generated by self-supervised training and intrinsic signals. The main objective is to work towards the development of resilient robots that can learn to anticipate and avoid damaging interactions by combining visual and proprioceptive cues from internal sensors. The introduced solutions are inspired by pain pathways in humans and animals, because such pathways are known to guide decision-making processes and promote self-preservation. A new "robot dodge ball" benchmark is introduced in order to test the validity of the developed algorithms in dynamic environments.
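The abstract's contrast between a hand-designed reward function and an intrinsic, self-supervised one can be made concrete with a minimal sketch; the state fields, weights, and function names below are hypothetical illustrations, not the thesis's actual formulation:

```python
# Minimal contrast between a hand-designed reward and an intrinsic,
# self-supervised one. The state fields, weights, and names are
# hypothetical illustrations, not the thesis's actual formulation.

def hand_designed_reward(state):
    """Expert-tuned reward: requires encoding the task in advance."""
    return 1.0 * state["progress"] - 10.0 * state["collided"]

def intrinsic_reward(predicted_collision_prob):
    """Self-supervised alternative: penalize anticipated damaging
    contact (a pain-like signal) with no task-specific terms."""
    return -predicted_collision_prob
```

The intrinsic variant needs no expert tuning: its only input is the collision probability predicted by the learned anticipation model.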
Contributors: Richardson, Trevor W (Author) / Ben Amor, Heni (Thesis advisor) / Yang, Yezhou (Committee member) / Srivastava, Siddharth (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Many industries require workers in warehouse and stockroom environments to perform frequent lifting tasks. Over time these repeated tasks can lead to excess strain on the worker's body and reduced productivity. This project seeks to develop an exoskeletal wrist fixture to be used in conjunction with a powered exoskeleton arm to aid workers performing box-lifting tasks. Existing products aimed at improving worker comfort and productivity typically employ either fully powered exoskeleton suits or utilize minimally powered spring arms and/or fixtures. These designs either reduce stress to the user's body through powered arms and grippers operated via handheld controls which have limited functionality, or they use a more minimal setup that reduces some load but exposes the user's hands and wrists to injury by directing support to the forearm. The design proposed here seeks to strike a balance between size, weight, and power requirements, and also proposes a novel wrist exoskeleton design which minimizes stress on the user's wrists by directly interfacing with the object to be picked up. The design of the wrist exoskeleton was approached by initially selecting degrees of freedom and a range of motion (ROM) to accommodate. Feel and functionality were improved through an iterative prototyping process which yielded two primary designs. A novel "clip-in" method was proposed to allow the user to easily attach and detach from the exoskeleton. Designs utilized a contact surface intended to be used with dry fibrillary adhesives to maximize exoskeleton grip. Two final designs, which used two pivots in opposite kinematic order, were constructed and tested to determine the best kinematic layout. Two prototypes of the best design were created to be worn with passive test arms that attached to the user through a specially designed belt.
Contributors: Greason, Kenneth Berend (Author) / Sugar, Thomas (Thesis director) / Holgate, Matthew (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-12
Description
For the past two decades, advanced limb gait simulators and exoskeletons have been developed to improve walking rehabilitation. A limb gait simulator is used to analyze the human step cycle and/or assist a user walking on a treadmill. Most modern limb gait simulators, such as ALEX, have proven themselves effective and reliable through their usage of motors, springs, cables, elastics, pneumatics, and reaction loads. These mechanisms apply internal forces and reaction loads to the body. On the other hand, external forces are those caused by an agent outside the system, such as air, water, or magnets. A design for an exoskeleton using external forces has seldom been attempted by researchers. This thesis project focuses on the development of a limb gait simulator based on a pure external force and has proven its effectiveness in generating torque on the human leg. The external force is generated through air propulsion using an Electric Ducted Fan (EDF) motor. Such a motor is typically used for remote-control airplanes, but its applications can go beyond this. The objective of this research is to generate torque on the human leg through control of the EDF engine's thrust and the opening/closing of the reverse-thruster flaps. This device qualifies as "assist as needed": the user is entirely in control of how much assistance he or she may want. Static thrust values for the EDF engine are recorded using a thrust test stand. The product of the thrust (N) and the distance on the thigh (m) is the resulting torque. With the motor running at maximum RPM, the highest torque value reached was 3.93 N·m. The EDF motor is powered by a 6S 5000 mAh LiPo battery. This torque value could be increased by connecting a second battery in series, but this comes at a price. The designed limb gait simulator demonstrates that external forces, such as air, could have potential in the development of future rehabilitation devices.
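The torque arithmetic described above (thrust times moment arm on the thigh) can be written out directly; the particular thrust/arm split shown is illustrative, since the abstract reports only the 3.93 N·m product:

```python
# Torque (N*m) is the product of EDF thrust (N) and the moment arm on
# the thigh (m). The particular thrust/arm split below is illustrative;
# the abstract reports only the resulting 3.93 N*m peak.

def thrust_torque(thrust_n, moment_arm_m):
    """Torque produced by a thrust force acting at a lever arm."""
    return thrust_n * moment_arm_m

peak_torque = thrust_torque(15.72, 0.25)   # e.g., 15.72 N at a 0.25 m arm
```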
Contributors: Toulouse, Tanguy Nathan (Author) / Sugar, Thomas (Thesis director) / Artemiadis, Panagiotis (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-12
Description
To ensure system integrity, robots need to proactively avoid any unwanted physical perturbation that may cause damage to the underlying hardware. In this thesis work, we investigate a machine learning approach that allows robots to anticipate impending physical perturbations from perceptual cues. In contrast to other approaches that require knowledge about sources of perturbation to be encoded before deployment, our method is based on experiential learning. Robots learn to associate visual cues with subsequent physical perturbations and contacts. In turn, these extracted visual cues are then used to predict potential future perturbations acting on the robot. To this end, we introduce a novel deep network architecture which combines multiple sub-networks for dealing with robot dynamics and perceptual input from the environment. We present a self-supervised approach for training the system that does not require any labeling of training data. Extensive experiments in a human-robot interaction task show that a robot can learn to predict physical contact by a human interaction partner without any prior information or labeling. Furthermore, the network is able to successfully predict physical contact from either depth stream input or traditional video input, or using both modalities as input.
Contributors: Sur, Indranil (Author) / Amor, Heni B (Thesis advisor) / Fainekos, Georgios (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
In order for assistive mobile robots to operate in the same environment as humans, they must be able to navigate the same obstacles as humans do. Many elements are required to do this: a powerful controller which can understand the obstacle, and power-dense actuators which will be able to achieve the necessary limb accelerations and output energies. Rapid growth in information technology has made complex controllers, and the devices which run them, considerably light and cheap. The energy density of batteries, motors, and engines has not grown nearly as fast. This is problematic because biological systems are more agile and more efficient than robotic systems. This dissertation introduces design methods which may be used to optimize a multi-actuator robotic limb's natural dynamics in an effort to reduce energy waste. These energy savings decrease the robot's cost of transport and the weight of the required fuel storage system. To achieve this, an optimal design method, which allows the specialization of robot geometry, is introduced. In addition to optimal geometry design, a gearing optimization is presented which selects a gear ratio that minimizes the electrical power at the motor while considering the constraints of the motor. Furthermore, an efficient algorithm for the optimization of parallel stiffness elements in the robot is introduced. In addition to these optimal design tools, the KiTy SP robotic limb structure is also presented, a novel hybrid parallel-serial actuation method. This novel leg structure has many desirable attributes, such as three-dimensional end-effector positioning, low mobile mass, a compact form factor, and a large workspace. We also show that the KiTy SP structure outperforms the classical, biologically inspired serial limb structure.
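A gearing optimization of the kind described, choosing a ratio that minimizes motor electrical power, can be sketched as a simple sweep over candidate ratios. The motor model and all constants below are hypothetical placeholders, not the dissertation's actual formulation:

```python
# Illustrative gear-ratio selection: pick the ratio minimizing motor
# electrical power (output power + copper losses + a speed-dependent
# drivetrain loss) at one joint operating point. All constants are
# hypothetical; the dissertation's real model and constraints differ.

def electrical_power(ratio, joint_torque=10.0, joint_speed=2.0,
                     kt=0.05, resistance=0.5, visc=0.04):
    motor_torque = joint_torque / ratio      # torque reflected to motor
    motor_speed = joint_speed * ratio        # speed reflected to motor
    current = motor_torque / kt              # simple DC motor model
    copper_loss = resistance * current ** 2
    drivetrain_loss = visc * motor_speed     # loss grows with motor speed
    return motor_torque * motor_speed + copper_loss + drivetrain_loss

best_ratio = min(range(1, 201), key=electrical_power)
```

The sweep captures the trade-off the abstract alludes to: low ratios demand high current (copper losses), high ratios spin the motor fast (drivetrain losses), and the optimum sits in between.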
Contributors: Cahill, Nathan M (Author) / Sugar, Thomas (Thesis advisor) / Ren, Yi (Thesis advisor) / Holgate, Matthew (Committee member) / Berman, Spring (Committee member) / Artemiadis, Panagiotis (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Visual navigation is a useful and important task for a variety of applications. As the prevalence of robots increases, there is an increasing need for energy-efficient navigation methods as well. Many aspects of efficient visual navigation algorithms have been implemented in the literature, but there is a lack of work on evaluation of the efficiency of the image sensors. In this thesis, two methods are evaluated: adaptive image sensor quantization for traditional camera pipelines, as well as new event-based sensors for low-power computer vision. The first contribution in this thesis is an evaluation of varying levels of linear and logarithmic sensor quantization with the task of visual simultaneous localization and mapping (SLAM). This unconventional method can provide efficiency benefits with a trade-off between accuracy of the task and energy-efficiency. A new sensor quantization method, gradient-based quantization, is introduced to improve the accuracy of the task. This method lowers the bit level only in parts of the image that are less likely to be important to the SLAM algorithm, since lower bit levels signify better energy-efficiency but worse task accuracy. The third contribution is an evaluation of the efficiency and accuracy of event-based camera intensity representations for the task of optical flow. The results of performing learning-based optical flow are provided for each of five different reconstruction methods, along with ablation studies. Lastly, the challenges of an event feature-based SLAM system are presented, with results demonstrating the necessity for high-quality and high-resolution event data. The work in this thesis provides studies useful for examining trade-offs for an efficient visual navigation system with traditional and event vision sensors. The results of this thesis also provide multiple directions for future work.
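Linear versus logarithmic sensor quantization at a given bit depth can be illustrated with a short sketch; the thesis's exact pipeline is not given in the abstract, so this is a generic implementation of the two schemes:

```python
import numpy as np

# Generic sketch of linear vs. logarithmic intensity quantization at a
# chosen bit depth. The epsilon and normalization are illustrative, not
# the thesis's actual sensor pipeline.

def quantize_linear(img, bits):
    """Uniformly quantize intensities in [0, 1] to 2**bits levels."""
    levels = 2 ** bits
    return np.round(img * (levels - 1)) / (levels - 1)

def quantize_log(img, bits, eps=1e-3):
    """Quantize in the log domain: finer steps in dark regions."""
    lo, hi = np.log(eps), np.log(1.0 + eps)
    logged = (np.log(img + eps) - lo) / (hi - lo)   # map to [0, 1]
    return np.exp(quantize_linear(logged, bits) * (hi - lo) + lo) - eps
```

At the same bit budget, the logarithmic scheme spends more of its levels on dark pixels, which is the kind of accuracy/efficiency trade-off the evaluation studies.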
Contributors: Christie, Olivia Catherine (Author) / Jayasuriya, Suren (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Simultaneous localization and mapping (SLAM) has traditionally relied on low-level geometric or optical features. However, these feature-based SLAM methods often struggle with feature-less or repetitive scenes. Additionally, low-level features may not provide sufficient information for robot navigation and manipulation, leaving robots without a complete understanding of the 3D spatial world. Higher-level information is necessary to address these limitations. Fortunately, recent developments in learning-based 3D reconstruction allow robots to not only detect semantic meanings, but also recognize the 3D structure of objects from a few images. By incorporating this 3D structural information, SLAM can be improved from a low-level approach to a structure-aware approach. This work proposes a novel approach for multi-view 3D reconstruction using a recurrent transformer. This approach allows robots to accumulate information from multiple views and encode it into a compact latent space. The resulting latent representations are then decoded to produce 3D structural landmarks, which can be used to improve robot localization and mapping.
Contributors: Huang, Chi-Yao (Author) / Yang, Yezhou (Thesis advisor) / Turaga, Pavan (Committee member) / Jayasuriya, Suren (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Enabling robots to physically engage with their environment in a safe and efficient manner is an essential step towards human-robot interaction. To date, robots usually operate as pre-programmed workers that blindly execute tasks in highly structured environments crafted by skilled engineers. Changing the robots’ behavior to cover new duties or handle variability is an expensive, complex, and time-consuming process. However, with the advent of more complex sensors and algorithms, overcoming these limitations becomes within reach. This work proposes innovations in artificial intelligence, language understanding, and multimodal integration to enable next-generation grasping and manipulation capabilities in autonomous robots. The underlying thesis is that multimodal observations and instructions can drastically expand the responsiveness and dexterity of robot manipulators. Natural language, in particular, can be used to enable intuitive, bidirectional communication between a human user and the machine. To this end, this work presents a system that learns context-aware robot control policies from multimodal human demonstrations. Among the main contributions presented are techniques for (a) collecting demonstrations in an efficient and intuitive fashion, (b) methods for leveraging physical contact with the environment and objects, (c) the incorporation of natural language to understand context, and (d) the generation of robust robot control policies. The presented approach and systems are evaluated in multiple grasping and manipulation settings ranging from dexterous manipulation to pick-and-place, as well as contact-rich bimanual insertion tasks. Moreover, the usability of these innovations, especially when utilizing human task demonstrations and communication interfaces, is evaluated in several human-subject studies.
Contributors: Stepputtis, Simon (Author) / Ben Amor, Heni (Thesis advisor) / Baral, Chitta (Committee member) / Yang, Yezhou (Committee member) / Lee, Stefan (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Currently, one of the biggest limiting factors for long-term deployment of autonomous systems is the power constraint of a platform. In particular, for aerial robots such as unmanned aerial vehicles (UAVs), the energy resource is the main driver of mission planning and operation definitions, as everything revolves around flight time. The focus of this work is to develop a new method of energy storage and charging for autonomous UAV systems, for use during long-term deployments in a constrained environment. We developed a charging solution that allows pre-equipped UAV systems to land on top of designated charging pads and rapidly replenish their battery reserves using a contact charging point. This system is designed to work with all types of rechargeable batteries, focusing on Lithium Polymer (LiPo) packs that incorporate a battery management system for increased reliability. The project also explores optimization methods for fleets of UAV systems to increase charging efficiency and extend battery lifespans. Each component of this project was first designed and tested in computer simulation. Following positive feedback and results, prototypes for each part of this system were developed and rigorously tested. Results show that the contact charging method is able to charge LiPo batteries at a 1-C rate, the industry-standard rate, maintaining the same safety and efficiency standards as modern-day direct-connection chargers. Control software for these base stations was also created, to be integrated with a fleet management system; it optimizes UAV charge levels and distribution to extend LiPo battery lifetimes while still meeting expected mission demand. Each component of this project (hardware/software) was designed for manufacturing and implementation using industry-standard tools, making it ideal for large-scale implementations.
This system has been successfully tested with a fleet of UAV systems at Arizona State University, and is currently being integrated into an Arizona smart city environment for deployment.
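The 1-C charging rate quoted above follows from simple arithmetic: the charge current is numerically equal to the pack capacity, so a full charge takes roughly one hour. A minimal sketch (the 5000 mAh example capacity is illustrative, not a spec from this abstract):

```python
# A 1-C charge rate means a charging current numerically equal to the
# pack's capacity, so a full charge from empty takes about one hour.
# The 5000 mAh example capacity is illustrative.

def one_c_current_amps(capacity_mah):
    """Current (A) corresponding to a 1-C rate for a pack in mAh."""
    return capacity_mah / 1000.0

def charge_time_hours(capacity_mah, current_a):
    """Ideal full-charge time (h), ignoring taper and losses."""
    return (capacity_mah / 1000.0) / current_a
```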
Contributors: Mian, Sami (Author) / Panchanathan, Sethuraman (Thesis advisor) / Berman, Spring (Committee member) / Yang, Yezhou (Committee member) / McDaniel, Troy (Committee member) / Arizona State University (Publisher)
Created: 2018