Matching Items (139)
Description
Although current urban search and rescue (USAR) robots are little more than remotely controlled cameras, the end goal is for them to work alongside humans as trusted teammates. Natural language communications and performance data are collected as a team of humans works to carry out a simulated search and rescue task in an uncertain virtual environment. Two conditions are tested, emulating a remotely controlled robot versus an intelligent one, and differences in performance, situation awareness, trust, workload, and communications are measured. The intelligent-robot condition resulted in higher levels of performance and operator situation awareness (SA).
Contributors: Bartlett, Cade Earl (Author) / Cooke, Nancy J. (Thesis advisor) / Kambhampati, Subbarao (Committee member) / Wu, Bing (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
As the robotics industry becomes increasingly present in some of the more extreme environments such as battlefields, disaster sites, or extraplanetary exploration, it will be necessary to provide locomotive niche strategies that are optimal for each terrain. The hopping gait has been well studied in robotics and proven to be a potential method to fit some of these niche areas. There have been difficulties, however, in producing terrain-following controllers that maintain a robust, disturbance-resistant steady state.

This thesis discusses a controller that has shown the ability to produce these desired properties. A phase angle oscillator controller is shown to work remarkably well, both in simulation and on a one-degree-of-freedom robotic test stand.

Work was also done with an experimental quadruped with less successful results, but which did show potential for stability. Additional work is suggested for the quadruped.
Contributors: New, Philip Wesley (Author) / Sugar, Thomas G. (Thesis advisor) / Artemiadis, Panagiotis (Committee member) / Redkar, Sangram (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Wearable robots, including exoskeletons, powered prosthetics, and powered orthotics, must add energy to the person at an appropriate time to enhance, augment, or supplement human performance. Adding energy out of sync with the user can dramatically hurt performance, making correct timing with the user essential. Many human tasks such as walking, running, and hopping are repeating, or cyclic, tasks, and a robot can add energy in sync with the repeating pattern for assistance. A method has been developed to add energy at the appropriate point in the repeating limit cycle based on a phase oscillator. The phase oscillator eliminates time from the forcing function, which is based purely on the motion of the user. This approach has been simulated, implemented, and tested in a robotic backpack which facilitates carrying heavy loads. The device oscillates the load of the backpack, based on the motion of the user, in order to add energy at the correct time and thus reduce the amount of energy required for walking with a heavy load. Models were developed in Working Model 2-D, a dynamics simulation package, in conjunction with MATLAB to verify the theory and test control methods. The control system is robust and has successfully operated on a range of different users, each with their own distinct gait. The results of experimental testing validated the corresponding models.
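The core idea of such a phase oscillator, estimating phase purely from the user's motion so that no clock appears in the forcing function, can be sketched as follows. This is an illustrative sketch, not the thesis's controller: the function names, the sinusoidal forcing profile, and the gain are assumptions.

```python
import math

def phase_angle(x, x_dot, omega):
    # For roughly sinusoidal motion x = A*cos(omega*t), the pair
    # (x, -x_dot/omega) traces a circle, so atan2 recovers the phase
    # phi ~ omega*t from the user's motion alone -- no clock needed.
    return math.atan2(-x_dot / omega, x)

def assist_force(x, x_dot, omega, gain):
    # Forcing with -sin(phi) is proportional to velocity on the nominal
    # cycle, so energy is always added in the direction of motion.
    phi = phase_angle(x, x_dot, omega)
    return -gain * math.sin(phi)
```

Because time never appears, a controller of this form automatically slows down or speeds up with the user's own gait rather than imposing a fixed cadence.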
Contributors: Wheeler, Chase (Author) / Sugar, Thomas G. (Thesis advisor) / Redkar, Sangram (Thesis advisor) / Artemiadis, Panagiotis (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Fisheye cameras are special cameras that have a much larger field of view compared to conventional cameras. The large field of view comes at the price of non-linear distortions introduced near the boundaries of the images captured by such cameras. Despite this drawback, they are being used increasingly in many applications of computer vision, robotics, reconnaissance, astrophotography, surveillance, and automotive applications. The images captured by such cameras can be corrected for their distortion if the cameras are calibrated and the distortion function is determined. Calibration also allows fisheye cameras to be used in tasks involving metric scene measurement, metric scene reconstruction, and other simultaneous localization and mapping (SLAM) algorithms.

This thesis presents a calibration toolbox (FisheyeCDC Toolbox) that implements a collection of some of the most widely used techniques for calibration of fisheye cameras under one package. This enables an inexperienced user to calibrate his or her own camera without the need for a theoretical understanding of computer vision and camera calibration. This thesis also explores some of the applications of calibration, such as distortion correction and 3D reconstruction.
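As a sketch of what such calibration determines, a widely used polynomial fisheye model maps the angle theta between an incoming ray and the optical axis to a distorted angle theta_d = theta * (1 + k1*theta^2 + k2*theta^4 + k3*theta^6 + k4*theta^8). The coefficients below are illustrative, and the fixed-point inversion is one simple choice among several; this is not code from the toolbox itself.

```python
def distort_theta(theta, k):
    # Polynomial fisheye model: theta_d = theta * (1 + k1*t^2 + ... + k4*t^8)
    t2 = theta * theta
    return theta * (1 + k[0]*t2 + k[1]*t2**2 + k[2]*t2**3 + k[3]*t2**4)

def undistort_theta(theta_d, k, iters=30):
    # Invert the model by fixed-point iteration: repeatedly divide out the
    # polynomial factor. This converges for the small coefficients typical
    # of calibrated lenses, and is the basis of distortion correction.
    theta = theta_d
    for _ in range(iters):
        t2 = theta * theta
        theta = theta_d / (1 + k[0]*t2 + k[1]*t2**2 + k[2]*t2**3 + k[3]*t2**4)
    return theta
```

Once the coefficients are estimated by calibration, applying the inverse mapping per pixel yields the distortion-corrected image.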
Contributors: Kashyap Takmul Purushothama Raju, Vinay (Author) / Karam, Lina (Thesis advisor) / Turaga, Pavan (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Human running requires extensive training and conditioning for an individual to maintain high speeds (greater than 10 mph) for an extended duration. Studies have shown that running at peak speeds generates a high metabolic cost due to the use of the large muscle groups in the legs associated with the human gait cycle. Applying supplemental external and internal forces to the human body during the gait cycle has been shown to decrease the metabolic cost of walking, allowing individuals to carry additional weight and walk further distances. Significant research has been conducted on reducing the metabolic cost of walking; however, there are few if any documented studies that focus specifically on reducing the metabolic cost associated with high-speed running. Three mechanical systems were designed to work in concert with the human user to decrease metabolic cost and increase the range and speeds at which a human can run.

The design methods focus on mathematical modeling, simulation, and metabolic cost. Mathematical modeling and simulations are used to aid the design process of the robotic systems, and metabolic testing is regarded as the final analysis to determine the true effectiveness of the robotic prototypes. Metabolic data (VO2) measure the volumetric consumption of oxygen per minute per unit mass (ml/min/kg). Metabolic testing consists of analyzing the oxygen consumption of a test subject while performing a task naturally and then comparing that data with the analyzed oxygen consumption of the same task while using an assistive device.
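The comparison described here reduces to a simple percent-change calculation; the numbers in the usage note are illustrative, not measurements from the thesis.

```python
def metabolic_reduction_pct(vo2_natural, vo2_assisted):
    # Percent reduction in oxygen consumption (ml/min/kg) when the task is
    # performed with the assistive device, relative to the natural baseline.
    return 100.0 * (vo2_natural - vo2_assisted) / vo2_natural
```

For example, a hypothetical drop from 40 to 36 ml/min/kg would be a 10% reduction.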

Three devices were designed and tested to augment high-speed running. The first device, AirLegs V1, is a mostly aluminum exoskeleton with two pneumatic linear actuators connecting the lower back directly to the user's thighs, allowing the device to induce a torque on the leg by pushing and pulling on the user's thigh during running. The device also makes use of two smaller pneumatic linear actuators which drive cables connecting to small lever arms at the back of the heel, inducing a torque at the ankles. The second device, AirLegs V2, is also pneumatically powered but is considered a soft-suit version of the first device. It uses cables to transmit the forces created by actuators located vertically on the user's back. These cables then connect to the back of the user's knees, resulting in greater flexibility and range of motion of the legs. The third device, a jet pack, produces an external force against the user's torso to propel the user forward and upward, making it easier to run. Third-party testing, pilot demonstrations, and timed trials have demonstrated that all three devices effectively reduce the metabolic cost of running below that of natural running with no device.
Contributors: Kerestes, Jason (Author) / Sugar, Thomas (Thesis advisor) / Redkar, Sangram (Committee member) / Rogers, Bradley (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Advancements in computer vision and machine learning have added a new dimension to remote sensing applications with the aid of imagery analysis techniques. Applications such as autonomous navigation and terrain classification, which make use of image classification techniques, are challenging problems, and research is still being carried out to find better solutions. In this thesis, a novel method is proposed which uses image registration techniques to provide better image classification. This method reduces the error rate of classification by registering the images against previously obtained images before performing classification. The motivation behind this is the fact that images obtained in the same region which need to be classified will not differ significantly in characteristics. Hence, registration will provide an image that matches the previously obtained image more closely, thus providing better classification. To illustrate that the proposed method works, naïve Bayes and iterative closest point (ICP) algorithms are used for the image classification and registration stages respectively. This implementation was tested extensively in simulation using synthetic images and using a real-life data set, the Defense Advanced Research Projects Agency (DARPA) Learning Applied to Ground Robots (LAGR) dataset. The results show that the ICP algorithm does help naïve Bayes classify better, reducing the error rate by an average of about 10% on the synthetic data and about 7% on the actual datasets used.
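The classification stage can be illustrated with a minimal Gaussian naïve Bayes classifier over feature vectors. This is a generic sketch, not the thesis's implementation; the small variance floor is an assumption added for numerical safety.

```python
import numpy as np

def fit_gnb(X, y):
    # Per-class mean, variance (with a small floor), and class prior.
    return {c: (X[y == c].mean(axis=0),
                X[y == c].var(axis=0) + 1e-9,
                float(np.mean(y == c)))
            for c in np.unique(y)}

def predict_gnb(stats, x):
    # Pick the class maximizing the log joint under independent Gaussians
    # per feature (the "naive" conditional-independence assumption).
    def log_post(c):
        mu, var, prior = stats[c]
        return np.log(prior) - 0.5 * np.sum(
            np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
    return max(stats, key=log_post)
```

Registering each incoming image against its predecessor before extracting features shifts the test samples closer to the training distribution, which is why the error rate drops.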
Contributors: Muralidhar, Ashwini (Author) / Saripalli, Srikanth (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Effective tactile sensing in prosthetic and robotic hands is crucial for improving the functionality of such hands and enhancing the user's experience. Thus, improving the range of tactile sensing capabilities is essential for developing versatile artificial hands. Multimodal tactile sensors called BioTacs, which include a hydrophone and a force electrode array, were used to understand how grip force, contact angle, object texture, and slip direction may be encoded in the sensor data. Findings show that slip induced under conditions of high contact angles and grip forces resulted in significant changes in both AC and DC pressure magnitude and rate of change in pressure. Slip induced under conditions of low contact angles and grip forces resulted in significant changes in the rate of change in electrode impedance. Slip in the distal direction of a precision grip caused significant changes in pressure magnitude and rate of change in pressure, while slip in the radial direction of the wrist caused significant changes in the rate of change in electrode impedance. A strong relationship was established between slip direction and the rate of change in ratios of electrode impedance for radial and ulnar slip relative to the wrist. Consequently, establishing multiple thresholds or establishing a multivariate model may be a useful method for detecting and characterizing slip. Detecting slip for low contact angles could be done by monitoring electrode data, while detecting slip for high contact angles could be done by monitoring pressure data. Predicting slip in the distal direction could be done by monitoring pressure data, while predicting slip in the radial and ulnar directions could be done by monitoring electrode data.
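The monitoring strategy suggested by these findings can be sketched as a simple threshold rule. The cutoff angle and thresholds below are illustrative placeholders, not values from the study, and a real detector might use the multivariate model the abstract also proposes.

```python
def detect_slip(contact_angle_deg, pressure_rate, impedance_rate,
                angle_cutoff=30.0, pressure_thresh=5.0, impedance_thresh=0.2):
    # High contact angles / grip forces: slip shows up in the rate of
    # change of pressure. Low contact angles / grip forces: slip shows up
    # in the rate of change of electrode impedance.
    if contact_angle_deg >= angle_cutoff:
        return abs(pressure_rate) > pressure_thresh
    return abs(impedance_rate) > impedance_thresh
```

The same branching idea extends to slip-direction characterization by adding thresholds on the ratios of electrode impedance rates.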
Contributors: Hsia, Albert (Author) / Santos, Veronica J. (Thesis advisor) / Santello, Marco (Committee member) / Helms Tillery, Stephen I. (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Humans and robots need to work together as a team to accomplish certain shared goals due to the limitations of current robot capabilities. Human assistance is required because human capabilities are often better suited to certain tasks and complement robot capabilities in many situations. Given the necessity of human-robot teams, it has long been assumed that for a robotic agent to be an effective team member, it must be equipped with automated planning technologies that help it achieve the goals delegated to it by its human teammates, as well as deduce its own goals to proactively support its human counterparts by inferring their goals. However, there has been no systematic evaluation of the accuracy of this claim.

In my thesis, I perform a human factors analysis of the effectiveness of such automated planning technologies for remote human-robot teaming. In the first part of my study, I investigate the effectiveness of automated planning in remote human-robot teaming scenarios. In the second part, I investigate the effectiveness of a proactive robot assistant in such scenarios.

Both investigations are conducted in a simulated urban search and rescue (USAR) scenario in which the human-robot teams are deployed during the early phases of an emergency response to explore all areas of the disaster scene. Through both studies, I evaluate how effective automated planning technology is in helping human-robot teams move closer to human-human teams. I utilize both objective measures (such as accuracy and time spent on primary and secondary tasks, Robot Attention Demand, etc.) and a set of subjective Likert-scale questions (on situation awareness, immediacy, etc.) to investigate the trade-offs between different types of remote human-robot teams. The results from both studies suggest that intelligent robots with automated planning capability and proactive support ability are welcomed in general.
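One of the objective measures mentioned, Robot Attention Demand, is commonly defined in the HRI metrics literature as the fraction of task time the operator must devote to the robot. The sketch below assumes that standard definition rather than the exact formulation used in this study.

```python
def robot_attention_demand(interaction_time, neglect_time):
    # RAD = interaction effort over total time. A lower RAD leaves the
    # operator more free time (1 - RAD) for secondary tasks, which is one
    # way autonomy can move a human-robot team closer to a human-human one.
    return interaction_time / (interaction_time + neglect_time)
```

For instance, a robot that needs 20 s of operator attention out of every 100 s has a RAD of 0.2.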
Contributors: Narayanan, Vignesh (Author) / Kambhampati, Subbarao (Thesis advisor) / Zhang, Yu (Thesis advisor) / Cooke, Nancy J. (Committee member) / Fainekos, Georgios (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
In a collaborative environment where multiple robots and human beings are expected to collaborate to perform a task, it becomes essential for a robot to be aware of the multiple agents working in its environment. A robot must also learn to adapt to different agents in the workspace and conduct its interaction based on the presence of these agents. A theoretical framework called Interaction Primitives was introduced which performs interaction learning from demonstrations in a two-agent work environment.

This document is an in-depth description of a new state-of-the-art Python framework for Interaction Primitives between two agents in single-task as well as multiple-task work environments, and an extension of the original framework to a work environment with multiple agents performing a single task. The original theory of Interaction Primitives has been extended to create a framework that captures the correlation between more than two agents while performing a single task. The framework is an intuitive, generic, easy-to-install, and easy-to-use Python library that can be applied to use Interaction Primitives in a work environment. This library was tested in simulated environments and in a controlled laboratory environment. The results and benchmarks of this library are available in the related sections of this document.
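At its core, an Interaction Primitive maintains a joint Gaussian over the stacked basis-function weights of all agents and infers one agent's weights from another's by Gaussian conditioning. The sketch below conditions directly on the observed agent's full weight vector for brevity; the actual framework conditions on partial trajectory observations through the basis functions, and the function name is an assumption.

```python
import numpy as np

def infer_partner_weights(mu, Sigma, n_obs, w_obs):
    # Joint Gaussian over [w_observed; w_partner], learned from
    # demonstrations. Standard Gaussian conditioning gives the posterior
    # mean of the partner's weights:
    #   mu_p|o = mu_p + S_po @ S_oo^{-1} @ (w_obs - mu_o)
    mu_o, mu_p = mu[:n_obs], mu[n_obs:]
    S_oo = Sigma[:n_obs, :n_obs]
    S_po = Sigma[n_obs:, :n_obs]
    return mu_p + S_po @ np.linalg.solve(S_oo, w_obs - mu_o)
```

If the demonstrations show the partner's weights tracking, say, twice the observed agent's, the posterior mean reproduces that correlation; stacking weights for more than two agents extends the same conditioning to the multi-agent case.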
Contributors: Kumar, Ashish, M.S. (Author) / Amor, Hani Ben (Thesis advisor) / Zhang, Yu (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
The ultimate goal of human movement control research is to understand how the natural movements performed in daily reaching activities are controlled. Natural movements require coordination of multiple degrees of freedom (DOF) of the arm. Patterns of arm joint control were studied during daily functional tasks performed through rotation of the seven DOF of the arm. Movements were analyzed that imitated three activities of daily living: moving an empty soda can from a table and placing it at a farther position; moving the empty soda can from its initial position on the table to a position at shoulder level on a shelf; and moving the empty soda can from its initial position on the table to a position at eye level on a shelf. Kinematic and kinetic analyses were conducted for these three movements. The studied kinematic characteristics were: hand trajectory in the sagittal plane, displacements of the seven DOF, and the contribution of each DOF to hand velocity. The kinetic analysis involved computation of 3-dimensional vectors of muscle torque (MT), interaction torque (IT), gravity torque (GT), and net torque (NT) at the shoulder, elbow, and wrist. Using the relationship NT = MT + GT + IT, the roles of active control and of passive factors (gravitation and inter-segmental dynamics) in the rotation of each joint were assessed by computing the MT contribution (MTC) to NT. MTC was computed as the ratio of the signed projection of MT on NT to the NT magnitude. Despite the variety of joint movements available across the different tasks, three patterns of shoulder and elbow coordination prevailed in each movement: 1) active rotation of the shoulder and predominantly passive rotation of the elbow; 2) active rotation of the elbow and predominantly passive rotation of the shoulder; and 3) passive rotation of both joints. Analysis of wrist control suggested that MT mainly compensates for passive torque and adjusts wrist motion according to the requirements of each task.

In conclusion, it was observed that the three shoulder-elbow coordination patterns (during which at least one joint moved passively) represent joint control primitives underlying the performance of well-learned arm movements, although these patterns may be less prevalent during non-habitual movements.
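The MTC measure described above can be written compactly: the signed projection of MT onto NT is (MT . NT) / |NT|, and dividing by |NT| again gives MTC = (MT . NT) / |NT|^2. A minimal sketch of that computation (the function name is assumed):

```python
import numpy as np

def muscle_torque_contribution(MT, NT):
    # Signed projection of MT on NT, divided by the NT magnitude:
    #   MTC = ((MT . NT) / |NT|) / |NT| = (MT . NT) / |NT|^2
    MT, NT = np.asarray(MT, float), np.asarray(NT, float)
    return float(np.dot(MT, NT) / np.dot(NT, NT))
```

MTC = 1 means muscle torque fully accounts for net torque (active rotation), values near 0 indicate predominantly passive rotation, and negative values mean MT opposes NT.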
Contributors: Sansgiri, Dattaraj (Author) / Dounskaia, Natalia (Thesis advisor) / Schaefer, Sydney (Thesis advisor) / Buneo, Christopher (Committee member) / Arizona State University (Publisher)
Created: 2018