Matching Items (61)

Description
Myoelectric control is filled with potential to significantly change human-robot interaction. Humans desire compliant robots to safely interact in dynamic environments associated with daily activities. As surface electromyography non-invasively measures limb motion intent and correlates with joint stiffness during co-contractions, it has been identified as a candidate for naturally controlling such robots. However, state-of-the-art myoelectric interfaces have struggled to achieve both enhanced functionality and long-term reliability. As demands in myoelectric interfaces trend toward simultaneous and proportional control of compliant robots, robust processing of multi-muscle coordinations, or synergies, plays a larger role in the success of the control scheme. This dissertation presents a framework enhancing the utility of myoelectric interfaces by exploiting motor skill learning and flexible muscle synergies for reliable long-term simultaneous and proportional control of multifunctional compliant robots. The interface is learned as a new motor skill specific to the controller, providing long-term performance enhancements without requiring any retraining or recalibration of the system. Moreover, the framework offers control of both motion and stiffness simultaneously for intuitive and compliant human-robot interaction. The framework is validated through a series of experiments characterizing motor learning properties and demonstrating control capabilities not seen previously in the literature. The results validate the approach as a viable option to remove the trade-off between functionality and reliability that has hindered state-of-the-art myoelectric interfaces. Thus, this research contributes to the expansion and enhancement of myoelectric controlled applications beyond commonly perceived anthropomorphic and "intuitive control" constraints and into more advanced robotic systems designed for everyday tasks.
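The abstract does not describe how the synergies are computed; as a purely illustrative sketch, one common way to extract muscle synergies from multi-channel EMG envelopes is non-negative matrix factorization, shown below with hypothetical channel and synergy counts (these are assumptions, not values from the dissertation).

```python
# Illustrative sketch only: extracting muscle synergies from EMG envelopes via
# non-negative matrix factorization (NMF), a common approach in the synergy
# literature. This is not the dissertation's specific algorithm; the channel
# count, synergy count, and data are hypothetical.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
emg_envelopes = rng.random((8, 1000))   # 8 EMG channels x 1000 samples of non-negative envelopes

model = NMF(n_components=3, init="nndsvd", max_iter=500)  # assume 3 synergies
activations = model.fit_transform(emg_envelopes.T)        # (time, synergies): time-varying drives
synergies = model.components_                             # (synergies, channels): fixed muscle weightings

# A controller could map the low-dimensional `activations` to simultaneous and
# proportional motion/stiffness commands instead of using raw channel amplitudes.
print(synergies.shape, activations.shape)
```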
Contributors: Ison, Mark (Author) / Artemiadis, Panagiotis (Thesis advisor) / Santello, Marco (Committee member) / Greger, Bradley (Committee member) / Berman, Spring (Committee member) / Sugar, Thomas (Committee member) / Fainekos, Georgios (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
The Autonomous Vehicle (AV), also known as self-driving car, promises to be a game changer for the transportation industry. This technology is predicted to drastically reduce the number of traffic fatalities due to human error [21].

However, road driving at any reasonable speed involves some risks. Therefore, even with high-tech AV algorithms and sophisticated sensors, there may be unavoidable crashes due to imperfection of the AV systems, or unexpected encounters with wildlife, children and pedestrians. Whenever there is a risk involved, there is the need for an ethical decision to be made [33].

While ethical and moral decision-making in humans has long been studied by experts, the advent of artificial intelligence (AI) also calls for machine ethics. To study the different moral and ethical decisions made by humans, experts may use the Trolley Problem [34], which is a scenario where one must pull a switch near a trolley track to redirect the trolley to kill one person on the track or do nothing, which will result in the deaths of five people. While it is important to take into account the input of members of a society and perform studies to understand how humans crash during unavoidable accidents to help program moral and ethical decision-making into self-driving cars, using the classical trolley problem is not ideal, as it is unrealistic and does not represent moral situations that people face in the real world.

This work seeks to increase the realism of the classical trolley problem for use in studies on moral and ethical decision-making by simulating realistic driving conditions in an immersive virtual environment with unavoidable crash scenarios, to investigate how drivers crash during these scenarios. Chapter 1 gives an in-depth background into autonomous vehicles and relevant ethical and moral problems; Chapter 2 describes current state-of-the-art online tools and simulators that were developed to study moral decision-making during unavoidable crashes. Chapter 3 focuses on building the simulator and the design of the crash scenarios. Chapter 4 describes the human subjects experiments that were conducted with the simulator and their results, and Chapter 5 provides conclusions and avenues for future work.
Contributors: Kankam, Immanuella (Author) / Berman, Spring (Thesis advisor) / Johnson, Kathryn (Committee member) / Yong, Sze Zheng (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Construction work is mundane and tiring for workers without the assistance of machines. This challenge has significantly changed the trajectory of the construction industry by motivating the development of robots that can replace human workers. This thesis presents a computed torque controller that is designed to produce movements by a small-scale, 5 degree-of-freedom (DOF) robotic arm that are useful for construction operations, specifically bricklaying. A software framework for the robotic arm with motion and path planning features and different control capabilities has also been developed using the Robot Operating System (ROS).

First, a literature review of bricklaying construction activity and existing robots’ performance is discussed. After describing an overview of the required robot structure, a mathematical model is presented for the 5-DOF robotic arm. A model-based computed torque controller is designed for the nonlinear dynamic robotic arm, taking into consideration the dynamic and kinematic properties of the arm. For sustainable growth of this technology so that it is affordable to the masses, it is important that the energy consumption by the robot is optimized. In this thesis, the trajectory of the robotic arm is optimized using sequential quadratic programming. The results of the energy optimization procedure are also analyzed for different possible trajectories.

A construction testbed setup is simulated in the ROS platform to validate the designed controllers and optimized robot trajectories on different experimental scenarios. A commercially available 5-DOF robotic arm is modeled in the ROS simulators Gazebo and Rviz. The path and motion planning is performed using the Moveit-ROS interface and also implemented on a physical small-scale robotic arm. A Matlab-ROS framework for execution of different controllers on the physical robot is described. Finally, the results of the controller simulation and experiments are discussed in detail.
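For context, the generic computed torque (inverse dynamics) control law for a manipulator with dynamics M(q)q̈ + C(q,q̇)q̇ + g(q) = τ is shown below; the thesis presumably uses a controller of this general form, but its specific gain structure and values are not given in the abstract.

```latex
% Generic computed torque control law (textbook form); the thesis's exact gain
% structure and values are not specified in this abstract.
\tau \;=\; M(q)\bigl(\ddot{q}_d + K_d(\dot{q}_d - \dot{q}) + K_p(q_d - q)\bigr)
        \;+\; C(q,\dot{q})\,\dot{q} \;+\; g(q)
```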
Contributors: Gandhi, Sushrut (Author) / Berman, Spring (Thesis advisor) / Marvi, Hamidreza (Committee member) / Yong, Sze Zheng (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
This work considers the design of separating input signals in order to discriminate among a finite number of uncertain nonlinear models. Each nonlinear model corresponds to a system operating mode, an unobserved intent of another driver or robot, a fault type, an attack strategy, etc., and the separating inputs are designed such that the output trajectories of all the nonlinear models are guaranteed to be distinguishable from each other under any realization of uncertainties in the initial condition, model discrepancies, or noise. I propose a two-step approach. First, using an optimization-based approach, I over-approximate the nonlinear dynamics by uncertain affine models, as abstractions that preserve all of the system behaviors, such that any discrimination guarantees for the affine abstraction also hold for the original nonlinear system. Then, I propose a novel solution in the form of a mixed-integer linear program (MILP) to the active model discrimination problem for uncertain affine models, which includes the affine abstraction and thus the nonlinear models. I demonstrate the effectiveness of this approach for identifying the intention of other vehicles in a highway lane-changing scenario. For the abstraction, I explore two approaches. In the first, I construct the bounding planes using a mixed-integer nonlinear program (MINLP) formulation of the given system with appropriately designed constraints. In the second, I solve a linear programming (LP) problem that over-approximates the nonlinear function at only the grid points of a mesh with a given resolution and then accounts for the entire domain via an appropriate correction term. To achieve a desired approximation accuracy, the domain is also iteratively subdivided into subregions. This method applies to nonlinear functions with different degrees of smoothness, including Lipschitz continuous functions, and improves on existing approaches by enabling the use of tighter bounds. Finally, I compare the effectiveness of this approach with existing optimization-based methods in simulation and illustrate its applicability for estimator design.
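As a rough sketch of the second (grid-based) abstraction approach, one plausible linear program is shown below, assuming separate upper and lower affine bounds and a Lipschitz-type correction term σ(L, δ); the exact constraints and correction used in the thesis are not stated in this abstract.

```latex
% Illustrative form of a grid-based affine over-approximation LP (assumed, not
% taken from the thesis). x_i are mesh points with resolution \delta, L is a
% Lipschitz constant of f, and \sigma(L,\delta) is a correction term that
% extends validity from the grid points to the entire domain.
\begin{aligned}
\min_{\overline{A},\,\overline{b},\,\underline{A},\,\underline{b}}\quad
  & \sum_i \Bigl[(\overline{A}x_i + \overline{b}) - (\underline{A}x_i + \underline{b})\Bigr] \\
\text{s.t.}\quad
  & \underline{A}x_i + \underline{b} + \sigma(L,\delta) \;\le\; f(x_i) \;\le\;
    \overline{A}x_i + \overline{b} - \sigma(L,\delta), \qquad \forall\, x_i \in \text{mesh}.
\end{aligned}
```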
Contributors: Singh, Kanishka Raj (Author) / Yong, Sze Zheng (Thesis advisor) / Artemiadis, Panagiotis (Committee member) / Berman, Spring (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
This thesis presents an autonomous vehicle testbed which can be used to conduct studies on the interaction between human-driven vehicles and autonomous vehicles on the road. The testbed makes use of a fleet of robots, each a microcosm of an autonomous vehicle that performs vital tasks such as lane following, obeying traffic signals, and avoiding collisions with other vehicles on the road. The robots use real-time image processing and closed-loop control techniques to achieve automation. The testbed also features a manual control mode in which a user can choose to control the car with a joystick while viewing video relayed to the control station. Stochastic rogue vehicle processes are introduced into the system to emulate random behaviors in an autonomous vehicle. The testbed was used to perform a comparative study of the driving capabilities of the miniature self-driving car and a human driver.
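The abstract does not detail the image-processing pipeline; a minimal sketch of one common lane-following scheme (threshold the lane markings, find their centroid, and steer proportionally to the lateral offset) is given below, with hypothetical threshold and gain values that are not taken from the thesis.

```python
# Minimal lane-following sketch: threshold lane markings, compute their
# centroid, and apply proportional steering. Illustrative only; the testbed's
# actual pipeline, thresholds, and gains are not described in the abstract.
import cv2
import numpy as np

KP_STEER = 0.005  # hypothetical proportional gain

def steering_command(frame_bgr: np.ndarray) -> float:
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)  # bright lane lines
    roi = mask[mask.shape[0] // 2 :, :]                         # look at the lower half of the frame
    m = cv2.moments(roi, binaryImage=True)
    if m["m00"] == 0:
        return 0.0                                              # no lane detected: go straight
    lane_cx = m["m10"] / m["m00"]                               # lane centroid column
    error = lane_cx - roi.shape[1] / 2                          # lateral offset in pixels
    return -KP_STEER * error                                    # steering command
```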
Contributors: Subramanyam, Rakshith (Author) / Berman, Spring (Thesis advisor) / Yu, Honbin (Thesis advisor) / Jayasurya, Suren (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
This thesis focused on understanding how humans visually perceive swarm behavior through the use of swarm simulations and gaze tracking. The goal of this project was to determine the visual patterns subjects display while observing and supervising a swarm, as well as the swarm characteristics that affect these patterns. As an ultimate goal, it was hoped that this research will contribute to optimizing human-swarm interaction for the design of human supervisory controllers for swarms. To achieve the stated goals, two investigations were conducted. First, subjects' gaze was tracked while they observed a simulated swarm as it moved across the screen. This swarm varied in size, disturbance level in the positions of the agents, speed, and path curvature. Second, subjects were asked to play a supervisory role as they watched a swarm move across the screen toward targets. The subjects determined whether a collision would occur and with which target, while their responses as well as their gaze were tracked. For the observatory role, a model of human gaze was created, embodied in a second-order model similar to that of a spring-mass-damper system. This model was similar across subjects and stable. For the supervisory role, inherent weaknesses in human perception were found, such as the inability to predict the future position of curved paths. These findings are discussed in depth within the thesis. Overall, the results presented suggest that understanding human perception of swarms offers a new approach to the problem of swarm control. The ability to adapt controls to these strengths and weaknesses could lead to great strides in reducing the number of operators needed to control a single UAV, and toward one-person operation of a swarm.
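The abstract characterizes the gaze model only as a stable second-order, spring-mass-damper-like system; one standard way to write such a model, treating gaze position x(t) as tracking an input u(t) (for example, the swarm centroid), is sketched below. The thesis's fitted parameters and exact input definition are not given here.

```latex
% Generic second-order (spring-mass-damper style) tracking model of gaze;
% \omega_n and \zeta would be fitted per subject (values not reported in the abstract).
\ddot{x}(t) + 2\zeta\omega_n\,\dot{x}(t) + \omega_n^2\,x(t) = \omega_n^2\,u(t)
\qquad\Longleftrightarrow\qquad
\frac{X(s)}{U(s)} = \frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}
```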
Contributors: Whitton, Elena Michelle (Author) / Artemiadis, Panagiotis (Thesis director) / Berman, Spring (Committee member) / Barrett, The Honors College (Contributor) / Mechanical and Aerospace Engineering Program (Contributor)
Created: 2015-05
Description
In this paper, we propose an autonomous throwing and catching system to be developed as a preliminary step towards the refinement of a robotic arm capable of improving strength and motor function in the limb. This will be accomplished by first autonomizing simpler movements, such as throwing a ball. In this system, an autonomous thrower will detect a desired target through the use of image processing. The launch angle and direction necessary to hit the target will then be calculated, followed by the launching of the ball. The smart catcher will then detect the ball as it is in the air, calculate its expected landing location based on its initial trajectory, and adjust its position so that the ball lands in the center of the target. The thrower will then proceed to compare the actual landing position with the position where it expected the ball to land, and adjust its calculations accordingly for the next throw. By utilizing this method of feedback, the throwing arm will be able to automatically correct itself. This means that the thrower will ideally be able to hit the target exactly in the center within a few throws, regardless of any additional uncertainty in the system. This project will focus on the controller and image processing components necessary for the autonomous throwing arm to be able to detect the position of the target at which it will be aiming, and for the smart catcher to be able to detect the position of the projectile and estimate its final landing position by tracking its current trajectory.
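The abstract mentions computing a launch angle for a desired target and predicting the landing point from the initial trajectory; a minimal drag-free projectile sketch of both calculations is given below. The flat-ground, no-drag assumptions and all numeric values are illustrative, not details from the paper.

```python
# Drag-free projectile sketch: launch angle for a desired range, and landing
# prediction from an initial trajectory. Assumptions: flat ground, equal launch
# and landing heights, no aerodynamic drag; none of these values come from the paper.
import math

G = 9.81  # m/s^2

def launch_angle(target_range_m: float, launch_speed_mps: float) -> float:
    """Low-arc launch angle (rad) that lands at target_range_m, if reachable."""
    arg = G * target_range_m / launch_speed_mps**2
    if arg > 1.0:
        raise ValueError("target out of range at this launch speed")
    return 0.5 * math.asin(arg)

def predicted_landing_x(x0: float, z0: float, vx: float, vz: float) -> float:
    """Horizontal landing position given initial position (x0, z0) and velocity (vx, vz)."""
    t_land = (vz + math.sqrt(vz**2 + 2 * G * z0)) / G  # time until height z = 0
    return x0 + vx * t_land

# Example: aim 3 m away with a 7 m/s launch, then predict the landing point.
theta = launch_angle(3.0, 7.0)
x_land = predicted_landing_x(0.0, 0.0, 7.0 * math.cos(theta), 7.0 * math.sin(theta))
```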
Contributors: Lundberg, Kathie Joy (Co-author) / Thart, Amanda (Co-author) / Rodriguez, Armando (Thesis director) / Berman, Spring (Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
This thesis presents an approach to the design and implementation of an adaptive boundary coverage control strategy for a swarm robotic system. Several fields of study are relevant to this project, including dynamic modeling, control theory, programming, and robotic design. Tools and techniques from these fields were used to design and implement a model simulation and an experimental testbed. To achieve this goal, a simulation of the boundary coverage control strategy was first developed. This simulated model allowed for concept verification for different robot groups and boundary designs. The simulation consisted of a single, constantly expanding circular boundary with a modeled swarm of robots that autonomously allocate themselves around the boundary. Ultimately, this simulation was implemented in an experimental testbed consisting of mobile robots and a moving boundary wall to exhibit the behaviors of the simulated robots. It is hoped that the conclusions from this experiment will help drive further advancements in swarm robotic technology. The results presented show promise for future progress in adaptive control strategies for robotic swarms.
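The allocation rule itself is not specified in the abstract; below is a minimal sketch of one simple scheme in which N simulated robots hold evenly spaced slots on a circular boundary whose radius grows over time. The fixed slot assignment, gains, and rates are illustrative assumptions rather than the thesis's adaptive strategy.

```python
# Minimal sketch: N agents hold evenly spaced slots on an expanding circular
# boundary. The thesis's actual adaptive allocation strategy is not described
# in the abstract; the fixed-slot assignment and constants here are assumptions.
import numpy as np

N = 12            # number of robots (hypothetical)
GROWTH = 0.02     # boundary expansion rate, m/s (hypothetical)
K_GAIN = 1.5      # proportional gain toward the assigned slot (hypothetical)
DT = 0.05         # simulation time step, s

slots = 2 * np.pi * np.arange(N) / N                   # evenly spaced target angles
positions = np.random.default_rng(1).random((N, 2))    # initial xy positions

radius = 1.0
for _ in range(1000):
    radius += GROWTH * DT                              # boundary expands over time
    targets = radius * np.column_stack((np.cos(slots), np.sin(slots)))
    velocities = K_GAIN * (targets - positions)        # move each robot toward its slot
    positions += velocities * DT
```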
Contributors: Murphy, Hunter Nicholas (Author) / Berman, Spring (Thesis director) / Marvi, Hamid (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05
Description
This thesis details the design and construction of a torque-controlled robotic gripper for use with the Pheeno swarm robotics platform. This project required expertise from several fields of study, including robotic design, programming, rapid prototyping, and control theory. An electronic Inertial Measurement Unit and a DC motor were used, along with 3D-printed plastic components and an electronic motor control board, to develop a functional open-loop-controlled gripper for use in collective transportation experiments. Code was developed that acquires and filters rate-of-rotation data, alongside code that allows for straightforward control of the DC motor through an experimentally derived relationship between the voltage applied to the motor and its torque output. Additionally, several versions of the physical components are described through their development.
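A minimal sketch of the two pieces of functionality the abstract mentions (filtering the IMU rate-of-rotation data and mapping a desired torque to an applied motor voltage through an experimentally derived relationship) is shown below; the filter constant and the linear calibration coefficients are hypothetical placeholders, not values from the thesis.

```python
# Minimal sketch: exponential low-pass filtering of IMU rate-of-rotation data
# and a linear voltage-to-torque calibration. The filter constant and the
# calibration coefficients are hypothetical, not values from the thesis.

ALPHA = 0.2          # low-pass filter smoothing factor (hypothetical)
K_T = 0.08           # N*m of torque per volt, from a hypothetical linear fit
V_OFFSET = 0.6       # volts needed to overcome friction (hypothetical)
V_MAX = 12.0         # motor supply limit, volts (hypothetical)

def lowpass(prev_filtered: float, raw_rate: float) -> float:
    """Exponential moving average of the gyro rate-of-rotation signal."""
    return ALPHA * raw_rate + (1.0 - ALPHA) * prev_filtered

def voltage_for_torque(desired_torque: float) -> float:
    """Invert a linear torque-vs-voltage fit to get the open-loop command voltage."""
    v = V_OFFSET + desired_torque / K_T
    return max(0.0, min(V_MAX, v))   # clamp to the motor driver's range
```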
Contributors: Mohr, Brennan (Author) / Berman, Spring (Thesis director) / Ren, Yi (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / School for Engineering of Matter, Transport & Energy (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
In the next decade or so, there will be a shift in the transportation industry across the world. Autonomous vehicles (AVs) are already being tested in the Greater Phoenix area, showing that the technology has matured enough to enter the public eye. Although this technology is not yet released commercially (for the most part), it is being used and will continue to be used to develop a safer future. Because human error causes a high proportion of accidents, many expect that autonomous vehicles will be safer than human drivers. They do still require driver attention and sometimes intervention to ensure safety, but for the most part are much safer. In the United States alone, there were 40,000 deaths due to car accidents last year [1]. If traffic fatalities were considered a disease, this would be an epidemic. The technology behind autonomous vehicles will allow for a much safer environment and increased mobility and independence for people who cannot drive and struggle with public transport. There are many opportunities for autonomous vehicles in the transportation industry. Companies can save much more money on shipping by cutting the costs of human drivers and trucks on the road, even allowing for simpler drop shipments should the necessary AI be developed. Research is even being done by several labs at Arizona State University. For example, Dr. Spring Berman's Autonomous Collective Systems Lab has been collaborating with Dr. Nancy Cooke of Human Systems Engineering to develop a traffic testbed, CHARTopolis, to study the risks of driver-AV interactions and the psychological effects of AVs on human drivers on a small scale. This testbed will be used by researchers from their labs and others to develop testing on reaction, trust, and user experience with AVs in a safe environment that simulates conditions similar to those experienced by full-size AVs. Using a new type of small robot that emulates an AV, developed in Dr. Berman's lab, participants will be able to remotely drive around a model city environment and interact with other AV-like robots, using the cameras and LiDAR sensors on the remotely driven robot to guide them.
Although these commercial and research systems are still in testing, it is important to understand how AVs are being marketed to the general public and how they are perceived, so that one day they may be effectively adopted into everyday life. People do not want to see a car they do not trust on the same roads as them, so the questions are: why don’t people trust them, and how can companies and researchers improve the trustworthiness of the vehicles?
Contributors: Shuster, Daniel Nadav (Author) / Berman, Spring (Thesis director) / Cooke, Nancy (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05