Matching Items (19)
Filtering by
- All Subjects: engineering
- Creators: Berman, Spring
Description
This work presents the integration of user intent detection and control in the development of the fluid-driven, wearable, continuum Soft Poly-Limb (SPL). The SPL utilizes the numerous traits of soft robotics to enable a novel approach to providing safe and compliant mobile manipulation assistance to healthy and impaired users. This wearable system equips the user with an additional limb made of soft materials that can be controlled to produce complex three-dimensional motion in space, like its biological counterparts with hydrostatic muscles. Similar to an elephant trunk, the SPL is able to manipulate objects using various end effectors, such as suction adhesion or a soft grasper, and can also wrap its entire length around objects for manipulation. User control of the limb is demonstrated using multiple user intent detection modalities. Further, the performance of the SPL is studied by testing its capability to interact safely and closely around a user through a spatial mobility test. Finally, the limb's ability to assist the user is explored through multitasking scenarios and pick-and-place tests with varying mounting locations of the arm around the user's body. The results of these assessments demonstrate the SPL's ability to safely interact with the user while exhibiting promising performance in assisting the user with a wide variety of tasks, in both work and general living scenarios.
Contributors: Vale, Nicholas Marshall (Author) / Polygerinos, Panagiotis (Thesis advisor) / Zhang, Wenlong (Committee member) / Artemiadis, Panagiotis (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
The Autonomous Vehicle (AV), also known as the self-driving car, promises to be a game changer for the transportation industry. This technology is predicted to drastically reduce the number of traffic fatalities due to human error [21].
However, road driving at any reasonable speed involves some risk. Therefore, even with high-tech AV algorithms and sophisticated sensors, there may be unavoidable crashes due to imperfections in the AV systems, or unexpected encounters with wildlife, children, and pedestrians. Whenever risk is involved, an ethical decision must be made [33].
While ethical and moral decision-making in humans has long been studied by experts, the advent of artificial intelligence (AI) also calls for machine ethics. To study the different moral and ethical decisions made by humans, experts may use the Trolley Problem [34], a scenario in which one must either pull a switch to redirect a trolley onto a track where it will kill one person, or do nothing, resulting in the deaths of five people. While it is important to take into account the input of members of society and to study how humans crash during unavoidable accidents, in order to help program moral and ethical decision-making into self-driving cars, the classical trolley problem is not ideal: it is unrealistic and does not represent moral situations that people face in the real world.
This work seeks to increase the realism of the classical trolley problem for use in studies on moral and ethical decision-making by simulating realistic driving conditions in an immersive virtual environment with unavoidable crash scenarios, to investigate how drivers crash during these scenarios. Chapter 1 gives an in-depth background on autonomous vehicles and relevant ethical and moral problems; Chapter 2 describes current state-of-the-art online tools and simulators that were developed to study moral decision-making during unavoidable crashes. Chapter 3 focuses on building the simulator and the design of the crash scenarios. Chapter 4 describes human subjects experiments that were conducted with the simulator and their results, and Chapter 5 provides conclusions and avenues for future work.
Contributors: Kankam, Immanuella (Author) / Berman, Spring (Thesis advisor) / Johnson, Kathryn (Committee member) / Yong, Sze Zheng (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Robotic swarms can potentially perform complicated tasks, such as exploration and mapping at large space and time scales, in a parallel and robust fashion. This thesis presents strategies for mapping environmental features of interest (specifically obstacles, collision-free paths, metric maps, and scalar density fields) in an unknown domain using data obtained by a swarm of resource-constrained robots. First, an approach was developed for mapping a single obstacle using a swarm of point-mass robots with both directed and random motion. The swarm population dynamics are modeled by a set of advection-diffusion-reaction partial differential equations (PDEs) in which a spatially-dependent indicator function marks the presence or absence of the obstacle in the domain. The indicator function is estimated by solving an optimization problem with PDEs as constraints. Second, a methodology was proposed for constructing a topological map of an unknown environment, which indicates collision-free paths for navigation, from data collected by a swarm of finite-sized robots. As an initial step, the number of topological features in the domain was quantified by applying tools from algebraic topology to a probability function over the explored region that indicates the presence of obstacles. A topological map of the domain is then generated using a graph-based wave propagation algorithm. This approach is further extended, enabling the technique to construct a metric map of an unknown domain with obstacles using uncertain position data collected by a swarm of resource-constrained robots, filtered using intensity measurements of an external signal. Next, a distributed method was developed to construct the occupancy grid map of an unknown environment using a swarm of inexpensive robots or mobile sensors with limited communication. In addition, an exploration strategy combining information-theoretic ideas with Lévy walks was also proposed.
Finally, the problem of reconstructing a two-dimensional scalar field using observations from a subset of a sensor network in which each node communicates its local measurements to its neighboring nodes was addressed. This problem reduces to estimating the initial condition of a large interconnected system with first-order linear dynamics, which can be solved as an optimization problem.
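As a rough, self-contained illustration (not taken from the thesis), the final reconstruction step can be posed as linear least squares: for first-order linear dynamics x[k+1] = A x[k] with node measurements y[k] = C x[k], we have x[k] = A^k x0, so stacking the measurements gives an overdetermined linear system in the initial condition x0. The matrices and dimensions below are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, T = 8, 4, 12                 # states, observed nodes, time steps

A = 0.9 * np.eye(n) + 0.2 * rng.standard_normal((n, n))  # linear dynamics
C = np.zeros((m, n))
C[np.arange(m), rng.choice(n, size=m, replace=False)] = 1.0  # observe a node subset
x0_true = rng.standard_normal(n)

# Simulate forward and stack the observation equations y[k] = C @ A^k @ x0.
blocks, ys = [], []
x, Ak = x0_true.copy(), np.eye(n)
for _ in range(T):
    ys.append(C @ x)
    blocks.append(C @ Ak)          # row block C @ A^k
    x = A @ x
    Ak = A @ Ak

H = np.vstack(blocks)              # stacked observability-style matrix
y = np.concatenate(ys)
x0_est, *_ = np.linalg.lstsq(H, y, rcond=None)
```

In the noiseless case the estimate is exact whenever the stacked matrix has full column rank, i.e. the pair (A, C) is observable over the horizon.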
Contributors: Ramachandran, Ragesh Kumar (Author) / Berman, Spring M (Thesis advisor) / Mignolet, Marc (Committee member) / Artemiadis, Panagiotis (Committee member) / Marvi, Hamid (Committee member) / Robinson, Michael (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
In this paper, we propose an autonomous throwing and catching system to be developed as a preliminary step towards the refinement of a robotic arm capable of improving strength and motor function in the limb. This will be accomplished by first automating simpler movements, such as throwing a ball. In this system, an autonomous thrower will detect a desired target through the use of image processing. The launch angle and direction necessary to hit the target will then be calculated, followed by the launching of the ball. The smart catcher will then detect the ball as it is in the air, calculate its expected landing location based on its initial trajectory, and adjust its position so that the ball lands in the center of the target. The thrower will then compare the actual landing position with the expected landing position, and adjust its calculations accordingly for the next throw. By utilizing this method of feedback, the throwing arm will be able to automatically correct itself, meaning that the thrower will ideally be able to hit the target exactly in the center within a few throws, regardless of any additional uncertainty in the system. This project will focus on the controller and image processing components necessary for the autonomous throwing arm to detect the position of the target at which it will be aiming, and for the smart catcher to detect the position of the projectile and estimate its final landing position by tracking its current trajectory.
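As a minimal sketch (assuming drag-free ballistics, which the actual system need not assume), the landing-point prediction and the feedback correction described above might look like the following; the gain value is an arbitrary illustrative choice:

```python
import math

G = 9.81  # m/s^2, gravitational acceleration

def predict_landing(v0, angle_deg, correction=0.0):
    """Drag-free range of a projectile launched from ground level."""
    theta = math.radians(angle_deg)
    return (v0 ** 2) * math.sin(2 * theta) / G + correction

def update_correction(correction, predicted, actual, gain=0.5):
    """Shift the next prediction toward the observed landing point."""
    return correction + gain * (actual - predicted)

# One feedback iteration: the throw lands 0.3 m past the prediction,
# so the next estimate is nudged halfway toward the observation.
pred = predict_landing(5.0, 45.0)
corr = update_correction(0.0, pred, pred + 0.3)
```

Repeating the update across throws drives the residual error toward zero even when the physical system deviates from the idealized model, which is the self-correcting behavior the abstract describes.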
Contributors: Lundberg, Kathie Joy (Co-author) / Thart, Amanda (Co-author) / Rodriguez, Armando (Thesis director) / Berman, Spring (Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
This thesis details the design and construction of a torque-controlled robotic gripper for use with the Pheeno swarm robotics platform. This project required expertise from several fields of study, including robotic design, programming, rapid prototyping, and control theory. An electronic inertial measurement unit and a DC motor were used, along with 3D-printed plastic components and an electronic motor control board, to develop a functional open-loop-controlled gripper for use in collective transportation experiments. Code was developed that effectively acquired and filtered rate-of-rotation data, alongside other code that allows for straightforward control of the DC motor through experimentally derived relationships between the voltage applied to the motor and its torque output. Additionally, several versions of the physical components are described through their development.
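A minimal sketch of the open-loop scheme described above, assuming a linear voltage-torque fit; the slope, offset, and supply limit below are invented calibration values, not those measured in the thesis:

```python
# Invented calibration constants for illustration only.
V_MAX = 6.0       # assumed motor driver supply voltage, V
K_T = 12.0        # assumed fitted slope, V per N*m
V_OFFSET = 0.4    # assumed voltage needed to overcome static friction, V

def voltage_for_torque(torque_nm):
    """Invert the fitted voltage-torque line to get a motor command."""
    v = V_OFFSET + K_T * torque_nm
    return max(0.0, min(V_MAX, v))   # clamp to the driver's valid range

# e.g. a 0.2 N*m grip request maps to 0.4 + 12 * 0.2 = 2.8 V
command = voltage_for_torque(0.2)
```

Because the loop is open (no torque sensor feedback), accuracy rests entirely on the quality of the experimentally derived fit, which is why the calibration step matters.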
Contributors: Mohr, Brennan (Author) / Berman, Spring (Thesis director) / Ren, Yi (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / School for Engineering of Matter, Transport & Energy (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
In the next decade or so, there will be a shift in the transportation industry across the world. Autonomous vehicles (AVs) are already being tested in the Greater Phoenix area, showing that the technology has matured to a level visible to the public. Although this technology is not yet released commercially (for the most part), it is being used and will continue to be used to develop a safer future. With a high incidence of human error causing accidents, many expect that autonomous vehicles will be safer than human drivers, although they still require driver attention and occasional intervention to ensure safety. In the United States alone, there were 40,000 deaths due to car accidents last year [1]. If traffic fatalities were considered a disease, this would be an epidemic. The technology behind autonomous vehicles will allow for a much safer environment and increased mobility and independence for people who cannot drive and struggle with public transport. There are many opportunities for autonomous vehicles in the transportation industry. Companies can save money on shipping by cutting the costs of human drivers and trucks on the road, even allowing for simpler drop shipments should the necessary AI be developed. Research is even being done by several labs at Arizona State University. For example, Dr. Spring Berman's Autonomous Collective Systems Lab has been collaborating with Dr. Nancy Cooke of Human Systems Engineering to develop a traffic testbed, CHARTopolis, to study the risks of driver-AV interactions and the psychological effects of AVs on human drivers on a small scale. This testbed will be used by researchers from their labs and others to develop testing on reaction, trust, and user experience with AVs in a safe environment that simulates conditions similar to those experienced by full-size AVs. Using a new type of small robot that emulates an AV, developed in Dr. Berman's lab, participants will be able to remotely drive around a model city environment and interact with other AV-like robots, using the cameras and LiDAR sensors on the remotely driven robot to guide them.
Although these commercial and research systems are still in testing, it is important to understand how AVs are being marketed to the general public and how they are perceived, so that one day they may be effectively adopted into everyday life. People do not want to see a car they do not trust on the same roads as them, so the questions are: why don’t people trust them, and how can companies and researchers improve the trustworthiness of the vehicles?
Contributors: Shuster, Daniel Nadav (Author) / Berman, Spring (Thesis director) / Cooke, Nancy (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
The quality of life of many people is lowered by impediments to walking ability caused by neurological conditions such as strokes. Since the ankle joint plays an important role in locomotion, it is a common subject of study in rehabilitation research. Robotic devices such as active ankle-foot orthoses and powered exoskeletons have the potential to be used directly in physical therapy or indirectly in research pursuing more effective rehabilitation methods. This paper presents the LiTREAD, a lightweight three degree-of-freedom robotic exoskeletal ankle device. This novel robotic system is designed to be worn on a user's leg and actuate the foot position during treadmill studies. The robot's sagittal plane actuation is complemented by passive virtual axis systems in the frontal and transverse planes. Together, these degrees of freedom allow the device to approximate the full range of motion of the ankle. The virtual axis mechanisms feature locking configurations that will allow the effect of these degrees of freedom on gait dynamics to be studied. Based on a kinematic analysis of the robot's actuation and geometry, it is expected to meet and exceed its torque and speed targets, respectively. The device will fit either leg of a range of subject sizes, and is expected to weigh just 1.3 kg (2.9 lb.). These features and characteristics are designed to minimize the robot's interference with the natural walking motion. Pending validation studies confirming that all design criteria have been met, the LiTREAD prototype that has been constructed will be utilized in various experiments investigating properties of the ankle such as its mechanical impedance. It is hoped that the LiTREAD will yield valuable data that will expand our knowledge of the ankle and aid in the design of future lower-extremity devices.
Contributors: Cook, Andrew James Henry (Author) / Lee, Hyunglae (Thesis director) / Artemiadis, Panagiotis (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-12
Description
Interest in Micro Aerial Vehicle (MAV) research has surged over the past decade. MAVs offer new capabilities for intelligence gathering, reconnaissance, site mapping, communications, search and rescue, etc. This thesis discusses key modeling and control aspects of flapping-wing MAVs in hover. A three-degree-of-freedom nonlinear model is used to describe the flapping-wing vehicle. Averaging theory is used to obtain a nonlinear average model, whose equilibrium is then analyzed. A linear model is then obtained to describe the vehicle near hover. LQR is used as the main control system design methodology. It is used, together with a nonlinear parameter optimization algorithm, to design a family of multivariable control systems for the MAV. Critical performance trade-offs are illuminated. Properties at both the plant output and input are examined. Very specific rules of thumb are given for control system design, and the conservatism of these rules is also discussed. Issues addressed include:
What should the control system bandwidth be vis-à-vis the flapping frequency (so that averaging the nonlinear system is valid)?
When is first order averaging sufficient? When is higher order averaging necessary?
When can wing mass be neglected and when does wing mass become critical to model?
This includes how and when the rules given can be tightened; i.e. made less conservative.
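The LQR step named above can be illustrated on a toy system; the matrices below are a made-up linearized model (a discretized double integrator), not the thesis's flapping-wing dynamics, and the steady-state gain is obtained by iterating the discrete-time Riccati recursion to convergence.

```python
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # toy double integrator, dt = 0.1 s
B = np.array([[0.005], [0.1]])
Q = np.diag([10.0, 1.0])                 # state weights: penalize position error
R = np.array([[0.1]])                    # control weight: penalize actuation effort

# Backward Riccati iteration; P converges to the steady-state cost-to-go matrix.
P = Q.copy()
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # LQR gain
    P = Q + A.T @ P @ (A - B @ K)

# For a stabilizing LQR design, the closed-loop poles of A - B K
# lie strictly inside the unit circle.
poles = np.linalg.eigvals(A - B @ K)
```

Sweeping Q and R and inspecting the resulting closed-loop poles and loop shapes is one simple way to explore the kind of performance trade-offs the abstract mentions.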
Contributors: Biswal, Shiba (Author) / Rodriguez, Armando (Thesis advisor) / Mignolet, Marc (Thesis advisor) / Berman, Spring (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Robotic systems are outmatched by the abilities of the human hand to perceive and manipulate the world. Human hands are able to physically interact with the world to perceive, learn, and act to accomplish tasks. Limitations of robotic systems to interact with and manipulate the world diminish their usefulness. In order to advance robot end effectors, specifically artificial hands, rich multimodal tactile sensing is needed. In this work, a multi-articulating, anthropomorphic robot testbed was developed for investigating tactile sensory stimuli during finger-object interactions. The artificial finger is controlled by a tendon-driven remote actuation system that allows for modular control of any tendon-driven end effector and capabilities for both speed and strength. The artificial proprioception system enables direct measurement of joint angles and tendon tensions while temperature, vibration, and skin deformation are provided by a multimodal tactile sensor. Next, attention was focused on real-time artificial perception for decision-making. A robotic system needs to perceive its environment in order to make decisions. Specific actions such as “exploratory procedures” can be employed to classify and characterize object features. Prior work on offline perception was extended to develop an anytime predictive model that returns the probability of having touched a specific feature of an object based on minimally processed sensor data. Developing models for anytime classification of features facilitates real-time action-perception loops. Finally, by combining real-time action-perception with reinforcement learning, a policy was learned to complete a functional contour-following task: closing a deformable ziplock bag. The approach relies only on proprioceptive and localized tactile data. 
A Contextual Multi-Armed Bandit (C-MAB) reinforcement learning algorithm was implemented to maximize cumulative rewards within a finite time period by balancing exploration versus exploitation of the action space. Performance of the C-MAB learner was compared to a benchmark Q-learner that eventually returns the optimal policy. To assess robustness and generalizability, the learned policy was tested on variations of the original contour-following task. The work presented contributes to the full range of tools necessary to advance the abilities of artificial hands with respect to dexterity, perception, decision-making, and learning.
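In heavily simplified form, a contextual bandit learner of this general kind can be sketched as per-(context, action) value estimates with epsilon-greedy exploration; the contexts, actions, and rewards below are illustrative placeholders, not the thesis's tactile features or action space.

```python
import random

class CMABLearner:
    """Epsilon-greedy contextual multi-armed bandit (illustrative sketch)."""

    def __init__(self, contexts, actions, epsilon=0.1):
        self.epsilon = epsilon
        self.actions = list(actions)
        # value estimate and pull count per (context, action) pair
        self.q = {(c, a): 0.0 for c in contexts for a in self.actions}
        self.n = {(c, a): 0 for c in contexts for a in self.actions}

    def choose(self, context):
        if random.random() < self.epsilon:                 # explore
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(context, a)])  # exploit

    def update(self, context, action, reward):
        key = (context, action)
        self.n[key] += 1
        # incremental running mean of observed rewards
        self.q[key] += (reward - self.q[key]) / self.n[key]

learner = CMABLearner(contexts=["edge", "corner"], actions=["slide", "pinch"])
learner.update("edge", "slide", 1.0)
```

The epsilon parameter directly encodes the exploration-versus-exploitation balance the abstract refers to: higher values sample the action space more, lower values commit sooner to the current best estimate.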
Contributors: Hellman, Randall Blake (Author) / Santos, Veronica J (Thesis advisor) / Artemiadis, Panagiotis K (Committee member) / Berman, Spring (Committee member) / Helms Tillery, Stephen I (Committee member) / Fainekos, Georgios (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
A robotic swarm can be defined as a large group of inexpensive, interchangeable robots with limited sensing and/or actuating capabilities that cooperate (explicitly or implicitly) based on local communications and sensing in order to complete a mission. Its inherent redundancy provides flexibility and robustness to failures and environmental disturbances, which guarantees the proper completion of the required task. At the same time, human intuition and cognition can prove very useful in extreme situations where a fast and reliable solution is needed. This idea led to the creation of the field of Human-Swarm Interfaces (HSI), which attempts to incorporate the human element into the control of robotic swarms for increased robustness and reliability. The aim of the present work is to extend the current state-of-the-art in HSI by applying ideas and principles from the field of Brain-Computer Interfaces (BCI), which has proven to be very useful for people with motor disabilities. At first, a preliminary investigation into the connection between brain activity and the observation of swarm collective behaviors is conducted. After showing that such a connection may exist, a hybrid BCI system is presented for the control of a swarm of quadrotors. The system is based on the combination of motor imagery and the input from a game controller, while its feasibility is proven through an extensive experimental process. Finally, speech imagery is proposed as an alternative mental task for BCI applications. This is done through a series of rigorous experiments and appropriate data analysis. This work suggests that the integration of BCI principles in HSI applications can be successful and can potentially lead to systems that are more intuitive for users than the current state-of-the-art. At the same time, it motivates further research in the area and sets the stepping stones for the potential development of the field of Brain-Swarm Interfaces (BSI).
Contributors: Karavas, Georgios Konstantinos (Author) / Artemiadis, Panagiotis (Thesis advisor) / Berman, Spring M. (Committee member) / Lee, Hyunglae (Committee member) / Arizona State University (Publisher)
Created: 2017