Matching Items (6)

Contributors: Fisher, Caleb (Author) / Lee, Hyunglae (Thesis director) / Olivas, Alyssa (Committee member) / Barrett, The Honors College (Contributor) / Harrington Bioengineering Program (Contributor)
Created: 2023-05
Description

This paper discusses the process of creating and testing a haptic feedback wearable that utilizes a sweeping Light Detection and Ranging (LiDAR) sensor. The design comes as an extension to the capstone project for electrical engineers. It works by attaching a LiDAR sensor to a sweeping servo motor; whenever the sensor detects an object, a motor vibrates to notify the user that an object is nearby. The design incorporates four motors so that the user has a sense of where an obstacle is located and can navigate around it. The design was tested for its accuracy in distance and angle measurement, its efficiency in processing the sensor data, and the measurement uncertainty caused by beam spreading. Plotting the results for distance and angle accuracy showed that the design is capable of accurate measurements. The code implementation was also efficient, with no latency issues when processing data from the sensor. However, the sensor showed increased uncertainty at longer ranges.
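
The abstract does not include the firmware, but the sweep-and-vibrate behavior it describes can be illustrated with a short sketch. The four-sector mapping, alert threshold, and set_motor callback below are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of the sweep-and-vibrate logic described above.
# The 4-sector mapping, alert threshold, and set_motor callback are
# assumptions for demonstration; they are not taken from the thesis.

ALERT_DISTANCE_CM = 100          # vibrate when an object is closer than this
SECTOR_MOTORS = {0: "left", 1: "front-left", 2: "front-right", 3: "right"}

def sector_for_angle(angle_deg):
    """Map a servo sweep angle (0-180 deg) onto one of four haptic motors."""
    return min(int(angle_deg // 45), 3)

def update_haptics(angle_deg, distance_cm, set_motor):
    """Drive one of four vibration motors from a single LiDAR reading.

    set_motor(name, on) is a hardware-specific callback (assumed interface).
    """
    active = sector_for_angle(angle_deg)
    for sector, name in SECTOR_MOTORS.items():
        on = (sector == active) and (distance_cm < ALERT_DISTANCE_CM)
        set_motor(name, on)

# Example sweep loop with fake readings (angle in degrees, distance in cm):
if __name__ == "__main__":
    for angle, dist in [(10, 250), (60, 80), (120, 300), (170, 40)]:
        update_haptics(angle, dist,
                       lambda name, on: print(name, "ON" if on else "off"))
```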

Contributors: Kim, Arthur (Author) / Jayasuriya, Suren (Thesis director) / Lewis, John (Committee member) / Barrett, The Honors College (Contributor) / Electrical Engineering Program (Contributor)
Created: 2022-05
Description

Toward the ambitious long-term goal of a fleet of cooperating Flexible Autonomous Machines operating in an uncertain Environment (FAME), this thesis addresses several critical modeling, design and control objectives for ground vehicles. One central objective was to show how off-the-shelf (low-cost) remote-control (RC) "toy" vehicles can be converted into intelligent multi-capability robotic platforms for conducting FAME research. This is shown for two vehicle classes: (1) six differential-drive (DD) RC vehicles called Thunder Tumbler (DDTT) and (2) one rear-wheel drive (RWD) RC car called Ford F-150 (1:14 scale). Each DDTT-vehicle was augmented to provide a substantive suite of capabilities as summarized below (it should be noted, however, that only one DDTT-vehicle was augmented with an inertial measurement unit (IMU) and 2.4 GHz RC capability): (1) magnetic wheel-encoders/IMU for (dead-reckoning-based) inner-loop speed-control and outer-loop position-directional-control, (2) Arduino Uno microcontroller board for encoder-based inner-loop speed-control and encoder-IMU-ultrasound-based outer-loop cruise-position-directional-separation-control, (3) Arduino motor shield for inner-loop motor-speed-control, (4) Raspberry Pi II computer board for demanding outer-loop vision-based cruise-position-directional-control, (5) Raspberry Pi 5 MP camera for outer-loop cruise-position-directional-control (exploiting WiFi to send video back to a laptop), (6) forward-pointing ultrasonic distance/rangefinder sensor for outer-loop separation-control, and (7) 2.4 GHz spread-spectrum RC capability to replace the original 27/49 MHz RC. Each "enhanced"/augmented DDTT-vehicle costs less than $175 but offers the capability of commercially available vehicles costing over $500. Both the Arduino and the Raspberry Pi are low-cost, well-supported (software-wise) and easy to use. For the vehicle classes considered (i.e. DD, RWD), both kinematic and dynamical (planar xy) models are examined. Suitable nonlinear/linear models are used to develop inner/outer-loop control laws.

All demonstrations presented involve enhanced DDTT-vehicles; one also involves the F-150, and one a quadrotor. The following summarizes key hardware demonstrations: (1) cruise-control along line, (2) position-control along line, (3) position-control along curve, (4) planar (xy) Cartesian stabilization, (5) cruise-control along jagged line/curve, (6) vehicle-target spacing-control, (7) multi-robot spacing-control along line/curve, (8) tracking slowly-moving remote-controlled quadrotor, (9) avoiding obstacle while moving toward target, (10) RC F-150 followed by DDTT-vehicle. Hardware data/video is compared with, and corroborated by, model-based simulations. In short, many capabilities that are critical for reaching the longer-term FAME goal are demonstrated.
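
As a rough illustration of the inner/outer-loop structure described above (an encoder-based inner speed loop with an outer directional loop), here is a minimal cascaded-control sketch for a differential-drive vehicle. The gains, sample time, wheel track, and interfaces are assumed for demonstration and are not taken from the thesis.

```python
# Minimal cascaded-control sketch for a differential-drive vehicle:
# an encoder-based inner PI speed loop per wheel and an outer heading loop
# that turns a heading error (from IMU/dead reckoning) into wheel-speed
# references. Gains, sample time, and wheel track are assumed values.

DT = 0.02  # control period [s] (assumed)

class PI:
    """PI controller used for the inner wheel-speed loops."""
    def __init__(self, kp, ki):
        self.kp, self.ki, self.integral = kp, ki, 0.0
    def update(self, error):
        self.integral += error * DT
        return self.kp * error + self.ki * self.integral

left_speed_loop = PI(kp=2.0, ki=5.0)
right_speed_loop = PI(kp=2.0, ki=5.0)

def outer_heading_loop(heading_error, v_cruise=0.5, k_heading=1.5, track=0.15):
    """Outer loop: map heading error [rad] to left/right speed references [m/s]."""
    omega_ref = k_heading * heading_error            # desired turn rate
    v_left = v_cruise - 0.5 * track * omega_ref
    v_right = v_cruise + 0.5 * track * omega_ref
    return v_left, v_right

def control_step(heading_error, v_left_meas, v_right_meas):
    """One control period: outer loop sets references, inner loops track them.
    Returns motor commands clamped to [-1, 1] (e.g., PWM duty cycles)."""
    v_left_ref, v_right_ref = outer_heading_loop(heading_error)
    u_left = left_speed_loop.update(v_left_ref - v_left_meas)
    u_right = right_speed_loop.update(v_right_ref - v_right_meas)
    clamp = lambda u: max(-1.0, min(1.0, u))
    return clamp(u_left), clamp(u_right)
```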
Contributors: Lin, Zhenyu (Author) / Rodriguez, Armando Antonio (Committee member) / Si, Jennie (Committee member) / Berman, Spring Melody (Committee member) / Arizona State University (Publisher)
Created: 2015
Description

One potential application of multi-robot systems is collective transport, a task in which multiple mobile robots collaboratively transport a payload that is too large or heavy to be carried by a single robot. Numerous control schemes have been proposed for collective transport in environments where robots can localize themselves (e.g., using GPS) and communicate with one another, have information about the payload's geometric and dynamical properties, and follow predefined robot and/or payload trajectories. However, these approaches cannot be applied in uncertain environments where robots do not have reliable communication and GPS and lack information about the payload. These conditions characterize a variety of applications, including construction, mining, assembly in space and underwater, search-and-rescue, and disaster response.
Toward this end, this thesis presents decentralized control strategies for collective transport by robots that regulate their actions using only their local sensor measurements and minimal prior information. These strategies can be implemented on robots that have limited or absent localization capabilities, do not explicitly exchange information, and are not assigned predefined trajectories. The controllers are developed for collective transport over planar surfaces, but can be extended to three-dimensional environments.

This thesis addresses the above problem for two control objectives. First, decentralized controllers are proposed for velocity control of collective transport, in which the robots must transport a payload at a constant velocity through an unbounded domain that may contain strictly convex obstacles. The robots are provided only with the target transport velocity, and they do not have global localization or prior information about any obstacles in the environment. Second, decentralized controllers are proposed for position control of collective transport, in which the robots must transport a payload to a target position through a bounded or unbounded domain that may contain convex obstacles. The robots are subject to the same constraints as in the velocity control scenario, except that they are assumed to have global localization. Theoretical guarantees for successful execution of the task are derived using techniques from nonlinear control theory, and it is shown through simulations and physical robot experiments that the transport objectives are achieved with the proposed controllers.
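
A minimal sketch, in the spirit of the controllers described above: each robot regulates its applied force using only the commanded transport velocity and its own local velocity measurement, with no communication or payload model. The control law, gain, and translation-only payload dynamics below are assumptions for illustration, not the thesis's controllers or stability guarantees.

```python
# Toy sketch of a decentralized velocity controller: each robot uses only
# the target transport velocity and its locally measured velocity.
# Gain, time step, and translation-only dynamics are assumed for illustration.
import numpy as np

K_V = 2.0    # proportional gain on the local velocity error (assumed)
DT = 0.01    # integration step [s]

def robot_force(v_target, v_local):
    """Force applied by one robot, computed from purely local information."""
    return K_V * (np.asarray(v_target) - np.asarray(v_local))

def simulate(num_robots=4, v_target=(0.3, 0.0), payload_mass=5.0, steps=2000):
    """Translation-only rigid-payload simulation (rotation ignored)."""
    v = np.zeros(2)                                    # payload velocity [m/s]
    for _ in range(steps):
        # Each robot measures the payload velocity at its attachment point;
        # for pure translation this equals the payload velocity itself.
        total_force = sum(robot_force(v_target, v) for _ in range(num_robots))
        v = v + (total_force / payload_mass) * DT
    return v

if __name__ == "__main__":
    print("final payload velocity:", simulate())       # approaches v_target
```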
Contributors: Farivarnejad, Hamed (Author) / Berman, Spring (Thesis advisor) / Mignolet, Marc (Committee member) / Tsakalis, Konstantinos (Committee member) / Artemiadis, Panagiotis (Committee member) / Gil, Stephanie (Committee member) / Arizona State University (Publisher)
Created: 2020
Description

Aviation is a complicated field that involves a wide range of operations, from commercial airline flights to Unmanned Aerial Systems (UAS). Planning and scheduling are essential components in the aviation industry that play a significant role in ensuring safe and efficient operations. Reinforcement Learning (RL) has received increasing attention in recent years due to its capability to enable autonomous decision-making. To investigate the potential advantages and effectiveness of RL in aviation planning and scheduling, three topics are explored in depth: obstacle avoidance, task-oriented path planning, and maintenance scheduling.

A dynamic and probabilistic airspace reservation concept, called the Dynamic Anisotropic (DA) bound, is first developed for UAS; it can be added around the UAS as the separation requirement. A model based on Q-learning is proposed to integrate the DA bound with path planning for obstacle avoidance. Moreover, a deep reinforcement learning algorithm based on Proximal Policy Optimization (PPO) is proposed to guide the UAS to destinations while avoiding obstacles through continuous control. Results from case studies demonstrate that the proposed model can provide accurate and robust guidance and resolve conflicts with a success rate of over 99%.

Next, the single-UAS path planning problem is extended to a multi-agent system where agents aim to accomplish their own complex tasks. These tasks involve non-Markovian reward functions and can be specified using reward machines. Both cooperative and competitive environments are explored. Decentralized Graph-based reinforcement learning using Reward Machines (DGRM) is proposed to improve computational efficiency for maximizing the global reward in a graph-based Markov Decision Process (MDP). Q-learning with Reward Machines for Stochastic Games (QRM-SG) is developed to learn the best-response strategy for each agent in a competitive environment.

Furthermore, maintenance scheduling is investigated, with the goal of minimizing the system maintenance cost while ensuring compliance with reliability requirements. Maintenance scheduling is formulated as an MDP that determines when and what maintenance operations to conduct. A Linear Programming-enhanced RollouT (LPRT) method is developed to solve both constrained deterministic and stochastic maintenance scheduling with an infinite horizon. LPRT categorizes components according to their health condition and makes decisions for each category.
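
For readers unfamiliar with the RL formulation, a minimal tabular Q-learning sketch for grid-based obstacle avoidance is given below. The grid, rewards, and hyperparameters are illustrative assumptions; they do not reproduce the DA-bound integration, PPO controller, or reward-machine methods from the dissertation.

```python
# Minimal tabular Q-learning sketch for grid-based obstacle avoidance.
# Grid layout, rewards, and hyperparameters are illustrative assumptions.
import random

GRID = 6
OBSTACLES = {(2, 2), (2, 3), (4, 1)}
GOAL = (5, 5)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

Q = {}  # sparse Q-table keyed by (state, action index)

def q(state, a_idx):
    return Q.get((state, a_idx), 0.0)

def step(state, action):
    """Apply an action; block moves into obstacles and reward reaching GOAL."""
    nxt = (min(max(state[0] + action[0], 0), GRID - 1),
           min(max(state[1] + action[1], 0), GRID - 1))
    if nxt in OBSTACLES:
        return state, -10.0, False          # blocked: stay put, penalize
    if nxt == GOAL:
        return nxt, 10.0, True
    return nxt, -0.1, False                 # small per-step cost

def train(episodes=2000, alpha=0.1, gamma=0.95, eps=0.1, max_steps=200):
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(max_steps):
            a = random.randrange(4) if random.random() < eps else \
                max(range(4), key=lambda i: q(s, i))
            s2, r, done = step(s, ACTIONS[a])
            target = r + (0.0 if done else gamma * max(q(s2, i) for i in range(4)))
            Q[(s, a)] = q(s, a) + alpha * (target - q(s, a))
            s = s2
            if done:
                break

if __name__ == "__main__":
    train()
    best_start = max(q((0, 0), i) for i in range(4))
    print("Q-value of best action at the start state:", round(best_start, 2))
```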
Contributors: Hu, Jueming (Author) / Liu, Yongming (Thesis advisor) / Yan, Hao (Committee member) / Lee, Hyunglae (Committee member) / Zhang, Wenlong (Committee member) / Xu, Zhe (Committee member) / Arizona State University (Publisher)
Created: 2023
Description

The Doghouse Plot visually represents an aircraft’s performance during combined turn-climb maneuvers. The Doghouse Plot completely describes the turn-climb capability of an aircraft; a single plot demonstrates the relationship between climb performance, turn rate, turn radius, stall margin, and bank angle. Using NASA legacy codes, Empirical Drag Estimation Technique (EDET) and Numerical Propulsion System Simulation (NPSS), it is possible to reverse engineer sufficient basis data for commercial and military aircraft to construct Doghouse Plots. Engineers and operators can then use these to assess their aircraft’s full performance envelope. The insight gained from these plots can broaden the understanding of an aircraft’s performance and, in turn, broaden the operational scope of some aircraft that would otherwise be limited by the simplifications found in their Airplane Flight Manuals (AFM). More importantly, these plots can build on the current standards of obstacle avoidance and expose risks in operation.
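
The turn-climb relationships a Doghouse Plot ties together follow from standard level-turn mechanics. The sketch below computes load factor, turn radius, turn rate, and stall margin versus bank angle; the airspeed and stall-speed numbers are placeholder assumptions, and the thesis instead builds its basis data from EDET and NPSS.

```python
# Standard coordinated level-turn relations underlying a doghouse plot.
# The example airspeed and 1-g stall speed are placeholder assumptions.
import math

G = 9.81                      # gravitational acceleration [m/s^2]

def level_turn_point(v_tas, bank_deg, v_stall_1g):
    """Return load factor, turn radius [m], turn rate [deg/s], and stall
    margin for a coordinated level turn at true airspeed v_tas [m/s]."""
    phi = math.radians(bank_deg)
    n = 1.0 / math.cos(phi)                     # load factor n = 1/cos(phi)
    radius = v_tas ** 2 / (G * math.tan(phi))   # turn radius R = V^2/(g tan phi)
    rate = math.degrees(G * math.tan(phi) / v_tas)
    v_stall_turn = v_stall_1g * math.sqrt(n)    # stall speed grows with sqrt(n)
    margin = v_tas / v_stall_turn               # >1 means above stall
    return n, radius, rate, margin

if __name__ == "__main__":
    for bank in (15, 30, 45, 60):
        n, r, w, m = level_turn_point(v_tas=70.0, bank_deg=bank, v_stall_1g=55.0)
        print(f"bank {bank:2d} deg: n={n:.2f}, R={r:6.0f} m, "
              f"rate={w:4.1f} deg/s, stall margin={m:.2f}")
```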
Contributors: Wilson, John Robert (Author) / Takahashi, Timothy T (Thesis advisor) / Middleton, James (Committee member) / White, Daniel (Committee member) / Arizona State University (Publisher)
Created: 2017