This collection includes most of the ASU Theses and Dissertations from 2011 to present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about the dissertations/theses includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic theses found in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.

Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.

Displaying 1 - 10 of 12

Description

Linear Temporal Logic is gaining increasing popularity as a high level specification language for robot motion planning due to its expressive power and the scalability of LTL control synthesis algorithms. This formalism, however, requires expert knowledge, which makes it inaccessible to non-expert users. This thesis introduces a graphical specification environment that creates high level motion plans to control robots in the field by converting a visual representation of the motion/task plan into a Linear Temporal Logic (LTL) specification. The visual interface is built on the Android tablet platform and provides functionality to create task plans through a set of well-defined gestures and on-screen controls. It uses the notion of waypoints to quickly and efficiently describe the motion plan and enables a variety of complex Linear Temporal Logic specifications to be described succinctly and intuitively by the user without the need for knowledge or understanding of the LTL formalism. Thus, it opens avenues for its use by personnel in military, warehouse management, and search and rescue missions. This thesis describes the construction of LTL specifications for various robot navigation scenarios using the visual interface developed and leverages existing LTL-based motion planners to have a robot carry out the task plan.
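As a generic illustration of the kind of formula such a waypoint-based interface produces (a hypothetical example, not taken from the thesis), a task that visits three waypoints in order while always avoiding an obstacle region can be written in LTL as:

```latex
% Hypothetical sequenced-waypoint specification; w1, w2, w3 and obstacle are
% assumed atomic propositions that hold when the robot is inside the
% corresponding workspace region.
\varphi \;=\; \mathbf{F}\big(w_1 \wedge \mathbf{F}(w_2 \wedge \mathbf{F}\, w_3)\big) \;\wedge\; \mathbf{G}\,\neg\,\mathit{obstacle}
```

The first conjunct encodes the ordered visits and the second the safety constraint; specifications of this sequencing-plus-safety shape are the sort of output a gesture-and-waypoint interface would assemble for an LTL-based planner.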
ContributorsSrinivas, Shashank (Author) / Fainekos, Georgios (Thesis advisor) / Baral, Chitta (Committee member) / Burleson, Winslow (Committee member) / Arizona State University (Publisher)
Created2013
Description

Soft continuum robots with the ability to bend, twist, elongate, and shorten, similar to octopus arms, have many potential applications, such as dexterous manipulation and navigation through unstructured, dynamic environments. Novel soft materials such as smart hydrogels, which change volume and other properties in response to stimuli such as temperature, pH, and chemicals, can potentially be used to construct soft robots that achieve self-regulated adaptive reconfiguration through on-demand dynamic control of local properties. However, the design of controllers for soft continuum robots is challenging due to their high-dimensional configuration space and the complexity of modeling soft actuator dynamics. To address these challenges, this dissertation presents two different model-based control approaches for robots with distributed soft actuators and sensors and validates the approaches in simulations and physical experiments. It is demonstrated that by choosing an appropriate dynamical model and designing a decentralized controller based on this model, such robots can be controlled to achieve diverse types of complex configurations. The first approach consists of approximating the dynamics of the system, including its actuators, as a linear state-space model in order to apply optimal robust control techniques such as H∞ state-feedback and H∞ output-feedback methods. These techniques are designed to utilize the decentralized control structure of the robot and its distributed sensing and actuation to achieve vibration control and trajectory tracking. The approach is validated in simulation on an Euler-Bernoulli dynamic model of a hydrogel based cantilevered robotic arm and in experiments with a hydrogel-actuated miniature 2-DOF manipulator. The second approach is developed for soft continuum robots with dynamics that can be modeled using Cosserat rod theory. An inverse dynamics control approach is implemented on the Cosserat model of the robot for tracking configurations that include bending, torsion, shear, and extension deformations. The decentralized controller structure facilitates its implementation on robot arms composed of independently-controllable segments that have local sensing and actuation. This approach is validated on simulated 3D robot arms and on an actual silicone robot arm with distributed pneumatic actuation, for which the inverse dynamics problem is solved in simulation and the computed control outputs are applied to the robot in real-time.
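For reference, the H∞ state-feedback problem invoked above is typically posed on a linear plant of the following generic form (a standard formulation, not the specific hydrogel-arm model identified in the dissertation):

```latex
% Generic H-infinity state-feedback setup: w is the exogenous disturbance,
% u the control input, z the regulated performance output.
\dot{x} = A x + B_1 w + B_2 u, \qquad z = C_1 x + D_{12} u, \qquad u = K x,
% with K chosen so that the closed loop is stable and the disturbance
% attenuation satisfies, for a prescribed level gamma > 0,
\lVert T_{zw} \rVert_{\infty} < \gamma .
```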
ContributorsDoroudchi, Azadeh (Author) / Berman, Spring (Thesis advisor) / Tsakalis, Konstantinos (Committee member) / Si, Jennie (Committee member) / Marvi, Hamid (Committee member) / Arizona State University (Publisher)
Created2022
Description

Enabling robots to physically engage with their environment in a safe and efficient manner is an essential step towards human-robot interaction. To date, robots usually operate as pre-programmed workers that blindly execute tasks in highly structured environments crafted by skilled engineers. Changing the robots’ behavior to cover new duties or handle variability is an expensive, complex, and time-consuming process. However, with the advent of more complex sensors and algorithms, overcoming these limitations becomes within reach. This work proposes innovations in artificial intelligence, language understanding, and multimodal integration to enable next-generation grasping and manipulation capabilities in autonomous robots. The underlying thesis is that multimodal observations and instructions can drastically expand the responsiveness and dexterity of robot manipulators. Natural language, in particular, can be used to enable intuitive, bidirectional communication between a human user and the machine. To this end, this work presents a system that learns context-aware robot control policies from multimodal human demonstrations. Among the main contributions presented are techniques for (a) collecting demonstrations in an efficient and intuitive fashion, (b) methods for leveraging physical contact with the environment and objects, (c) the incorporation of natural language to understand context, and (d) the generation of robust robot control policies. The presented approach and systems are evaluated in multiple grasping and manipulation settings ranging from dexterous manipulation to pick-and-place, as well as contact-rich bimanual insertion tasks. Moreover, the usability of these innovations, especially when utilizing human task demonstrations and communication interfaces, is evaluated in several human-subject studies.
ContributorsStepputtis, Simon (Author) / Ben Amor, Heni (Thesis advisor) / Baral, Chitta (Committee member) / Yang, Yezhou (Committee member) / Lee, Stefan (Committee member) / Arizona State University (Publisher)
Created2021
Description

Human-robot interactions can often be formulated as general-sum differential games where the equilibrial policies are governed by Hamilton-Jacobi-Isaacs (HJI) equations. Solving HJI PDEs faces the curse of dimensionality (CoD). While physics-informed neural networks (PINNs) alleviate CoD in solving PDEs with smooth solutions, they fall short in learning discontinuous solutions due to their sampling nature. This causes PINNs to have poor safety performance when they are applied to approximate values that are discontinuous due to state constraints. This dissertation aims to improve the safety performance of PINN-based value and policy models. The first contribution of the dissertation is to develop learning methods to approximate discontinuous values. Specifically, three solutions are developed: (1) hybrid learning uses both supervisory and PDE losses, (2) value-hardening solves HJIs with increasing Lipschitz constant on the constraint violation penalty, and (3) the epigraphical technique lifts the value to a higher-dimensional state space where it becomes continuous. Evaluations through 5D and 9D vehicle and 13D drone simulations reveal that the hybrid method outperforms others in terms of generalization and safety performance. The second contribution is a learning-theoretical analysis of PINN for value and policy approximation. Specifically, by extending the neural tangent kernel (NTK) framework, this dissertation explores why the choice of activation function significantly affects the PINN generalization performance, and why the inclusion of supervisory costate data improves the safety performance. The last contribution is a series of extensions of the hybrid PINN method to address real-time parameter estimation problems in incomplete-information games. Specifically, a Pontryagin-mode PINN is developed to avoid costly computation for supervisory data. The key idea is the introduction of a costate loss, which is cheap to compute yet effectively enables the learning of important value changes and policies in space-time. Building upon this, a Pontryagin-mode neural operator is developed to achieve state-of-the-art (SOTA) safety performance across a set of differential games with parametric state constraints. This dissertation demonstrates the utility of the resultant neural operator in estimating player constraint parameters during incomplete-information games.
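The hybrid learning idea described above, combining a supervisory data loss with a PDE residual loss, can be sketched as follows. This is an illustrative PyTorch fragment under assumed interfaces, not the dissertation's code; in particular, `hji_residual` is a hypothetical function that evaluates the HJI residual from the network's value and gradient.

```python
import torch

def hybrid_pinn_loss(value_net, x_colloc, x_data, v_data,
                     hji_residual, w_pde=1.0, w_data=1.0):
    """Hybrid loss = supervisory value-data term + HJI PDE residual term.

    Sketch only: `hji_residual(x, v, grad_v)` is assumed to return the
    Hamilton-Jacobi-Isaacs residual at states x given the value v and its
    space-time gradient grad_v.
    """
    # Supervised term: fit the network to precomputed value labels
    # (e.g., obtained from open-loop trajectory optimization).
    loss_data = torch.mean((value_net(x_data).squeeze(-1) - v_data) ** 2)

    # Physics-informed term: penalize HJI violation at collocation points.
    x = x_colloc.clone().requires_grad_(True)
    v = value_net(x).squeeze(-1)
    grad_v = torch.autograd.grad(v.sum(), x, create_graph=True)[0]
    loss_pde = torch.mean(hji_residual(x, v, grad_v) ** 2)

    return w_data * loss_data + w_pde * loss_pde
```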
ContributorsZhang, Lei (Author) / Ren, Yi (Thesis advisor) / Si, Jennie (Committee member) / Berman, Spring (Committee member) / Zhang, Wenlong (Committee member) / Xu, Zhe (Committee member) / Arizona State University (Publisher)
Created2024
Description

Learning longer-horizon tasks is challenging with techniques such as reinforcement learning and behavior cloning. Previous approaches have split these long tasks into shorter tasks that are easier to learn by using statistical change point detection methods. However, classical changepoint detection methods function only with low-dimensional robot trajectory data and not with high-dimensional inputs such as vision. In this thesis, I split long-horizon tasks, represented by trajectories, into short-horizon sub-tasks with the supervision of language. These shorter-horizon tasks can be learned using conventional behavior cloning approaches. I compare techniques from the video moment retrieval problem with changepoint detection on robot trajectory data consisting of high-dimensional inputs. The proposed moment-retrieval-based approach shows a more than 30% improvement in mean average precision (mAP) for identifying trajectory sub-tasks with language guidance compared to without language. Several ablations are performed to understand the effects of domain randomization, sample complexity, views, and sim-to-real transfer of this method. The data ablation shows that with just 100 labeled trajectories a 42.01 mAP can be achieved, demonstrating the sample efficiency of using such an approach. Further, behavior cloning models trained on the segmented trajectories outperform a single model trained on the whole trajectory by up to 20%.
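As a rough sketch of the language-guided splitting step described above (hypothetical code, not the thesis implementation), temporal spans predicted by a moment-retrieval model queried with each sub-task's language description can be used to slice a long demonstration into short-horizon segments for behavior cloning:

```python
from typing import List, Sequence, Tuple

def split_trajectory(trajectory: Sequence,
                     predicted_spans: List[Tuple[int, int]]) -> List[list]:
    """Slice a long-horizon demonstration into sub-task demonstrations.

    `trajectory` is a sequence of per-timestep (observation, action) records;
    `predicted_spans` holds (start_frame, end_frame) indices, one per
    language-described sub-task, as returned by a moment-retrieval model.
    """
    segments = []
    for start, end in predicted_spans:
        # Clamp to valid indices and drop empty or inverted spans.
        start, end = max(0, start), min(len(trajectory), end)
        if start < end:
            segments.append(list(trajectory[start:end]))
    return segments
```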
ContributorsRaj, Divyanshu (Author) / Gopalan, Nakul (Thesis advisor) / Baral, Chitta (Committee member) / Senanayake, Ransalu (Committee member) / Arizona State University (Publisher)
Created2024
Description

Unmanned aerial vehicles (UAVs) are widely used in many applications because of their small size, great mobility, and hover performance. This has been a consequence of the fast development of electronics, cheap lightweight flight controllers for accurate positioning, and cameras. This thesis describes modeling, control and design of an oblique-cross-quadcopter platform for indoor environments.

One contribution of the work was the design of a new printed-circuit-board (PCB) flight controller (called MARK3). Key features/capabilities are as follows:

(1) a Teensy 3.2 microcontroller with a 168 MHz overclock, used for communications, full-state estimation and inner-outer loop hierarchical rate-angle-speed-position control,

(2) an on-board MEMS inertial-measurement-unit (IMU) which includes an LSM303D (3DOF-accelerometer and magnetometer), an L3GD20 (3DOF-gyroscope) and a BMP180 (barometer) for attitude estimation (barometer/magnetometer not used),

(3) 6 pulse-width-modulator (PWM) output pins support up to 6 rotors,

(4) 8 PWM input pins support up to 8-channel 2.4 GHz transmitter/receiver for manual control,

(5) 2 5V servo extension outputs for other requirements (e.g. gimbals),

(6) 2 universal-asynchronous-receiver-transmitter (UART) serial ports - used by flight controller to process data from Xbee; can be used for accepting outer-loop position commands from NVIDIA TX2 (future work),

(7) 1 I2C-serial-protocol two-wire port for additional modules (used to read data from IMU at 400 Hz),

(8) a 20-pin port for Xbee telemetry module connection; permits Xbee transceiver on desktop PC to send position/attitude commands to Xbee transceiver on quadcopter.

The quadcopter platform consists of the new MARK3 PCB Flight Controller, an ATG-250 carbon-fiber frame (250 mm), a DJI Snail propulsion-system (brushless-three-phase-motor, electronic-speed-controller (ESC) and propeller), an HTC VIVE Tracker and RadioLink R9DS 9-Channel 2.4GHz Receiver. This platform is completely compatible with the HTC VIVE Tracking System (HVTS) which has 7ms latency, submillimeter accuracy and a much lower price compared to other millimeter-level tracking systems.

The thesis describes nonlinear and linear modeling of the quadcopter’s 6DOF rigid-body dynamics and brushless-motor-actuator dynamics. These are used for hierarchical-classical-control-law development near hover. The HVTS was used to demonstrate precision hover-control and path-following. Simulation and measured flight-data are shown to be similar. This work provides a foundation for future precision multi-quadcopter formation-flight-control.
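For context, the near-hover linearization mentioned above commonly takes the following small-angle form (a generic textbook model, up to sign and axis conventions, not the specific parameters identified in this thesis), with mass m, gravitational acceleration g, thrust perturbation ΔT, body torques τφ, τθ, τψ, and principal inertias Ixx, Iyy, Izz:

```latex
% Small-angle, near-hover quadcopter dynamics (signs depend on axis convention):
\ddot{x} \approx g\,\theta, \quad \ddot{y} \approx -g\,\phi, \quad \ddot{z} \approx \frac{\Delta T}{m},
\quad \ddot{\phi} \approx \frac{\tau_{\phi}}{I_{xx}}, \quad \ddot{\theta} \approx \frac{\tau_{\theta}}{I_{yy}}, \quad \ddot{\psi} \approx \frac{\tau_{\psi}}{I_{zz}}
```

Each of these approximate double integrators is the kind of plant around which the inner rate-angle loops and outer speed-position loops of a hierarchical controller are closed.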
ContributorsLu, Shi (Author) / Rodriguez, Armando A. (Thesis advisor) / Tsakalis, Konstantinos (Committee member) / Si, Jennie (Committee member) / Arizona State University (Publisher)
Created2018
Description

Toward the ambitious long-term goal of a fleet of cooperating Flexible Autonomous Machines operating in an uncertain Environment (FAME), this thesis addresses several critical modeling, design and control objectives for ground vehicles. One central objective was to show how off-the-shelf (low-cost) remote-control (RC) “toy” vehicles can be converted into intelligent multi-capability robotic-platforms for conducting FAME research. This is shown for two vehicle classes: (1) six differential-drive (DD) RC vehicles called Thunder Tumbler (DDTT) and (2) one rear-wheel drive (RWD) RC car called Ford F-150 (1:14 scale). Each DDTT-vehicle was augmented to provide a substantive suite of capabilities as summarized below (it should be noted, however, that only one DDTT-vehicle was augmented with an inertial measurement unit (IMU) and 2.4 GHz RC capability): (1) magnetic wheel-encoders/IMU for (dead-reckoning-based) inner-loop speed-control and outer-loop position-directional-control, (2) Arduino Uno microcontroller-board for encoder-based inner-loop speed-control and encoder-IMU-ultrasound-based outer-loop cruise-position-directional-separation-control, (3) Arduino motor-shield for inner-loop motor-speed-control, (4) Raspberry Pi II computer-board for demanding outer-loop vision-based cruise-position-directional-control, (5) Raspberry Pi 5MP camera for outer-loop cruise-position-directional-control (exploiting WiFi to send video back to a laptop), (6) forward-pointing ultrasonic distance/rangefinder sensor for outer-loop separation-control, and (7) 2.4 GHz spread-spectrum RC capability to replace the original 27/49 MHz RC. Each “enhanced”/augmented DDTT-vehicle costs less than $175 but offers the capability of commercially available vehicles costing over $500. Both the Arduino and Raspberry Pi are low-cost, well-supported (software-wise) and easy to use. For the vehicle classes considered (i.e. DD, RWD), both kinematic and dynamical (planar xy) models are examined. Suitable nonlinear/linear models are used to develop inner/outer-loop control laws.

All demonstrations presented involve enhanced DDTT-vehicles; one also involves the F-150 and one a quadrotor. The following summarizes key hardware demonstrations: (1) cruise-control along line, (2) position-control along line, (3) position-control along curve, (4) planar (xy) Cartesian stabilization, (5) cruise-control along jagged line/curve, (6) vehicle-target spacing-control, (7) multi-robot spacing-control along line/curve, (8) tracking slowly-moving remote-controlled quadrotor, (9) avoiding obstacle while moving toward target, (10) RC F-150 followed by DDTT-vehicle. Hardware data/video is compared with, and corroborated by, model-based simulations. In short, many capabilities that are critical for reaching the longer-term FAME goal are demonstrated.
ContributorsLin, Zhenyu (Author) / Rodriguez, Armando Antonio (Committee member) / Si, Jennie (Committee member) / Berman, Spring Melody (Committee member) / Arizona State University (Publisher)
Created2015
Description

As robotic technology and its various uses grow steadily more complex and ubiquitous, humans are coming into increasing contact with robotic agents. A large portion of such contact is cooperative interaction, where both humans and robots are required to work on the same application towards achieving common goals. These application scenarios are characterized by a need to leverage the strengths of each agent as part of a unified team to reach those common goals. To ensure that the robotic agent is truly a contributing team-member, it must exhibit some degree of autonomy in achieving goals that have been delegated to it. Indeed, a significant portion of the utility of such human-robot teams derives from the delegation of goals to the robot, and autonomy on the part of the robot in achieving those goals. In order to be considered truly autonomous, the robot must be able to make its own plans to achieve the goals assigned to it, with only minimal direction and assistance from the human.

Automated planning provides the solution to this problem -- indeed, one of the main motivations that underpinned the beginnings of the field of automated planning was to provide planning support for Shakey the robot with the STRIPS system. For a long time, however, automated planners suffered from scalability issues that precluded their application to real world, real time robotic systems. Recent decades have seen a gradual abeyance of those issues, and fast planning systems are now the norm rather than the exception. However, some of these advances in speedup and scalability have been achieved by ignoring or abstracting out challenges that real world integrated robotic systems must confront.

In this work, the problem of planning for human-robot teaming is introduced. The central idea -- the use of automated planning systems as mediators in such human-robot teaming scenarios -- and the main challenges inspired by real world scenarios that must be addressed in order to make such planning seamless are presented: (i) Goals which can be specified or changed at execution time, after the planning process has completed; (ii) Worlds and scenarios where the state changes dynamically while a previous plan is executing; (iii) Models that are incomplete and can be changed during execution; and (iv) Information about the human agent's plan and intentions that can be used for coordination. These challenges are compounded by the fact that the human-robot team must execute in an open world, rife with dynamic events and other agents; and in a manner that encourages the exchange of information between the human and the robot. As an answer to these challenges, implemented solutions and a fielded prototype that combines all of those solutions into one planning system are discussed. Results from running this prototype in real world scenarios are presented, and extensions to some of the solutions are offered as appropriate.
ContributorsTalamadupula, Kartik (Author) / Kambhampati, Subbarao (Thesis advisor) / Baral, Chitta (Committee member) / Liu, Huan (Committee member) / Scheutz, Matthias (Committee member) / Smith, David E. (Committee member) / Arizona State University (Publisher)
Created2014
Description

As robots become mechanically more capable, they are going to be more and more integrated into our daily lives. Over time, humans' expectations of what robots are capable of keep getting higher. Therefore, it can be conjectured that robots will often not act as their human commanders intended. That is, the users of the robots may have a different point of view from the one the robots do.

The first part of this dissertation covers methods that resolve some instances of this mismatch when the mission requirements are expressed in Linear Temporal Logic (LTL) for handling coverage, sequencing, conditions and avoidance. That is, the following general questions are addressed:

* What causes the given mission to be unrealizable?

* Is there any other feasible mission that is close to the given one?

In order to answer these questions, the LTL Revision Problem is applied and formulated as a graph search problem. It is shown that in general the problem is NP-Complete; hence, a heuristic algorithm is used, which is proved to have a 2-approximation bound in some cases. This problem, then, is extended to two different versions: one for the weighted transition system and another for the specification under quantitative preference. Next, a follow-up question is addressed:

* How can an LTL specified mission be scaled up to multiple robots operating in confined environments?

The Cooperative Multi-agent Planning Problem is addressed by borrowing a technique from cooperative pathfinding problems in discrete grid environments. Since centralized planning for multi-robot systems is computationally challenging and easily results in state space explosion, a distributed planning approach is provided through agent coupling and de-coupling.

In addition, in order to make such robot missions work in the real world, robots should take actions in the continuous physical world. Hence, in the second part of this thesis, the resulting motion planning problem is addressed for non-holonomic robots.

That is, it is devoted to autonomous vehicles’ motion planning in challenging environments such as rural, semi-structured roads. This planning problem is solved with an on-the-fly hierarchical approach, using a pre-computed lattice planner. It is also proved that the proposed algorithm guarantees resolution-completeness in such demanding environments. Finally, possible extensions are discussed.
ContributorsKim, Kangjin (Author) / Fainekos, Georgios (Thesis advisor) / Baral, Chitta (Committee member) / Lee, Joohyung (Committee member) / Berman, Spring (Committee member) / Arizona State University (Publisher)
Created2019
Description

Vertical take-off and landing (VTOL) systems have become a crucial component of aeronautical and commercial applications alike. Quadcopter systems are rather convenient to analyze and design controllers for, owing to symmetry in body dynamics. In this work, a quadcopter model at hover equilibrium is derived, using both high and low level control. The low level control system is designed to track reference Euler angles (roll, pitch and yaw) as shown in previous work [1],[2]. The high level control is designed to track reference X, Y, and Z axis states [3]. The objective of this paper is to model, design and simulate platooning (separation) control for a fleet of 6 quadcopter units, each comprising high and low level control systems, using a leader-follower approach. The primary motivation of this research is to examine the "accordion effect", a phenomenon observed in leader-follower systems in which positioning or spacing errors arise in follower vehicles due to sudden changes in lead vehicle velocity. It is proposed that the accordion effect occurs when lead vehicle information is not directly communicated to the rest of the system [4][5]. In this paper, the effect of leader acceleration feedback is observed for the quadcopter platoon. This is performed by first designing a classical platoon controller for a nominal case, where communication within the system is purely ad-hoc (i.e., from one quadcopter to its immediate successor in the fleet). Steady state separation/positioning errors for each member of the fleet are observed and documented during simulation. Following this analysis, lead vehicle acceleration is provided to the controller (as a feed-forward term), to observe the extent of its effect on steady state separation, specifically along tight maneuvers. Thus the key contribution of this work is a controller that stabilizes a platoon of quadcopters in the presence of the accordion effect, when employing a leader-follower approach. The modeling shown in this paper builds on previous research to design a low-cost quadcopter platform, the Mark 3 copter [1]. Prior to each simulation, model nonlinearities and hardware constants are measured or derived from the Mark 3 model, in an effort to observe the working of the system in the presence of realistic hardware constraints. The system is designed in compliance with Robot Operating System (ROS) and the Micro Air Vehicle Link (MAVLINK) communication protocol.
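A minimal sketch of a per-follower spacing controller with the leader-acceleration feed-forward term discussed above might look as follows (hypothetical Python with assumed gains kp, kd, kff; not the controller designed in the thesis):

```python
def follower_accel_cmd(pos_self, vel_self, pos_pred, vel_pred, desired_gap,
                       kp=1.5, kd=2.0, leader_accel=None, kff=1.0):
    """Longitudinal acceleration command for one follower in a platoon.

    PD control on the spacing error to the immediate predecessor, plus an
    optional feed-forward of the lead vehicle's acceleration -- the term whose
    effect on the accordion effect is studied above.
    """
    spacing_error = (pos_pred - pos_self) - desired_gap
    relative_velocity = vel_pred - vel_self
    cmd = kp * spacing_error + kd * relative_velocity
    if leader_accel is not None:
        cmd += kff * leader_accel  # broadcast leader acceleration (feed-forward)
    return cmd
```

With `leader_accel=None` the controller reproduces the purely ad-hoc, predecessor-only case; passing the lead vehicle's acceleration adds the feed-forward path whose effect on steady-state separation is examined.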
ContributorsSrinivasan, Anshuman (Author) / Rodriguez, Armando A. (Thesis advisor) / Si, Jennie (Committee member) / Tsakalis, Konstantinos (Committee member) / Arizona State University (Publisher)
Created2021