This collection includes most of the ASU Theses and Dissertations from 2011 to present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about the dissertations/theses includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic theses available in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.

Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.

Displaying 1 - 2 of 2

Description

Daily collaborative tasks like pushing a table or a couch require haptic communication between the people doing the task. To design collaborative motion planning algorithms for such applications, it is important to understand human behavior. Collaborative tasks involve continuous adaptation and intent recognition between the people involved. This thesis explores coordination between human partners through a virtual setup with continuous visual feedback. The interaction and coordination are modeled as a two-step process: 1) collecting data for a collaborative couch-pushing task, where both people have complete information about the goal but are unaware of each other's cost functions or intentions, and 2) processing the emergent behavior under complete information and fitting a model to it, in order to validate a mathematical model of agent behavior in multi-agent collaborative tasks. The baseline model is updated using different approaches so that the trajectories it generates resemble human trajectories, and the resulting models are compared against each other. The action profiles of both agents and the position and velocity of the manipulated object during a goal-oriented task are recorded and used as expert demonstrations to fit models resembling human behavior. Hypothesis testing is also performed to identify differences in behavior when agents have complete information versus information asymmetry regarding the goal position.
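The fitting step described in this abstract can be illustrated with a minimal sketch: given recorded object states and each agent's action profile, fit a simple state-feedback model per agent by least squares and score how closely its predictions track the human demonstrations. The file names, array shapes, and the linear-model choice below are assumptions for illustration only, not the thesis's actual model.

```python
import numpy as np

# Hypothetical demonstration data from a couch-pushing trial (names/shapes assumed):
# states  (T, 4): [x, y, vx, vy] of the manipulated object
# actions (T, 2): one agent's recorded force commands per timestep
states = np.load("demo_states.npy")
actions_a1 = np.load("demo_actions_a1.npy")
actions_a2 = np.load("demo_actions_a2.npy")

def fit_linear_policy(states, actions):
    """Least-squares fit of a linear state-feedback policy u = K s + b."""
    X = np.hstack([states, np.ones((len(states), 1))])  # append bias column
    W, *_ = np.linalg.lstsq(X, actions, rcond=None)     # W stacks [K^T; b^T]
    return W

def predicted_actions(W, states):
    X = np.hstack([states, np.ones((len(states), 1))])
    return X @ W

# Fit one baseline model per agent and compare its predictions against the
# human action profiles via root-mean-square error.
for name, acts in [("agent 1", actions_a1), ("agent 2", actions_a2)]:
    W = fit_linear_policy(states, acts)
    rmse = np.sqrt(np.mean((predicted_actions(W, states) - acts) ** 2))
    print(f"{name}: action RMSE vs. human demonstration = {rmse:.3f}")
```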
Contributors: Shintre, Pallavi Shrinivas (Author) / Zhang, Wenlong (Thesis advisor) / Si, Jennie (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2020
Description

Human-robot interactions can often be formulated as general-sum differential games where the equilibrial policies are governed by Hamilton-Jacobi-Isaacs (HJI) equations. Solving HJI PDEs faces the curse of dimensionality (CoD). While physics-informed neural networks (PINNs) alleviate CoD in solving PDEs with smooth solutions, they fall short in learning discontinuous solutions due to their sampling nature. This causes PINNs to have poor safety performance when they are applied to approximate values that are discontinuous due to state constraints. This dissertation aims to improve the safety performance of PINN-based value and policy models. The first contribution of the dissertation is to develop learning methods to approximate discontinuous values. Specifically, three solutions are developed: (1) hybrid learning uses both supervisory and PDE losses, (2) value-hardening solves HJIs with increasing Lipschitz constant on the constraint violation penalty, and (3) the epigraphical technique lifts the value to a higher-dimensional state space where it becomes continuous. Evaluations through 5D and 9D vehicle and 13D drone simulations reveal that the hybrid method outperforms others in terms of generalization and safety performance. The second contribution is a learning-theoretical analysis of PINN for value and policy approximation. Specifically, by extending the neural tangent kernel (NTK) framework, this dissertation explores why the choice of activation function significantly affects the PINN generalization performance, and why the inclusion of supervisory costate data improves the safety performance. The last contribution is a series of extensions of the hybrid PINN method to address real-time parameter estimation problems in incomplete-information games. Specifically, a Pontryagin-mode PINN is developed to avoid costly computation for supervisory data. The key idea is the introduction of a costate loss, which is cheap to compute yet effectively enables the learning of important value changes and policies in space-time. Building upon this, a Pontryagin-mode neural operator is developed to achieve state-of-the-art (SOTA) safety performance across a set of differential games with parametric state constraints. This dissertation demonstrates the utility of the resultant neural operator in estimating player constraint parameters during incomplete-information games.
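The hybrid learning idea summarized in this abstract, combining a supervisory loss on labeled value data with an HJI PDE residual loss at collocation points, can be sketched as below. The network architecture, the placeholder Hamiltonian, and the loss weighting are assumptions for illustration; the dissertation's actual formulation may differ.

```python
import torch
import torch.nn as nn

class ValueNet(nn.Module):
    """Small MLP approximating the game value V(t, x) (illustrative architecture)."""
    def __init__(self, state_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + 1, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, t, x):
        return self.net(torch.cat([t, x], dim=-1))

def hji_residual(model, t, x):
    """PDE residual V_t + H(x, V_x). The Hamiltonian is problem-specific and
    left as a zero placeholder here (assumption for illustration)."""
    t = t.requires_grad_(True)
    x = x.requires_grad_(True)
    V = model(t, x)
    V_t, V_x = torch.autograd.grad(V.sum(), (t, x), create_graph=True)
    H = torch.zeros_like(V_t)  # the real Hamiltonian would use V_x and the players' controls
    return V_t + H

def hybrid_loss(model, t_sup, x_sup, V_sup, t_col, x_col, w_pde=1.0):
    """Supervisory loss on labeled value data plus PDE residual loss on
    collocation points, in the spirit of the hybrid scheme described above."""
    sup = torch.mean((model(t_sup, x_sup) - V_sup) ** 2)
    pde = torch.mean(hji_residual(model, t_col, x_col) ** 2)
    return sup + w_pde * pde
```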
Contributors: Zhang, Lei (Author) / Ren, Yi (Thesis advisor) / Si, Jennie (Committee member) / Berman, Spring (Committee member) / Zhang, Wenlong (Committee member) / Xu, Zhe (Committee member) / Arizona State University (Publisher)
Created: 2024