Matching Items (2)
Description
This dissertation discusses continuous-time reinforcement learning (CT-RL) for control of affine nonlinear systems. Continuous-time nonlinear optimal control problems hold great promise in real-world applications. After decades of development, reinforcement learning (RL) has achieved some of the greatest successes as a general nonlinear control design method. Yet as RL control has developed, CT-RL results have greatly lagged their discrete-time RL (DT-RL) counterparts, especially with regard to real-world applications. Current CT-RL algorithms generally fall into two classes: adaptive dynamic programming (ADP) and actor-critic deep RL (DRL). The first school, ADP, features elegant theoretical results stemming from adaptive and optimal control. Yet these methods have not been shown to effectively synthesize meaningful controllers. The second school, DRL, has produced impressive learning solutions, yet its theoretical guarantees are still to be developed. A substantive analysis uncovering the quantitative causes of the fundamental gap between CT-RL and DT-RL remains to be conducted. Thus, this work develops a first-of-its-kind quantitative evaluation framework to diagnose the performance limitations of the leading CT-RL methods. This dissertation also introduces a suite of new CT-RL algorithms that offer both theoretical and synthesis guarantees. The proposed design approach relies on three important factors. First, for physical systems featuring physically motivated dynamical partitions into distinct loops, the proposed decentralization method breaks the optimal control problem into smaller subproblems. Second, the work introduces a new excitation framework to improve persistence of excitation (PE) and numerical conditioning via classical input/output insights. Third, the method scales the learning problem via design-motivated invertible transformations of the system state variables in order to modulate the algorithm's learning regression for further increases in numerical stability. This dissertation introduces a suite of (decentralized) excitable integral reinforcement learning (EIRL) algorithms implementing these paradigms. It rigorously establishes convergence, optimality, and closed-loop stability guarantees for the proposed methods, which are demonstrated in comprehensive comparative studies with the leading ADP methods on a significant application problem: control of an unstable, nonminimum-phase hypersonic vehicle (HSV). It also conducts comprehensive comparative studies with the leading DRL methods on three state-of-the-art (SOTA) environments, revealing new performance/design insights.
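The EIRL algorithms build on integral reinforcement learning, whose core idea can be illustrated on a scalar linear-quadratic problem: the value function is identified from integrals of the running cost along closed-loop trajectories, with no knowledge of the drift dynamics required in the regression. The sketch below is a minimal illustration of that idea only; the system parameters, gains, and intervals are assumed values, and the dissertation's decentralization, excitation, and state-scaling mechanisms are not reproduced here.

```python
import numpy as np

# Minimal sketch of integral reinforcement learning (IRL) policy iteration
# on a scalar linear system dx/dt = a*x + b*u with cost q*x^2 + r*u^2.
# All parameter values below are assumptions chosen for illustration.
a, b, q, r = 1.0, 1.0, 1.0, 1.0
k = 3.0            # initial stabilizing gain (a - b*k < 0)
dt, T = 1e-4, 0.2  # integration step and reinforcement interval

for _ in range(10):  # policy iterations
    rows, rhs = [], []
    for x0 in (0.5, 1.0, 2.0):  # closed-loop runs from several initial states
        x, cost = x0, 0.0
        for _ in range(int(T / dt)):  # Euler integration over [t, t + T]
            u = -k * x
            cost += (q * x**2 + r * u**2) * dt
            x += (a * x + b * u) * dt
        # Integral Bellman equation: p * (x0^2 - x(T)^2) = integral cost,
        # which identifies p without using the drift parameter a directly.
        rows.append(x0**2 - x**2)
        rhs.append(cost)
    p = np.linalg.lstsq(np.array(rows)[:, None], np.array(rhs), rcond=None)[0][0]
    k = b * p / r  # policy improvement step

print(k)  # converges toward the optimal LQR gain, 1 + sqrt(2) for these values
```

For this choice of parameters the iteration reproduces the solution of the continuous-time Riccati equation; the dissertation's methods address the far harder affine nonlinear, multi-loop setting with formal conditioning and excitation guarantees.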
Contributors: Wallace, Brent Abraham (Author) / Si, Jennie (Thesis advisor) / Berman, Spring M (Committee member) / Bertsekas, Dimitri P (Committee member) / Tsakalis, Konstantinos S (Committee member) / Arizona State University (Publisher)
Created: 2024
Description
This thesis considers two problems in the control of robotic swarms. Firstly, it addresses a trajectory planning and task allocation problem for a swarm of resource-constrained robots that cannot localize or communicate with each other and that exhibit stochasticity in their motion and task-switching policies. We model the population dynamics of the robotic swarm as a set of advection-diffusion-reaction (ADR) partial differential equations (PDEs).

Specifically, we consider a linear parabolic PDE model that is bilinear in the robots' velocity and task-switching rates. These parameters constitute a set of time-dependent control variables that can be optimized and transmitted to the robots prior to their deployment or broadcast in real time. The planning and allocation problem can then be formulated as a PDE-constrained optimization problem, which we solve using techniques from optimal control. Simulations of a commercial pollination scenario validate the ability of our control approach to drive a robotic swarm to achieve predefined spatial distributions of activity over a closed domain, which may contain obstacles. Secondly, we consider a mapping problem wherein a robotic swarm is deployed over a closed domain and it is necessary to reconstruct the unknown spatial distribution of a feature of interest. The ADR-based primitives result in a coefficient identification problem for the corresponding system of PDEs. To deal with the inherent ill-posedness of the problem, we frame it as an optimization problem. We validate our approach through simulations and show that reconstruction of the spatially-dependent coefficient can be achieved with considerable accuracy using temporal information alone.
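The forward model underlying both problems can be sketched with a simple 1-D finite-difference discretization of an ADR PDE, in which the advection velocity and the task-switching (reaction) rate enter the dynamics bilinearly as time-dependent controls. This is a minimal sketch of the forward simulation only, with assumed coefficient values and periodic boundary conditions; the thesis's PDE-constrained optimization over the controls is not reproduced here.

```python
import numpy as np

# Sketch of a 1-D advection-diffusion-reaction (ADR) forward model:
# rho_t = D*rho_xx - v(t)*rho_x - s(t)*rho, with rho(x, t) the robot
# population density. The velocity v(t) and switching rate s(t) are the
# time-dependent controls; all numerical values here are assumptions.
D = 0.01                        # diffusion coefficient (assumed)
nx, dx, dt, nt = 100, 0.01, 2e-4, 500
x = np.arange(nx) * dx
rho = np.exp(-((x - 0.3) ** 2) / 0.005)  # initial density bump
rho /= rho.sum() * dx                     # normalize to unit mass

def v(t):   # advection velocity control (assumed constant profile)
    return 0.5

def s(t):   # task-switching rate: robots leaving this task's population
    return 0.2

for n in range(nt):  # explicit Euler time stepping, periodic boundaries
    t = n * dt
    lap = (np.roll(rho, -1) - 2 * rho + np.roll(rho, 1)) / dx**2
    adv = (np.roll(rho, -1) - np.roll(rho, 1)) / (2 * dx)
    rho = rho + dt * (D * lap - v(t) * adv - s(t) * rho)

mass = rho.sum() * dx
print(mass)  # reaction term removes mass at rate s(t); diffusion/advection conserve it
```

In the optimization setting, v(t) and s(t) would be decision variables shaped to drive rho toward a target distribution; the explicit scheme above is stable for these step sizes (dt < dx^2 / (2D)).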
Contributors: Elamvazhuthi, Karthik (Author) / Berman, Spring Melody (Thesis advisor) / Peet, Matthew Monnig (Committee member) / Mittelmann, Hans (Committee member) / Arizona State University (Publisher)
Created: 2014