Matching Items (2)
Description
Numerous works have addressed the control of multi-robot systems for coverage, mapping, navigation, and task allocation problems. In addition to classical microscopic approaches to multi-robot problems, which model the actions and decisions of individual robots, there has recently been a focus on macroscopic or Eulerian approaches. In these approaches, the population of robots is represented as a continuum that evolves according to a mean-field model, which is directly designed such that the corresponding robot control policies produce target collective behaviours.



This dissertation presents a control-theoretic analysis of three types of mean-field models proposed in the literature for modelling and control of large-scale multi-agent systems, including robotic swarms. These mean-field models are Kolmogorov forward equations of stochastic processes, and their analysis is motivated by the fact that as the number of agents tends to infinity, the empirical measure associated with the agents converges to the solution of these models. Hence, the problem of transporting a swarm of agents from one distribution to another can be posed as a control problem for the forward equation of the process that determines the time evolution of the swarm density.
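As a point of reference for the statement above, the following is a minimal worked example in generic notation (assumed here for illustration, not taken from the dissertation) of such a forward equation in the simplest setting, agents evolving on a finite state space {1, ..., n} according to a CTMC with control-dependent generator Q(u):

```latex
% Generic notation assumed for illustration; not taken from the dissertation.
% x(t) is the swarm density (probability vector) over the n states.
\[
  \dot{x}(t) \;=\; Q\bigl(u(t)\bigr)^{\top} x(t),
  \qquad
  x(t) \in \Delta^{n-1} \;=\; \Bigl\{\, x \in \mathbb{R}^{n}_{\ge 0} : \textstyle\sum_{i=1}^{n} x_i = 1 \,\Bigr\}.
\]
% As the number of agents N tends to infinity, the empirical measure of the
% agents converges to the solution of the forward equation:
\[
  \mu_N(t) \;=\; \frac{1}{N} \sum_{i=1}^{N} \delta_{X_i(t)} \;\longrightarrow\; x(t).
\]
```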



First, this thesis considers the case in which the agents' states evolve on a finite state space according to a continuous-time Markov chain (CTMC), and the forward equation is an ordinary differential equation (ODE). Defining the agents' task transition rates as the control parameters, the finite-time controllability, asymptotic controllability, and stabilization of the forward equation are investigated. Second, the controllability and stabilization problem for systems of advection-diffusion-reaction partial differential equations (PDEs) is studied in the case where the control parameters include the agents' velocity as well as transition rates. Third, this thesis considers a controllability and optimal control problem for the forward equation in the more general case where the agent dynamics are given by a nonlinear discrete-time control system. Beyond these theoretical results, this thesis also considers numerical optimal transport for control-affine systems. It is shown that finite-volume approximations of the associated PDEs lead to well-posed transport problems on graphs as long as the control system is controllable everywhere.
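For the first case above (agents whose states evolve on a finite state space according to a CTMC), the sketch below illustrates how the empirical distribution of a large swarm tracks the solution of the forward equation. The three-state chain, the rate values, and the integration scheme are assumptions made for illustration; this is not the dissertation's implementation.

```python
# Minimal sketch (illustrative, not from the dissertation): a swarm whose agents
# follow a CTMC on 3 states, compared against the mean-field ODE dx/dt = Q^T x.
import numpy as np

rng = np.random.default_rng(0)

# Control parameters: the off-diagonal entries are the agents' transition rates
# (values assumed for illustration); rows of the generator Q sum to zero.
Q = np.array([[-1.0,  0.6,  0.4],
              [ 0.3, -0.8,  0.5],
              [ 0.2,  0.7, -0.9]])
n_states = Q.shape[0]

def forward_ode(x0, T, dt=1e-3):
    """Integrate the forward equation dx/dt = Q^T x with explicit Euler steps."""
    x = x0.copy()
    for _ in range(int(T / dt)):
        x = x + dt * (Q.T @ x)
    return x

def simulate_agents(n_agents, x0, T, dt=1e-3):
    """Simulate each agent's CTMC with small-step jump probabilities (rate * dt)."""
    states = rng.choice(n_states, size=n_agents, p=x0)
    for _ in range(int(T / dt)):
        jump_probs = Q[states] * dt                      # rates out of current state
        jump_probs[np.arange(n_agents), states] = 0.0    # drop the (negative) diagonal
        stay = 1.0 - jump_probs.sum(axis=1)              # probability of not jumping
        probs = jump_probs
        probs[np.arange(n_agents), states] = stay
        # Sample each agent's next state by inverting the row-wise CDF.
        cum = np.cumsum(probs, axis=1)
        u = rng.random(n_agents)[:, None]
        states = np.minimum((u > cum).sum(axis=1), n_states - 1)
    return np.bincount(states, minlength=n_states) / n_agents

x0 = np.array([1.0, 0.0, 0.0])
print("mean-field prediction:", forward_ode(x0, T=2.0))
print("empirical (N=5000):   ", simulate_agents(5000, x0, T=2.0))
```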
Contributors: Elamvazhuthi, Karthik (Author) / Berman, Spring Melody (Thesis advisor) / Kawski, Matthias (Committee member) / Kuiper, Hendrik (Committee member) / Mignolet, Marc (Committee member) / Peet, Matthew (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
The problem of modeling and controlling the distribution of a multi-agent system has recently evolved into an interdisciplinary effort. When the agent population is very large, i.e., at least on the order of hundreds of agents, it is important that techniques for analyzing and controlling the system scale well with the number of agents. One scalable approach to characterizing the behavior of a multi-agent system is possible when the agents' states evolve over time according to a Markov process. In this case, the density of agents over space and time is governed by a set of difference or differential equations known as a mean-field model, whose parameters determine the stochastic control policies of the individual agents. These models often have the advantage of being easier to analyze than the individual agent dynamics. Mean-field models have been used to describe the behavior of chemical reaction networks, biological collectives such as social insect colonies, and more recently, swarms of robots that, like natural swarms, consist of hundreds or thousands of agents that are individually limited in capability but can coordinate to achieve a particular collective goal.
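As a minimal illustration of the difference-equation form of such a mean-field model (generic finite-state notation assumed here for illustration, not taken from the dissertation, which also treats continuous state spaces): if each agent follows a discrete-time Markov chain with control-dependent transition matrix P(u), the agent density evolves as

```latex
% Generic notation assumed for illustration; not taken from the dissertation.
% P(u_k) is row-stochastic; its entries are the agents' transition probabilities
% (the control parameters), and x_k is the agent density over the n states.
\[
  x_{k+1} \;=\; P(u_k)^{\top} x_k ,
  \qquad
  P_{ij}(u_k) \;=\; \Pr\bigl(X_{k+1} = j \mid X_k = i,\; u_k\bigr).
\]
```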

This dissertation presents a control-theoretic analysis of mean-field models for which the agent dynamics are governed by either a continuous-time Markov chain on an arbitrary state space, or a discrete-time Markov chain on a continuous state space. Three main problems are investigated. First, the problem of stabilization is addressed, that is, the design of transition probabilities/rates of the Markov process (the agent control parameters) that make a target distribution, satisfying certain conditions, invariant. Such a control approach could be used to achieve desired multi-agent distributions for spatial coverage and task allocation. However, the convergence of the multi-agent distribution to the designed equilibrium does not imply the convergence of the individual agents to fixed states; agents that keep switching between states after the target distribution is reached potentially waste energy. The second problem addressed in this dissertation is therefore the construction of feedback control laws that prevent agents from transitioning once the equilibrium distribution is reached. The third problem addressed is the computation of optimized transition probabilities/rates that maximize the speed at which the system converges to the target distribution.
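The first problem above asks for transition probabilities/rates that leave a target distribution invariant. One standard construction for the discrete-time, finite-state case is the Metropolis-Hastings rule; the sketch below uses it with an assumed ring graph and target distribution, purely as an illustration and not necessarily as the construction developed in the dissertation.

```python
# Minimal sketch (one standard construction, not necessarily the dissertation's):
# Metropolis-Hastings transition probabilities on a graph that make a given
# target distribution pi invariant. The graph and pi below are assumed examples.
import numpy as np

def metropolis_hastings_matrix(adjacency, pi):
    """Build a row-stochastic P with pi P = pi, using a uniform proposal on neighbors."""
    n = len(pi)
    degree = adjacency.sum(axis=1)
    P = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adjacency[i, j]:
                proposal_ij = 1.0 / degree[i]
                proposal_ji = 1.0 / degree[j]
                accept = min(1.0, (pi[j] * proposal_ji) / (pi[i] * proposal_ij))
                P[i, j] = proposal_ij * accept
        P[i, i] = 1.0 - P[i].sum()          # remaining probability mass = self-loop
    return P

# Example: 4 sites on a ring graph, non-uniform target distribution (assumed values).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
pi = np.array([0.4, 0.3, 0.2, 0.1])

P = metropolis_hastings_matrix(A, pi)
print("pi P - pi =", pi @ P - pi)           # ~0: pi is invariant under P
```

Detailed balance (pi_i P_ij = pi_j P_ji) is what makes the target distribution invariant here; convergence speed, the subject of the third problem above, depends on the resulting chain's mixing properties.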
Contributors: Biswal, Shiba (Author) / Berman, Spring (Thesis advisor) / Fainekos, Georgios (Committee member) / Lanchier, Nicolas (Committee member) / Mignolet, Marc (Committee member) / Peet, Matthew (Committee member) / Arizona State University (Publisher)
Created: 2020