https://hdl.handle.net/2286/R.I.38411
http://rightsstatements.org/vocab/InC/1.0/
All Rights Reserved
2016
xiv, 208 pages : illustrations (some color)
Doctoral Dissertation
Academic theses
Text
eng
Kamyar, Reza
Peet, Matthew
Berman, Spring
Rivera, Daniel
Artemiadis, Panagiotis
Fainekos, Georgios
Arizona State University
Partial requirement for: Ph.D., Arizona State University, 2016
Includes bibliographical references (pages 198-208)
Field of study: Mechanical engineering
In this thesis, we focus on some of the NP-hard problems in control theory. Thanks to converse Lyapunov theory, these problems can often be modeled as optimization over polynomials. To avoid the problem of intractability, we establish a trade-off between accuracy and complexity. In particular, we develop a sequence of tractable optimization problems - in the form of Linear Programs (LPs) and/or Semi-Definite Programs (SDPs) - whose solutions converge to the exact solution of the NP-hard problem. However, the computational and memory complexity of these LPs and SDPs grow exponentially as the sequence progresses - meaning that improving the accuracy of the solutions requires solving SDPs with tens of thousands of decision variables and constraints. Setting up and solving such problems is a significant challenge. Existing optimization algorithms and software are designed only for desktop computers or small cluster computers - machines which do not have sufficient memory for solving such large SDPs. Moreover, the speed-up of these algorithms does not scale beyond dozens of processors. This motivates our search for parallel algorithms for setting up and solving large SDPs on large cluster computers and/or supercomputers.

We propose parallel algorithms for stability analysis of two classes of systems: 1) linear systems with a large number of uncertain parameters; 2) nonlinear systems defined by polynomial vector fields. First, we develop a distributed parallel algorithm which applies Polya's and/or Handelman's theorems to some variants of parameter-dependent Lyapunov inequalities with parameters defined over the standard simplex. The result is a sequence of SDPs which possess a block-diagonal structure. We then develop a parallel SDP solver which exploits this structure in order to map the computation, memory and communication to a distributed parallel environment.
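As a toy illustration of the Lyapunov-based stability test that underlies these optimization problems - not the parallel, parameter-dependent algorithm of the thesis - the following sketch checks stability of a small fixed linear system by solving the classical Lyapunov equation with SciPy; the test matrix is an arbitrary example chosen here for illustration.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hurwitz-stable example matrix (eigenvalues -2 and -3, both in the
# open left half-plane); chosen purely for illustration.
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])

# Solve the Lyapunov equation A P + P A^T = -I for P.
P = solve_continuous_lyapunov(A, -np.eye(2))

# dx/dt = A x is asymptotically stable iff P is symmetric positive
# definite, i.e. V(x) = x^T P x is a Lyapunov function for the system.
assert np.allclose(P, P.T)
assert np.all(np.linalg.eigvalsh(P) > 0)
print("P =", P)
```

The thesis replaces this single equation with parameter-dependent Lyapunov inequalities over a simplex, which is what produces the large structured SDPs discussed above.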
Numerical tests on a supercomputer demonstrate the ability of the algorithm to efficiently utilize hundreds and potentially thousands of processors, and to analyze systems with 100+ dimensional state-spaces. Furthermore, we extend our algorithms to analyze robust stability over more complicated geometries such as hypercubes and arbitrary convex polytopes. Our algorithms can be readily extended to address a wide variety of problems in control, such as H-infinity synthesis for systems with parametric uncertainty and computing control Lyapunov functions.
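The key mechanism behind the simplex results is Polya's theorem: if a homogeneous polynomial is strictly positive on the standard simplex, then multiplying it by a sufficiently high power of the sum of its variables yields only nonnegative coefficients, which certifies positivity via linear constraints. A minimal SymPy sketch (the polynomial and exponent are illustrative choices, not taken from the thesis):

```python
import sympy as sp

x, y = sp.symbols('x y', nonnegative=True)

# p is homogeneous and strictly positive on the simplex
# {x, y >= 0, x + y = 1}, yet has a negative coefficient (-1 on x*y),
# so positivity is not obvious from the coefficients alone.
p = x**2 - x*y + y**2

# Polya's theorem: for large enough d, (x + y)**d * p has only
# nonnegative coefficients. Here d = 1 already suffices:
# (x + y) * (x**2 - x*y + y**2) == x**3 + y**3.
d = 1
q = sp.expand((x + y)**d * p)
coeffs = sp.Poly(q, x, y).coeffs()
assert all(c >= 0 for c in coeffs)
print(q)
```

Applied to parameter-dependent Lyapunov inequalities, matching coefficients of the multiplied polynomial yields the block-structured SDP constraints that the thesis's parallel solver exploits.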
Mechanical Engineering
Mathematics
energy
Convex Optimization
Lyapunov theory
Optimal energy storage
Parallel Computing
Polynomial optimization
stability analysis
Control Theory
Parallel processing (Electronic computers)
Mathematical optimization
Polynomials
Stability--Mathematical models.
stability
Parallel optimization of polynomials for large-scale problems in stability and control