This collection includes most of the ASU Theses and Dissertations from 2011 to the present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about each dissertation or thesis includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic theses found in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.

Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.

Displaying 1 - 4 of 4

Description

Robotic joints can be either powered or passive. This work discusses the development of a passive joint system and a powered joint system, as well as a combined system that is both powered and passive, along with its benefits. A novel approach to the analysis and control of the combined system is presented.

A passive ankle joint system and a powered ankle joint system are developed and applied to the field of prosthetics, specifically ankle joint replacement for able-bodied gait. The general 1-DOF robotic joint designs are examined and the results from testing are discussed. Achievements in this area include able-bodied-gait-like behavior of passive systems at slow walking speeds. At higher walking speeds, the powered ankle system is capable of adding the energy necessary to propel the user forward while remaining similar to able-bodied gait, effectively replacing the calf muscle. While running has not been fully achieved with past powered ankle devices, this work reaches the full power necessary for running and sprinting while achieving 4x power amplification through the powered ankle mechanism.

A theoretical approach to robotic joints is then analyzed in order to combine the advantages of both passive and powered systems. Energy methods are shown to provide a correct behavioral analysis of any robotic joint system. Manipulation of the energy curves and mechanism coupler curves allows real-time adjustment of joint behavior. Such a powered joint can be adjusted to passively achieve desired behavior for different speeds and environmental needs. The effects on joint moment and stiffness of adjusting one type of mechanism are presented.
ContributorsHolgate, Robert (Author) / Sugar, Thomas (Thesis advisor) / Artemiadis, Panagiotis (Thesis advisor) / Berman, Spring (Committee member) / Mignolet, Marc (Committee member) / Davidson, Joseph (Committee member) / Arizona State University (Publisher)
Created2017
Description

One potential application of multi-robot systems is collective transport, a task in which multiple mobile robots collaboratively transport a payload that is too large or heavy to be carried by a single robot. Numerous control schemes have been proposed for collective transport in environments where robots can localize themselves (e.g., using GPS) and communicate with one another, have information about the payload's geometric and dynamical properties, and follow predefined robot and/or payload trajectories. However, these approaches cannot be applied in uncertain environments where robots do not have reliable communication and GPS and lack information about the payload. These conditions characterize a variety of applications, including construction, mining, assembly in space and underwater, search-and-rescue, and disaster response.
Toward this end, this thesis presents decentralized control strategies for collective transport by robots that regulate their actions using only their local sensor measurements and minimal prior information. These strategies can be implemented on robots that have limited or absent localization capabilities, do not explicitly exchange information, and are not assigned predefined trajectories. The controllers are developed for collective transport over planar surfaces, but can be extended to three-dimensional environments.

This thesis addresses the above problem for two control objectives. First, decentralized controllers are proposed for velocity control of collective transport, in which the robots must transport a payload at a constant velocity through an unbounded domain that may contain strictly convex obstacles. The robots are provided only with the target transport velocity, and they do not have global localization or prior information about any obstacles in the environment. Second, decentralized controllers are proposed for position control of collective transport, in which the robots must transport a payload to a target position through a bounded or unbounded domain that may contain convex obstacles. The robots are subject to the same constraints as in the velocity control scenario, except that they are assumed to have global localization. Theoretical guarantees for successful execution of the task are derived using techniques from nonlinear control theory, and it is shown through simulations and physical robot experiments that the transport objectives are achieved with the proposed controllers.
ContributorsFarivarnejad, Hamed (Author) / Berman, Spring (Thesis advisor) / Mignolet, Marc (Committee member) / Tsakalis, Konstantinos (Committee member) / Artemiadis, Panagiotis (Committee member) / Gil, Stephanie (Committee member) / Arizona State University (Publisher)
Created2020
Description

The problem of modeling and controlling the distribution of a multi-agent system has recently evolved into an interdisciplinary effort. When the agent population is very large, i.e., at least on the order of hundreds of agents, it is important that techniques for analyzing and controlling the system scale well with the number of agents. One scalable approach to characterizing the behavior of a multi-agent system is possible when the agents' states evolve over time according to a Markov process. In this case, the density of agents over space and time is governed by a set of difference or differential equations known as a mean-field model, whose parameters determine the stochastic control policies of the individual agents. These models often have the advantage of being easier to analyze than the individual agent dynamics. Mean-field models have been used to describe the behavior of chemical reaction networks, biological collectives such as social insect colonies, and more recently, swarms of robots that, like natural swarms, consist of hundreds or thousands of agents that are individually limited in capability but can coordinate to achieve a particular collective goal.

This dissertation presents a control-theoretic analysis of mean-field models for which the agent dynamics are governed by either a continuous-time Markov chain on an arbitrary state space, or a discrete-time Markov chain on a continuous state space. Three main problems are investigated. First, the problem of stabilization is addressed, that is, the design of transition probabilities/rates of the Markov process (the agent control parameters) that make a target distribution, satisfying certain conditions, invariant. Such a control approach could be used to achieve desired multi-agent distributions for spatial coverage and task allocation. However, the convergence of the multi-agent distribution to the designed equilibrium does not imply the convergence of the individual agents to fixed states. To prevent the agents from continuing to transition between states once the target distribution is reached, and thus potentially waste energy, the second problem addressed within this dissertation is the construction of feedback control laws that prevent agents from transitioning once the equilibrium distribution is reached. The third problem addressed is the computation of optimized transition probabilities/rates that maximize the speed at which the system converges to the target distribution.
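As a minimal illustration of the mean-field idea described in this abstract (the transition matrix and state space below are hypothetical, chosen only for the sketch, and are not taken from the dissertation): when each agent's state evolves by a discrete-time Markov chain with column-stochastic transition matrix P, the population density x obeys the linear mean-field model x_{k+1} = P x_k and converges to the stationary distribution pi satisfying P pi = pi.

```python
import numpy as np

# Hypothetical 3-state example. Each column of P sums to 1, so P maps
# probability densities to probability densities; x_{k+1} = P @ x_k is
# the (linear) mean-field model of the swarm's state distribution.
P = np.array([[0.8, 0.1, 0.2],
              [0.1, 0.8, 0.2],
              [0.1, 0.1, 0.6]])

x = np.array([1.0, 0.0, 0.0])   # all agents start in state 0
for _ in range(200):            # iterate the mean-field dynamics
    x = P @ x

# Stationary distribution: eigenvector of P for eigenvalue 1, normalized
# to sum to 1. The density converges to pi regardless of the number of
# agents that realize it.
eigvals, eigvecs = np.linalg.eig(P)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()
```

Here the "control design" problem the abstract refers to would run in reverse: choose the entries of P so that a desired target density is the stationary distribution pi.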
ContributorsBiswal, Shiba (Author) / Berman, Spring (Thesis advisor) / Fainekos, Georgios (Committee member) / Lanchier, Nicolas (Committee member) / Mignolet, Marc (Committee member) / Peet, Matthew (Committee member) / Arizona State University (Publisher)
Created2020
Description

Modern life is full of challenging optimization problems that we unknowingly attempt to solve. For instance, a common dilemma often encountered is the decision of picking a parking spot while trying to minimize both the distance to the goal destination and the time spent searching for parking; one strategy is to drive as close as possible to the goal destination but risk a penalty cost if no parking spaces can be found. Optimization problems of this class all have underlying time-varying processes that can be altered by a decision/input to minimize some cost. Such optimization problems are commonly solved by a class of methods called Dynamic Programming (DP) that breaks down a complex optimization problem into a simpler family of sub-problems. In the 1950s, Richard Bellman introduced a class of DP methods that broke down Multi-Stage Optimization Problems (MSOPs) into a nested sequence of "tail problems". Bellman showed that for any MSOP with a cost function that satisfies a condition called additive separability, the solution to the tail problem of the MSOP initialized at time-stage k>0 can be used to solve the tail problem initialized at time-stage k-1. Therefore, by recursively solving each tail problem of the MSOP, a solution to the original MSOP can be found. This dissertation extends Bellman's theory to a broader class of MSOPs involving non-additively separable costs by introducing a new state augmentation solution method and generalizing the Bellman Equation. This dissertation also considers the analogous continuous-time counterpart to discrete-time MSOPs, called Optimal Control Problems (OCPs). OCPs can be solved by solving a nonlinear Partial Differential Equation (PDE) called the Hamilton-Jacobi-Bellman (HJB) PDE. Unfortunately, it is rarely possible to obtain an analytical solution to the HJB PDE. This dissertation proposes a method for approximately solving the HJB PDE based on Sum-Of-Squares (SOS) programming.
This SOS algorithm can be used to synthesize controllers, hence solving the OCP, and also to compute outer bounds of reachable sets of dynamical systems. This methodology is then extended to infinite time horizons by proposing SOS algorithms that yield Lyapunov functions that can approximate regions of attraction and attractor sets of nonlinear dynamical systems arbitrarily well.
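The tail-problem recursion Bellman introduced, for a cost that is additively separable, can be sketched on a toy finite-horizon problem (the state space, dynamics f, and stage cost c below are hypothetical, chosen only for illustration and not drawn from the dissertation): the tail problem at stage k is solved directly from the tail problem at stage k+1 via J_k(s) = min_a [ c(s,a) + J_{k+1}(f(s,a)) ].

```python
# Backward (tail-problem) recursion for a toy additively separable MSOP.
T = 5                      # number of stages
states = range(4)          # hypothetical finite state space {0, 1, 2, 3}
actions = (-1, 0, 1)       # move left, stay, move right

def f(s, a):               # deterministic dynamics, clamped to the state space
    return min(max(s + a, 0), 3)

def c(s, a):               # stage cost: distance from state 2 plus control effort
    return abs(s - 2) + 0.5 * abs(a)

# Terminal tail problem: J_T(s) = 0 for all states s.
J = {s: 0.0 for s in states}
policy = []
for k in reversed(range(T)):
    # Solve tail problem k using the already-computed tail problem k+1.
    Jk, mu = {}, {}
    for s in states:
        costs = {a: c(s, a) + J[f(s, a)] for a in actions}
        mu[s] = min(costs, key=costs.get)   # optimal action at (k, s)
        Jk[s] = costs[mu[s]]                # optimal cost-to-go J_k(s)
    J = Jk
    policy.append(mu)
policy.reverse()           # policy[k][s] is the optimal action at stage k
```

The state augmentation method the dissertation proposes addresses exactly the case where the cost does not decompose stage-by-stage like c(s, a) does here, so this recursion cannot be applied directly.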
ContributorsJones, Morgan (Author) / Peet, Matthew M (Thesis advisor) / Nedich, Angelia (Committee member) / Kawski, Matthias (Committee member) / Mignolet, Marc (Committee member) / Berman, Spring (Committee member) / Arizona State University (Publisher)
Created2021