Matching Items (88)
Description
Multibody Dynamic (MBD) models are important tools in motion analysis and are used to represent and accurately predict the behavior of systems in the real world. These models have a range of applications, including the stowage and deployment of flexible deployables on spacecraft, the dynamic response of vehicles in automotive design and crash testing, and mapping interactions of the human body. An accurate model can aid in the design of a system to ensure the system is effective and meets specified performance criteria when built. A model may have many design parameters, such as geometrical constraints and component mechanical properties, or controller parameters if the system uses an external controller. Varying these parameters and rerunning analyses by hand to find an ideal design can be time-consuming for models that take hours or days to run. To reduce the time required to find a set of parameters that produces the desired performance, optimization is necessary. Many papers have discussed methods for optimizing rigid and flexible MBD models, and separately their controllers, using both gradient-based and gradient-free algorithms. However, these optimization methods have not been used to optimize full-scale MBD models and their controllers simultaneously. This thesis presents a method for co-optimizing an MBD model and controller that allows the flexibility to find model- and controller-based solutions for systems with tightly coupled parameters. Specifically, the optimization is performed on a quadrotor drone MBD model undergoing disturbance from a slung load, together with its position controller, to meet specified position error performance criteria. A gradient-free optimization algorithm and a multiple-objective approach are used because of the many local optima arising from the tradeoffs between the model and controller parameters. The thesis uses nine different quadrotor cases with three different position error formulations.
The results are used to determine the effectiveness of the optimization and the ability to converge on a single optimal design. After reviewing the results, the optimization limitations are discussed as well as the ability to transition the optimization to work with different MBD models and their controllers.
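The co-optimization idea can be reduced to a toy sketch: a gradient-free random search varies a model parameter and controller gains together and keeps whichever set minimizes integrated position error. Everything here (a PD-controlled unit mass, the parameter bounds, the cost) is an illustrative assumption, not the thesis's quadrotor model or algorithm.

```python
import random

def simulate(k_p, k_d, mass=1.0, dt=0.01, steps=500):
    """Step response of a mass under PD control toward x = 1; returns integrated |error|."""
    x, v, err_sum = 0.0, 0.0, 0.0
    for _ in range(steps):
        err = 1.0 - x
        force = k_p * err - k_d * v
        v += (force / mass) * dt   # semi-implicit Euler keeps the oscillator stable
        x += v * dt
        err_sum += abs(err) * dt
    return err_sum

def co_optimize(n_iter=2000, seed=0):
    """Gradient-free random search over a model parameter (mass) and controller gains together."""
    rng = random.Random(seed)
    best_params, best_cost = None, float("inf")
    for _ in range(n_iter):
        mass = rng.uniform(0.5, 2.0)    # model (plant) parameter
        k_p = rng.uniform(1.0, 50.0)    # controller parameters
        k_d = rng.uniform(0.1, 20.0)
        cost = simulate(k_p, k_d, mass)
        if cost < best_cost:
            best_params, best_cost = (mass, k_p, k_d), cost
    return best_params, best_cost
```

Because plant and controller parameters are sampled jointly, tradeoffs between them (e.g., a lighter plant tolerating lower gains) are explored directly, which is the point of co-optimization.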
ContributorsGambatese, Marcus (Author) / Zhang, Wenlong (Thesis advisor) / Berman, Spring (Committee member) / Inoyama, Daisaku (Committee member) / Arizona State University (Publisher)
Created2022
Description
The notion of the safety of a system placed in an environment with humans and other machines has been one of the primary concerns of practitioners while deploying any cyber-physical system (CPS). Such systems, also called safety-critical systems, need to be exhaustively tested for erroneous behavior. This generates the need for algorithms that can help ascertain the behavior and safety of the system by generating tests in the regions of the input space where the system is likely to be falsified. In this work, three algorithms are presented that aim at finding falsifying behaviors in cyber-physical systems. PART-X intelligently partitions the input space while sampling it to provide probabilistic point and region estimates of falsification. PYSOAR-C and LS-EMIBO aim at finding falsifying behaviors in gray-box systems when some information about the system is available. Specifically, PYSOAR-C aims to find falsifications while maximizing coverage using a two-phase optimization process, while LS-EMIBO aims at exploiting the structure of a requirement to find falsifications at lower computational cost than the state-of-the-art. This work also shows the efficacy of the algorithms on a wide range of complex cyber-physical systems. The algorithms presented in this thesis are available as Python toolboxes.
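The falsification loop common to such tools can be sketched on a toy system: sample candidate inputs, compute a robustness value for the requirement (positive means satisfied, negative means violated), and keep the minimizer. The system, requirement, and sampler below are invented stand-ins, far simpler than PART-X, PYSOAR-C, or LS-EMIBO.

```python
import random

def system_trace(u, steps=50):
    """Toy stable system driven by a constant input u; returns the state trace."""
    x = 0.0
    trace = []
    for _ in range(steps):
        x = 0.9 * x + u
        trace.append(x)
    return trace

def robustness(trace, threshold=9.5):
    """Robustness of the requirement 'x always stays below threshold'.
    Positive: requirement satisfied; negative: falsified."""
    return threshold - max(trace)

def falsify(n_samples=500, seed=1):
    """Random-sampling falsifier: return the input with the lowest robustness."""
    rng = random.Random(seed)
    best_u, best_rob = None, float("inf")
    for _ in range(n_samples):
        u = rng.uniform(0.0, 1.0)
        rob = robustness(system_trace(u))
        if rob < best_rob:
            best_u, best_rob = u, rob
    return best_u, best_rob
```

A returned robustness below zero is a counterexample: a concrete input under which the system violates its requirement.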
ContributorsKhandait, Tanmay Bhaskar (Author) / Pedrielli, Giulia (Thesis advisor) / Fainekos, Georgios (Thesis advisor) / Gopalan, Nakul (Committee member) / Arizona State University (Publisher)
Created2022
Description
A new uniaxial testing apparatus has been proposed that takes advantage of less costly methods such as 3D printing of tensile fixtures and image reference markers for accurate data acquisition. The purpose of this research is to find methods to improve the resolution, accuracy, and repeatability of this newly designed testing apparatus. The first phase of the research involved building a program that optimizes the testing apparatus design for the sample being tested. It was found that the design program allowed for quick modifications of the apparatus in order to test a wide variety of samples. The second phase of the research used finite element analysis to determine which sample geometry most reduced the impact of misalignment error. It found that a design previously proposed by Dr. Wonmo Kang, when combined with the testing apparatus, led to a large reduction in misalignment errors.
ContributorsAyoub, Yaseen (Author) / Kang, Wonmo (Thesis director) / Kashani, Hamzeh (Committee member) / Barrett, The Honors College (Contributor) / Mechanical and Aerospace Engineering Program (Contributor)
Created2022-12
Description
Motor gasoline and diesel contribute 30% to total energy-related carbon dioxide (CO2) emissions in the U.S. However, this estimate only accounts for emissions from direct combustion and does not include indirect emissions from processing and fuel movement, even though indirect (scope 3) CO2 emissions are a significant contributor. Gasoline and diesel flow through a complex supply chain from oil extraction to point of combustion, and estimates of their indirect emissions are typically aggregated as national or regional averages, not available at county or city scale. This dissertation presents a novel method to quantify U.S. supply-chain CO2 emissions at the county scale for gasoline and diesel consumed in the on-road sector. It considers how these fuels flow across the U.S. petroleum infrastructure consisting of pipelines, tankers, trucks, trains, refineries, and blenders. It resolves county-scale indirect CO2 emissions using publicly accessible data to allocate fuel movement between different links and transportation modes across the country. For most of the U.S., the exact volume of fuel moved between counties from combinations of refineries and transportation modes is not explicitly known. To estimate these fuel movements, I use linear optimization with supply- and demand-related constraints. Estimating on-road gasoline and diesel indirect CO2 emissions at high spatial resolution shows that accounting for them increases on-road gasoline CO2 emissions by 24% and on-road diesel CO2 emissions by 18%. For both fuels there are large variations in carbon intensity (kgCO2/gal) across the country, and the relationship of county carbon intensity with explanatory variables related to fuel supply infrastructure is tested. Regression results indicate that the presence of interstate highways, refineries, and blenders is inversely related to carbon intensity, while the presence of fuel pipelines increases diesel carbon intensity.
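The allocation step can be illustrated with a toy transportation linear program: choose flows from refineries to counties that meet supply and demand constraints at minimum transport cost. The supplies, demands, and costs below are made-up numbers, and `scipy.optimize.linprog` stands in for whatever solver the dissertation actually used.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy network: 2 refineries supplying 3 counties.
supply = [60.0, 40.0]             # refinery output (e.g., kgal/day)
demand = [30.0, 45.0, 25.0]       # county consumption
# Cost per unit flow (a proxy for distance / transport-mode CO2 intensity).
cost = np.array([[1.0, 4.0, 6.0],
                 [5.0, 2.0, 1.5]])

m, n = cost.shape
c = cost.ravel()                  # decision variables: flow[i, j], row-major

A_eq = np.zeros((m + n, m * n))
b_eq = np.array(supply + demand)
for i in range(m):                # each refinery ships exactly its output
    A_eq[i, i * n:(i + 1) * n] = 1.0
for j in range(n):                # each county receives exactly its demand
    A_eq[m + j, j::n] = 1.0

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
flows = res.x.reshape(m, n)       # estimated refinery-to-county fuel movements
```

Once flows are known, each county's indirect emissions follow by weighting its inbound flows by the emission intensities of the supplying refineries and transport modes.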
Finally, the on-road gasoline scope 3 CO2 emissions results are assessed in relation to indirect CO2 emissions from electricity consumption at the county scale to analyze the effectiveness of future electric vehicle (EV) transition actions. In this analysis, states with existing EV transition mandates (zero emission vehicle or ‘ZEV’ states) are shown to have on average 12% higher CO2 emissions reduction when transitioning to EVs, over non-ZEV states.
ContributorsMoiz, Taha (Author) / Gurney, Kevin R (Thesis advisor) / Dooley, Kevin J (Thesis advisor) / Parker, Nathan C (Committee member) / Arizona State University (Publisher)
Created2022
Description
Vegetative filter strips (VFS) are an effective methodology used for stormwater management, particularly for large urban parking lots. An optimization model for the design of vegetative filter strips that minimizes the amount of land required for stormwater management using the VFS is developed in this study. The resulting optimization model is based upon the kinematic wave equation for overland sheet flow along with equations defining the cumulative infiltration and infiltration rate.
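The kinematic wave model for overland sheet flow referenced above has a standard textbook form; as a sketch in conventional notation (these symbols are the usual ones, not necessarily the thesis's):

```latex
\frac{\partial h}{\partial t} + \frac{\partial q}{\partial x} = i_e,
\qquad q = \alpha h^m, \qquad \alpha = \frac{\sqrt{S_0}}{n}, \quad m = \frac{5}{3},
```

where $h$ is the flow depth, $q$ the discharge per unit width, $i_e$ the rainfall excess (rainfall intensity minus infiltration rate), $S_0$ the surface slope, and $n$ Manning's roughness coefficient (SI units, Manning's formulation).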

In addition to their stormwater management function, VFS are effective mechanisms for control of sediment flow and soil erosion from agricultural and urban lands. Erosion is a major problem in areas subject to high runoff or steep slopes across the globe. In order to achieve economy in the design of grass filter strips as a mechanism for sediment control and stormwater management, an optimization model is required that minimizes the land requirements for the VFS. The optimization model presented in this study comprises an intricate system of equations, combining the equations defining sheet flow on the paved and grassed areas with the equations defining sediment transport over the vegetative filter strip, cast as a nonlinear programming optimization model. In this study, the optimization model is applied with a sensitivity analysis of parameters such as soil type and rainfall characteristics, performed to validate the model.
ContributorsKhatavkar, Puneet N (Author) / Mays, Larry W. (Thesis advisor) / Fox, Peter (Committee member) / Wang, Zhihua (Committee member) / Mascaro, Giuseppe (Committee member) / Arizona State University (Publisher)
Created2015
Description
This dissertation develops advanced controls for distributed energy systems and evaluates their performance in terms of technical and economic benefits. Microgrids and thermal systems are the primary focus, with applications shown for residential, commercial, and military settings that have differing equipment, rate structures, and objectives. Controls developed for residential heating and cooling systems implement adaptive precooling strategies and thermal energy storage, with comparisons made of each approach separately and then of precooling and thermal energy storage together. Case studies show that on-peak demand and annual energy-related expenses can be reduced by up to 75.6% and 23.5%, respectively, for a Building America B10 Benchmark home in Phoenix, Arizona; Los Angeles, California; and Kona, Hawaii. Microgrids for commercial applications follow with increased complexity. Three control methods are developed and compared: a baseline logic-based control, model predictive control, and model predictive control with ancillary service control algorithms. Case studies show that a microgrid consisting of 326 kW solar PV, a 634 kW / 634 kWh battery, and a 350 kW diesel generator can reduce on-peak demand and annual energy-related expenses by 82.2% and 44.1%, respectively. Findings also show that employing a model predictive control algorithm with ancillary services can reduce operating expenses by 23.5% compared to a logic-based algorithm. Microgrid evaluation continues with an investigation of off-grid operation and resilience for military applications. A statistical model is developed to evaluate survivability (i.e., the probability of meeting the critical load during an islanding event) for grid outages of up to 7 days. Case studies compare the resilience of a generator-only microgrid consisting of 5,250 kW in generators and a hybrid microgrid consisting of 2,250 kW in generators, 3,450 kW / 13,800 kWh storage, and 16,479 kW solar photovoltaics.
Findings show that the hybrid microgrid improves survivability by 10.0% and decreases fuel consumption by 47.8% over a 168-hour islanding event when compared to a generator-only microgrid under nominal conditions. Findings in this dissertation can increase the adoption of reliable, low cost, and low carbon distributed energy systems by improving the operational capabilities and economic benefits to a variety of customers and utilities.
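The on-peak demand reduction idea can be sketched with a minimal greedy peak-shaving heuristic, far simpler than the logic-based or model predictive controllers developed in the dissertation; the load profile, peak window, and battery sizes below are invented for illustration.

```python
def dispatch_battery(load_kw, peak_hours, capacity_kwh, power_kw, dt_h=1.0):
    """Greedy peak-shaving sketch: discharge on-peak, recharge off-peak.
    Returns the net grid demand profile (kW) seen by the utility."""
    soc = capacity_kwh              # assume the battery starts full
    net = []
    for hour, kw in enumerate(load_kw):
        if hour in peak_hours:
            # Discharge as much as power rating, load, and stored energy allow.
            discharge = min(power_kw, kw, soc / dt_h)
            soc -= discharge * dt_h
            net.append(kw - discharge)
        else:
            # Recharge toward full, which adds to off-peak demand.
            charge = min(power_kw, (capacity_kwh - soc) / dt_h)
            soc += charge * dt_h
            net.append(kw + charge)
    return net
```

Under a demand charge billed only during peak hours, shaving the on-peak maximum is what reduces the bill, even though the grid draw rises off-peak while the battery recharges.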
ContributorsNelson, James Robert (Author) / Johnson, Nathan (Thesis advisor) / Stadler, Michael (Committee member) / Zhang, Wenlong (Committee member) / Arizona State University (Publisher)
Created2019
Description
Global optimization (programming) has been attracting the attention of researchers for almost a century. Because linear programming (LP) and mixed integer linear programming (MILP) were well studied in the early stages, MILP methods and software tools have improved greatly in efficiency over the past few years. They are now fast and robust even for problems with millions of variables. It is therefore desirable to use MILP software to solve mixed integer nonlinear programming (MINLP) problems. For an MINLP problem to be solved by an MILP solver, its nonlinear functions must be transformed into linear ones. The most common method for this transformation is piecewise linear approximation (PLA). This dissertation summarizes the types of optimization and the most important tools and methods, and discusses the PLA tool in depth. PLA is done using nonuniform partitioning of the domain of the variables involved in the function to be approximated. Partial PLA models, which approximate only parts of a complicated optimization problem, are also introduced. Computational experiments are carried out, and the results show that nonuniform partitioning and partial PLA can be beneficial.
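Why nonuniform partitioning helps can be seen in a tiny example: interpolating a convex function with breakpoints placed where it curves fastest beats a uniform grid with the same number of pieces. The function (`exp` on [0, 4]) and breakpoint placement below are illustrative choices, not taken from the dissertation.

```python
import math

def pla(f, breakpoints, x):
    """Evaluate the piecewise linear approximation of f defined by breakpoints."""
    for a, b in zip(breakpoints, breakpoints[1:]):
        if a <= x <= b:
            t = (x - a) / (b - a)
            return (1 - t) * f(a) + t * f(b)
    raise ValueError("x outside breakpoint range")

def max_error(f, breakpoints, samples=1000):
    """Sampled maximum absolute error of the PLA over the breakpoint range."""
    lo, hi = breakpoints[0], breakpoints[-1]
    xs = [lo + (hi - lo) * k / samples for k in range(samples + 1)]
    return max(abs(f(x) - pla(f, breakpoints, x)) for x in xs)

uniform = [0.0, 1.0, 2.0, 3.0, 4.0]
nonuniform = [0.0, 2.0, 3.0, 3.5, 4.0]   # denser where exp curves fastest
err_u = max_error(math.exp, uniform)
err_n = max_error(math.exp, nonuniform)
```

With both partitions using four segments, the nonuniform one attains a noticeably smaller worst-case error, which is exactly the tradeoff PLA inside an MILP model cares about: fewer binary variables for the same accuracy.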
ContributorsAlkhalifa, Loay (Author) / Mittelmann, Hans (Thesis advisor) / Armbruster, Hans (Committee member) / Escobedo, Adolfo (Committee member) / Renaut, Rosemary (Committee member) / Sefair, Jorge (Committee member) / Arizona State University (Publisher)
Created2020
Description
Neuron models that behave like their biological counterparts are essential for computational neuroscience. Reduced neuron models, which abstract away biological mechanisms in the interest of speed and interpretability, have received much attention due to their utility in large-scale simulations of the brain, but little care has been taken to ensure that these models exhibit behaviors that closely resemble real neurons.
In order to improve the verisimilitude of these reduced neuron models, I developed an optimizer that uses genetic algorithms to align model behaviors with those observed in experiments.
I verified that this optimizer was able to recover model parameters given only observed physiological data; however, I also found that reduced models nonetheless had limited ability to reproduce all observed behaviors, and that this varied by cell type and desired behavior.
These challenges can partly be surmounted by carefully designing the set of physiological features that guide the optimization. In summary, I found evidence that reduced neuron model optimization has the potential to produce faithful reduced neuron models for only a limited range of neuron types.
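The parameter-recovery idea can be sketched with a toy genetic algorithm: evolve a population of candidate parameters so that a model's output matches an observed trace. The "neuron" here is just an exponential relaxation with time constant `tau`, and the GA operators are minimal stand-ins for the optimizer described in the thesis.

```python
import random

def model_trace(tau, steps=20, dt=0.1):
    """Toy membrane-like model: exponential relaxation with time constant tau."""
    v = 1.0
    out = []
    for _ in range(steps):
        v += (-v / tau) * dt
        out.append(v)
    return out

def fitness(tau, target):
    """Negative sum-of-squares mismatch against the observed trace."""
    return -sum((a - b) ** 2 for a, b in zip(model_trace(tau), target))

def evolve(target, pop_size=30, generations=40, seed=2):
    """Minimal GA: keep the fitter half, refill with mutated copies of parents."""
    rng = random.Random(seed)
    pop = [rng.uniform(0.05, 5.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: fitness(t, target), reverse=True)
        parents = pop[: pop_size // 2]
        children = [max(0.05, rng.choice(parents) + rng.gauss(0, 0.1))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=lambda t: fitness(t, target))

true_tau = 0.8
observed = model_trace(true_tau)      # synthetic "physiological data"
recovered = evolve(observed)
```

Recovering a known parameter from synthetic data, as tested here, is exactly the sanity check described above before fitting models to real recordings.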
ContributorsJarvis, Russell Jarrod (Author) / Crook, Sharon M (Thesis advisor) / Gerkin, Richard C (Thesis advisor) / Zhou, Yi (Committee member) / Abbas, James J (Committee member) / Arizona State University (Publisher)
Created2020
Description
The problem of modeling and controlling the distribution of a multi-agent system has recently evolved into an interdisciplinary effort. When the agent population is very large, i.e., at least on the order of hundreds of agents, it is important that techniques for analyzing and controlling the system scale well with the number of agents. One scalable approach to characterizing the behavior of a multi-agent system is possible when the agents' states evolve over time according to a Markov process. In this case, the density of agents over space and time is governed by a set of difference or differential equations known as a mean-field model, whose parameters determine the stochastic control policies of the individual agents. These models often have the advantage of being easier to analyze than the individual agent dynamics. Mean-field models have been used to describe the behavior of chemical reaction networks, biological collectives such as social insect colonies, and more recently, swarms of robots that, like natural swarms, consist of hundreds or thousands of agents that are individually limited in capability but can coordinate to achieve a particular collective goal.

This dissertation presents a control-theoretic analysis of mean-field models for which the agent dynamics are governed by either a continuous-time Markov chain on an arbitrary state space, or a discrete-time Markov chain on a continuous state space. Three main problems are investigated. First, the problem of stabilization is addressed, that is, the design of transition probabilities/rates of the Markov process (the agent control parameters) that make a target distribution, satisfying certain conditions, invariant. Such a control approach could be used to achieve desired multi-agent distributions for spatial coverage and task allocation. However, the convergence of the multi-agent distribution to the designed equilibrium does not imply the convergence of the individual agents to fixed states. To prevent the agents from continuing to transition between states once the target distribution is reached, and thus potentially waste energy, the second problem addressed within this dissertation is the construction of feedback control laws that prevent agents from transitioning once the equilibrium distribution is reached. The third problem addressed is the computation of optimized transition probabilities/rates that maximize the speed at which the system converges to the target distribution.
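The stabilization idea can be illustrated on a small finite state space: choose transition rates that make a desired agent distribution invariant, then integrate the mean-field ODE and watch the density converge. The Metropolis-style rate construction on a 4-state ring below is a standard stand-in, not the dissertation's construction for arbitrary state spaces.

```python
def metropolis_rates(target, base_rate=1.0):
    """Transition rate matrix Q on a ring of states for which `target` is
    invariant (Metropolis rule: damp moves toward less probable states)."""
    n = len(target)
    Q = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in ((i - 1) % n, (i + 1) % n):      # ring neighbors
            Q[i][j] = base_rate * min(1.0, target[j] / target[i])
        Q[i][i] = -sum(Q[i][k] for k in range(n) if k != i)
    return Q

def mean_field_step(x, Q, dt):
    """One Euler step of the mean-field ODE dx/dt = x Q (x = agent density)."""
    n = len(x)
    return [x[i] + dt * sum(x[k] * Q[k][i] for k in range(n)) for i in range(n)]

target = [0.1, 0.2, 0.3, 0.4]          # desired swarm distribution
Q = metropolis_rates(target)
x = [0.25] * 4                         # start from a uniform density
for _ in range(10000):
    x = mean_field_step(x, Q, 0.01)
```

Detailed balance (pi_i q_ij = pi_j q_ji) is what makes the target invariant here; as the abstract notes, individual agents keep transitioning at equilibrium, which motivates the feedback laws studied in the second problem.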
ContributorsBiswal, Shiba (Author) / Berman, Spring (Thesis advisor) / Fainekos, Georgios (Committee member) / Lanchier, Nicolas (Committee member) / Mignolet, Marc (Committee member) / Peet, Matthew (Committee member) / Arizona State University (Publisher)
Created2020
Description
The focus of this dissertation is first on understanding the difficulties involved in constructing reduced order models of structures that exhibit strong nonlinearities or strongly nonlinear events such as snap-through, buckling (local or global), mode switching, and symmetry breaking. Next, based on this understanding, the goal is to modify and extend the current Nonlinear Reduced Order Modeling (NLROM) methodology, including the basis selection and/or identification methodology, to obtain reliable reduced order models of these structures. Focusing on these goals, the work carried out addressed more specifically the following issues:

i) optimization of the basis to capture at best the response in the smallest number of modes,

ii) improved identification of the reduced order model stiffness coefficients,

iii) detection of strongly nonlinear events using NLROM.

For the first issue, an approach was proposed to rotate a limited number of linear modes so that they become more dominant in the response of the structure. This step was achieved through a proper orthogonal decomposition of the projection onto these linear modes of a series of representative nonlinear displacements. This rotation does not expand the modal space but renders that part of the basis more efficient, the identification of stiffness coefficients more reliable, and the selection of dual modes more compact. In fact, a separate approach was also proposed for an independent optimization of the duals. Regarding the second issue, two approaches for tuning the stiffness coefficients were proposed to improve the identification of a limited set of critical coefficients based on independent response data of the structure. Both approaches led to a significant improvement of the static prediction for the clamped-clamped curved beam model. Extensive validations of the NLROMs based on the above novel approaches were carried out by comparison with full finite element response data. The third issue, the detection of nonlinear events, was finally addressed by building connections between the eigenvalues of the finite element software (here Nastran) and NLROM tangent stiffness matrices and the occurrence of these events; this analysis was further extended to assess the accuracy with which the NLROM captures the full finite element behavior after the event has occurred.
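The rotation step described for the first issue can be sketched in NumPy: a POD (here, an SVD) of snapshot projections onto a fixed set of linear modes yields an orthogonal rotation that concentrates the response energy in the leading rotated modes without enlarging the modal subspace. The mode count, snapshot data, and weights below are synthetic, invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 6 orthonormal linear modes (columns of Phi) and 40
# nonlinear response snapshots projected onto them (rows of coords).
Phi = np.linalg.qr(rng.standard_normal((100, 6)))[0]
weights = np.array([5.0, 4.0, 0.5, 0.3, 0.1, 0.05])   # dominance of directions
coords = rng.standard_normal((40, 6)) * weights        # modal coordinates

# POD of the modal coordinates: the right singular vectors define the rotation.
_, s, Vt = np.linalg.svd(coords, full_matrices=False)
Phi_rot = Phi @ Vt.T        # rotated basis; spans the same 6-mode subspace

# Fraction of snapshot energy captured by the first k rotated modes.
k = 2
rot_energy = np.sum(s[:k] ** 2) / np.sum(s ** 2)
```

Because the SVD maximizes captured energy for every truncation level, the first few rotated modes always capture at least as much of the response as the same number of original modes, which is what makes the subsequent coefficient identification and dual-mode selection more compact.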
ContributorsLin, Jinshan (Author) / Mignolet, Marc (Thesis advisor) / Jiang, Hanqing (Committee member) / Oswald, Jay (Committee member) / Spottswood, Stephen (Committee member) / Rajan, Subramaniam D. (Committee member) / Arizona State University (Publisher)
Created2020