Matching Items (6)

Description
Modern measurement schemes for linear dynamical systems are typically designed so that different sensors can be scheduled for use at each time step. To determine which sensors to use, various metrics have been suggested. One possible such metric is the observability of the system. Observability is a binary condition determining whether a finite number of measurements suffice to recover the initial state. However, to employ observability for sensor scheduling, the binary definition needs to be expanded so that one can measure how observable a system is with a particular measurement scheme, i.e., one needs a metric of observability. Most methods utilizing an observability metric address sensor selection rather than sensor scheduling. In this dissertation, we present a new approach that utilizes observability for sensor scheduling by employing the condition number of the observability matrix as the metric and using column subset selection to create an algorithm that chooses which sensors to use at each time step. To this end, we use a rank-revealing QR factorization algorithm to select sensors. Several numerical experiments demonstrate the performance of the proposed scheme.
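
The core numerical ingredient is column subset selection driven by a rank-revealing (column-pivoted) QR factorization. The sketch below is illustrative rather than the dissertation's actual algorithm: it shows how a pivoted QR can rank candidate measurement columns and report the conditioning of the selected subset. The matrices, helper names, and parameters are made up for the example.

```python
# Illustrative sketch (not the dissertation's algorithm): choose a well-conditioned
# subset of candidate measurements via column-pivoted (rank-revealing) QR.
# All matrices and names below are made up for the example.
import numpy as np
from scipy.linalg import qr

def measurement_matrix(A, C, n_steps):
    """Columns are candidate measurement functionals (C_s A^k)^T for each sensor s and step k."""
    cols = [C[s] @ np.linalg.matrix_power(A, k)
            for k in range(n_steps) for s in range(C.shape[0])]
    return np.column_stack(cols)

def select_measurements(M, k):
    """Column subset selection: pivoted QR orders columns by how much new information they add."""
    _, _, piv = qr(M, pivoting=True)
    idx = piv[:k]
    return idx, np.linalg.cond(M[:, idx])

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) / 2      # toy system matrix
C = rng.standard_normal((3, 4))          # three candidate sensors
M = measurement_matrix(A, C, n_steps=4)  # 4 x 12: one column per (sensor, time) pair
chosen, kappa = select_measurements(M, k=4)
print("selected (sensor, time) columns:", chosen, "condition number:", round(kappa, 2))
```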
Contributors: Ilkturk, Utku (Author) / Gelb, Anne (Thesis advisor) / Platte, Rodrigo (Thesis advisor) / Cochran, Douglas (Committee member) / Renaut, Rosemary (Committee member) / Armbruster, Dieter (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
The Kuramoto model is an archetypal model for studying synchronization in groups of nonidentical oscillators, where each oscillator is imbued with its own frequency and coupled with the other oscillators through a network of interactions. As the coupling strength increases, there is a bifurcation to complete synchronization, where all oscillators move with the same frequency and show a collective rhythm. Kuramoto-like dynamics are considered a relevant model for instabilities of the AC power grid, which operates in synchrony under standard conditions but exhibits, in a state of failure, segmentation of the grid into desynchronized clusters.

In this dissertation, the minimum coupling strength required to ensure total frequency synchronization in a Kuramoto system, called the critical coupling, is investigated. For coupling strengths below the critical coupling, clusters of oscillators form in which the oscillators within a cluster oscillate, on average, with the same long-term frequency. A unified order-parameter-based approach is developed to create approximations of the critical coupling. Some of the new approximations provide strict lower bounds for the critical coupling. In addition, these approximations allow for predictions of the partially synchronized clusters that emerge in the bifurcation from the synchronized state.

Merging the order-parameter approach with graph-theoretical concepts leads to a characterization of this bifurcation as a weighted graph partitioning problem on arbitrary networks, which in turn leads to an optimization problem that can efficiently estimate the partially synchronized clusters. Numerical experiments on random Kuramoto systems show the high accuracy of these methods. An interpretation of the methods in the context of power systems is provided.
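
For readers unfamiliar with the model, the standard networked Kuramoto equations are dθ_i/dt = ω_i + (K/N) Σ_j A_ij sin(θ_j − θ_i), and synchrony is commonly quantified by the order parameter r = |(1/N) Σ_j e^{iθ_j}|. The following sketch simulates these dynamics on an all-to-all network and reports r below and above a typical critical coupling; it is a generic illustration with made-up parameters, not the dissertation's numerical setup.

```python
# Hypothetical sketch of the networked Kuramoto model and its order parameter;
# the network, frequencies, and coupling values are illustrative only.
import numpy as np

def kuramoto_step(theta, omega, K, A, dt):
    """One Euler step of dtheta_i/dt = omega_i + (K/N) * sum_j A_ij sin(theta_j - theta_i)."""
    N = len(theta)
    coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    return theta + dt * (omega + (K / N) * coupling)

def order_parameter(theta):
    """Magnitude r of (1/N) sum_j exp(i theta_j); r = 1 means full phase coherence."""
    return np.abs(np.exp(1j * theta).mean())

rng = np.random.default_rng(1)
N = 50
A = np.ones((N, N)) - np.eye(N)        # all-to-all network
omega = rng.normal(0.0, 1.0, N)        # heterogeneous natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)

for K in (0.5, 2.0):                   # roughly below vs. above the mean-field critical coupling
    th = theta.copy()
    for _ in range(20000):
        th = kuramoto_step(th, omega, K, A, dt=0.01)
    print(f"K={K}: r={order_parameter(th):.2f}")
```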
Contributors: Gilg, Brady (Author) / Armbruster, Dieter (Thesis advisor) / Mittelmann, Hans (Committee member) / Scaglione, Anna (Committee member) / Strogatz, Steven (Committee member) / Welfert, Bruno (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
This report analyzes the potential for accumulation of boron in direct potable reuse. Direct potable reuse treats water through desalination processes such as reverse osmosis or nanofiltration, which can achieve salt rejection rates sometimes above 90%. However, boron achieves much lower rejection rates, near 40%. Because of this low rejection rate, there is potential for boron to accumulate in the system to levels that are not recommended for potable human consumption.

To analyze this issue, a code was created that simulates a steady-state system and tracks the internal, permeate, wastewater, and reject concentrations at various rejection rates, as well as all of the flows. A series of flow and mass balances was performed through five control volumes that denote different stages of water use: first, mixing of clean water with permeate; second, consumptive uses; third, addition of contaminant; fourth, wastewater treatment; and fifth, advanced water treatment. The system cycles through these stages repeatedly until steady state is reached.

Utilities or cities considering direct potable reuse could use this model by estimating their consumption levels and contaminant inputs, and then determining what percent rejection or inflow of makeup water they would need to keep boron at a concentration low enough to be fit for consumption. The code also provides options for analyzing spikes and recovery in the system due to spills, as well as evaporative uses such as cooling towers and their impact on the system.
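
As a rough illustration of how such a steady-state loop can be organized, the sketch below iterates mass balances over the five control-volume stages named above until the recycled boron concentration stops changing. The function name, flow variables, and all numerical values are hypothetical placeholders, not the report's actual parameters.

```python
# Hypothetical sketch of a steady-state boron accumulation loop for direct potable reuse;
# the control-volume structure follows the abstract, but all values are illustrative.
def boron_steady_state(c_makeup=0.1, load=0.5, q_use=100.0, q_makeup=10.0,
                       rejection=0.40, tol=1e-9, max_iter=10_000):
    """
    c_makeup : boron concentration in makeup water (mg/L)
    load     : boron added per cycle by consumptive uses (mg/L equivalent)
    rejection: fraction of boron removed by the advanced (RO/NF) treatment step
    Returns the recycled (permeate) boron concentration once the loop converges.
    """
    c_permeate = c_makeup
    for _ in range(max_iter):
        # 1) blend makeup water with recycled permeate
        c_supply = (q_makeup * c_makeup + (q_use - q_makeup) * c_permeate) / q_use
        # 2-3) consumptive uses add boron (e.g., detergents) to the wastewater
        c_wastewater = c_supply + load
        # 4) conventional wastewater treatment (assumed to leave boron unchanged)
        # 5) advanced treatment rejects only a fraction of the boron
        c_new = (1.0 - rejection) * c_wastewater
        if abs(c_new - c_permeate) < tol:
            return c_new
        c_permeate = c_new
    return c_permeate

print(boron_steady_state(rejection=0.40))   # low rejection: boron builds up
print(boron_steady_state(rejection=0.90))   # high rejection: much lower steady state
```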
Contributors: Doidge, Sydney (Author) / Fox, Peter (Thesis director) / Perreault, Francois (Committee member) / Civil, Environmental and Sustainable Engineering Program (Contributor) / School of International Letters and Cultures (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-12
Description
The purpose of this project is to provide our client with a tool to mitigate Company X's franchise-wide inventory control problem. The problem stems from the franchises' initial strategy of buying all inventory as customers brought it in, without a quantitative way for buyers to evaluate the store's inventory needs. The Excel solution created by our team provides that evaluation for buyers, using deseasonalized linear regression to forecast monthly inventory needs for clothing of different sizes and seasons.

The provided sales data from 2014-2016 showed a clear seasonal trend, so the appropriate forecasting model was determined by testing three models: triple exponential smoothing, deseasonalized simple linear regression, and multiple linear regression. The model calculates monthly optimal inventory levels (the current period plus the following two periods of inventory). All of the models were evaluated on the lowest mean absolute error (meaning best fit with the data), and the best-fitting model was deseasonalized simple linear regression, which was then used to build the Excel tool.

Buyers can use the Excel tool built with this forecasting model to evaluate whether or not to buy a given item of any size or season. To do this, the model uses the previous year's sales data to forecast the optimal inventory level and compares it to the store's current inventory level. If the current level is less than the optimal level, the cell housing the current value turns green (buy). If the current level is greater than or equal to the optimal level but less than 1.05 times the optimal level, the current value turns yellow (buy only if good quality). If the current level is greater than 1.05 times the optimal level, the current value turns red (don't buy).

We recommend both stores implement a way of tracking how many clothing items are held in each bin to keep a more accurate inventory count. In addition, the model's utility will be limited until both stores' inventories are at a level where they can afford to buy; it is therefore in the client's best interest to liquidate stale inventory into store credit or cash. In the future, the team would also like to develop a pricing model to better meet the needs of the client's two locations.
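
The sketch below illustrates, under stated assumptions, what a deseasonalized simple linear regression forecast and the green/yellow/red buying rule could look like in code. The sales history, function names, and the way the three-period optimal level is aggregated are made up for the example and do not reproduce the team's Excel tool.

```python
# Hypothetical sketch of a deseasonalized simple linear regression forecast and the
# green/yellow/red buying rule described above; the sales series below is made up.
import numpy as np

def forecast_deseasonalized(sales, horizon=3, season=12):
    """Fit a trend line to seasonally adjusted monthly sales and re-seasonalize the forecast."""
    sales = np.asarray(sales, dtype=float)
    t = np.arange(len(sales))
    # multiplicative seasonal index per calendar month
    idx = np.array([sales[m::season].mean() for m in range(season)]) / sales.mean()
    deseason = sales / idx[t % season]
    slope, intercept = np.polyfit(t, deseason, 1)
    t_future = np.arange(len(sales), len(sales) + horizon)
    return (intercept + slope * t_future) * idx[t_future % season]

def buy_signal(current, optimal, tolerance=1.05):
    """Color rule: green below optimal, yellow up to optimal*1.05, red above."""
    if current < optimal:
        return "green (buy)"
    if current < optimal * tolerance:
        return "yellow (buy only if good quality)"
    return "red (don't buy)"

rng = np.random.default_rng(2)
history = 50 + 10 * np.sin(2 * np.pi * np.arange(36) / 12) + rng.normal(0, 3, 36)
optimal = forecast_deseasonalized(history).sum()   # current period plus next two periods
print(round(optimal, 1), buy_signal(current=130, optimal=optimal))
```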
Contributors: Uribes-Yanez, Diego (Co-author) / Liu, Jessica (Co-author) / Taylor, Todd (Thesis director) / Gentile, Erica (Committee member) / Department of Economics (Contributor) / Department of Information Systems (Contributor) / Department of Marketing (Contributor) / School of International Letters and Cultures (Contributor) / School of Life Sciences (Contributor) / Department of Supply Chain Management (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05
Description
The main objective of mathematical modeling is to connect mathematics with other scientific fields. Developing predictive models helps one understand the behavior of biological systems, and by testing models one can relate mathematics to real-world experiments. To validate predictions numerically, one has to compare them with experimental data sets. Mathematical models can be split into two groups: microscopic and macroscopic. Microscopic models describe the motion of so-called agents (e.g., cells, ants) that interact with their surrounding neighbors. At large scales, the interactions among these agents form special structures such as flocking and swarming. One of the key questions is to relate the particular interactions among agents with the overall emerging structures. Macroscopic models are precisely designed to describe the evolution of such large structures. They are usually given as partial differential equations describing the time evolution of a density distribution (instead of tracking each individual agent). For instance, reaction-diffusion equations are used to model glioma cells and to predict tumor growth. This dissertation aims at developing such a framework to better understand the complex behavior of foraging ants and glioma cells.
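
As a concrete example of the macroscopic viewpoint, the following sketch integrates a one-dimensional reaction-diffusion equation of Fisher-KPP type, the kind of equation mentioned above for glioma growth. The parameters, grid, and initial profile are purely illustrative and are not taken from the dissertation.

```python
# Hypothetical sketch of a 1D reaction-diffusion (Fisher-KPP type) model of the kind
# mentioned above for glioma growth; parameters and grid are purely illustrative.
import numpy as np

def fisher_kpp(u, D=0.01, rho=1.0, dx=0.1, dt=0.001, steps=5000):
    """Evolve du/dt = D u_xx + rho u (1 - u) with explicit finite differences (no-flux ends)."""
    u = u.copy()
    for _ in range(steps):
        lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
        lap[0] = 2 * (u[1] - u[0]) / dx**2      # zero-flux boundary
        lap[-1] = 2 * (u[-2] - u[-1]) / dx**2
        u += dt * (D * lap + rho * u * (1 - u))
    return u

x = np.linspace(0, 10, 101)
u0 = np.exp(-5 * (x - 5) ** 2)                  # localized initial tumor-cell density
u = fisher_kpp(u0, dx=x[1] - x[0])
print(f"invaded fraction of the domain: {(u > 0.5).mean():.2f}")
```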
Contributors: Jamous, Sara Sami (Author) / Motsch, Sebastien (Thesis advisor) / Armbruster, Dieter (Committee member) / Camacho, Erika (Committee member) / Moustaoui, Mohamed (Committee member) / Platte, Rodrigo (Committee member) / Arizona State University (Publisher)
Created: 2019
Description

Public education and involvement with evolutionary theory have long been limited by both the complexity of the subject and societal pushback. Furthermore, effective and engaging evolution education has become an elusive feat that often fails to reflect the types of questions that evolution research attempts to address. Here, we explore the best methods for presenting scientific research with interactive educational models so as to facilitate the audience's learning experience most effectively. By creating artistic and game-play-oriented models, it becomes possible to simplify the multifaceted aspects of evolution research so that a larger, more inclusive audience can better comprehend these complexities. Allowing the public to engage with highly interactive educational materials lets the full spectrum of the scientific process, from hypothesis construction to experimental testing, be experienced and understood. Providing information about current cancer evolution research in a way that is easy to access and understand, and accompanying it with an interactive model that reflects this information and reinforces learning, shows that research platforms can be translated into interactive teaching tools that make understanding evolutionary theory more accessible.

Contributors: Silva, Yasmin (Author) / Maley, Carlo (Thesis director) / Compton, Zachary (Committee member) / Baciu, Cristina (Committee member) / Barrett, The Honors College (Contributor) / School of International Letters and Cultures (Contributor) / School of Life Sciences (Contributor)
Created: 2022-05