Matching Items (60)
Description
The main objective of this research is to develop an integrated method to study emergent behavior and the consequences of evolution and adaptation in engineered complex adaptive systems (ECASs). A multi-layer conceptual framework and modeling approach, covering both behavioral and structural aspects, is provided to describe the structure of a class of engineered complex systems and predict their future adaptive patterns. The approach allows the examination of complexity in the structure and behavior of components as a result of their connections and in relation to their environment. This research describes and uses the major differences between natural complex adaptive systems (CASs) and artificial/engineered CASs to build a framework and platform for ECASs. While this framework focuses on the critical factors of an engineered system, it also enables one to synthetically employ engineering and mathematical models to analyze and measure complexity in such systems. In this way, concepts of complex systems science are adapted to management science and system-of-systems engineering. In particular, an integrated consumer-based optimization and agent-based modeling (ABM) platform is presented that enables managers to predict and partially control patterns of behavior in ECASs. Demonstrated on the U.S. electricity markets, ABM is integrated with normative and subjective decision behavior recommended by the U.S. Department of Energy (DOE) and the Federal Energy Regulatory Commission (FERC). The approach integrates social networks, social science, complexity theory, and diffusion theory. Furthermore, it makes a unique and significant contribution by exploring and representing concrete managerial insights for ECASs and by offering new optimized actions and modeling paradigms in agent-based simulation.
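As a minimal illustration of the agent-based side of such a platform, the sketch below simulates threshold-based adoption spreading over a small social network. The ring topology, threshold rule, and all parameter values are illustrative assumptions, not the dissertation's model.

```python
import random

def simulate_adoption(n_agents=100, k_each_side=4, threshold=0.3,
                      seed_fraction=0.05, steps=50, rng_seed=1):
    """Threshold-based adoption diffusion on a ring network: an agent
    adopts once the fraction of its 2*k_each_side neighbors that have
    already adopted exceeds `threshold` (a simple stand-in for the
    social-influence rules used in agent-based market models)."""
    rng = random.Random(rng_seed)
    adopted = [rng.random() < seed_fraction for _ in range(n_agents)]
    history = [sum(adopted)]  # number of adopters over time
    for _ in range(steps):
        new = adopted[:]
        for i in range(n_agents):
            if adopted[i]:
                continue  # adoption is irreversible in this sketch
            nbrs = [(i + d) % n_agents
                    for d in range(-k_each_side, k_each_side + 1) if d != 0]
            if sum(adopted[j] for j in nbrs) / len(nbrs) > threshold:
                new[i] = True
        adopted = new
        history.append(sum(adopted))
    return history

history = simulate_adoption()
```

Because adoption never reverses here, the adopter count is monotone; whether it cascades to the whole population depends on the seeding and threshold, which is exactly the kind of emergent pattern ABM is used to probe.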
Contributors: Haghnevis, Moeed (Author) / Askin, Ronald G. (Thesis advisor) / Armbruster, Dieter (Thesis advisor) / Mirchandani, Pitu (Committee member) / Wu, Tong (Committee member) / Hedman, Kory (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
This thesis presents a model for the buying behavior of consumers in a technology market. In this model, a potential consumer is not perfectly rational, but exhibits bounded rationality following the axioms of prospect theory: reference dependence, diminishing returns, and loss sensitivity. To evaluate the products on different criteria, the analytic hierarchy process is used, which proposes that when choosing among several alternatives, one should measure the products by comparing them relative to each other; this allows the user to put numbers to subjective criteria. Additionally, evidence suggests that a consumer will often consider not only their own evaluation of a product, but also the choices of other consumers. Thus, the model in this paper applies prospect theory to products with multiple attributes, using word of mouth as a criterion in the evaluation.
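The prospect-theory ingredients named above (reference dependence, diminishing returns, loss sensitivity) can be sketched as a value function. The exponent and loss-aversion constants below are the classic Tversky-Kahneman estimates, used here purely for illustration and not taken from the thesis:

```python
def prospect_value(x, reference=0.0, alpha=0.88, beta=0.88, lam=2.25):
    """Kahneman-Tversky value function: an outcome x is evaluated as a
    gain or loss relative to a reference point, with diminishing
    sensitivity (alpha, beta < 1) and loss aversion (lam > 1)."""
    gain = x - reference
    if gain >= 0:
        return gain ** alpha
    return -lam * (-gain) ** beta
```

With these constants, a loss of 10 units weighs more than twice as heavily as an equal gain, and doubling a gain less than doubles its subjective value.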
Contributors: Elkholy, Alexander (Author) / Armbruster, Dieter (Thesis advisor) / Kempf, Karl (Committee member) / Li, Hongmin (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Pre-Exposure Prophylaxis (PrEP) is any medical or public health procedure used before exposure to the disease-causing agent; its purpose is to prevent, rather than treat or cure, a disease. Most commonly, PrEP refers to an experimental HIV-prevention strategy that would use antiretrovirals to protect HIV-negative people from HIV infection. A deterministic mathematical model of HIV transmission is developed to evaluate the public-health impact of oral PrEP interventions and to compare PrEP effectiveness with respect to different evaluation methods. The effects of demographic, behavioral, and epidemic parameters on the PrEP impact are studied in a multivariate sensitivity analysis. Most published models of HIV intervention impact assume that the number of individuals joining the sexually active population per year is constant or proportional to the total population. In the second part of this study, three models are presented and analyzed to study the PrEP intervention with constant, linear, and logistic recruitment rates, examining how different demographic assumptions affect the evaluation of PrEP. When provided with data, least-squares fitting or similar approaches can often be used to determine a single set of approximate parameter values that make the model fit the data best. However, least-squares fitting only provides point estimates and does not indicate how strongly the data support those particular estimates. Therefore, in the third part of this study, Bayesian parameter estimation is applied to fit the ODE model to the related HIV data. Starting with a set of prior distributions for the parameters as an initial guess, Bayes' formula is applied to obtain a set of posterior distributions for the parameters that make the model best fit the observed data. Evaluating the posterior distribution often requires the integration of high-dimensional functions, which is usually difficult to compute numerically, so the Markov chain Monte Carlo (MCMC) method is used to approximate the posterior distribution.
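The MCMC step can be sketched with a toy random-walk Metropolis-Hastings sampler. The Gaussian-mean model, flat prior, and data below are illustrative stand-ins, not the HIV ODE model from the study:

```python
import math
import random

def log_posterior(theta, data, sigma=1.0):
    """Log posterior for the mean of Gaussian data under a flat prior
    (up to an additive constant)."""
    return -sum((x - theta) ** 2 for x in data) / (2 * sigma ** 2)

def metropolis(data, n_samples=5000, step=0.5, rng_seed=0):
    """Random-walk Metropolis-Hastings: propose a nearby parameter value,
    accept it with probability min(1, posterior ratio); the resulting
    chain approximates samples from the posterior distribution."""
    rng = random.Random(rng_seed)
    theta = 0.0
    lp = log_posterior(theta, data)
    samples = []
    for _ in range(n_samples):
        prop = theta + rng.gauss(0, step)
        lp_prop = log_posterior(prop, data)
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            theta, lp = prop, lp_prop
        samples.append(theta)
    return samples

data = [2.1, 1.9, 2.3, 2.0, 1.8]
samples = metropolis(data)
# discard a burn-in period before summarizing the chain
posterior_mean = sum(samples[1000:]) / len(samples[1000:])
```

For this conjugate toy problem the posterior mean should sit near the sample mean of the data; in the actual study the same machinery is wrapped around numerical solutions of the ODE model.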
Contributors: Zhao, Yuqin (Author) / Kuang, Yang (Thesis advisor) / Taylor, Jesse (Committee member) / Armbruster, Dieter (Committee member) / Tang, Wenbo (Committee member) / Kang, Yun (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Bacteriophage (phage) are viruses that infect bacteria. Typical laboratory experiments show that in a chemostat containing phage and a susceptible bacteria species, a mutant bacteria species will evolve. This mutant species is usually resistant to phage infection and less competitive than the susceptible species. In some experiments, both susceptible and resistant bacteria species, as well as phage, can coexist at an equilibrium for hundreds of hours. The current research is inspired by these observations, and the goal is to establish a mathematical model and explore sufficient and necessary conditions for this coexistence. In this dissertation, a model with infinite distributed delay terms, based on some existing work, is established. A rigorous analysis of the well-posedness of this model is provided, and it is proved that the susceptible bacteria persist. To study the persistence of the phage species, a "Phage Reproduction Number" (PRN) is defined. The mathematical analysis shows that phage persist if PRN > 1 and vanish if PRN < 1. A sufficient condition and a necessary condition for the persistence of resistant bacteria are given. The persistence of the phage is essential for the persistence of the resistant bacteria. Moreover, the resistant bacteria persist if their fitness equals that of the susceptible bacteria and if PRN > 1. A special case of the general model leads to a system of ordinary differential equations, for which numerical simulation results are presented.
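A much-simplified, delay-free ODE caricature of such a chemostat can be sketched with forward Euler. All parameter values are illustrative assumptions; this is not the dissertation's distributed-delay model:

```python
def chemostat_step(S, R, P, dt, r_s=0.6, r_r=0.4, k=0.05,
                   b=50.0, D=0.2, cap=10.0):
    """One forward-Euler step of a toy chemostat caricature:
    susceptible bacteria S (infected by phage at rate k*S*P),
    resistant bacteria R (immune but with a lower growth rate r_r),
    and phage P with burst size b; everything washes out at dilution
    rate D, and the bacteria compete logistically with capacity cap."""
    total = S + R
    dS = r_s * S * (1 - total / cap) - k * S * P - D * S
    dR = r_r * R * (1 - total / cap) - D * R
    dP = b * k * S * P - D * P
    return S + dt * dS, R + dt * dR, P + dt * dP

S, R, P = 1.0, 0.1, 0.1
for _ in range(20000):          # integrate to t = 200
    S, R, P = chemostat_step(S, R, P, dt=0.01)
```

Even this caricature reproduces the qualitative story: phage knock the susceptible population down while the less competitive resistant strain grows toward its own equilibrium, with all three populations remaining positive.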
Contributors: Han, Zhun (Author) / Smith, Hal (Thesis advisor) / Armbruster, Dieter (Committee member) / Kawski, Matthias (Committee member) / Kuang, Yang (Committee member) / Thieme, Horst (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Modern measurement schemes for linear dynamical systems are typically designed so that different sensors can be scheduled for use at each time step. To determine which sensors to use, various metrics have been suggested, one of which is the observability of the system. Observability is a binary condition determining whether a finite number of measurements suffice to recover the initial state. To employ observability for sensor scheduling, however, the binary definition must be expanded so that one can measure how observable a system is with a particular measurement scheme, i.e., one needs a metric of observability. Most methods utilizing an observability metric address sensor selection rather than sensor scheduling. In this dissertation we present a new approach that employs the condition number of the observability matrix as the metric and uses column subset selection to create an algorithm for choosing which sensors to use at each time step. To this end we use a rank-revealing QR factorization algorithm to select sensors. Several numerical experiments demonstrate the performance of the proposed scheme.
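The observability matrix underlying this metric can be sketched as follows. For brevity this toy example checks only the binary rank condition, not the condition-number metric or the rank-revealing QR selection used in the dissertation; the system matrices are illustrative:

```python
def observability_matrix(A, C):
    """Stack [C; CA; CA^2; ...; CA^(n-1)] for an n-state system
    x_{k+1} = A x_k with measurement y_k = C x_k."""
    n = len(A)
    rows = [row[:] for row in C]
    Ck = [row[:] for row in C]
    for _ in range(n - 1):
        Ck = [[sum(Ck[i][k] * A[k][j] for k in range(n)) for j in range(n)]
              for i in range(len(Ck))]
        rows.extend(row[:] for row in Ck)
    return rows

def rank(M, tol=1e-9):
    """Numerical rank via Gaussian elimination with partial pivoting."""
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0]) if M else 0):
        pivot = max(range(r, len(M)), key=lambda i: abs(M[i][c]), default=None)
        if pivot is None or abs(M[pivot][c]) < tol:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(len(M)):
            if i != r and abs(M[i][c]) > tol:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# Two-state system; sensor 1 reads state x1, sensor 2 reads state x2.
A = [[1.0, 1.0],
     [0.0, 1.0]]
rank_s1 = rank(observability_matrix(A, [[1.0, 0.0]]))  # observable
rank_s2 = rank(observability_matrix(A, [[0.0, 1.0]]))  # not observable
```

Here sensor 1 yields a full-rank observability matrix while sensor 2 does not; the dissertation's graded metric would go further and compare how well-conditioned the full-rank cases are.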
Contributors: Ilkturk, Utku (Author) / Gelb, Anne (Thesis advisor) / Platte, Rodrigo (Thesis advisor) / Cochran, Douglas (Committee member) / Renaut, Rosemary (Committee member) / Armbruster, Dieter (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Factory production is stochastic in nature, with time-varying input and output processes that are non-stationary stochastic processes. Hence, the principal quantities of interest are random variables. Typical modeling of such behavior involves numerical simulation and statistical analysis. A deterministic closure model leading to a second order model for the product density and product speed has previously been proposed. The resulting partial differential equations (PDE) are compared to discrete event simulations (DES) that model factory production as a time-dependent M/M/1 queuing system. Three fundamental scenarios for the time-dependent influx are studied: an instantaneous step up/down of the mean arrival rate; an exponential step up/down of the mean arrival rate; and periodic variation of the mean arrival rate. It is shown that the second order model, in general, yields significant improvement over current first order models. Specifically, the agreement between the DES and the PDE for the step up, and for periodic forcing that is not too rapid, is very good. Adding diffusion to the PDE further improves the agreement. The analysis also points to fundamental open issues regarding the deterministic modeling of low signal-to-noise ratios in some stochastic processes and the possibility of resonance in deterministic models that is not present in the original stochastic process.
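A minimal discrete-event simulation of the instantaneous step-up scenario might look like the following; the rates, switch time, and averaging windows are illustrative assumptions, not the parameters used in the dissertation:

```python
import random

def mm1_step_up(lam_before=0.5, lam_after=0.9, mu=1.0,
                t_switch=500.0, t_end=1000.0, rng_seed=7):
    """Discrete-event simulation of an M/M/1 queue whose mean arrival
    rate steps up at t_switch. Returns the time-average number in
    system before and after the step."""
    rng = random.Random(rng_seed)
    t, q = 0.0, 0
    next_arrival = rng.expovariate(lam_before)
    next_departure = float("inf")
    area = {"before": 0.0, "after": 0.0}
    while t < t_end:
        t_next = min(next_arrival, next_departure, t_end)
        # accumulate queue-length area, split at the rate switch
        if t < t_switch:
            area["before"] += q * (min(t_next, t_switch) - t)
        if t_next > t_switch:
            area["after"] += q * (t_next - max(t, t_switch))
        t = t_next
        if t == t_end:
            break
        if next_arrival <= next_departure:
            q += 1  # arrival
            lam = lam_after if t >= t_switch else lam_before
            next_arrival = t + rng.expovariate(lam)
            if q == 1:  # server was idle, start a service
                next_departure = t + rng.expovariate(mu)
        else:
            q -= 1  # departure
            next_departure = (t + rng.expovariate(mu)) if q > 0 else float("inf")
    return area["before"] / t_switch, area["after"] / (t_end - t_switch)

avg_before, avg_after = mm1_step_up()
```

Stepping the utilization from 0.5 to 0.9 drives the average queue length sharply upward; comparing ensembles of such DES runs against the PDE solution is the core of the validation described above.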
Contributors: Wienke, Matthew (Author) / Armbruster, Dieter (Thesis advisor) / Jones, Donald (Committee member) / Platte, Rodrigo (Committee member) / Gardner, Carl (Committee member) / Ringhofer, Christian (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Understanding the graphical structure of the electric power system is important in assessing reliability, robustness, and the risk of failure of operations of this critical infrastructure network. Statistical graph models of complex networks yield much insight into the underlying processes that are supported by the network. Such generative graph models are also capable of generating synthetic graphs representative of the real network. This is particularly important since the few traditionally available test systems, such as the IEEE systems, have been largely deemed insufficient for supporting large-scale simulation studies and commercial-grade algorithm development. Thus, there is a need for statistical generative models of electric power networks that capture both topological and electrical properties of the network and are scalable.

Generating synthetic network graphs that capture key topological and electrical characteristics of real-world electric power systems is important in aiding widespread and accurate analysis of these systems. Classical statistical models of graphs, such as small-world networks or Erdős-Rényi graphs, are unable to generate synthetic graphs that accurately represent the topology of real electric power networks: networks characterized by highly dense local connectivity and clustering and sparse long-haul links.

This thesis presents a parametrized model that captures the above-mentioned unique topological properties of electric power networks. Specifically, a new Cluster-and-Connect model is introduced to generate synthetic graphs using these parameters. Using a uniform set of metrics proposed in the literature, the accuracy of the proposed model is evaluated by comparing the synthetic models generated for specific real electric network graphs. In addition to topological properties, the electrical properties are captured via line impedances, which have been shown to be modeled reliably by well-studied heavy-tailed distributions. The details of the research, results obtained, and conclusions drawn are presented in this document.
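In the spirit of such a model, a toy generator might build dense clusters first and then add a sparse set of long-haul links. The specific construction rules and parameters below are illustrative assumptions, not the thesis's parametrized Cluster-and-Connect model:

```python
import random

def cluster_and_connect(n_clusters=5, cluster_size=8, p_in=0.6,
                        n_long_links=6, rng_seed=3):
    """Toy cluster-and-connect-style generator: dense random
    connectivity inside each cluster (on top of a ring backbone that
    keeps the cluster connected), then sparse long-haul links between
    randomly chosen clusters. Returns an undirected edge set over
    nodes 0 .. n_clusters*cluster_size - 1."""
    rng = random.Random(rng_seed)
    edges = set()

    def add(u, v):
        edges.add((min(u, v), max(u, v)))

    for c in range(n_clusters):
        base = c * cluster_size
        for i in range(cluster_size):           # ring backbone
            add(base + i, base + (i + 1) % cluster_size)
        for i in range(cluster_size):           # dense local links
            for j in range(i + 2, cluster_size):
                if rng.random() < p_in:
                    add(base + i, base + j)

    def n_cross():
        return sum(1 for u, v in edges if u // cluster_size != v // cluster_size)

    while n_cross() < n_long_links:             # sparse long-haul links
        c1, c2 = rng.sample(range(n_clusters), 2)
        add(c1 * cluster_size + rng.randrange(cluster_size),
            c2 * cluster_size + rng.randrange(cluster_size))
    return edges

edges = cluster_and_connect()
n_long = sum(1 for u, v in edges if u // 8 != v // 8)
```

The resulting graphs mimic the qualitative signature described above: many intra-cluster edges and only a handful of inter-cluster ones. A serious version would fit its parameters to real grid data and attach heavy-tailed line impedances to the edges.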
Contributors: Hu, Jiale (Author) / Sankar, Lalitha (Thesis advisor) / Vittal, Vijay (Committee member) / Scaglione, Anna (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Our ability to understand networks is important to many applications, from the analysis and modeling of biological networks to analyzing social networks. Unveiling network dynamics allows us to make predictions and decisions. Moreover, network dynamics models have inspired new ideas for computational methods involving multi-agent cooperation, offering effective solutions for optimization tasks. This dissertation presents new theoretical results on network inference and multi-agent optimization, split into two parts.

The first part deals with modeling and identification of network dynamics. I study two types of network dynamics arising from social and gene networks. Based on these dynamics, the proposed network identification method works like a "network RADAR": interaction strengths between agents are inferred by injecting a "signal" into the network and observing the resultant reverberation. In social networks, this is accomplished by stubborn agents whose opinions do not change throughout a discussion. In gene networks, genes are suppressed to create desired perturbations, and the steady states under these perturbations are characterized. In contrast to the common assumption of full-rank input, I adopt a weaker assumption in which low-rank input is used, to better model empirical network data. Importantly, a network is proven to be identifiable from low-rank data whose rank grows in proportion to the network's sparsity. The proposed method is applied to synthetic and empirical data and is shown to offer superior performance compared to prior work.

The second part is concerned with algorithms on networks. I develop three consensus-based algorithms for multi-agent optimization. The first is a decentralized Frank-Wolfe (DeFW) algorithm. The main advantage of DeFW lies in its projection-free nature: the costly projection step in traditional algorithms is replaced by a low-cost linear optimization step. I prove convergence rates of DeFW for convex and non-convex problems. I also develop two consensus-based alternating optimization algorithms, one for least-squares problems and one for non-convex problems. These algorithms exploit the problem structure for faster convergence, and their efficacy is demonstrated by numerical simulations.
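The consensus primitive that underlies such algorithms can be sketched as plain decentralized averaging; the Frank-Wolfe linear-minimization step is omitted, and the topology and step size below are illustrative assumptions:

```python
def consensus_average(values, neighbors, steps=200, alpha=0.3):
    """Decentralized averaging: at each step every agent nudges its
    value toward its neighbors' values using only local communication.
    On a connected graph with a suitable step size alpha, all agents
    converge to the network-wide average."""
    x = list(values)
    for _ in range(steps):
        x = [xi + alpha * sum(x[j] - xi for j in neighbors[i])
             for i, xi in enumerate(x)]
    return x

# 4-agent ring network; the true average of the initial values is 3.0
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
x = consensus_average([4.0, 0.0, 2.0, 6.0], neighbors)
```

Because the update is symmetric, the sum of all values is conserved at every step, so the common limit must be the global average; consensus-based optimizers interleave exactly this kind of averaging with local gradient or linear-minimization steps.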

I conclude this dissertation by describing future research directions.
Contributors: Wai, Hoi To (Author) / Scaglione, Anna (Thesis advisor) / Berisha, Visar (Committee member) / Nedich, Angelia (Committee member) / Ying, Lei (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
This thesis investigates three different resource allocation problems, aiming to achieve two common goals: i) adaptivity to a fast-changing environment, and ii) distribution of the computation tasks to achieve a favorable solution. The motivation for this work is the modern-era proliferation of sensors and devices in the Data Acquisition Systems (DAS) layer of the Internet of Things (IoT) architecture. To avoid congestion and enable low-latency services, limits have to be imposed on the number of decisions that can be centralized (i.e., solved in the "cloud") and/or the amount of control information that devices can exchange. This has been the motivation to develop i) a lightweight PHY-layer protocol for time synchronization and scheduling in Wireless Sensor Networks (WSNs), ii) an adaptive receiver that enables sub-Nyquist sampling for efficient spectrum sensing at high frequencies, and iii) an SDN scheme for resource sharing across different technologies and operators, to harmoniously and holistically respond to fluctuations in demand at the eNodeB layer.

The proposed solution for time synchronization and scheduling is a new protocol, called PulseSS, which is completely event-driven and is inspired by biological networks. The results on convergence and accuracy for locally connected networks, presented in this thesis, constitute the theoretical foundation for the protocol in terms of performance guarantee. The derived limits provided guidelines for ad-hoc solutions in the actual implementation of the protocol.

The proposed receiver for Compressive Spectrum Sensing (CSS) aims at tackling the noise-folding phenomenon, i.e., the accumulation of noise from the different sub-bands that are folded together, prior to sampling and baseband processing, when an analog front-end aliasing mixer is utilized.

The sensing-phase design has been conducted via a utility-maximization approach; the derived scheme has thus been called Cognitive Utility Maximization Multiple Access (CUMMA).

The framework described in the last part of the thesis is inspired by stochastic network optimization tools and dynamics; the scheme has been named Decomposition of Infrastructure-based Dynamic Resource Allocation (DIDRA). While convergence of the proposed approach remains an open problem, the numerical results presented here suggest the algorithm's capability to handle traffic fluctuations across operators while respecting different time and economic constraints.
Contributors: Ferrari, Lorenzo (Author) / Scaglione, Anna (Thesis advisor) / Bliss, Daniel (Committee member) / Ying, Lei (Committee member) / Reisslein, Martin (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
The Kuramoto model is an archetypal model for studying synchronization in groups of nonidentical oscillators, where each oscillator is imbued with its own frequency and coupled to other oscillators through a network of interactions. As the coupling strength increases, there is a bifurcation to complete synchronization where all oscillators move with the same frequency and show a collective rhythm. Kuramoto-like dynamics are considered a relevant model for instabilities of the AC power grid, which operates in synchrony under standard conditions but exhibits, in a state of failure, segmentation of the grid into desynchronized clusters.

In this dissertation the minimum coupling strength required to ensure total frequency synchronization in a Kuramoto system, called the critical coupling, is investigated. For coupling strengths below the critical coupling, clusters of oscillators form in which the oscillators within a cluster oscillate on average with the same long-term frequency. A unified order-parameter-based approach is developed to create approximations of the critical coupling. Some of the new approximations provide strict lower bounds for the critical coupling. In addition, these approximations allow for predictions of the partially synchronized clusters that emerge in the bifurcation from the synchronized state.

Merging the order-parameter approach with graph-theoretical concepts leads to a characterization of this bifurcation as a weighted graph partitioning problem on arbitrary networks, which in turn leads to an optimization problem that can efficiently estimate the partially synchronized clusters. Numerical experiments on random Kuramoto systems show the high accuracy of these methods. An interpretation of the methods in the context of power systems is provided.
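A minimal forward-Euler simulation of the all-to-all Kuramoto model illustrates the synchronization transition; the natural frequencies, coupling values, and run lengths below are illustrative assumptions, not the dissertation's setup:

```python
import math

def kuramoto_spread(K, omegas, steps=10000, dt=0.01):
    """Forward-Euler integration of the all-to-all Kuramoto model
    dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i).
    Returns the spread (max - min) of effective frequencies measured
    over the second half of the run; a spread near zero indicates
    complete frequency synchronization."""
    n = len(omegas)
    theta = [0.0] * n
    window = steps // 2
    start = None
    for step in range(steps):
        if step == steps - window:
            start = theta[:]  # snapshot at the start of the window
        dtheta = [omegas[i] + (K / n) * sum(math.sin(theta[j] - theta[i])
                                            for j in range(n))
                  for i in range(n)]
        theta = [t + dt * d for t, d in zip(theta, dtheta)]
    T = window * dt
    freqs = [(theta[i] - start[i]) / T for i in range(n)]
    return max(freqs) - min(freqs)

omegas = [-1.0, -0.5, 0.0, 0.5, 1.0]
spread_weak = kuramoto_spread(K=0.1, omegas=omegas)    # far below critical coupling
spread_strong = kuramoto_spread(K=3.0, omegas=omegas)  # well above critical coupling
```

With weak coupling the effective frequencies stay spread out near the natural frequencies, while strong coupling collapses the spread toward zero; scanning K between these extremes locates the critical coupling and, on networks, reveals the partially synchronized clusters discussed above.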
Contributors: Gilg, Brady (Author) / Armbruster, Dieter (Thesis advisor) / Mittelmann, Hans (Committee member) / Scaglione, Anna (Committee member) / Strogatz, Steven (Committee member) / Welfert, Bruno (Committee member) / Arizona State University (Publisher)
Created: 2018