Matching Items (68)
152033-Thumbnail Image.png
Description

The main objective of this research is to develop an integrated method to study emergent behavior and the consequences of evolution and adaptation in engineered complex adaptive systems (ECASs). A multi-layer conceptual framework and modeling approach covering behavioral and structural aspects is provided to describe the structure of a class of engineered complex systems and predict their future adaptive patterns. The approach allows the examination of complexity in the structure and behavior of components as a result of their connections and in relation to their environment. This research describes and uses the major differences between natural complex adaptive systems (CASs) and artificial/engineered CASs to build a framework and platform for ECASs. While this framework focuses on the critical factors of an engineered system, it also enables one to synthetically employ engineering and mathematical models to analyze and measure complexity in such systems. In this way, concepts of complex systems science are adapted to management science and system-of-systems engineering. In particular, an integrated consumer-based optimization and agent-based modeling (ABM) platform is presented that enables managers to predict and partially control patterns of behavior in ECASs. Demonstrated on the U.S. electricity markets, ABM is integrated with normative and subjective decision behavior recommended by the U.S. Department of Energy (DOE) and the Federal Energy Regulatory Commission (FERC). The approach integrates social networks, social science, complexity theory, and diffusion theory. Furthermore, it makes a unique and significant contribution by exploring and representing concrete managerial insights for ECASs and offering new optimized actions and modeling paradigms in agent-based simulation.
ContributorsHaghnevis, Moeed (Author) / Askin, Ronald G. (Thesis advisor) / Armbruster, Dieter (Thesis advisor) / Mirchandani, Pitu (Committee member) / Wu, Tong (Committee member) / Hedman, Kory (Committee member) / Arizona State University (Publisher)
Created2013
151356-Thumbnail Image.png
Description

A thorough exploration of star formation necessitates observation across the electromagnetic spectrum. In particular, observations in the submillimeter and ultraviolet allow one to observe very early stage star formation and to trace the evolution from molecular cloud collapse to stellar ignition. Submillimeter observations are essential for piercing the heart of heavily obscured stellar nurseries to observe star formation in its infancy. Ultraviolet observations allow one to observe stars just after they emerge from their surrounding environment, allowing higher energy radiation to escape. Making detailed observations of early stage star formation in both spectral regimes requires state-of-the-art detector technology and instrumentation. In this dissertation, I discuss the calibration and feasibility of detectors developed by Lawrence Berkeley National Laboratory and specially processed at the Jet Propulsion Laboratory (JPL) to increase their quantum efficiency at far-ultraviolet wavelengths. A cursory treatment of the delta-doping process is presented, followed by a thorough discussion of calibration procedures developed at JPL and in the Laboratory for Astronomical and Space Instrumentation at ASU. Subsequent discussion turns to a novel design for a Modular Imager Cell (MIC) forming one possible basis for construction of future large focal plane arrays. I then discuss the design, fabrication, and calibration of a sounding rocket imaging system developed using the MIC and these specially processed detectors. Finally, I discuss one scientific application of submillimeter observations. I used data from the Heinrich Hertz Submillimeter Telescope and the Submillimeter Array (SMA) to observe submillimeter transitions and continuum emission toward AFGL 2591. I tested the use of vibrationally excited HCN emission to probe the protostellar accretion disk structure. I measured vibrationally excited HCN line ratios in order to elucidate the appropriate excitation mechanism.
I find collisional excitation to be dominant, showing that the emission originates in extremely dense (n ∼ 10¹¹ cm⁻³), warm (T ∼ 1000 K) gas. Furthermore, from the line profile of the v = (0, 2²d, 0) transition, I find evidence for a possible accretion disk.
ContributorsVeach, Todd Justin (Author) / Scowen, Paul A (Thesis advisor) / Groppi, Christopher E (Thesis advisor) / Beasley, Matthew N (Committee member) / Rhoads, James E (Committee member) / Windhorst, Rogier A (Committee member) / Arizona State University (Publisher)
Created2012
151434-Thumbnail Image.png
Description

Understanding the properties and formation histories of individual stars in galaxies remains one of the most important areas in astrophysics. The impact of the Hubble Space Telescope (HST) has been revolutionary, providing deep observations of nearby galaxies at high resolution and unprecedented sensitivity over a wavelength range from the near-ultraviolet to the near-infrared. In this study, I use deep HST imaging observations of three nearby star-forming galaxies (M83, NGC 4214, and CGCG 269-049) to construct color-magnitude and color-color diagrams of their resolved stellar populations. First, I select 50 regions in the spiral arm and inter-arm areas of M83 and determine the age distribution of the luminous stellar populations in each region. I developed an innovative method of star-by-star correction for internal extinction to improve stellar age and mass estimates. I compare the extinction-corrected ages of the 50 regions with those determined from several independent methods. The young stars are much more likely to be found in concentrated aggregates along spiral arms, while older stars are more dispersed. These results are consistent with a scenario where star formation is associated with the spiral arms, and stars form primarily in star clusters before dispersing on short timescales to form the field population. I address the effects of spatial resolution on the measured colors, magnitudes, and age estimates. While individual stars can occasionally show measurable differences in colors and magnitudes, the age estimates for entire regions are only slightly affected. The same procedure is applied to the nearby starburst dwarf NGC 4214 to study the distributions of young and old stellar populations.
Lastly, I describe the analysis of HST and Spitzer Space Telescope observations of the extremely metal-poor dwarf galaxy (XMPG) CGCG 269-049 at a distance of 4.96 Mpc. This galaxy is one of the most metal-poor known, with 12+log(O/H) = 7.43. I find clear evidence for the presence of an old stellar population in CGCG 269-049, ruling out the possibility that this galaxy is forming its first generation of stars, as originally proposed for XMPGs. This comprehensive study of resolved stellar populations in three nearby galaxies provides a detailed view of the current state of star formation and the evolution of galaxies.
ContributorsKim, Hwihyun (Author) / Windhorst, Rogier A (Thesis advisor) / Jansen, Rolf A (Committee member) / Rhoads, James E (Committee member) / Scannapieco, Evan (Committee member) / Young, Patrick (Committee member) / Arizona State University (Publisher)
Created2012
152408-Thumbnail Image.png
Description

Quasars, the visible phenomena associated with the active accretion phase of supermassive black holes found in the centers of galaxies, represent one of the most energetic processes in the Universe. As matter falls into the central black hole, it is accelerated and collisionally heated, and the radiation emitted can outshine the combined light of all the stars in the host galaxy. Studies of quasar host galaxies at ultraviolet to near-infrared wavelengths are fundamentally limited by the precision with which the light from the central quasar accretion can be disentangled from the light of stars in the surrounding host galaxy. In this Dissertation, I discuss direct imaging of quasar host galaxies at redshifts z ≃ 2 and z ≃ 6 using new data obtained with the Hubble Space Telescope. I describe a new method for removing the point source flux using Markov Chain Monte Carlo parameter estimation and simultaneous modeling of the point source and host galaxy. I then discuss applications of this method to understanding the physical properties of high-redshift quasar host galaxies including their structures, luminosities, sizes, and colors, and inferred stellar population properties such as age, mass, and dust content.
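The simultaneous point-source/host modeling described above can be illustrated with a deliberately small sketch. The toy one-dimensional model below (all profiles, amplitudes, noise levels, and variable names are invented for illustration; this is not the dissertation's actual pipeline) fits a point-source amplitude and a host-galaxy amplitude jointly with a Metropolis-Hastings sampler, so that the host estimate carries the uncertainty of the point-source subtraction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D "image": a narrow Gaussian stands in for the quasar point
# source (PSF) and a broad exponential for the host galaxy.
x = np.linspace(-10.0, 10.0, 201)
psf = np.exp(-x**2 / (2 * 0.5**2))
host = np.exp(-np.abs(x) / 3.0)
data = 5.0 * psf + 1.0 * host + rng.normal(0.0, 0.05, x.size)

def log_like(a, b, sigma=0.05):
    # Gaussian likelihood of the residual after subtracting both components
    resid = data - (a * psf + b * host)
    return -0.5 * np.sum(resid**2) / sigma**2

# Metropolis-Hastings: jointly sample point-source amplitude a and
# host amplitude b with random-walk proposals.
a, b = 1.0, 1.0
ll = log_like(a, b)
chain = []
for _ in range(20000):
    a_p = a + rng.normal(0.0, 0.05)
    b_p = b + rng.normal(0.0, 0.05)
    ll_p = log_like(a_p, b_p)
    if np.log(rng.uniform()) < ll_p - ll:
        a, b, ll = a_p, b_p, ll_p
    chain.append((a, b))

a_est, b_est = np.mean(chain[5000:], axis=0)   # posterior means after burn-in
```

In a realistic application the host would be a parametric surface-brightness model rather than a fixed template, but the joint-sampling structure is the same.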
ContributorsMechtley, Matt R (Author) / Windhorst, Rogier A (Thesis advisor) / Butler, Nathaniel (Committee member) / Jansen, Rolf A (Committee member) / Rhoads, James (Committee member) / Scowen, Paul (Committee member) / Arizona State University (Publisher)
Created2014
153170-Thumbnail Image.png
Description

Advances in experimental techniques have allowed for investigation of molecular dynamics at ever smaller temporal and spatial scales. There is currently a varied and growing body of literature which demonstrates the phenomenon of anomalous diffusion in physics, engineering, and biology. In particular, many diffusive-type processes in the cell have been observed to follow a power-law scaling of the mean square displacement (MSD) of a particle, MSD ∝ t^α. This contrasts with the expected linear behavior of particles undergoing normal diffusion. Anomalous sub-diffusion (α < 1) has been attributed to factors such as cytoplasmic crowding of macromolecules and trap-like structures in the subcellular environment non-linearly slowing the diffusion of molecules. Compared to normal diffusion, signaling molecules in these constrained spaces can be more concentrated at the source and more diffuse at longer distances, potentially affecting the signaling dynamics. As diffusion at the cellular scale is a fundamental mechanism of cellular signaling and is additionally an implicit underlying mathematical assumption of many canonical models, a closer look at models of anomalous diffusion is warranted. Approaches in the literature include derivations of fractional differential diffusion equations (FDEs) and continuous-time random walks (CTRWs). However, these approaches are typically based on ad hoc assumptions on time- and space-jump distributions. We apply recent developments in asymptotic techniques on collisional kinetic equations to develop an FDE model of sub-diffusion due to trapping regions and investigate the nature of the space/time probability distributions associated with trapping regions. This approach both contrasts with and complements the stochastic CTRW approach by positing more physically realistic underlying assumptions on the motion of particles and their interactions with trapping regions, while additionally allowing varying assumptions to be applied individually to the traps and particle kinetics.
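The sub-linear MSD scaling discussed above can be reproduced with a minimal CTRW simulation (a generic sketch with invented parameters, not the dissertation's kinetic model): heavy-tailed waiting times between jumps play the role of trapping events, and the fitted MSD exponent falls well below the value of 1 expected for normal diffusion.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.5       # waiting-time tail exponent; 0 < alpha < 1 gives sub-diffusion
n_walkers = 2000

# Continuous-time random walk: heavy-tailed (Pareto-type) waiting times
# between unit-variance Gaussian jumps. Long waits drawn from the
# power-law tail act like trapping events.
sample_times = [10.0, 100.0, 1000.0]
msd = []
for t_obs in sample_times:
    x = np.zeros(n_walkers)
    for i in range(n_walkers):
        t = 0.0
        while True:
            t += 1.0 + rng.pareto(alpha)   # waiting time >= 1, tail ~ t^-(1+alpha)
            if t > t_obs:
                break
            x[i] += rng.normal()
    msd.append(np.mean(x**2))

# Empirical MSD scaling exponent from a log-log fit; normal diffusion
# would give ~1, sub-diffusion sits well below 1.
slope = np.polyfit(np.log(sample_times), np.log(msd), 1)[0]
```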
ContributorsHoleva, Thomas Matthew (Author) / Ringhofer, Christian (Thesis advisor) / Baer, Steve (Thesis advisor) / Crook, Sharon (Committee member) / Gardner, Carl (Committee member) / Taylor, Jesse (Committee member) / Arizona State University (Publisher)
Created2014
153271-Thumbnail Image.png
Description

This thesis presents a model for the buying behavior of consumers in a technology market. In this model, a potential consumer is not perfectly rational, but exhibits bounded rationality following the axioms of prospect theory: reference dependence, diminishing sensitivity, and loss aversion. To evaluate the products on different criteria, the analytic hierarchy process (AHP) is used, which allows for relative comparisons. The analytic hierarchy process proposes that when making a choice between several alternatives, one should measure the products by comparing them relative to each other. This allows the user to put numbers to subjective criteria. Additionally, evidence suggests that a consumer will often consider not only their own evaluation of a product, but also the choices of other consumers. Thus, the model in this paper applies prospect theory to products with multiple attributes, using word of mouth as a criterion in the evaluation.
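The two ingredients above can be sketched in a few lines. The pairwise-comparison matrix and the specific criteria below are made up for illustration, and the prospect-theory parameters are the commonly cited Tversky-Kahneman estimates, not values from this thesis:

```python
import numpy as np

# Analytic hierarchy process: derive criterion weights from a
# pairwise-comparison matrix via its principal eigenvector (Saaty's
# method). This 3x3 matrix is a hypothetical comparison of three
# criteria, e.g. price vs. performance vs. word-of-mouth reputation.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
vals, vecs = np.linalg.eig(A)
w = np.real(vecs[:, np.argmax(np.real(vals))])
w = w / w.sum()                      # normalized priority weights

# Prospect-theory value function: reference dependence (gains vs.
# losses around 0), diminishing sensitivity (exponents < 1), and loss
# aversion (losses scaled by lam > 1).
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    return x**alpha if x >= 0 else -lam * (-x)**beta
```

The weights `w` can then combine per-criterion prospect values into a single score per product, which is one way the model's relative evaluation could be assembled.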
ContributorsElkholy, Alexander (Author) / Armbruster, Dieter (Thesis advisor) / Kempf, Karl (Committee member) / Li, Hongmin (Committee member) / Arizona State University (Publisher)
Created2014
153290-Thumbnail Image.png
Description

Pre-Exposure Prophylaxis (PrEP) is any medical or public health procedure used before exposure to the disease-causing agent; its purpose is to prevent, rather than treat or cure, a disease. Most commonly, PrEP refers to an experimental HIV-prevention strategy that would use antiretrovirals to protect HIV-negative people from HIV infection. A deterministic mathematical model of HIV transmission is developed to evaluate the public-health impact of oral PrEP interventions and to compare PrEP effectiveness with respect to different evaluation methods. The effects of demographic, behavioral, and epidemic parameters on the PrEP impact are studied in a multivariate sensitivity analysis. Most published models of HIV intervention impact assume that the number of individuals joining the sexually active population per year is constant or proportional to the total population. In the second part of this study, three models are presented and analyzed to study the PrEP intervention, with constant, linear, and logistic recruitment rates, and it is examined how these different demographic assumptions affect the evaluation of PrEP. When data are provided, least-squares fitting or similar approaches can often be used to determine a single set of approximate parameter values that make the model fit the data best. However, least-squares fitting only provides point estimates and does not provide information on how strongly the data support these particular estimates. Therefore, in the third part of this study, Bayesian parameter estimation is applied to fit the ODE model to the related HIV data. Starting with a set of prior distributions for the parameters as an initial guess, Bayes' formula can be applied to obtain a set of posterior distributions for the parameters that makes the model fit the observed data best. Evaluating the posterior distribution often requires the integration of high-dimensional functions, which is usually difficult to calculate numerically.
Therefore, the Markov chain Monte Carlo (MCMC) method is used to approximate the posterior distribution.
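The Bayesian ODE-fitting workflow can be sketched end to end on a stand-in model. The logistic-growth equation, parameter values, and noise level below are invented for illustration (the actual HIV transmission model has many more compartments and parameters); the structure — forward ODE solve inside a log-posterior, sampled by a Metropolis MCMC — is the same:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in forward model: logistic growth dI/dt = r*I*(1 - I/K),
# integrated with a simple Euler scheme, with one unknown parameter r.
def solve(r, K=100.0, I0=1.0, dt=0.1, n=200):
    I = np.empty(n)
    I[0] = I0
    for k in range(1, n):
        I[k] = I[k-1] + dt * r * I[k-1] * (1.0 - I[k-1] / K)
    return I

obs_idx = np.arange(0, 200, 20)
data = solve(0.5)[obs_idx] + rng.normal(0.0, 2.0, obs_idx.size)  # synthetic data

def log_post(r, sigma=2.0):
    if not 0.0 < r < 2.0:                  # uniform prior on (0, 2)
        return -np.inf
    resid = data - solve(r)[obs_idx]
    return -0.5 * np.sum(resid**2) / sigma**2

# Metropolis sampler: random-walk proposals, accepted with the usual ratio.
r, lp = 1.0, log_post(1.0)
samples = []
for _ in range(5000):
    r_prop = r + rng.normal(0.0, 0.05)
    lp_prop = log_post(r_prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        r, lp = r_prop, lp_prop
    samples.append(r)

r_mean = np.mean(samples[1000:])           # posterior mean after burn-in
```

Unlike a least-squares point estimate, the retained `samples` approximate the full posterior, so credible intervals for r come for free.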
ContributorsZhao, Yuqin (Author) / Kuang, Yang (Thesis advisor) / Taylor, Jesse (Committee member) / Armbruster, Dieter (Committee member) / Tang, Wenbo (Committee member) / Kang, Yun (Committee member) / Arizona State University (Publisher)
Created2014
150890-Thumbnail Image.png
Description

Numerical simulations are very helpful in understanding the physics of the formation of structure and galaxies. However, it is sometimes difficult to interpret model data with respect to observations, partly due to the difficulties and background noise inherent to observation. The goal here is to attempt to bridge this gap between simulation and observation by rendering the model output in image format, which is then processed by tools commonly used in observational astronomy. Images are synthesized in various filters by folding the output of cosmological simulations of gas dynamics with star formation and dark matter with the Bruzual-Charlot stellar population synthesis models. A variation of the Virgo-Gadget numerical simulation code is used with the hybrid gas and stellar formation models of Springel and Hernquist (2003). Outputs taken at various redshifts are stacked to create a synthetic view of the simulated star clusters. Source Extractor (SExtractor) is used to find groupings of stellar populations, which are considered as galaxies or galaxy building blocks, and photometry is used to estimate the rest-frame luminosities and distribution functions. With further refinements, this is expected to provide support for missions such as JWST, as well as to probe what additional physics are needed to model the data. The results show good agreement in many respects with observed properties of the galaxy luminosity function (LF) over a wide range of high redshifts. In particular, the slope (α), when fitted to the standard Schechter function, shows excellent agreement with observation both in value and in its evolution with redshift. Discrepancies between other properties and observation are seen to result from limitations of the simulation and from additional feedback mechanisms that are needed.
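For reference, the Schechter function mentioned above is a power law with an exponential cutoff; α is its faint-end slope. A minimal sketch (the parameter values are illustrative defaults, not the fitted values from this work) shows how α appears as the log-log slope well below the characteristic luminosity:

```python
import math

# Schechter luminosity function: power law of slope alpha at the faint
# end, exponential cutoff above the characteristic luminosity L_star.
def schechter(L, phi_star=1e-3, L_star=1.0, alpha=-1.7):
    x = L / L_star
    return (phi_star / L_star) * x**alpha * math.exp(-x)

# Well below L_star the exponential factor is ~1, so the log-log slope
# between two faint luminosities recovers alpha.
L1, L2 = 1e-3, 1e-2
slope_faint = math.log(schechter(L2) / schechter(L1)) / math.log(L2 / L1)
```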
ContributorsMorgan, Robert (Author) / Windhorst, Rogier A (Thesis advisor) / Scannapieco, Evan (Committee member) / Rhoads, James (Committee member) / Gardner, Carl (Committee member) / Belitsky, Andrei (Committee member) / Arizona State University (Publisher)
Created2012
150637-Thumbnail Image.png
Description

Bacteriophage (phage) are viruses that infect bacteria. Typical laboratory experiments show that in a chemostat containing phage and a susceptible bacteria species, a mutant bacteria species will evolve. This mutant species is usually resistant to the phage infection and less competitive than the susceptible bacteria species. In some experiments, both susceptible and resistant bacteria species, as well as phage, can coexist at an equilibrium for hundreds of hours. The current research is inspired by these observations, and the goal is to establish a mathematical model and explore sufficient and necessary conditions for the coexistence. In this dissertation, a model with infinite distributed delay terms, based on existing work, is established. A rigorous analysis of the well-posedness of this model is provided, and it is proved that the susceptible bacteria persist. To study the persistence of the phage species, a "Phage Reproduction Number" (PRN) is defined. The mathematical analysis shows that phage persist if PRN > 1 and vanish if PRN < 1. A sufficient condition and a necessary condition for the persistence of resistant bacteria are given. The persistence of the phage is essential for the persistence of resistant bacteria. Also, the resistant bacteria persist if their fitness is the same as that of the susceptible bacteria and if PRN > 1. A special case of the general model leads to a system of ordinary differential equations, for which numerical simulation results are presented.
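The PRN threshold behavior can be illustrated on a drastically simplified ODE special case (susceptible bacteria and phage only, no delay or latency terms; every parameter value below is invented for illustration, not taken from the dissertation). The threshold quantity plays the role of a Phage Reproduction Number: phage produced per phage at the phage-free bacterial equilibrium S = K.

```python
# Simplified chemostat-style model: S = susceptible bacteria, P = phage.
# PRN = b*k*K/m = (burst size) * (adsorption at S=K) / (phage decay).
def prn(b, k=1e-8, m=0.5, K=1e7):
    return b * k * K / m

def simulate(b, k=1e-8, m=0.5, r=1.0, K=1e7, T=200.0, dt=0.01):
    S, P = K, 1e3                      # start at bacterial carrying capacity
    for _ in range(int(T / dt)):
        dS = r * S * (1.0 - S / K) - k * S * P    # logistic growth - infection
        dP = b * k * S * P - m * P                # burst production - decay
        S += dt * dS                              # forward Euler step
        P += dt * dP
    return S, P

S1, P1 = simulate(b=50.0)   # prn(50.0) = 10 > 1: phage persist
S2, P2 = simulate(b=2.0)    # prn(2.0) = 0.4 < 1: phage wash out
```

Consistent with the threshold result, the phage population remains positive and bounded when PRN > 1 and collapses toward zero when PRN < 1, while the susceptible bacteria persist in both cases.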
ContributorsHan, Zhun (Author) / Smith, Hal (Thesis advisor) / Armbruster, Dieter (Committee member) / Kawski, Matthias (Committee member) / Kuang, Yang (Committee member) / Thieme, Horst (Committee member) / Arizona State University (Publisher)
Created2012
153915-Thumbnail Image.png
Description

Modern measurement schemes for linear dynamical systems are typically designed so that different sensors can be scheduled for use at each time step. To determine which sensors to use, various metrics have been suggested. One possible such metric is the observability of the system. Observability is a binary condition determining whether a finite number of measurements suffice to recover the initial state. However, to employ observability for sensor scheduling, the binary definition needs to be expanded so that one can measure how observable a system is with a particular measurement scheme; i.e., one needs a metric of observability. Most methods utilizing an observability metric address sensor selection rather than sensor scheduling. In this dissertation, we present a new approach that utilizes observability for sensor scheduling by employing the condition number of the observability matrix as the metric and using column subset selection to create an algorithm that chooses which sensors to use at each time step. To this end, we use a rank-revealing QR factorization algorithm to select sensors. Several numerical experiments demonstrate the performance of the proposed scheme.
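The scheduling idea can be sketched on a small hypothetical system (random matrices and all sizes below are invented for illustration; the greedy norm-pivoted selection is the pivoting rule underlying rank-revealing QR, not the dissertation's exact algorithm):

```python
import numpy as np

rng = np.random.default_rng(3)

# Discrete-time system x_{k+1} = A x_k with m candidate sensors, one
# row of C_all each. At every step, pick p sensor rows greedily so the
# stacked observability matrix stays well conditioned.
n, m, steps, p = 4, 6, 4, 2
A = rng.normal(size=(n, n)) / np.sqrt(n)
C_all = rng.normal(size=(m, n))

def select_rows(M, p):
    """Greedily pick p rows: take the largest-norm row, then project
    the remaining rows off its direction (row analogue of the column
    pivoting used in rank-revealing QR)."""
    R = M.astype(float).copy()
    chosen = []
    for _ in range(p):
        i = int(np.argmax(np.linalg.norm(R, axis=1)))
        chosen.append(i)
        q = R[i] / np.linalg.norm(R[i])
        R = R - np.outer(R @ q, q)      # the selected row becomes ~0
    return chosen

rows, Ak = [], np.eye(n)
for _ in range(steps):
    cand = C_all @ Ak                   # candidate sensor rows at this step
    rows.append(cand[select_rows(cand, p)])
    Ak = A @ Ak

O = np.vstack(rows)                     # observability matrix of the schedule
cond = np.linalg.cond(O)                # the scheduling metric
```

A small condition number of `O` means the initial state can be recovered stably from the scheduled measurements, which is the quantitative refinement of the binary observability condition described above.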
ContributorsIlkturk, Utku (Author) / Gelb, Anne (Thesis advisor) / Platte, Rodrigo (Thesis advisor) / Cochran, Douglas (Committee member) / Renaut, Rosemary (Committee member) / Armbruster, Dieter (Committee member) / Arizona State University (Publisher)
Created2015