This collection includes most of the ASU Theses and Dissertations from 2011 to present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about the dissertations/theses includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic theses found in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.

Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.

Displaying 11 - 20 of 167

Description
In the field of infectious disease epidemiology, the assessment of model robustness outcomes plays a significant role in the identification, reformulation, and evaluation of preparedness strategies aimed at limiting the impact of catastrophic events (pandemics or the deliberate release of biological agents), in the management of disease prevention strategies, and in the identification and evaluation of control or mitigation measures. The research in this dissertation focuses on the comparison and assessment of the role of exponentially distributed waiting times versus generalized non-exponential parametric distributions for the infectious period on the quantitative and qualitative outcomes generated by Susceptible-Infectious-Removed (SIR) models. Specifically, Gamma-distributed infectious periods are considered in the three research projects, following the applications found in (Bailey 1964, Anderson 1980, Wearing 2005, Feng 2007, Feng 2007, Yan 2008, Lloyd 2009, Vergu 2010). i) The first project focuses on the influence of input model parameters, such as the transmission rate and the mean and variance of the Gamma-distributed infectious period, on disease prevalence, the peak epidemic size and its timing, the final epidemic size, the epidemic duration and the basic reproduction number. Global uncertainty and sensitivity analyses are carried out using a deterministic Susceptible-Infectious-Recovered (SIR) model. The quantitative effect and qualitative relation between input model parameters and outcome variables are established using Latin Hypercube Sampling (LHS) together with partial rank correlation coefficient (PRCC) and Spearman rank correlation coefficient (RCC) sensitivity indices.
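The LHS-plus-PRCC workflow described above can be sketched in a few lines. This is a minimal illustration, not the dissertation's actual analysis: the parameter ranges, the simple Euler integrator, and the choice of final epidemic size as the sole outcome are all illustrative assumptions.

```python
# Sketch: global sensitivity analysis of a deterministic SIR model via
# Latin Hypercube Sampling (LHS) and partial rank correlation (PRCC).
# Ranges and integrator settings are illustrative assumptions only.
import numpy as np
from scipy.stats import qmc, rankdata

def sir_final_size(beta, gamma, n_steps=20000, dt=0.01):
    """Integrate a classical SIR model (exponential infectious period)
    and return the final epidemic size (fraction ever infected)."""
    s, i, r = 0.999, 0.001, 0.0
    for _ in range(n_steps):
        new_inf = beta * s * i * dt
        new_rec = gamma * i * dt
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return r

def prcc(X, y):
    """Partial rank correlation of each column of X with outcome y."""
    R = np.column_stack([rankdata(c) for c in X.T])
    ry = rankdata(y)
    out = []
    for j in range(R.shape[1]):
        # Regress out the other (rank-transformed) inputs, then correlate
        A = np.column_stack([np.delete(R, j, axis=1), np.ones(len(ry))])
        res_x = R[:, j] - A @ np.linalg.lstsq(A, R[:, j], rcond=None)[0]
        res_y = ry - A @ np.linalg.lstsq(A, ry, rcond=None)[0]
        out.append(np.corrcoef(res_x, res_y)[0, 1])
    return np.array(out)

sampler = qmc.LatinHypercube(d=2, seed=1)
u = sampler.random(n=200)
beta = 0.3 + 0.7 * u[:, 0]    # transmission rate in [0.3, 1.0] (assumed)
gamma = 0.1 + 0.4 * u[:, 1]   # recovery rate in [0.1, 0.5] (assumed)
final_size = np.array([sir_final_size(b, g) for b, g in zip(beta, gamma)])
sens = prcc(np.column_stack([beta, gamma]), final_size)
```

As expected, the PRCC for the transmission rate is strongly positive and that for the recovery rate strongly negative; the dissertation applies the same machinery to several more outcomes and to the variance of the Gamma-distributed infectious period.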
We learned that for relatively low (R0 close to one) to high (mean R0 of 15) transmissibility, the variance of the Gamma-distributed infectious period, an input parameter of the deterministic age-of-infection SIR model, has a statistically significant effect on the predictability of epidemiological variables such as the epidemic duration and the peak size and timing of the prevalence of infectious individuals; for predicting these variables it is therefore preferable to use a nonlinear system of Volterra integral equations rather than a nonlinear system of ordinary differential equations. In contrast, the predictability of variables such as the final epidemic size and the basic reproduction number is unaffected by (independent of) the variance of the Gamma-distributed infectious period, so for these the choice between the two formulations of the SIR model (VIEs or ODEs) is irrelevant; for practical purposes, to lower the complexity and number of operations in the numerical methods, a nonlinear system of ordinary differential equations is preferred. The main contribution lies in the development of a model-based decision tool that helps determine when SIR models given in terms of Volterra integral equations are equivalent to, or better suited than, SIR models that only consider exponentially distributed infectious periods. ii) The second project addresses whether there is sufficient evidence to conclude that two empirical distributions for a single epidemiological outcome, one generated by a stochastic SIR model with exponentially distributed infectious periods and the other with non-exponentially distributed infectious periods, are statistically dissimilar. The stochastic formulations are modeled via a continuous-time Markov chain, and the statistical hypothesis test is conducted using the non-parametric Kolmogorov-Smirnov test.
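The Kolmogorov-Smirnov comparison at the heart of project (ii) can be illustrated directly. As a hedged simplification, the two samples below are the waiting times themselves (exponential versus Gamma with equal mean but smaller variance) rather than epidemic outcomes from the Markov chain model; the mean of 5 days, the Gamma shape of 4, and the sample sizes are arbitrary choices for illustration.

```python
# Sketch: two-sample Kolmogorov-Smirnov test on samples drawn from an
# exponential vs. a Gamma distribution with the same mean (5 days) but
# different variance. All parameter values are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
mean_ip = 5.0                                        # mean infectious period
exp_sample = rng.exponential(mean_ip, 2000)          # variance = 25
gam_sample = rng.gamma(shape=4, scale=mean_ip / 4, size=2000)  # variance = 6.25

stat, pvalue = ks_2samp(exp_sample, gam_sample)
# A tiny p-value indicates the two empirical distributions are
# statistically dissimilar despite sharing the same mean.
```

This mirrors the dissertation's question in miniature: matching the mean of the infectious period is not enough to make the resulting empirical distributions indistinguishable.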
We found evidence that for low to moderate transmissibility, all empirical distribution pairs (generated from exponential and non-exponential infectious-period distributions) for each of the epidemiological quantities considered are statistically dissimilar. This project helps determine whether weakening the exponential-distribution assumption must be considered when estimating the probability of events defined from the empirical distribution of specific random variables. iii) The third project assesses the effect of assuming exponentially distributed infectious periods on input parameter estimates and the associated outcome variable predictions. Quantities unaffected by this assumption in low-transmissibility scenarios include the prevalence peak time, final epidemic size, epidemic duration and basic reproduction number; in high-transmissibility scenarios, only the prevalence peak time and final epidemic size are unaffected. An application is developed to determine, from incidence data, whether there is sufficient statistical evidence to conclude that the infectious-period distribution should not be modeled by an exponential distribution, together with a method for estimating explicitly specified non-exponential parametric probability density functions for the infectious period from epidemiological data. The methodologies presented in this dissertation may be applicable to models where waiting times govern transitions between stages, a process common in the study of life-history dynamics of many ecological systems.
ContributorsMorales Butler, Emmanuel J (Author) / Castillo-Chavez, Carlos (Thesis advisor) / Aparicio, Juan P (Thesis advisor) / Camacho, Erika T (Committee member) / Kang, Yun (Committee member) / Arizona State University (Publisher)
Created2014
Description
The objective of this thesis is to investigate the various types of energy end-uses to be expected in future high-efficiency single-family residences. For this purpose, this study analyzed monitored data from 14 houses in the 2013 Solar Decathlon competition and segregated the energy consumption patterns into various residential end-uses (such as lights, refrigerators, washing machines, ...). The analysis was not straightforward, since these homes were operated according to schedules previously determined by the contest rules. The analysis approach allowed the isolation of the comfort energy use by the Heating, Ventilation and Air Conditioning (HVAC) systems. HVAC systems are the biggest contributors to energy consumption during the operation of a building, and therefore are a prime concern for energy performance during building design and operation. Both steady-state and dynamic models of comfort energy use, which take into account variations in indoor and outdoor temperatures, solar radiation and the thermal mass of the building, were explicitly considered. Steady-state inverse models are frequently used in thermal analysis to evaluate HVAC energy performance: they are fast, accurate, offer great flexibility for mathematical modification and can be applied to a variety of buildings. The results are presented as a horizontal study that compares energy consumption across homes to arrive at a generic rather than unique model, to be used in future discussions in the context of ultra-efficient homes. It is suggested that similar analyses of energy-use data comparing the performance of a variety of ultra-efficient technologies be conducted to provide more accurate indications of consumption by end use for future single-family residences. These can be used alongside the Residential Energy Consumption Survey (RECS) and the Leading Indicator of Remodeling Activity (LIRA) indices to assist in planning and policy making for the residential energy sector.
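A steady-state inverse model of the kind mentioned above is easy to sketch: daily HVAC energy modeled as a base load plus a cooling slope above a balance-point temperature, fitted by a grid search over candidate balance points. The parameter values, the synthetic weather, and the grid-search fit are illustrative assumptions, not the thesis's data or method.

```python
# Sketch: a change-point "energy signature" inverse model -- daily cooling
# energy = base load + slope * max(T_out - T_balance, 0), fitted by grid
# search over the balance point. All values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
t_out = rng.uniform(10, 40, 365)              # synthetic daily mean temp (C)
base, slope, t_bal = 5.0, 1.2, 18.0           # kWh/day, kWh/day/C, C (assumed)
energy = base + slope * np.maximum(t_out - t_bal, 0) + rng.normal(0, 0.5, 365)

def fit_change_point(t, e, candidates):
    """Grid search over balance points; OLS for base and slope at each."""
    best = (np.inf, None)
    for tb in candidates:
        X = np.column_stack([np.ones(len(t)), np.maximum(t - tb, 0)])
        coef, *_ = np.linalg.lstsq(X, e, rcond=None)
        sse = ((e - X @ coef) ** 2).sum()
        if sse < best[0]:
            best = (sse, (coef[0], coef[1], tb))
    return best[1]

base_hat, slope_hat, t_bal_hat = fit_change_point(
    t_out, energy, np.arange(12.0, 25.0, 0.5))
```

With a year of daily data, the three parameters are recovered closely, which is why such models are attractive for fast horizontal comparisons across homes.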
ContributorsGarkhail, Rahul (Author) / Reddy, T Agami (Thesis advisor) / Bryan, Harvey (Committee member) / Addison, Marlin (Committee member) / Arizona State University (Publisher)
Created2014
Description
Extraordinary medical advances have led to significant reductions in the burden of infectious diseases in humans. However, infectious diseases still account for more than 13 million annual deaths. This large burden is partly due to some pathogens having found suitable conditions to emerge and spread in denser and more connected host populations, and others having evolved to escape the pressures imposed by the rampant use of antimicrobials. It is therefore critical to improve our understanding of how diseases spread in these modern landscapes, characterized by new host population structures and socio-economic environments, as well as containment measures such as the deployment of drugs. The motivation of this dissertation is thus two-fold. First, we study, using both data-driven and modeling approaches, the spread of infectious diseases in urban areas. As a case study, we use confirmed-case data on sexually transmitted diseases (STDs) in the United States to assess the population size of urban areas and their socio-economic characteristics as predictors of STD incidence. We find that the scaling of STD incidence in cities is superlinear, and that the percent of African-Americans residing in cities largely determines these statistical patterns. Since disparities in access to health care are often exacerbated in urban areas, within this project we also develop two modeling frameworks to study the effect of health care disparities on epidemic outcomes. Discrepant results between the two approaches indicate that knowledge of the shape of the recovery-period distribution, not just its mean and variance, is key for assessing the epidemiological impact of inequalities. The second project studies, from a modeling perspective, the spread of drug resistance in human populations featuring vital dynamics, stochasticity and contact structure.
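The superlinear-scaling estimate referred to above is, in essence, an ordinary least-squares fit in log-log space. The sketch below recovers a scaling exponent from synthetic "cities"; the exponent of 1.15, the noise level, and the population range are illustrative assumptions, not the dissertation's fitted values for STD incidence.

```python
# Sketch: estimating an urban scaling exponent by regressing log incidence
# on log population across synthetic cities. Superlinear scaling means the
# fitted exponent exceeds 1. All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(42)
n_cities = 300
log_pop = rng.uniform(np.log(1e4), np.log(1e7), n_cities)
true_beta = 1.15                                   # assumed exponent
log_inc = np.log(2e-3) + true_beta * log_pop + rng.normal(0, 0.3, n_cities)

# OLS in log-log space: log Y = log c + beta * log N
X = np.column_stack([np.ones(n_cities), log_pop])
coef, *_ = np.linalg.lstsq(X, log_inc, rcond=None)
beta_hat = coef[1]   # beta_hat > 1 indicates superlinear scaling
```

In the dissertation, such regressions are further conditioned on socio-economic covariates to ask which city characteristics account for the scaling pattern.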
We derive effective treatment regimes that minimize both the overall disease burden and the spread of resistance. Additionally, targeted treatment in structured host populations may lead to higher levels of drug resistance, and if drug-resistant strains are compensated, they can spread widely even when the wild-type strain is below its epidemic threshold.
ContributorsPatterson-Lomba, Oscar (Author) / Castillo-Chavez, Carlos (Thesis advisor) / Towers, Sherry (Thesis advisor) / Chowell-Puente, Gerardo (Committee member) / Arizona State University (Publisher)
Created2014
Description
Methods to test hypotheses of mediated effects in the pretest-posttest control group design are understudied in the behavioral sciences (MacKinnon, 2008). Because many studies aim to answer questions about mediating processes in the pretest-posttest control group design, there is a need to determine which model is most appropriate to test hypotheses about mediating processes and what happens to estimates of the mediated effect when model assumptions are violated in this design. The goal of this project was to outline estimator characteristics of four longitudinal mediation models and the cross-sectional mediation model. Models were compared on Type I error rates, statistical power, accuracy of confidence interval coverage, and bias of parameter estimates. Four traditional longitudinal models and the cross-sectional model were assessed. The four longitudinal models were analysis of covariance (ANCOVA) using pretest scores as a covariate, path analysis, difference scores, and residualized change scores. A Monte Carlo simulation study was conducted to evaluate the different models across a wide range of sample sizes and effect sizes. All models performed well in terms of Type I error rates and the ANCOVA and path analysis models performed best in terms of bias and empirical power. The difference score, residualized change score, and cross-sectional models all performed well given certain conditions held about the pretest measures. These conditions and future directions are discussed.
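One replication of the kind of Monte Carlo study described above can be sketched with an ANCOVA-style product-of-coefficients estimate of the mediated effect. The path values (a = b = 0.4), pretest autoregressive weights, and sample size are illustrative assumptions, not the dissertation's simulation conditions.

```python
# Sketch: a single Monte Carlo replication of mediation in a
# pretest-posttest design, estimated via ANCOVA (pretests as covariates)
# and the product-of-coefficients a*b. All path values are illustrative.
import numpy as np

def ols(X, y):
    """OLS coefficients with an added intercept column."""
    X1 = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0]

rng = np.random.default_rng(7)
n = 5000
x = rng.integers(0, 2, n).astype(float)      # treatment indicator
m_pre = rng.normal(0, 1, n)                  # mediator pretest
y_pre = rng.normal(0, 1, n)                  # outcome pretest
a, b = 0.4, 0.4                              # assumed true paths
m_post = 0.5 * m_pre + a * x + rng.normal(0, 1, n)
y_post = 0.5 * y_pre + b * m_post + rng.normal(0, 1, n)

# ANCOVA estimates: condition on the pretest scores as covariates
a_hat = ols(np.column_stack([x, m_pre]), m_post)[1]        # X -> M path
b_hat = ols(np.column_stack([x, m_post, y_pre]), y_post)[2]  # M -> Y path
mediated = a_hat * b_hat   # product-of-coefficients estimate of a*b
```

Repeating this across many replications, sample sizes, and effect sizes, and swapping the ANCOVA step for difference scores or residualized change scores, yields exactly the comparison grid the abstract describes.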
ContributorsValente, Matthew John (Author) / MacKinnon, David (Thesis advisor) / West, Stephen (Committee member) / Aiken, Leona (Committee member) / Enders, Craig (Committee member) / Arizona State University (Publisher)
Created2015
Description
This thesis presents a meta-analysis of lead-free solder reliability. The qualitative analyses of the failure modes of lead-free solder under different stress tests, including drop, bend, thermal and vibration tests, are discussed. The main cause of failure of lead-free solder is fatigue cracking, and the speed of propagation of the initial crack can differ across test conditions and solder materials. A quantitative analysis of the fatigue behavior of SAC lead-free solder under a thermal preconditioning process is conducted. This thesis presents a method for predicting the failure life of a solder alloy by building a Weibull regression model. The failure life of solder on a circuit board is assumed to be Weibull distributed; different materials and test conditions affect the distribution by changing the shape and scale parameters of the Weibull distribution. The method models the regression of these parameters on the test conditions as predictors, based on Bayesian inference. In building the regression models, prior distributions are generated according to previous studies, and Markov chain Monte Carlo (MCMC) sampling is used in the WinBUGS environment.
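The Weibull failure-life assumption above is easy to illustrate without the Bayesian regression layer: simulate cycles-to-failure from a Weibull distribution and recover the shape and scale parameters by maximum likelihood. The true parameters (shape 2.0, scale 1500 cycles) are illustrative assumptions, not values from the thesis, and the fit here is frequentist rather than the WinBUGS MCMC approach.

```python
# Sketch: simulate solder-joint cycles-to-failure from a Weibull
# distribution and recover shape/scale by maximum likelihood.
# True parameter values are illustrative assumptions.
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(3)
true_shape, true_scale = 2.0, 1500.0      # assumed: shape 2, scale 1500 cycles
failures = weibull_min.rvs(true_shape, scale=true_scale, size=2000,
                           random_state=rng)

# Maximum-likelihood fit with the location parameter fixed at zero
shape_hat, loc, scale_hat = weibull_min.fit(failures, floc=0)
```

In the thesis's framework, shape and scale would themselves be modeled as functions of test conditions (material, preconditioning, stress level), with priors informed by earlier studies.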
ContributorsXu, Xinyue (Author) / Pan, Rong (Thesis advisor) / Montgomery, Douglas C. (Committee member) / Wu, Teresa (Committee member) / Arizona State University (Publisher)
Created2014
Description
The main objective of this research is to develop an approach to PV module lifetime prediction. In doing so, the aim is to move from empirical generalizations to a formal predictive science based on data-driven case studies of crystalline silicon PV systems. The evaluation of PV systems aged 5 to 30 years results in a systematic predictive capability that is absent today. The warranty period provided by manufacturers typically ranges from 20 to 25 years for crystalline silicon modules. The end of lifetime (for example, the time to degrade by 20% from rated power) of PV modules is usually calculated using a simple linear extrapolation based on the annual field degradation rate (say, a 0.8% drop in power output per year). It has been 26 years since systematic studies on solar PV module lifetime prediction were undertaken as part of the 11-year Flat-Plate Solar Array (FSA) project of the Jet Propulsion Laboratory (JPL) funded by DOE. Since then, PV modules have gone through significant changes in construction materials and design, making most of the field data obsolete, though understanding the effect of field stressors on the old designs/materials remains valuable. Efforts have been made to adapt some of the techniques developed to current technologies, but they are too often limited in scope and too reliant on empirical generalizations of previous results. Some systematic approaches have been proposed based on accelerated testing, but few or no experimental studies have followed. Consequently, the industry does not exactly know today how to test modules for a 20 - 30 year lifetime.
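The simple linear extrapolation criticized above can be written down in one line, using the example figures from the text (a 20% end-of-life threshold and a 0.8%/year degradation rate):

```python
# Sketch: the naive linear end-of-life extrapolation -- years until power
# falls by `drop_pct` percent of rated power at a constant annual rate.
def linear_end_of_life(degradation_rate_pct_per_year, drop_pct=20.0):
    """Years to reach end of life under constant linear degradation."""
    return drop_pct / degradation_rate_pct_per_year

years = linear_end_of_life(0.8)   # 20 / 0.8 -> 25.0 years
```

The research's point is precisely that this constant-rate assumption ignores failure modes, climate-specific stressors, and nonlinear degradation, which is what motivates the FMECA, time-series, and accelerated-testing phases described next.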

This research study focuses on the behavior of crystalline silicon PV module technology in the dry and hot climatic conditions of Tempe/Phoenix, Arizona. A three-phase approach was developed: (1) a quantitative failure modes, effects, and criticality analysis (FMECA) for prioritizing failure modes or mechanisms in a given environment; (2) a time-series approach to model the environmental stress variables involved and prioritize their effect on the drop in power output; and (3) a procedure for developing a prediction model for the climate-specific condition based on accelerated degradation testing.
ContributorsKuitche, Joseph Mathurin (Author) / Pan, Rong (Thesis advisor) / Tamizhmani, Govindasamy (Thesis advisor) / Montgomery, Douglas C. (Committee member) / Wu, Teresa (Committee member) / Arizona State University (Publisher)
Created2014
Description
Obtaining high-quality experimental designs to optimize statistical efficiency and data quality is quite challenging for functional magnetic resonance imaging (fMRI). The primary fMRI design issue is the selection of the best sequence of stimuli based on a statistically meaningful optimality criterion. Previous studies have provided guidance and powerful computational tools for obtaining good fMRI designs, but these results are mainly for basic experimental settings with simple statistical models. In this work, a type of modern fMRI experiment is considered, in which the design matrix of the statistical model depends not only on the selected design, but also on the experimental subject's probabilistic behavior during the experiment. The design matrix is thus uncertain at the design stage, making it difficult to select good designs. By taking this uncertainty into account, a very efficient approach for obtaining high-quality fMRI designs is developed in this study. The proposed approach is built upon an analytical result and an efficient computer algorithm. It is shown through case studies that the proposed approach can outperform an existing method in terms of computing time and the quality of the obtained designs.
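The basic design-selection problem described above can be caricatured as scoring candidate stimulus sequences by an efficiency criterion derived from the design matrix. The sketch below uses simple indicator (boxcar) regressors instead of HRF-convolved ones and a random search instead of the dissertation's algorithm; the criterion 1 / trace((X'X)^-1), the sequence length, and the number of candidates are all illustrative assumptions.

```python
# Sketch: compare candidate fMRI stimulus sequences by an
# A-optimality-style efficiency, 1 / trace((X'X)^-1), and keep the best
# of a random search. Boxcar regressors stand in for HRF-convolved ones.
import numpy as np

rng = np.random.default_rng(4)
n_scans = 120          # assumed sequence length

def efficiency(seq):
    """seq[t] in {0, 1, 2}: rest, or one of two stimulus types."""
    X = np.column_stack([np.ones(len(seq))] +
                        [(seq == k).astype(float) for k in (1, 2)])
    return 1.0 / np.trace(np.linalg.inv(X.T @ X))

# Random search over candidate sequences, keeping the most efficient one
best_eff, best_seq = -np.inf, None
for _ in range(200):
    seq = rng.integers(0, 3, n_scans)
    e = efficiency(seq)
    if e > best_eff:
        best_eff, best_seq = e, seq
```

The dissertation's contribution is what this sketch omits: handling a design matrix that is itself random because it depends on the subject's probabilistic behavior, so the criterion must be evaluated under that uncertainty.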
ContributorsZhou, Lin (Author) / Kao, Ming-Hung (Thesis advisor) / Reiser, Mark R. (Committee member) / Stufken, John (Committee member) / Welfert, Bruno (Committee member) / Arizona State University (Publisher)
Created2014
Description
Technological advances have enabled the generation and collection of various data from complex systems, thus, creating ample opportunity to integrate knowledge in many decision making applications. This dissertation introduces holistic learning as the integration of a comprehensive set of relationships that are used towards the learning objective. The holistic view of the problem allows for richer learning from data and, thereby, improves decision making.

The first topic of this dissertation is the prediction of several target attributes using a common set of predictor attributes. In a holistic learning approach, the relationships between target attributes are embedded into the learning algorithm created in this dissertation. Specifically, a novel tree based ensemble that leverages the relationships between target attributes towards constructing a diverse, yet strong, model is proposed. The method is justified through its connection to existing methods and experimental evaluations on synthetic and real data.

The second topic pertains to monitoring complex systems that are modeled as networks. Such systems present a rich set of attributes and relationships for which holistic learning is important. In social networks, for example, in addition to friendship ties, various attributes concerning the users' gender, age, topic of messages, time of messages, etc. are collected. A restricted form of monitoring fails to take the relationships among multiple attributes into account, whereas the holistic view embeds such relationships in the monitoring methods. The focus is on the difficult task of detecting a change that might impact only a small subset of the network and occur only in a sub-region of the high-dimensional space of the network attributes. One contribution is a monitoring algorithm based on a network statistical model. Another is a transactional model that transforms the task into an expedient structure for machine learning, along with a generalizable algorithm to monitor the attributed network. A learning step in this algorithm adapts to changes that may be local to sub-regions (with broader potential for other learning tasks). Diagnostic tools to interpret the change are provided. This robust, generalizable, holistic monitoring method is demonstrated on synthetic and real networks.
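A heavily simplified version of the monitoring framing above: track one network statistic over time and raise an alarm when it leaves control limits estimated from an in-control phase. The Erdős-Rényi-style network, the edge-count statistic, and the 3-sigma limits are illustrative assumptions; the dissertation's methods monitor attributed networks and localized changes, which this toy does not attempt.

```python
# Sketch: control-chart-style monitoring of a network statistic (edge
# count of a random network). In-control phase sets 3-sigma limits; a
# shifted connection probability should trigger an alarm. All values
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(9)
n_nodes = 100
n_pairs = n_nodes * (n_nodes - 1) // 2

def edge_count(p, rng):
    """Edge count of an Erdos-Renyi-style network with edge prob p."""
    return rng.binomial(n_pairs, p)

# Phase I: estimate control limits from in-control snapshots (p = 0.05)
in_control = np.array([edge_count(0.05, rng) for _ in range(50)])
center, sigma = in_control.mean(), in_control.std(ddof=1)
ucl, lcl = center + 3 * sigma, center - 3 * sigma

# Phase II: a shift in connection probability (p = 0.07) should alarm
shifted = edge_count(0.07, rng)
alarm = (shifted > ucl) or (shifted < lcl)
```

The hard problem the dissertation addresses is exactly what a single global statistic misses: changes confined to a small subset of nodes or a sub-region of the attribute space.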
ContributorsAzarnoush, Bahareh (Author) / Runger, George C. (Thesis advisor) / Bekki, Jennifer (Thesis advisor) / Pan, Rong (Committee member) / Saghafian, Soroush (Committee member) / Arizona State University (Publisher)
Created2014
Description
Urban scaling analysis has introduced a new scientific paradigm to the study of cities. With it, the notions of size, heterogeneity and structure have taken a leading role. These notions are assumed to be behind the causes for why cities differ from one another, sometimes wildly. However, the mechanisms by which size, heterogeneity and structure shape the general statistical patterns that describe urban economic output are still unclear. Given the rapid rate of urbanization around the globe, we need precise and formal mathematical understandings of these matters. In this context, I perform in this dissertation probabilistic, distributional and computational explorations of (i) how the broadness, or narrowness, of the distribution of individual productivities within cities determines what and how we measure urban systemic output, (ii) how urban scaling may be expressed as a statistical statement when urban metrics display strong stochasticity, (iii) how the processes of aggregation constrain the variability of total urban output, and (iv) how the structure of urban skills diversification within cities induces a multiplicative process in the production of urban output.
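Point (iv) above, a multiplicative process in the production of urban output, has a classic distributional signature: the product of many random factors is approximately lognormal, since its logarithm is a sum to which the central limit theorem applies. The factor distribution, agent counts, and number of factors below are illustrative assumptions, not the dissertation's model.

```python
# Sketch: output generated as a product of many random factors is
# approximately lognormal -- log-output is a CLT-style sum, so its
# skewness is near zero. All distributional choices are illustrative.
import numpy as np

rng = np.random.default_rng(5)
n_agents, n_factors = 20000, 30
factors = rng.uniform(0.8, 1.25, size=(n_agents, n_factors))  # assumed
output = factors.prod(axis=1)

# Log-output should be approximately normal (near-zero skewness)
log_out = np.log(output)
skew = ((log_out - log_out.mean()) ** 3).mean() / log_out.std() ** 3
```

This is the mechanism by which the structure of skills diversification can induce broad, right-skewed distributions of total urban output.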
ContributorsGómez-Liévano, Andrés (Author) / Lobo, Jose (Thesis advisor) / Muneepeerakul, Rachata (Thesis advisor) / Bettencourt, Luis M. A. (Committee member) / Chowell-Puente, Gerardo (Committee member) / Arizona State University (Publisher)
Created2014
Description
Missing data are common in psychology research and can lead to bias and reduced power if not properly handled. Multiple imputation is a state-of-the-art missing data method recommended by methodologists. Multiple imputation methods can generally be divided into two broad categories: joint model (JM) imputation and fully conditional specification (FCS) imputation. JM draws missing values simultaneously for all incomplete variables using a multivariate distribution (e.g., multivariate normal). FCS, on the other hand, imputes variables one at a time, drawing missing values from a series of univariate distributions. In the single-level context, these two approaches have been shown to be equivalent with multivariate normal data. However, less is known about the similarities and differences of these two approaches with multilevel data, and the methodological literature provides no insight into the situations under which the approaches would produce identical results. This document examined five multilevel multiple imputation approaches (three JM methods and two FCS methods) that have been proposed in the literature. An analytic section shows that only two of the methods (one JM method and one FCS method) used imputation models equivalent to a two-level joint population model that contained random intercepts and different associations across levels. The other three methods employed imputation models that differed from the population model primarily in their ability to preserve distinct level-1 and level-2 covariances. I verified the analytic work with computer simulations, and the simulation results also showed that imputation models that failed to preserve level-specific covariances produced biased estimates. The studies also highlighted conditions that exacerbated the amount of bias produced (e.g., bias was greater for conditions with small cluster sizes). The analytic work and simulations lead to a number of practical recommendations for researchers.
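The FCS idea described above, imputing each incomplete variable from a univariate regression on the others, can be sketched in its simplest single-level form. Here only one variable is incomplete, so a single pass of the cycle suffices; the bivariate normal data, the 30% MCAR missingness, and the true slope of 0.6 are illustrative assumptions, and the multilevel structure central to the dissertation is deliberately omitted.

```python
# Sketch: one univariate step of FCS-style imputation -- draw missing y
# values from the regression of y on x, with residual noise added so the
# imputations preserve the conditional distribution. All values are
# illustrative assumptions (single-level, one incomplete variable).
import numpy as np

rng = np.random.default_rng(11)
n = 4000
x = rng.normal(0, 1, n)
y = 0.6 * x + rng.normal(0, 0.8, n)       # assumed true slope 0.6
miss_y = rng.random(n) < 0.3              # 30% of y missing (MCAR here)
y_obs = y.copy()
y_obs[miss_y] = np.nan

def reg_impute(target, predictor, missing, rng):
    """Draw missing values of `target` from its regression on `predictor`."""
    ok = ~missing
    X = np.column_stack([np.ones(ok.sum()), predictor[ok]])
    coef, res, *_ = np.linalg.lstsq(X, target[ok], rcond=None)
    sigma = np.sqrt(res[0] / (ok.sum() - 2))      # residual std dev
    mu = coef[0] + coef[1] * predictor[missing]
    return mu + rng.normal(0, sigma, missing.sum())

y_imp = y_obs.copy()
y_imp[miss_y] = reg_impute(y_obs, x, miss_y, rng)

# A correctly specified imputation model preserves the x-y slope
slope = np.polyfit(x, y_imp, 1)[0]
```

The dissertation's analytic point lives one level up: with multilevel data, whether an imputation scheme (JM or FCS) preserves the distinct level-1 and level-2 covariances determines whether estimates like this slope remain unbiased.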
ContributorsMistler, Stephen (Author) / Enders, Craig K. (Thesis advisor) / Aiken, Leona (Committee member) / Levy, Roy (Committee member) / West, Stephen G. (Committee member) / Arizona State University (Publisher)
Created2015