Matching Items (48)
Description
Optimal design theory provides a general framework for the construction of experimental designs for categorical responses. For a binary response, where the possible result is one of two outcomes, the logistic regression model is widely used to relate a set of experimental factors with the probability of a positive (or negative) outcome. This research investigates and proposes alternative designs to alleviate the problem of separation in small-sample D-optimal designs for the logistic regression model. Separation causes the non-existence of maximum likelihood parameter estimates and presents a serious problem for model fitting purposes.

First, it is shown that exact, multi-factor D-optimal designs for the logistic regression model can be susceptible to separation. Several logistic regression models are specified, and exact D-optimal designs of fixed sizes are constructed for each model. Sets of simulated response data are generated to estimate the probability of separation in each design. This study demonstrates through simulation that small-sample D-optimal designs are prone to separation and that separation risk depends on the specified model. Additionally, it is demonstrated that exact designs of equal size constructed for the same models may have significantly different chances of encountering separation.
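
As a rough sketch of how such a simulation can be set up (the design, parameter values, and run count below are hypothetical, not those studied in the dissertation), complete separation can be detected with a linear-programming feasibility check and its probability estimated by repeated simulation, using numpy and scipy:

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(1)

    def completely_separated(X, y):
        # Complete separation holds if some coefficient vector b satisfies
        # sign(y_i) * (x_i . b) >= 1 for every run (sign is +1 for y=1 and -1
        # for y=0); maximum likelihood estimates then do not exist.
        # Feasibility of this system is checked with a linear program.
        s = 2 * y - 1
        res = linprog(c=np.zeros(X.shape[1]),
                      A_ub=-(s[:, None] * X), b_ub=-np.ones(len(y)),
                      bounds=(None, None), method="highs")
        return res.success

    # Hypothetical 8-run, two-factor design with intercept, plus assumed
    # true parameter values (illustrative only).
    design = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1],
                       [-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
    X = np.column_stack([np.ones(len(design)), design])
    beta = np.array([0.5, 1.0, -1.5])
    p = 1.0 / (1.0 + np.exp(-X @ beta))

    n_sim = 2000
    hits = sum(completely_separated(X, rng.binomial(1, p)) for _ in range(n_sim))
    print(f"Estimated probability of separation: {hits / n_sim:.3f}")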

The second portion of this research establishes an effective strategy for augmentation, where additional design runs are judiciously added to eliminate separation that has occurred in an initial design. A simulation study is used to demonstrate that augmenting runs in regions of maximum prediction variance (MPV), where the predicted probability of either response category is 50%, most reliably eliminates separation. However, it is also shown that MPV augmentation tends to yield augmented designs with lower D-efficiencies.
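
A minimal sketch of the MPV idea follows, using an L2-penalized fit as a stand-in for a penalized-likelihood method so that coefficients stay finite under separation; the data, penalty, and candidate grid are illustrative assumptions, not the dissertation's:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical separated data on two coded factors (y = 1 only when x1 = 1).
    X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1], [0, 1], [1, 0]], dtype=float)
    y = np.array([0, 0, 1, 1, 0, 1])

    # Ridge-penalized fit keeps the coefficients finite despite separation
    # (a stand-in here for a penalized-likelihood method such as Firth's).
    model = LogisticRegression(penalty="l2", C=1.0).fit(X, y)

    # Candidate augmentation runs on a grid over the design region.
    grid = np.array([[a, b] for a in np.linspace(-1, 1, 21)
                     for b in np.linspace(-1, 1, 21)])
    p_hat = model.predict_proba(grid)[:, 1]

    # MPV augmentation: add runs where the predicted probability is closest to
    # 0.5, i.e. where the prediction variance of the binary response is largest.
    order = np.argsort(np.abs(p_hat - 0.5))
    print("Two augmentation runs near p = 0.5:\n", grid[order[:2]])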

The final portion of this research proposes a novel compound optimality criterion, DMP, that is used to construct locally optimal and robust compromise designs. A two-phase coordinate exchange algorithm is implemented to construct exact locally DMP-optimal designs. To address design dependence issues, a maximin strategy is proposed for designating a robust DMP-optimal design. A case study demonstrates that the maximin DMP-optimal design maintains comparable D-efficiencies to a corresponding Bayesian D-optimal design while offering significantly improved separation performance.
ContributorsPark, Anson Robert (Author) / Montgomery, Douglas C. (Thesis advisor) / Mancenido, Michelle V (Thesis advisor) / Escobedo, Adolfo R. (Committee member) / Pan, Rong (Committee member) / Arizona State University (Publisher)
Created2019
Description
One of the critical issues in the U.S. healthcare sector is the management of medications. Mismanagement of medications not only leads to worse medical outcomes for patients but also imposes avoidable medical expenditures, which partially account for the enormous $750 billion that the American healthcare system wastes annually. This inefficiency in medical outcomes has several causes. One of them is drug intensification: the overly aggressive management of medications and its negative consequences for patients.

To address this and many other challenges related to medication mismanagement, I take advantage of data-driven methodologies in which a decision-making framework for identifying optimal medication management strategies is established from real-world data. This data-driven approach supports decision-making with data analytics, so the resulting decisions can be validated against verifiable data. Thus, compared to purely theoretical methods, my methodology is more applicable to patients as the ultimate beneficiaries of the healthcare system.

Based on this premise, in this dissertation I analyze and advance streams of research that address issues in the management of medications/treatments in different medical contexts. In particular, I discuss (1) the management of medications/treatment modalities for new-onset diabetes after solid organ transplantation and (2) the epidemic of opioid prescription and abuse.
ContributorsBoloori, Alireza (Author) / Saghafian, Soroush (Thesis advisor) / Fowler, John (Thesis advisor) / Gel, Esma (Committee member) / Cook, Curtiss B (Committee member) / Montgomery, Douglas C. (Committee member) / Arizona State University (Publisher)
Created2019
Description
In conventional supervised learning tasks, information retrieval from extensive collections of data happens automatically at low cost, whereas in many real-world problems obtaining labeled data can be hard, time-consuming, and expensive. Consider healthcare systems, for example, where unlabeled medical images are abundant while labeling requires a considerable amount of knowledge from experienced physicians. Active learning addresses this challenge with an iterative process to select instances from the unlabeled data to annotate and improve the supervised learner. At each step, the query of examples to be labeled can be considered as a dilemma between exploitation of the supervised learner's current knowledge and exploration of the unlabeled input features.

Motivated by the need for efficient active learning strategies, this dissertation proposes new algorithms for batch-mode, pool-based active learning. The research considers the following questions: how can unsupervised knowledge of the input features (exploration) improve learning when incorporated with supervised learning (exploitation)? How can exploration in active learning be characterized when the data are high-dimensional? Finally, how can the balance between exploration and exploitation be adapted over time?

The first contribution proposes a new active learning algorithm, Cluster-based Stochastic Query-by-Forest (CSQBF), which provides a batch-mode strategy that accelerates learning with added value from exploration and improved exploitation scores. CSQBF balances exploration and exploitation using a probabilistic scoring criterion based on classification probabilities from a tree-based ensemble model within each data cluster.
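
The following sketch illustrates the general cluster-plus-ensemble idea with scikit-learn; it is not the CSQBF algorithm itself, and the dataset, cluster count, and uncertainty score are illustrative assumptions:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X, y = make_classification(n_samples=500, n_features=10, random_state=0)

    # Small initial labeled set; the rest forms the unlabeled pool.
    labeled = rng.choice(len(X), size=20, replace=False)
    pool = np.setdiff1d(np.arange(len(X)), labeled)

    # Exploitation: uncertainty of a tree ensemble trained on the labeled data.
    forest = RandomForestClassifier(n_estimators=100, random_state=0)
    forest.fit(X[labeled], y[labeled])
    uncertainty = 1.0 - forest.predict_proba(X[pool]).max(axis=1)

    # Exploration: cluster the pool and spread the batch across clusters by
    # taking the most uncertain unlabeled point from each cluster.
    batch_size = 5
    clusters = KMeans(n_clusters=batch_size, n_init=10,
                      random_state=0).fit_predict(X[pool])
    batch = [pool[np.flatnonzero(clusters == c)[np.argmax(uncertainty[clusters == c])]]
             for c in range(batch_size)]
    print("Pool indices queried for labeling:", batch)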

The second contribution introduces two more query strategies, Double Margin Active Learning (DMAL) and Cluster Agnostic Active Learning (CAAL), that combine consistent exploration and exploitation modules into a coherent and unified measure for label query. Instead of assuming a fixed clustering structure, CAAL and DMAL adopt a soft-clustering strategy which provides a new approach to formalize exploration in active learning.

The third contribution addresses the challenge of dynamically balancing the exploration and exploitation criteria throughout the active learning process. Two adaptive algorithms are proposed based on feedback-driven bandit optimization frameworks that handle this issue by learning the relationship between the exploration-exploitation trade-off and the active learner's performance.
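
A small sketch of the kind of bandit machinery involved is given below: a generic EXP3 update over candidate trade-off values with a synthetic reward, not the dissertation's specific algorithms (the reward function and candidate values are hypothetical):

    import numpy as np

    rng = np.random.default_rng(0)

    # Candidate trade-off values: 0 = pure exploitation, 1 = pure exploration.
    lambdas = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
    weights = np.ones(len(lambdas))
    gamma = 0.1                     # mixing rate of the bandit itself
    n_rounds = 50

    def reward(lam):
        # Stand-in for the observed improvement in learner performance after
        # one active-learning round at trade-off lam (a noisy synthetic curve
        # peaking at lam = 0.5), clipped to [0, 1].
        return float(np.clip(1.0 - 4.0 * (lam - 0.5) ** 2 + rng.normal(0, 0.1), 0, 1))

    for _ in range(n_rounds):
        probs = (1 - gamma) * weights / weights.sum() + gamma / len(lambdas)
        arm = rng.choice(len(lambdas), p=probs)
        r = reward(lambdas[arm])
        # EXP3 update: importance-weighted exponential bump for the chosen arm.
        weights[arm] *= np.exp(gamma * r / (probs[arm] * len(lambdas)))

    print("Learned sampling probabilities:", np.round(weights / weights.sum(), 3))
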
ContributorsShams, Ghazal (Author) / Runger, George C. (Thesis advisor) / Montgomery, Douglas C. (Committee member) / Escobedo, Adolfo (Committee member) / Pedrielli, Giulia (Committee member) / Arizona State University (Publisher)
Created2020
Description
Nonregular designs are a preferable alternative to regular resolution four designs because they avoid confounding two-factor interactions. As a result, nonregular designs can estimate and identify a few active two-factor interactions. However, due to the sometimes complex alias structure of nonregular designs, standard screening strategies can fail to identify all active effects. In this research, two-level nonregular screening designs with orthogonal main effects are discussed. By utilizing knowledge of the alias structure, a design-based model selection process for analyzing nonregular designs is proposed.

The Aliased Informed Model Selection (AIMS) strategy is a design-specific approach that is compared to three generic model selection methods: stepwise regression, the least absolute shrinkage and selection operator (LASSO), and the Dantzig selector. The AIMS approach substantially increases the power to detect active main effects and two-factor interactions versus the aforementioned generic methodologies. This research identifies design-specific model spaces: sets of models that obey strong heredity, are all estimable, and exhibit no model confounding. These spaces are then used in the AIMS method along with design-specific aliasing rules for model selection decisions. Model spaces and alias rules are identified for three designs: the 16-run no-confounding 6-, 7-, and 8-factor designs. The designs are demonstrated with several examples as well as simulations that show the superiority of AIMS in model selection.
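
As a rough illustration of searching a strong-heredity model space (the design, response, and selection criterion below are hypothetical and much simpler than the actual AIMS rules and the 16-run no-confounding designs):

    import itertools
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical 12-run, 4-factor two-level design and response (a random
    # stand-in, not one of the no-confounding designs from the dissertation).
    n_runs, n_factors = 12, 4
    D = rng.choice([-1.0, 1.0], size=(n_runs, n_factors))
    y = 2.0 * D[:, 0] - 1.5 * D[:, 2] + 1.0 * D[:, 0] * D[:, 2] \
        + rng.normal(0, 0.5, n_runs)

    def model_matrix(mains, twofis):
        cols = [np.ones(n_runs)] + [D[:, j] for j in mains] \
            + [D[:, a] * D[:, b] for a, b in twofis]
        return np.column_stack(cols)

    def aic(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ beta) ** 2)
        return n_runs * np.log(rss / n_runs) + 2 * X.shape[1]

    best = (np.inf, None)
    for k in range(1, n_factors + 1):
        for mains in itertools.combinations(range(n_factors), k):
            # Strong heredity: allow only interactions whose parent main
            # effects are both in the model.
            parents = list(itertools.combinations(mains, 2))
            for j in range(len(parents) + 1):
                for twofis in itertools.combinations(parents, j):
                    X = model_matrix(mains, twofis)
                    if X.shape[1] < n_runs:       # keep the model estimable
                        best = min(best, (aic(X), (mains, twofis)))

    print("Best model by AIC (mains, two-factor interactions):", best[1])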

A final piece of the research provides a method for augmenting no-confounding designs based on the model spaces and maximum average D-efficiency. Several augmented designs are provided for different situations. A final simulation with the augmented designs shows strong results for adding four additional runs when time and resources permit.
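
A simplified sketch of the augmentation idea follows: four runs are added greedily to maximize the average log-determinant of the information matrix over a small set of candidate models, a stand-in for the maximum average D-efficiency criterion; the base design and model set are illustrative assumptions.

    import itertools
    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical 16-run, 6-factor two-level base design (a random stand-in).
    n_factors = 6
    D0 = rng.choice([-1.0, 1.0], size=(16, n_factors))

    def expand(D, twofis):
        # Model matrix with intercept, all main effects, and the listed 2FIs.
        cols = [np.ones(len(D))] + [D[:, j] for j in range(n_factors)] \
            + [D[:, a] * D[:, b] for a, b in twofis]
        return np.column_stack(cols)

    def avg_logdet(D, models):
        return np.mean([np.linalg.slogdet(expand(D, m).T @ expand(D, m))[1]
                        for m in models])

    # A small set of plausible interaction models to protect against.
    models = [[(0, 1), (0, 2)], [(2, 3), (4, 5)], [(0, 5), (1, 4), (2, 4)]]

    # Candidate augmentation runs: all 2^6 factorial points.
    candidates = np.array(list(itertools.product([-1.0, 1.0], repeat=n_factors)))

    D = D0.copy()
    for _ in range(4):                         # add four runs greedily
        scores = [avg_logdet(np.vstack([D, run]), models) for run in candidates]
        D = np.vstack([D, candidates[np.argmax(scores)]])

    print("Four augmentation runs chosen:\n", D[-4:])
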
ContributorsMetcalfe, Carly E (Author) / Montgomery, Douglas C. (Thesis advisor) / Jones, Bradley (Committee member) / Pan, Rong (Committee member) / Pedrielli, Giulia (Committee member) / Arizona State University (Publisher)
Created2020
Description
The main objective of this research is to develop reliability assessment methodologies to quantify the effect of various environmental factors on photovoltaic (PV) module performance degradation. The manufacturers of these photovoltaic modules typically provide a warranty of about 25 years for 20% power degradation from the initial specified power rating. To quantify the reliability of such PV modules, Accelerated Life Testing (ALT) plays an important role. But several obstacles need to be tackled to conduct such experiments, since not enough historical field data have been available. Even if some time-series performance data on maximum output power (Pmax) are available, they may not be useful for developing failure/degradation mode-specific accelerated tests. This is because, to study a specific failure mode, it is essential to use a failure-mode-specific performance variable (such as short-circuit current, open-circuit voltage, or fill factor) that is directly affected by that failure mode, instead of overall power, which may be affected by one or more of the performance variables. Hence, to address several of the above-mentioned issues, this research is divided into three phases.

The first phase deals with developing models to study climate-specific failure modes using failure-mode-specific parameters instead of power degradation. The limited field data collected after a long time (say 18-21 years) are utilized to model the degradation rate, and the developed model is then calibrated to account for several unknown environmental effects using the available qualification testing data. The second phase discusses a cumulative damage modeling method to quantify the effects of various environmental variables on the overall power production of the photovoltaic module. This cumulative degradation modeling approach is used to model the power degradation path and to quantify the effects of high-frequency, multiple environmental inputs (such as temperature and humidity measured every minute or hour) when only very sparse response data (power measurements taken quarterly or annually) are available. The third phase deals with an optimal planning and inference framework using an Iterative Accelerated Life Testing (I-ALT) methodology. All the proposed methodologies are demonstrated and validated using appropriate case studies.
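
A toy sketch of cumulative damage modeling with dense environmental inputs and sparse power responses is shown below; the Arrhenius form, activation energy, and all data are hypothetical stand-ins, not the dissertation's models or measurements.

    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(4)

    # Synthetic hourly module temperatures (K) over five years, standing in
    # for the high-frequency environmental inputs.
    hours = 5 * 365 * 24
    temp_K = (298.0 + 15.0 * np.sin(2 * np.pi * np.arange(hours) / (365 * 24))
              + rng.normal(0, 3, hours))

    # Cumulative damage: each hour contributes an Arrhenius acceleration
    # factor relative to 25 C (the activation energy is hypothetical).
    Ea, k_B = 0.7, 8.617e-5                    # eV, Boltzmann constant (eV/K)
    damage = np.cumsum(np.exp(-Ea / (k_B * temp_K)) / np.exp(-Ea / (k_B * 298.0)))

    # Sparse responses: quarterly normalized Pmax measurements (synthetic).
    idx = np.arange(0, hours, 91 * 24)
    p_max = 1.0 - 4e-6 * damage[idx] + rng.normal(0, 0.002, len(idx))

    # Fit a degradation path that is linear in cumulative damage.
    def path(d, rate):
        return 1.0 - rate * d

    (rate_hat,), _ = curve_fit(path, damage[idx], p_max, p0=[1e-6])
    print(f"Estimated degradation rate per unit damage: {rate_hat:.2e}")
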
ContributorsBala Subramaniyan, Arun (Author) / Pan, Rong (Thesis advisor) / Tamizhmani, Govindasamy (Thesis advisor) / Montgomery, Douglas C. (Committee member) / Wu, Teresa (Committee member) / Kuitche, Joseph (Committee member) / Arizona State University (Publisher)
Created2020
Description
In this dissertation two research questions in the field of applied experimental design were explored. First, methods for augmenting the three-level screening designs called Definitive Screening Designs (DSDs) were investigated. Second, schemes for strategic subdata selection for nonparametric predictive modeling with big data were developed.

Under sparsity, the structure of DSDs can allow for the screening and optimization of a system in one step, but in non-sparse situations estimation of second-order models requires augmentation of the DSD. In this work, augmentation strategies for DSDs were considered, given the assumption that the correct form of the model for the response of interest is quadratic. Series of augmented designs were constructed and explored, and power calculations, model-robustness criteria, model-discrimination criteria, and simulation study results were used to identify the number of augmented runs necessary for (1) effectively identifying active model effects, and (2) precisely predicting a response of interest. When the goal is identification of active effects, it is shown that supersaturated designs are sufficient; when the goal is prediction, it is shown that little is gained by augmenting beyond the design that is saturated for the full quadratic model. Surprisingly, augmentation strategies based on the I-optimality criterion do not lead to better predictions than strategies based on the D-optimality criterion.
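
For reference, the two criteria can be evaluated as follows for a full quadratic model; the design and prediction grid here are illustrative (a face-centered central composite design, not an augmented DSD from this work):

    import itertools
    import numpy as np

    def quad_model(D):
        # Full quadratic model: intercept, main effects, 2FIs, pure quadratics.
        m = D.shape[1]
        cols = [np.ones(len(D))] + [D[:, j] for j in range(m)] \
            + [D[:, a] * D[:, b] for a, b in itertools.combinations(range(m), 2)] \
            + [D[:, j] ** 2 for j in range(m)]
        return np.column_stack(cols)

    def d_criterion(D):
        X = quad_model(D)
        n, p = X.shape
        return np.exp(np.linalg.slogdet(X.T @ X / n)[1] / p)   # larger is better

    def i_criterion(D, grid):
        X = quad_model(D)
        M_inv = np.linalg.inv(X.T @ X)
        F = quad_model(grid)
        # Average scaled prediction variance over the region; smaller is better.
        return len(D) * np.mean(np.einsum("ij,jk,ik->i", F, M_inv, F))

    # Illustrative 3-factor face-centered central composite design.
    corners = np.array(list(itertools.product([-1.0, 1.0], repeat=3)))
    axials = np.vstack([v * np.eye(3)[i] for i in range(3) for v in (-1.0, 1.0)])
    design = np.vstack([corners, axials, np.zeros((3, 3))])

    grid = np.array(list(itertools.product(np.linspace(-1, 1, 5), repeat=3)))
    print("D criterion:", round(d_criterion(design), 4))
    print("I criterion:", round(i_criterion(design, grid), 3))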

Computational limitations can render standard statistical methods infeasible in the face of massive datasets, necessitating subsampling strategies. In the big data context, the primary objective is often prediction, but the correct form of the model for the response of interest is likely unknown. Here, two new methods of subdata selection were proposed. The first is based on clustering, the second is based on space-filling designs, and both are free from model assumptions. The performance of the proposed methods was explored visually via low-dimensional simulated examples, via real data applications, and via large simulation studies. In all cases the proposed methods were compared to existing, widely used subdata selection methods. The conditions under which the proposed methods provide advantages over standard subdata selection strategies were identified.
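
A minimal sketch of the clustering-based idea, keeping the point nearest each cluster centroid as model-free subdata (the sizes and data are illustrative assumptions; the space-filling variant is not shown):

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(5)

    # Stand-in for a big dataset: 20,000 points in 5 dimensions.
    X = rng.normal(size=(20_000, 5))

    # Cluster-based subdata selection: keep the point nearest each centroid so
    # the subdata covers the support of the full data without a model form.
    k = 200
    km = KMeans(n_clusters=k, n_init=1, random_state=0).fit(X)
    dists = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
    subdata = np.array([np.flatnonzero(km.labels_ == c)[np.argmin(dists[km.labels_ == c])]
                        for c in range(k)])
    print("Selected subdata size:", len(subdata))
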
ContributorsNachtsheim, Abigael (Author) / Stufken, John (Thesis advisor) / Fricks, John (Committee member) / Kao, Ming-Hung (Committee member) / Montgomery, Douglas C. (Committee member) / Reiser, Mark R. (Committee member) / Arizona State University (Publisher)
Created2020
Description
A degradation process, as a course of progressive deterioration, commonly exists in many engineering systems. Since most failure mechanisms of these systems can be traced to an underlying degradation process, utilizing degradation data for reliability prediction is much needed. In industry, accelerated degradation tests (ADTs) are widely used to obtain timely reliability information about the system under test. This dissertation develops methodologies for ADT data modeling and analysis.

In the first part of this dissertation, ADT is introduced along with three major challenges in ADT data analysis: the modeling framework, the inference method, and the need to analyze multi-dimensional processes. To overcome these challenges, in the second part, a hierarchical approach to modeling a univariate degradation process, which leads to a nonlinear mixed-effects regression model, is developed. With this modeling framework, the issues of ignoring uncertainties in both data analysis and lifetime prediction, as they arise in an International Organization for Standardization (ISO) standard, are resolved. In the third part, an approach to modeling a bivariate degradation process is addressed. It is developed using copula theory, which brings the benefits of both model flexibility and inference convenience, and is paired with an efficient Bayesian method for reliability evaluation. In the last part, an extension to a multivariate modeling framework is developed. Three fundamental copula classes are applied to model the complex dependence structure among correlated degradation processes. The advantages of the proposed modeling framework and the effect of ignoring tail dependence are demonstrated through simulation studies. Applications of the copula-based multivariate degradation models to both system reliability evaluation and remaining useful life prediction are provided.
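
As a small illustration of the copula construction, the sketch below joins two gamma-distributed degradation increments with a Gaussian copula; the copula family, marginals, and parameters are chosen only for concreteness, whereas the dissertation considers several copula classes and real ADT data.

    import numpy as np
    from scipy.stats import norm, gamma, multivariate_normal, kendalltau

    # Gaussian copula with correlation 0.7 joining two gamma-distributed
    # per-cycle degradation increments (all parameters are hypothetical).
    n_units, n_cycles, rho = 200, 50, 0.7
    z = multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]]).rvs(
        size=n_units * n_cycles, random_state=1)
    u = norm.cdf(z)                                   # dependent uniforms
    inc1 = gamma(a=2.0, scale=0.05).ppf(u[:, 0])      # increments, channel 1
    inc2 = gamma(a=1.5, scale=0.08).ppf(u[:, 1])      # increments, channel 2

    # Accumulate the increments into two dependent degradation paths per unit.
    path1 = inc1.reshape(n_units, n_cycles).cumsum(axis=1)
    path2 = inc2.reshape(n_units, n_cycles).cumsum(axis=1)

    tau, _ = kendalltau(path1[:, -1], path2[:, -1])
    print(f"Kendall's tau between final degradation levels: {tau:.2f}")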

In summary, this dissertation studies and explores the use of statistical methods in analyzing ADT data. All proposed methodologies are demonstrated by case studies.
ContributorsFANG, GUANQI (Author) / Pan, Rong (Thesis advisor) / Montgomery, Douglas C. (Committee member) / Ju, Feng (Committee member) / Hong, Yili (Committee member) / Arizona State University (Publisher)
Created2020
Description
Complex systems are pervasive in science and engineering. Some examples include complex engineered networks such as the internet, the power grid, and transportation networks. The complexity of such systems arises not just from their size, but also from their structure, operation (including control and management), evolution over time, and the fact that people are involved in their design and operation. Our understanding of such systems is limited because their behavior cannot be characterized using traditional techniques of modeling and analysis.

As a step in model development, statistically designed screening experiments may be used to identify the main effects and interactions most significant to a response of a system. However, traditional approaches for screening are ineffective for complex systems because of the size of the experimental design. Consequently, the factors considered are often restricted, but this automatically restricts the interactions that may be identified as well. Alternatively, the designs are restricted to identify only main effects, but this then fails to consider any possible interactions among the factors.

To address this problem, a specific combinatorial design termed a locating array is proposed as a screening design for complex systems. Locating arrays exhibit logarithmic growth in the number of factors because their focus is on identification rather than on measurement. This makes practical the consideration of an order of magnitude more factors in experimentation than traditional screening designs.

As a proof-of-concept, a locating array is applied to screen for main effects and low-order interactions on the response of average transport control protocol (TCP) throughput in a simulation model of a mobile ad hoc network (MANET). A MANET is a collection of mobile wireless nodes that self-organize without the aid of any centralized control or fixed infrastructure. The full-factorial design for the MANET considered is infeasible (with over 10^43 design points), yet a locating array has only 421 design points.
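
To put that scale in perspective, the factor counts in the snippet below are purely hypothetical, chosen only to reach the quoted order of magnitude, not the actual MANET experiment settings:

    # Purely hypothetical factor counts, chosen only to illustrate the scale of
    # a full factorial relative to a 421-run locating array.
    factors, levels = 72, 4
    print(f"Full factorial runs: {float(levels) ** factors:.2e}")   # about 2.2e43
    print("Locating array runs: 421")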

In conjunction with the locating array, a "heavy hitters" algorithm is developed to identify the influential main effects and two-way interactions, correcting for the non-normal distribution of the average throughput and the uneven coverage of terms in the locating array. The significance of the identified main effects and interactions is validated independently using the statistical software JMP.

The statistical characteristics used to evaluate traditional screening designs are also applied to locating arrays. These include the covariance matrix, the fraction of design space, and aliasing, among others. The results lend additional support to the use of locating arrays as screening designs.

The use of locating arrays as screening designs for complex engineered systems is promising as they yield useful models. This facilitates quantitative evaluation of architectures and protocols and contributes to our understanding of complex engineered networks.
ContributorsAldaco-Gastelum, Abraham Netzahualcoyotl (Author) / Syrotiuk, Violet R. (Thesis advisor) / Colbourn, Charles J. (Committee member) / Sen, Arunabha (Committee member) / Montgomery, Douglas C. (Committee member) / Arizona State University (Publisher)
Created2015