This collection includes most of the ASU Theses and Dissertations from 2011 to the present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about the dissertations/theses includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic theses found in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.

Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.

Description
One of the premier technologies for studying human brain functions is event-related functional magnetic resonance imaging (fMRI). The main design issue for such experiments is to find the optimal sequence of mental stimuli. This optimal design sequence allows for collecting informative data to make precise statistical inferences about the inner workings of the brain. Unfortunately, this is not an easy task, especially when the error correlation of the response is unknown at the design stage. In the literature, the maximin approach was proposed to tackle this problem. However, it is an expensive and time-consuming method, especially when the correlated noise follows high-order autoregressive models. The main focus of this dissertation is to develop an efficient approach that reduces the computational resources needed to obtain A-optimal designs for event-related fMRI experiments. One proposed idea is to combine the Kriging approximation method, which is widely used in spatial statistics and computer experiments, with a knowledge-based genetic algorithm. Case studies demonstrate that the new search method achieves design efficiencies similar to those attained by the traditional method while giving a significant reduction in computing time. Another strategy is also proposed for finding such designs by considering only the boundary points of the parameter space of the correlation parameters; its usefulness is likewise demonstrated via case studies. The first part of this dissertation focuses on finding optimal event-related designs for fMRI with simple trials, where each stimulus consists of only one component (e.g., a picture). The study is then extended to the case of compound trials, where stimuli with multiple components (e.g., a cue followed by a picture) are considered.
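
To illustrate the surrogate idea in miniature (a hedged sketch, not the dissertation's algorithm), the following snippet fits a Kriging (Gaussian process) model to a design criterion evaluated at a few correlation-parameter values and then predicts the criterion cheaply on a dense grid; the criterion function, parameter range, and kernel settings are hypothetical stand-ins.

```python
# Sketch: Kriging (Gaussian process) surrogate for an expensive design criterion.
# "expensive_criterion" is a hypothetical stand-in for the A-optimality value of a
# candidate fMRI design under an assumed error-correlation parameter rho.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_criterion(rho: float) -> float:
    # Placeholder for the costly criterion computation at correlation rho.
    return np.exp(-3.0 * rho) + 0.1 * np.sin(8.0 * rho)

# Evaluate the expensive criterion at a few training points only.
rho_train = np.linspace(0.0, 0.9, 6).reshape(-1, 1)
y_train = np.array([expensive_criterion(r) for r in rho_train.ravel()])

# Fit the Kriging surrogate.
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.2),
                              normalize_y=True)
gp.fit(rho_train, y_train)

# Cheap surrogate predictions over a dense grid replace repeated expensive
# evaluations, e.g. inside a genetic-algorithm search over stimulus sequences.
rho_grid = np.linspace(0.0, 0.9, 91).reshape(-1, 1)
pred_mean, pred_sd = gp.predict(rho_grid, return_std=True)
print(pred_mean[:5], pred_sd[:5])
```
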
ContributorsAlrumayh, Amani (Author) / Kao, Ming-Hung (Thesis advisor) / Stufken, John (Committee member) / Reiser, Mark R. (Committee member) / Pan, Rong (Committee member) / Cheng, Dan (Committee member) / Arizona State University (Publisher)
Created2019
Description
Optimal design theory provides a general framework for the construction of experimental designs for categorical responses. For a binary response, where the possible result is one of two outcomes, the logistic regression model is widely used to relate a set of experimental factors with the probability of a positive (or negative) outcome. This research investigates and proposes alternative designs to alleviate the problem of separation in small-sample D-optimal designs for the logistic regression model. Separation causes the non-existence of maximum likelihood parameter estimates and presents a serious problem for model fitting purposes.
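
As a concrete, simplified illustration of separation (a sketch, not the designs studied in this dissertation), the snippet below estimates by simulation how often a tiny single-factor design yields completely or quasi-completely separated binary responses; the design points, model parameters, and one-factor separation check are all assumptions made for illustration.

```python
# Sketch: Monte Carlo estimate of the probability of separation for a small
# single-factor logistic-regression design. With one factor, (quasi-)separation
# occurs when the x-values of the 0-responses and 1-responses do not interleave.
import numpy as np

rng = np.random.default_rng(7)
design = np.array([-1.0, -1.0, 0.0, 0.0, 1.0, 1.0])  # hypothetical 6-run design
beta0, beta1 = 0.5, 2.0                                # assumed model parameters
p = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * design)))

def is_separated(x, y):
    """Single-factor check: all failures lie on one side of all successes."""
    if y.min() == y.max():          # all 0s or all 1s: MLE does not exist either
        return True
    return x[y == 0].max() <= x[y == 1].min() or x[y == 1].max() <= x[y == 0].min()

n_sim = 10_000
count = sum(is_separated(design, rng.binomial(1, p)) for _ in range(n_sim))
print(f"Estimated probability of separation: {count / n_sim:.3f}")
```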

First, it is shown that exact, multi-factor D-optimal designs for the logistic regression model can be susceptible to separation. Several logistic regression models are specified, and exact D-optimal designs of fixed sizes are constructed for each model. Sets of simulated response data are generated to estimate the probability of separation in each design. This study demonstrates through simulation that small-sample D-optimal designs are prone to separation and that separation risk is dependent on the specified model. Additionally, it is demonstrated that exact designs of equal size constructed for the same models may have significantly different chances of encountering separation.

The second portion of this research establishes an effective strategy for augmentation, where additional design runs are judiciously added to eliminate separation that has occurred in an initial design. A simulation study is used to demonstrate that augmenting runs in regions of maximum prediction variance (MPV), where the predicted probability of either response category is 50%, most reliably eliminates separation. However, it is also shown that MPV augmentation tends to yield augmented designs with lower D-efficiencies.

The final portion of this research proposes a novel compound optimality criterion, DMP, which is used to construct locally optimal and robust compromise designs. A two-phase coordinate exchange algorithm is implemented to construct exact locally DMP-optimal designs. To address design dependence issues, a maximin strategy is proposed for designating a robust DMP-optimal design. A case study demonstrates that the maximin DMP-optimal design maintains comparable D-efficiencies to a corresponding Bayesian D-optimal design while offering significantly improved separation performance.
ContributorsPark, Anson Robert (Author) / Montgomery, Douglas C. (Thesis advisor) / Mancenido, Michelle V (Thesis advisor) / Escobedo, Adolfo R. (Committee member) / Pan, Rong (Committee member) / Arizona State University (Publisher)
Created2019
Description
Nonregular designs are a preferable alternative to regular resolution four designs because they avoid confounding two-factor interactions. As a result, nonregular designs can estimate and identify a few active two-factor interactions. However, due to the sometimes complex alias structure of nonregular designs, standard screening strategies can fail to identify all active effects. In this research, two-level nonregular screening designs with orthogonal main effects are discussed. By utilizing knowledge of the alias structure, a design-based model selection process for analyzing nonregular designs is proposed.

The Aliased Informed Model Selection (AIMS) strategy is a design-specific approach that is compared to three generic model selection methods: stepwise regression, the least absolute shrinkage and selection operator (LASSO), and the Dantzig selector. The AIMS approach substantially increases the power to detect active main effects and two-factor interactions versus the aforementioned generic methodologies. This research identifies design-specific model spaces: sets of models that have strong heredity, are all estimable, and exhibit no model confounding. These spaces are then used in the AIMS method, along with design-specific aliasing rules, for model selection decisions. Model spaces and alias rules are identified for three designs: 16-run no-confounding 6-, 7-, and 8-factor designs. The designs are demonstrated with several examples as well as simulations to show the superiority of AIMS in model selection.
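
For context, a minimal sketch of one of the generic benchmarks named above (LASSO, not AIMS itself) applied to a two-level screening situation is shown below; the design matrix, simulated response, active effects, and selection threshold are hypothetical.

```python
# Sketch: LASSO applied to a two-level screening design expanded with all
# two-factor interactions. The design and response below are illustrative only.
import numpy as np
from itertools import combinations
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)
n_runs, n_factors = 16, 6
X_main = rng.choice([-1.0, 1.0], size=(n_runs, n_factors))   # stand-in design

# Expand the model matrix with all two-factor interactions.
pairs = list(combinations(range(n_factors), 2))
X_int = np.column_stack([X_main[:, i] * X_main[:, j] for i, j in pairs])
X = np.hstack([X_main, X_int])

# Simulate a response with two active main effects and one active interaction.
y = 3.0 * X_main[:, 0] - 2.0 * X_main[:, 1] + 1.5 * X_main[:, 0] * X_main[:, 1] \
    + rng.normal(scale=0.5, size=n_runs)

lasso = LassoCV(cv=4).fit(X, y)
labels = [f"x{i+1}" for i in range(n_factors)] + [f"x{i+1}:x{j+1}" for i, j in pairs]
active = [(lab, round(c, 2)) for lab, c in zip(labels, lasso.coef_) if abs(c) > 0.1]
print("Selected effects:", active)
```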

A final piece of the research provides a method for augmenting no-confounding designs based on the model spaces and maximum average D-efficiency. Several augmented designs are provided for different situations. A final simulation with the augmented designs shows strong results for augmenting with four additional runs if time and resources permit.
ContributorsMetcalfe, Carly E (Author) / Montgomery, Douglas C. (Thesis advisor) / Jones, Bradley (Committee member) / Pan, Rong (Committee member) / Pedrielli, Giulia (Committee member) / Arizona State University (Publisher)
Created2020
Description
The main objective of this research is to develop reliability assessment methodologies to quantify the effect of various environmental factors on photovoltaic (PV) module performance degradation. The manufacturers of these photovoltaic modules typically provide a warranty level of about 25 years for 20% power degradation from the initial specified power rating. To quantify the reliability of such PV modules, Accelerated Life Testing (ALT) plays an important role. But there are several obstacles that need to be tackled to conduct such experiments, since not enough historical field data have been available. Even if some time-series performance data of maximum output power (Pmax) are available, they may not be useful for developing failure/degradation mode-specific accelerated tests. This is because, to study specific failure modes, it is essential to use a failure-mode-specific performance variable (like short-circuit current, open-circuit voltage, or fill factor) that is directly affected by the failure mode, instead of overall power, which would be affected by one or more of the performance variables. Hence, to address several of the above-mentioned issues, this research is divided into three phases. The first phase deals with developing models to study climate-specific failure modes using failure-mode-specific parameters instead of power degradation. The limited field data, collected after a long time (say 18-21 years), are utilized to model the degradation rate, and the developed model is then calibrated to account for several unknown environmental effects using the available qualification testing data. The second phase discusses the cumulative damage modeling method to quantify the effects of various environmental variables on the overall power production of the photovoltaic module. Mainly, this cumulative degradation modeling approach is used to model the power degradation path and to quantify the effects of multiple high-frequency environmental inputs (like temperature and humidity measured every minute or hour) when the response data are very sparse (power measurements taken quarterly or annually). The third phase deals with an optimal planning and inference framework using the Iterative Accelerated Life Testing (I-ALT) methodology. All the proposed methodologies are demonstrated and validated using appropriate case studies.
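
As a toy illustration of the warranty-style degradation check described above (a sketch with hypothetical measurements, not the dissertation's calibrated models), the snippet below fits a linear degradation rate to sparse Pmax readings and projects the loss at 25 years against the 20% threshold.

```python
# Sketch: fit a simple linear degradation rate to sparse Pmax measurements and
# compare the projected 25-year loss to the 20% warranty threshold.
# The measurement data below are hypothetical.
import numpy as np

years = np.array([0.0, 5.0, 10.0, 18.0, 21.0])          # sparse inspection times
pmax = np.array([300.0, 293.0, 287.0, 272.0, 268.0])     # hypothetical Pmax (W)

# Normalize to the initial rating and fit a linear degradation path.
rel_power = pmax / pmax[0]
rate_per_year, intercept = np.polyfit(years, rel_power, 1)

loss_at_25y = 1.0 - (intercept + rate_per_year * 25.0)
print(f"Estimated degradation rate: {-rate_per_year * 100:.2f} %/year")
print(f"Projected loss at 25 years: {loss_at_25y * 100:.1f}% "
      f"({'within' if loss_at_25y <= 0.20 else 'beyond'} the 20% warranty limit)")
```
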
ContributorsBala Subramaniyan, Arun (Author) / Pan, Rong (Thesis advisor) / Tamizhmani, Govindasamy (Thesis advisor) / Montgomery, Douglas C. (Committee member) / Wu, Teresa (Committee member) / Kuitche, Joseph (Committee member) / Arizona State University (Publisher)
Created2020
Description
In today’s rapidly changing world and competitive business environment, firms are challenged to build their production and distribution systems to provide the desired customer service at the lowest possible cost. Designing an optimal supply chain by optimizing supply chain operations and decisions is key to achieving these goals.

In this research, a capacity planning and production scheduling mathematical model for a multi-facility, multiple-product supply chain network with significant capital and labor costs is first proposed. This model considers the key levers of capacity configuration at production plants, namely shifts, run rate, down periods, finished goods inventory management, and overtime. It suggests a minimum-cost plan for meeting medium-range demand forecasts that indicates production and inventory levels at plants by time period, the associated manpower plan, and outbound shipments over the planning horizon. This dissertation then investigates two model extensions: production flexibility and pricing. In the first extension, the costs and benefits of investing in production flexibility are studied. In the second extension, product pricing decisions are added to the model for demand shaping, taking into account the price elasticity of demand.

The research develops methodologies to optimize supply chain operations by determining the optimal capacity plan and optimal flows of products among facilities based on a nonlinear mixed-integer programming formulation. For large, real-life cases, the problem is intractable. An alternative formulation and an iterative heuristic algorithm are proposed and tested. The performance and bounds for the heuristic are evaluated. A real-life case study in the automotive industry is considered for the implementation of the proposed models. The implementation results illustrate that the proposed method provides valuable insights for assisting the decision-making process in the supply chain and provides a significant improvement over current practice.
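
To give a flavor of such a formulation (a deliberately tiny, linearized, single-plant sketch rather than the multi-facility nonlinear model developed here), the snippet below optimizes shift levels, production, and inventory over a few periods with PuLP; all demands and costs are hypothetical.

```python
# Sketch: a miniature capacity-planning model with shift levels, production, and
# inventory balance, solved as a linear mixed-integer program with PuLP.
from pulp import LpProblem, LpMinimize, LpVariable, LpInteger, lpSum, value

periods = [1, 2, 3, 4]
demand = {1: 120, 2: 180, 3: 150, 4: 210}      # hypothetical demand forecast
cap_per_shift = 80                              # units producible per shift
shift_cost, hold_cost = 1000.0, 2.0             # per shift and per unit held

prob = LpProblem("mini_capacity_plan", LpMinimize)
shifts = LpVariable.dicts("shifts", periods, lowBound=0, upBound=3, cat=LpInteger)
prod = LpVariable.dicts("prod", periods, lowBound=0)
inv = LpVariable.dicts("inv", periods, lowBound=0)

# Objective: labor (shift) cost plus holding cost.
prob += lpSum(shift_cost * shifts[t] + hold_cost * inv[t] for t in periods)

for t in periods:
    prob += prod[t] <= cap_per_shift * shifts[t]          # capacity from shifts
    prev_inv = inv[t - 1] if t > 1 else 0                  # no starting inventory
    prob += prev_inv + prod[t] == demand[t] + inv[t]       # inventory balance

prob.solve()
for t in periods:
    print(t, int(value(shifts[t])), value(prod[t]), value(inv[t]))
```
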
ContributorsAlmatooq, Nourah (Author) / Askin, Ronald (Thesis advisor) / Sefair, Jorge (Thesis advisor) / Gel, Esma (Committee member) / Pan, Rong (Committee member) / Arizona State University (Publisher)
Created2020
Description
Functional regression models are widely considered in practice. To precisely understand an underlying functional mechanism, a good sampling schedule for collecting informative functional data is necessary, especially when data collection is limited. However, little research has been conducted so far on optimal sampling schedule design for functional regression models. To address this design issue, efficient approaches are proposed for generating the best sampling plan in the functional regression setting. First, three optimal experimental designs are considered under a function-on-function linear model: the schedule that maximizes the relative efficiency for recovering the predictor function, the schedule that maximizes the relative efficiency for predicting the response function, and the schedule that maximizes a mixture of the relative efficiencies of both the predictor and response functions. The obtained sampling plan allows a precise recovery of the predictor function and a precise prediction of the response function. The proposed approach can also be reduced to identify the optimal sampling plan for a scalar-on-function linear regression model. In addition, the optimality criterion for predicting a scalar response using a functional predictor is derived when a quadratic relationship between these two variables is present, and proofs of important properties of the derived optimality criterion are also provided. To find such designs, an algorithm that is comparably fast and can generate nearly optimal designs is proposed. As the optimality criterion includes quantities that must be estimated from prior knowledge (e.g., a pilot study), the effectiveness of the suggested optimal design highly depends on the quality of the estimates. However, in many situations, the estimates are unreliable; thus, a bootstrap aggregating (bagging) approach is employed for enhancing the quality of estimates and for finding sampling schedules that are stable to the misspecification of estimates. Through case studies, it is demonstrated that the proposed designs outperform other designs in terms of accurately predicting the response and recovering the predictor. It is also demonstrated that the bagging-enhanced approach generates a more robust sampling design under misspecification of the estimated quantities.
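
As a loose illustration of schedule search (a generic greedy point-selection sketch, not the relative-efficiency criteria or the algorithm developed in this dissertation), the snippet below picks sampling times from a candidate grid to maximize a simple log-determinant criterion; the polynomial basis, grid, and number of points are assumptions.

```python
# Sketch: greedily choose k sampling times from a candidate grid so that the
# information matrix of a simple polynomial basis has maximal log-determinant.
import numpy as np

def basis(t, degree=4):
    """Simple polynomial basis evaluated at times t (a stand-in for splines)."""
    return np.vander(t, N=degree + 1, increasing=True)

def greedy_schedule(candidates, k, ridge=1e-8):
    chosen = []
    for _ in range(k):
        best_t, best_val = None, -np.inf
        for t in candidates:
            if t in chosen:
                continue
            B = basis(np.array(chosen + [t]))
            sign, logdet = np.linalg.slogdet(B.T @ B + ridge * np.eye(B.shape[1]))
            if logdet > best_val:
                best_t, best_val = t, logdet
        chosen.append(best_t)
    return sorted(chosen)

grid = list(np.round(np.linspace(0.0, 1.0, 21), 2))
print("Selected sampling times:", greedy_schedule(grid, k=6))
```
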
ContributorsRha, Hyungmin (Author) / Kao, Ming-Hung (Thesis advisor) / Pan, Rong (Thesis advisor) / Stufken, John (Committee member) / Reiser, Mark R. (Committee member) / Yan, Hao (Committee member) / Arizona State University (Publisher)
Created2020
Description
A degradation process, as a course of progressive deterioration, commonly exists in many engineering systems. Since most failure mechanisms of these systems can be traced to the underlying degradation process, utilizing degradation data for reliability prediction is much needed. In industry, accelerated degradation tests (ADTs) are widely used to obtain timely reliability information about the system under test. This dissertation develops methodologies for ADT data modeling and analysis.

In the first part of this dissertation, ADT is introduced along with three major challenges in ADT data analysis: the modeling framework, the inference method, and the need to analyze multi-dimensional processes. To overcome these challenges, in the second part, a hierarchical approach to modeling a univariate degradation process, which leads to a nonlinear mixed-effects regression model, is developed. With this modeling framework, the issues of ignoring uncertainties in both data analysis and lifetime prediction, as found in an International Organization for Standardization (ISO) standard, are resolved. In the third part, an approach to modeling a bivariate degradation process is addressed. It is developed using copula theory, which brings the benefits of both model flexibility and inference convenience. An efficient Bayesian method for reliability evaluation is provided for this approach. In the last part, an extension to a multivariate modeling framework is developed. Three fundamental copula classes are applied to model the complex dependence structure among correlated degradation processes. The advantages of the proposed modeling framework and the effect of ignoring tail dependence are demonstrated through simulation studies. Applications of the copula-based multivariate degradation models to both system reliability evaluation and remaining useful life prediction are provided.
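
To illustrate the copula idea on a small scale (a sketch with assumed parameters, not the dissertation's models or its Bayesian inference), the snippet below simulates correlated degradation increments for two performance characteristics using a Gaussian copula with gamma marginals.

```python
# Sketch: simulating one inspection epoch of a bivariate degradation process with
# a Gaussian copula and gamma marginal increments. Parameters are hypothetical;
# the dissertation also considers other copula families not shown here.
import numpy as np
from scipy.stats import norm, gamma

rng = np.random.default_rng(42)
rho = 0.6                                    # assumed copula correlation
cov = np.array([[1.0, rho], [rho, 1.0]])

# Step 1: correlated standard normals -> uniforms via the normal CDF.
z = rng.multivariate_normal([0.0, 0.0], cov, size=1000)
u = norm.cdf(z)

# Step 2: map uniforms through gamma marginals for the two degradation increments
# (e.g., two correlated performance characteristics of one unit).
inc1 = gamma.ppf(u[:, 0], a=2.0, scale=0.5)
inc2 = gamma.ppf(u[:, 1], a=3.0, scale=0.3)

print("Empirical increment correlation:", np.corrcoef(inc1, inc2)[0, 1].round(3))
```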

In summary, this dissertation studies and explores the use of statistical methods in analyzing ADT data. All proposed methodologies are demonstrated by case studies.
ContributorsFang, Guanqi (Author) / Pan, Rong (Thesis advisor) / Montgomery, Douglas C. (Committee member) / Ju, Feng (Committee member) / Hong, Yili (Committee member) / Arizona State University (Publisher)
Created2020
Description
This thesis is developed in the context of biomanufacturing of modern products that have the following properties: they require a short design-to-manufacturing time, they have high variability due to a high desired level of patient personalization, and, as a result, they may be manufactured in low volumes. This area at the intersection of therapeutics and biomanufacturing has become increasingly important: (i) a huge push toward the design of new RNA nanoparticles has revolutionized the science of vaccines due to the COVID-19 pandemic; (ii) while the technology to produce personalized cancer medications is available, efficient design and operation of manufacturing systems are not yet agreed upon. In this work, the focus is on operations research methodologies that can support faster design of novel products, specifically RNA, and on methods for enabling personalization in biomanufacturing, looking specifically at the problem of cancer therapy manufacturing. Across both areas, methods are presented that attempt to embed pre-existing knowledge (e.g., constraints characterizing good molecules, comparisons between structures) as well as to learn problem structure (e.g., the landscape of the reward function while synthesizing the control for a single-use bioreactor). This thesis produced three key outcomes: (i) ExpertRNA, for the prediction of the structure of an RNA molecule given a sequence. RNA structure is fundamental in determining its function; therefore, having efficient tools for such prediction can make all the difference for a scientist trying to understand the optimal molecule configuration. For the first time, the algorithm allows expert evaluation in the loop to judge the partial predictions that the tool produces. (ii) BioMAN, a discrete event simulation tool for the study of single-use biomanufacturing of personalized cancer therapies. The discrete event simulation engine was designed and tailored to handle the efficient scheduling of many parallel events, which is caused by the presence of single-use resources. This is the first simulator of this type for individual therapies. (iii) Part-MCTS, a novel sequential decision-making algorithm to support the control of single-use systems. This tool integrates, for the first time, simulation, Monte Carlo tree search, and optimal computing budget allocation for managing the computational effort.
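
As a generic illustration of the discrete-event idea behind a tool like BioMAN (this sketch is not BioMAN itself; arrival and processing assumptions are hypothetical), the snippet below simulates therapy batches competing for a small pool of single-use bioreactors with an event heap.

```python
# Sketch: a minimal discrete-event simulation of jobs (personalized therapy
# batches) competing for a small pool of single-use bioreactors.
import heapq
import random

random.seed(0)
N_REACTORS, N_JOBS = 2, 8
free_reactors = N_REACTORS
events = []                      # heap of (time, order, kind, job_id)
waiting = []                     # FIFO queue of jobs waiting for a reactor
order = 0

def push(t, kind, job):
    global order
    heapq.heappush(events, (t, order, kind, job))
    order += 1

# Schedule all arrivals up front (hypothetical exponential inter-arrival times).
t = 0.0
for job in range(N_JOBS):
    t += random.expovariate(1 / 5.0)
    push(t, "arrival", job)

while events:
    now, _, kind, job = heapq.heappop(events)
    if kind == "arrival":
        waiting.append((now, job))
    else:                         # "finish": a single-use reactor is released
        free_reactors += 1
        print(f"t={now:6.1f}  job {job} finished")
    # Start as many waiting jobs as there are free reactors.
    while free_reactors and waiting:
        start_t, j = waiting.pop(0)
        free_reactors -= 1
        push(now + random.expovariate(1 / 12.0), "finish", j)
```
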
ContributorsLiu, Menghan (Author) / Pedrielli, Giulia (Thesis advisor) / Bertsekas, Dimitri (Committee member) / Pan, Rong (Committee member) / Sulc, Petr (Committee member) / Wu, Teresa (Committee member) / Arizona State University (Publisher)
Created2023