Description
Nonregular screening designs can be an economical alternative to traditional resolution IV 2^(k-p) fractional factorials. Recently, 16-run nonregular designs, referred to as no-confounding designs, were introduced in the literature. These designs have the property that no pair of main effect (ME) and two-factor interaction (2FI) estimates is completely confounded. In this dissertation, orthogonal arrays were evaluated with many popular design-ranking criteria in order to identify optimal 20-run and 24-run no-confounding designs. Monte Carlo simulation was used to empirically assess the model-fitting effectiveness of the recommended no-confounding designs. The simulation results demonstrated that these new designs, particularly the 24-run designs, are successful at detecting active effects over 95% of the time given sufficient model effect sparsity. The final chapter presents a screening design selection methodology, based on decision trees, to aid in the selection of a screening design from a list of published options. The methodology determines which of a candidate set of screening designs has the lowest expected experimental cost.
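
As a rough illustration of the kind of Monte Carlo evaluation described above, the sketch below simulates responses from a placeholder 16-run, 6-factor +/-1 design (not one of the published no-confounding designs), uses a simple greedy forward-selection fit as a stand-in analysis method, and tallies how often all truly active effects are recovered; the design matrix, effect sizes, and selection rule are all assumptions for illustration:

    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(1)

    # Placeholder 16-run, 6-factor +/-1 design; substitute an actual
    # no-confounding design matrix here (this random one is NOT a published design).
    D = rng.choice([-1.0, 1.0], size=(16, 6))

    # Full model matrix of main effects and two-factor interactions.
    cols = [D[:, i] for i in range(6)]
    names = [f"x{i + 1}" for i in range(6)]
    for i, j in combinations(range(6), 2):
        cols.append(D[:, i] * D[:, j])
        names.append(f"x{i + 1}x{j + 1}")
    X = np.column_stack(cols)

    def forward_select(X, y, n_terms):
        """Greedy forward selection: repeatedly add the column most correlated
        with the current residual (one simple stand-in for a selection method)."""
        active, resid = [], y - y.mean()
        for _ in range(n_terms):
            scores = np.abs(X.T @ resid)
            scores[active] = -np.inf
            active.append(int(np.argmax(scores)))
            beta, *_ = np.linalg.lstsq(X[:, active], y, rcond=None)
            resid = y - X[:, active] @ beta
        return set(active)

    truth = {0, 1, 2, names.index("x1x2")}          # 3 active MEs + 1 active 2FI
    hits = 0
    for _ in range(1000):
        beta = np.zeros(X.shape[1])
        for k in truth:
            beta[k] = rng.choice([-1, 1]) * rng.uniform(2, 3)   # large active effects
        y = X @ beta + rng.normal(size=16)
        hits += truth <= forward_select(X, y, n_terms=5)
    print("empirical detection rate:", hits / 1000)
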
Contributors: Stone, Brian (Author) / Montgomery, Douglas C. (Thesis advisor) / Silvestrini, Rachel T. (Committee member) / Fowler, John W (Committee member) / Borror, Connie M. (Committee member) / Arizona State University (Publisher)
Created: 2013

Description
This dissertation explores different methodologies for combining two popular design paradigms in the field of computer experiments. Space-filling designs are commonly used to ensure good coverage of the design space, but they may not have good properties for model fitting. Optimal designs traditionally perform very well in terms of model fitting, particularly when a polynomial model is intended, but can result in problematic replication when factors turn out to be insignificant. By bringing these two design types together, the positive properties of each can be retained while mitigating potential weaknesses. Hybrid space-filling designs, generated as Latin hypercubes augmented with I-optimal points, are compared to designs of each contributing component. A second design type, called a bridge design, is also evaluated, which further integrates the two design types. Bridge designs are the result of a Latin hypercube undergoing coordinate exchange to reach constrained D-optimality, ensuring that there is zero replication in any one-dimensional projection. Lastly, bridge designs were augmented with I-optimal points with two goals in mind. Augmentation with candidate points generated under the same underlying analysis model reduces the prediction variance without greatly compromising the space-filling property of the design, while augmentation with candidate points generated under a different underlying analysis model can greatly reduce the impact of model misspecification during the design phase. Each of these composite designs is compared to pure space-filling and optimal designs. They typically outperform pure space-filling designs in terms of prediction variance and alphabetic efficiency, while remaining comparable to pure optimal designs at small sample sizes. This makes them excellent candidates for initial experimentation.
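
A minimal sketch of the hybrid idea described above, assuming a small hypothetical example (three factors, arbitrary run sizes) rather than the dissertation's algorithm: start from a Latin hypercube, then greedily append candidate points that most reduce the average prediction variance of a full quadratic model over a reference grid (an I-criterion proxy):

    import numpy as np
    from scipy.stats import qmc

    def quad_model(X):
        """Full quadratic model matrix in k factors: 1, x_i, x_i^2, x_i*x_j."""
        n, k = X.shape
        cols = [np.ones(n)] + [X[:, i] for i in range(k)]
        cols += [X[:, i] ** 2 for i in range(k)]
        cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]
        return np.column_stack(cols)

    def avg_pred_var(X):
        """Average scaled prediction variance over a reference grid (I-criterion proxy)."""
        F = quad_model(X)
        M_inv = np.linalg.inv(F.T @ F)
        G = quad_model(grid)
        return float(np.mean(np.sum((G @ M_inv) * G, axis=1)))

    k = 3
    grid = qmc.Sobol(d=k, seed=0).random(256) * 2 - 1        # reference points in [-1, 1]^k
    cand = qmc.Sobol(d=k, seed=1).random(512) * 2 - 1        # candidate augmentation points

    X = qmc.LatinHypercube(d=k, seed=2).random(15) * 2 - 1   # space-filling base design
    for _ in range(6):                                       # append 6 I-optimal-style points
        scores = [avg_pred_var(np.vstack([X, c])) for c in cand]
        X = np.vstack([X, cand[int(np.argmin(scores))]])

    print("final design size:", X.shape)
    print("average prediction variance:", round(avg_pred_var(X), 3))
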
Contributors: Kennedy, Kathryn (Author) / Montgomery, Douglas C. (Thesis advisor) / Johnson, Rachel T. (Thesis advisor) / Fowler, John W (Committee member) / Borror, Connie M. (Committee member) / Arizona State University (Publisher)
Created: 2013

Description
No-confounding (NC) designs in 16 runs for 6, 7, and 8 factors are non-regular fractional factorial designs that have been suggested as attractive alternatives to the regular minimum aberration resolution IV designs because they do not completely confound any two-factor interactions with each other. These designs allow for potential estimation of main effects and a few two-factor interactions without the need for follow-up experimentation. Analysis methods for non-regular designs are an area of ongoing research, because standard variable selection techniques such as stepwise regression may not always be the best approach. The current work investigates the use of the Dantzig selector for analyzing no-confounding designs. Through a series of examples, it shows that this technique is very effective at identifying the set of active factors in no-confounding designs when there are three or four active main effects and up to two active two-factor interactions.

To evaluate the performance of the Dantzig selector, a simulation study was conducted and the results were analyzed in terms of the percentage of type II errors. In addition, an alternative 6-factor NC design, called the Alternate No-confounding design in six factors, is introduced in this study. The performance of this Alternate NC design is then evaluated using the Dantzig selector as the analysis method. Lastly, a section is dedicated to comparing the performance of the NC-6 and Alternate NC-6 designs.
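
For reference, the Dantzig selector referred to above solves min ||b||_1 subject to ||X'(y - X b)||_inf <= delta, which is a linear program; the sketch below is a generic implementation on synthetic data, with delta chosen ad hoc rather than by any tuning rule from the dissertation:

    import numpy as np
    from scipy.optimize import linprog

    def dantzig_selector(X, y, delta):
        """min ||b||_1  s.t.  ||X^T (y - X b)||_inf <= delta, as an LP with b = u - v."""
        n, p = X.shape
        XtX, Xty = X.T @ X, X.T @ y
        c = np.ones(2 * p)                          # minimize sum(u) + sum(v) = ||b||_1
        A_ub = np.block([[ XtX, -XtX],              #  X^T X b <= delta + X^T y
                         [-XtX,  XtX]])             # -X^T X b <= delta - X^T y
        b_ub = np.concatenate([delta + Xty, delta - Xty])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
        u, v = res.x[:p], res.x[p:]
        return u - v

    # Illustrative use on a synthetic screening problem (not a published design).
    rng = np.random.default_rng(0)
    X = rng.choice([-1.0, 1.0], size=(16, 21))      # 21 candidate effect columns
    beta_true = np.zeros(21)
    beta_true[[0, 2, 7]] = [2.0, -1.5, 1.0]         # three active effects
    y = X @ beta_true + rng.normal(scale=0.5, size=16)
    beta_hat = dantzig_selector(X, y, delta=5.0)
    print(np.flatnonzero(np.abs(beta_hat) > 0.5))   # columns flagged as active
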
Contributors: Krishnamoorthy, Archana (Author) / Montgomery, Douglas C. (Thesis advisor) / Borror, Connie (Thesis advisor) / Pan, Rong (Committee member) / Arizona State University (Publisher)
Created: 2014

Description
The ever-changing economic landscape has forced many companies to re-examine their supply chains. Global resourcing and outsourcing of processes is a strategy many organizations have adopted to reduce cost and to increase their global footprint. This has, however, resulted in increased process complexity and reduced customer satisfaction. In order to meet and exceed customer expectations, many companies are forced to improve quality and on-time delivery, and have looked towards Lean Six Sigma as an approach to enable process improvement. The Lean Six Sigma literature is rich in deployment strategies; however, there is a general lack of a mathematical approach to deploying Lean Six Sigma in a global enterprise, including both project identification and prioritization. The research presented here is two-fold. First, a process characterization framework is presented to evaluate processes based on eight characteristics. An unsupervised learning technique, using clustering algorithms, is then utilized to group processes that are Lean Six Sigma conducive. The approach helps Lean Six Sigma deployment champions identify key areas within the business on which to focus a Lean Six Sigma deployment. A case study is presented in which 33% of the processes were found to be Lean Six Sigma conducive. Second, having identified the parts of the business that are Lean Six Sigma conducive, the next steps are to formulate and prioritize a portfolio of projects. Very often the deployment champion is faced with the decision of selecting a portfolio of Lean Six Sigma projects that meets multiple objectives, which could include maximizing productivity, customer satisfaction, or return on investment, while meeting certain budgetary constraints. A multi-period 0-1 knapsack problem is presented that maximizes the expected net savings of the Lean Six Sigma portfolio over the life cycle of the deployment. Finally, a case study is presented that demonstrates the application of the model in a large multinational company. Traditionally, Lean Six Sigma found its roots in manufacturing. The research presented in this dissertation also emphasizes the applicability of the methodology to the non-manufacturing space. Additionally, a comparison is conducted between manufacturing and non-manufacturing processes to highlight the challenges in deploying the methodology in both spaces.
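
A single-period simplification of the portfolio model described above can be written as a 0-1 knapsack and solved by dynamic programming; the project savings, costs, and budget below are made-up illustrations, and the dissertation's multi-period formulation is not reproduced here:

    def select_projects(savings, costs, budget):
        """0-1 knapsack: choose projects maximizing expected net savings
        subject to a total budget (costs and budget in whole $K)."""
        n = len(savings)
        best = [[0.0] * (budget + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            for b in range(budget + 1):
                best[i][b] = best[i - 1][b]                       # skip project i-1
                if costs[i - 1] <= b:                             # or fund it
                    take = best[i - 1][b - costs[i - 1]] + savings[i - 1]
                    best[i][b] = max(best[i][b], take)
        chosen, b = [], budget                                    # recover the chosen set
        for i in range(n, 0, -1):
            if best[i][b] != best[i - 1][b]:
                chosen.append(i - 1)
                b -= costs[i - 1]
        return best[n][budget], sorted(chosen)

    # Hypothetical Lean Six Sigma candidates: (expected net savings $K, cost $K).
    savings = [120.0, 75.0, 200.0, 90.0, 60.0]
    costs = [40, 25, 80, 35, 20]
    print(select_projects(savings, costs, budget=120))
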
Contributors: Duarte, Brett Marc (Author) / Fowler, John W (Thesis advisor) / Montgomery, Douglas C. (Thesis advisor) / Shunk, Dan (Committee member) / Borror, Connie (Committee member) / Konopka, John (Committee member) / Arizona State University (Publisher)
Created: 2011

Description
Alternative energy technologies must become more cost effective to achieve grid parity with fossil fuels. Dye sensitized solar cells (DSSCs) are an innovative third-generation photovoltaic technology that has demonstrated tremendous potential to become a revolutionary technology due to recent breakthroughs in fabrication cost. The study here focused on quality improvement measures undertaken to improve fabrication of DSSCs and enhance process efficiency and effectiveness. Several quality improvement methods were implemented to optimize the seven individual steps of the DSSC fabrication process. Lean Manufacturing's 5S method successfully increased efficiency in all of the processes. Six Sigma's DMAIC methodology was used to identify and eliminate the root causes of defects in the critical titanium dioxide deposition process. These optimizations resulted in the following significant improvements in the production process: (1) fabrication time of the DSSCs was reduced by 54%; (2) fabrication procedures were improved to the extent that all critical defects in the process were eliminated; (3) the yield of functioning DSSCs increased from 17% to 90%.
Contributors: Fauss, Brian (Author) / Munukutla, Lakshmi V. (Thesis advisor) / Polesky, Gerald (Committee member) / Madakannan, Arunachalanadar (Committee member) / Arizona State University (Publisher)
Created: 2012

Description
Functional or dynamic responses are prevalent in experiments in the fields of engineering, medicine, and the sciences, but proposals for optimal designs are still sparse for this type of response. Experiments with dynamic responses result in multiple responses taken over a spectrum variable, so the design matrix for a dynamic response has a more complicated structure. In the literature, the optimal design problem for some functional responses has been solved using genetic algorithms (GA) and approximate design methods. The goal of this dissertation is to develop fast computer algorithms for calculating exact D-optimal designs.

First, we demonstrated how traditional exchange methods could be improved to produce a computationally efficient algorithm for finding G-optimal designs. The proposed two-stage algorithm, called the cCEA, uses a clustering-based approach to restrict the set of possible candidate points for the point exchange algorithm (PEA) and then improves G-efficiency using a coordinate exchange algorithm (CEA).
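
The clustering step can be pictured roughly as follows (a generic sketch using scikit-learn's KMeans, not the cCEA itself): a large candidate set is clustered, and only the point nearest each cluster center is retained as a candidate for the exchange algorithm.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(3)
    candidates = rng.uniform(-1, 1, size=(5000, 4))      # large candidate set in [-1, 1]^4

    km = KMeans(n_clusters=50, n_init=10, random_state=0).fit(candidates)

    # Keep only the candidate closest to each cluster center as the restricted set.
    restricted = np.array([
        candidates[np.argmin(np.linalg.norm(candidates - c, axis=1))]
        for c in km.cluster_centers_
    ])
    print(restricted.shape)        # 50 representative candidates instead of 5000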

The second major contribution of this dissertation is the development of fast algorithms for constructing D-optimal designs that determine the optimal sequence of stimuli in fMRI studies. The update formula for the determinant of the information matrix was improved by exploiting the sparseness of the information matrix, leading to faster computation times. The proposed algorithm outperforms the genetic algorithm with respect to both computational efficiency and D-efficiency.
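
The sort of fast determinant update referred to above can be illustrated with the matrix determinant lemma: when one design row z is exchanged for a candidate row x, det(X'X) can be updated from the current inverse rather than recomputed from scratch. The sketch below is a generic illustration, not the fMRI-specific formula:

    import numpy as np

    def det_after_exchange(M, M_inv, det_M, z, x):
        """Determinant of M - z z^T + x x^T via two rank-one updates
        (matrix determinant lemma + Sherman-Morrison), avoiding a full refactorization."""
        a = 1.0 - z @ M_inv @ z                  # det(M - z z^T) = det(M) * a
        M1_inv = M_inv + np.outer(M_inv @ z, z @ M_inv) / a
        b = 1.0 + x @ M1_inv @ x                 # det(M - z z^T + x x^T) = det(M) * a * b
        return det_M * a * b

    rng = np.random.default_rng(0)
    X = rng.choice([-1.0, 1.0], size=(20, 6))
    M = X.T @ X
    M_inv, det_M = np.linalg.inv(M), np.linalg.det(M)

    z = X[4]                                     # design row to remove
    x = rng.choice([-1.0, 1.0], size=6)          # candidate row to add

    fast = det_after_exchange(M, M_inv, det_M, z, x)
    slow = np.linalg.det(M - np.outer(z, z) + np.outer(x, x))
    print(np.isclose(fast, slow))                # True: same value, much cheaper per swap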

The third contribution is a study of optimal experimental designs for more general functional response models. First, the B-spline system is proposed as the non-parametric smoother of the response function, and an algorithm is developed to determine D-optimal sampling points of the spectrum variable. Second, a two-step algorithm is proposed for finding the optimal design for both sampling points and experimental settings. In the first step, the matrix of experimental settings is held fixed while the algorithm optimizes the determinant of the information matrix of a mixed effects model to find the optimal sampling times. In the second step, the optimal sampling times obtained from the first step are held fixed while the algorithm iterates on the information matrix to find the optimal experimental settings. The designs constructed by this approach yield superior performance over other designs found in the literature.
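
A small sketch of the B-spline ingredient described above, with arbitrary knot placement and run sizes (the actual mixed-model information matrix and two-step algorithm are not reproduced): build the spline basis matrix at candidate sampling times and improve the sampling schedule by a simple one-pass exchange on log det(B'B):

    import numpy as np
    from scipy.interpolate import BSpline

    # Cubic B-spline basis on [0, 1] with clamped end knots (arbitrary choices).
    degree = 3
    knots = np.r_[[0.0] * (degree + 1), np.linspace(0.2, 0.8, 4), [1.0] * (degree + 1)]
    n_basis = len(knots) - degree - 1

    def basis_matrix(times):
        """Evaluate every B-spline basis function at the given sampling times."""
        B = np.empty((len(times), n_basis))
        for j in range(n_basis):
            coef = np.zeros(n_basis)
            coef[j] = 1.0
            B[:, j] = BSpline(knots, coef, degree)(times)
        return B

    def log_det(times):
        """log det of the information matrix B^T B for the spline coefficients."""
        B = basis_matrix(np.asarray(times))
        return np.linalg.slogdet(B.T @ B)[1]

    # One greedy exchange pass over candidate sampling times.
    candidates = np.linspace(0.02, 0.98, 49)
    times = list(np.linspace(0.05, 0.95, 12))            # initial sampling schedule
    for i in range(len(times)):
        scores = [log_det(times[:i] + [c] + times[i + 1:]) for c in candidates]
        times[i] = float(candidates[int(np.argmax(scores))])
    print("selected sampling times:", np.round(sorted(times), 2))
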
Contributors: Saleh, Moein (Author) / Pan, Rong (Thesis advisor) / Montgomery, Douglas C. (Committee member) / Runger, George C. (Committee member) / Kao, Ming-Hung (Committee member) / Arizona State University (Publisher)
Created: 2015

Description
Semiconductor manufacturing is one of the most complex manufacturing systems in existence today. Since the semiconductor industry is extremely consumer driven, market demands within this industry change rapidly. It is therefore crucial for these industries to be able to predict cycle time accurately in order to quote reliable delivery dates. Discrete Event Simulation (DES) models are often used to model these complex manufacturing systems in order to generate estimates of the cycle time distribution. However, building and executing such models consumes considerable time and resources. The objective of this research is to determine the influence of input parameters on the cycle time distribution of a semiconductor or high-volume electronics manufacturing system. This will help decision makers implement system changes that improve the predictability of their cycle time distribution without having to run simulation models. In order to understand how input parameters impact the cycle time, Design of Experiments (DOE) is performed. The response variables considered are attributes of the cycle time distribution, including the four moments and percentiles. The input to this DOE is the output from the simulation runs. Main effects, two-way interactions, and three-way interactions for these input variables are analyzed. The implications of these results for real-world scenarios are explained, which would help manufacturers understand the effects of the interactions between the input factors on the estimates of the cycle time distribution. The shape of the cycle time distribution differs across types of systems, and DES requires substantial resources and time to run; in an effort to generalize the results obtained from the semiconductor manufacturing analysis, a non-complex system is also considered.
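
The flavor of that DOE analysis can be sketched as follows: a two-level full factorial in a few hypothetical input parameters, a placeholder function standing in for the DES model, and effects estimated by the usual contrasts on, say, the 95th-percentile cycle time:

    import numpy as np
    from itertools import product

    rng = np.random.default_rng(5)

    def simulated_p95(util, lot, batch):
        """Stand-in for a DES run returning a fake 95th-percentile cycle time;
        replace with calls to the actual simulation model."""
        return 40 + 60 * util + 5 * lot + 8 * util * lot - 3 * batch + rng.normal(scale=1.0)

    levels = {"util": (0.7, 0.9), "lot": (10, 25), "batch": (0, 1)}
    design = np.array(list(product([-1, 1], repeat=3)))          # 2^3 full factorial

    y = np.array([
        simulated_p95(*(levels[f][(c + 1) // 2] for f, c in zip(levels, row)))
        for row in design
    ])

    # Effect estimates: difference in mean response between the high and low level.
    names = ["util", "lot", "batch", "util*lot", "util*batch", "lot*batch"]
    cols = [design[:, 0], design[:, 1], design[:, 2],
            design[:, 0] * design[:, 1], design[:, 0] * design[:, 2], design[:, 1] * design[:, 2]]
    for name, col in zip(names, cols):
        print(f"{name:11s} effect = {np.mean(y[col == 1]) - np.mean(y[col == -1]):7.2f}")
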
Contributors: Salvi, Tanushree Ashutosh (Author) / Bekki, Jennifer M (Thesis advisor) / Sodemann, Angela (Thesis advisor) / Shuaib, Abdelrahman (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2017

Description
The purpose of this paper is to present a case study on the application of the Lean Six Sigma (LSS) quality improvement methodology and tools to study the analysis and improvement of facilities management (FM) services at a healthcare organization. Research literature was reviewed concerning whether or not LSS has been applied in healthcare-based FM, but no such studies have been published. This paper aims to address the lack of an applicable methodology for LSS intervention within the context of healthcare-based FM. The Define, Measure, Analyze, Improve, and Control (DMAIC) framework was followed to test the hypothesis that LSS can improve the service provided by an FM department responsible for the maintenance and repair of furniture and finishes at a large healthcare organization in the southwest United States of America. Quality improvement curricula and resources offered by the case study organization equipped the FM department to apply LSS over the course of a five-month period. Qualitative data were gathered from pre- and post-intervention surveys while quantitative data were gathered with the Organization’s computerized maintenance management system (CMMS) software. Overall, LSS application proved to be useful for the intended purpose. The author proposes that application of LSS by other FM departments to improve their services could also be successful, which is noteworthy and deserving of continued research.
Contributors: Shirey, William T (Author) / Sullivan, Kenneth (Thesis advisor) / Smithwick, Jake (Committee member) / Lines, Brian (Committee member) / Arizona State University (Publisher)
Created: 2017

Description
Developing countries suffer from various health challenges due to inaccessible medical diagnostic laboratories and a lack of resources to establish new laboratories. One way to address these issues is to develop diagnostic systems that are suitable for low-resource settings. In addition, applications requiring rapid analyses further motivate the development of portable, easy-to-use, and accurate Point of Care (POC) diagnostics. Lateral Flow Immunoassays (LFIAs) are among the most successful POC tests, as they satisfy most of the ASSURED criteria. However, factors such as reagent stability and reaction rates limit the performance and robustness of LFIAs. The fluid flow rate in an LFIA significantly affects these factors, and hence it is desirable to maintain an optimal fluid velocity in the porous media.

The main objective of this study is to build a statistical model that enables us to determine the optimal design parameters and ambient conditions for achieving a desired fluid velocity in porous media. This study mainly focuses on the effects of relative humidity and temperature on evaporation in porous media and the impact of geometry on fluid velocity in LFIAs. A set of finite element analyses was performed, and the simulation results were then experimentally verified using Whatman filter paper with different geometries under varying ambient conditions. A designed experiment was conducted to estimate the significant factors affecting the fluid flow rate.

The literature suggests that liquid evaporation is one of the major factors that inhibit fluid penetration and capillary flow in lateral flow immunoassays. The obtained results closely align with the existing literature and show that a desired fluid flow rate can be achieved by tuning the geometry of the porous media. The derived statistical model suggests that a dry, warm atmosphere inhibits the fluid flow rate the most, and a humid, cool atmosphere the least.
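
The statistical model described above is, in essence, a regression of flow rate on ambient and geometric factors; a minimal sketch of fitting such a model on coded factors is shown below (the data-generating line is a made-up placeholder consistent only with the qualitative conclusion, not the study's measurements):

    import numpy as np
    from itertools import product

    rng = np.random.default_rng(11)

    # 2^3 factorial in coded factors: relative humidity, temperature, strip width.
    design = np.array(list(product([-1, 1], repeat=3)), dtype=float)
    rh, temp, width = design.T

    # Placeholder response standing in for measured fluid velocity (arbitrary units):
    # humid/cool conditions speed flow, dry/warm slow it; the width sign is arbitrary.
    velocity = 2.0 + 0.4 * rh - 0.3 * temp + 0.5 * width + rng.normal(scale=0.05, size=8)

    X = np.column_stack([np.ones(8), rh, temp, width, rh * temp])   # one interaction term
    coef, *_ = np.linalg.lstsq(X, velocity, rcond=None)
    for name, b in zip(["intercept", "RH", "temp", "width", "RH*temp"], coef):
        print(f"{name:9s} {b:+.3f}")
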
Contributors: Thamatam, Nipun (Author) / Christen, Jennifer Blain (Thesis advisor) / Goryll, Michael (Committee member) / Thornton, Trevor (Committee member) / Arizona State University (Publisher)
Created: 2019

Description
Optimal design theory provides a general framework for the construction of experimental designs for categorical responses. For a binary response, where the possible result is one of two outcomes, the logistic regression model is widely used to relate a set of experimental factors with the probability of a positive (or negative) outcome. This research investigates and proposes alternative designs to alleviate the problem of separation in small-sample D-optimal designs for the logistic regression model. Separation causes the non-existence of maximum likelihood parameter estimates and presents a serious problem for model fitting purposes.

First, it is shown that exact, multi-factor D-optimal designs for the logistic regression model can be susceptible to separation. Several logistic regression models are specified, and exact D-optimal designs of fixed sizes are constructed for each model. Sets of simulated response data are generated to estimate the probability of separation for each design. The simulations demonstrate that small-sample D-optimal designs are prone to separation and that the risk of separation depends on the specified model. Additionally, exact designs of equal size constructed for the same model may have significantly different chances of encountering separation.
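
The simulation study described above hinges on detecting separation in each generated data set. One standard check is that complete separation exists exactly when some coefficient vector classifies every run correctly with a positive margin, which can be tested with a small linear program; the design, parameter values, and simulation size below are arbitrary illustrations (and quasi-complete separation, which also breaks the MLE, is not covered by this particular test):

    import numpy as np
    from itertools import product
    from scipy.optimize import linprog

    def completely_separated(X, y):
        """LP feasibility test for complete separation: does some coefficient vector b
        give z_i * x_i^T b >= 1 for every run (z_i = +1 if y_i = 1, else -1)?"""
        z = np.where(y == 1, 1.0, -1.0)
        A_ub = -(z[:, None] * X)                       # encodes  -z_i x_i^T b <= -1
        res = linprog(np.zeros(X.shape[1]), A_ub=A_ub, b_ub=-np.ones(len(y)),
                      bounds=(None, None), method="highs")
        return res.status == 0                         # feasible => completely separated

    # Monte Carlo estimate of the separation probability for a small two-factor design.
    rng = np.random.default_rng(2)
    D = np.array(list(product([-1.0, 1.0], repeat=2)))          # 2^2 design, illustrative only
    X = np.column_stack([np.ones(len(D)), D])                   # intercept + two factors
    beta = np.array([0.0, 1.5, 1.5])                            # assumed "true" parameters

    n_sims = 1000
    hits = sum(
        completely_separated(X, rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta))))
        for _ in range(n_sims)
    )
    print("estimated probability of complete separation:", hits / n_sims)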

The second portion of this research establishes an effective strategy for augmentation, where additional design runs are judiciously added to eliminate separation that has occurred in an initial design. A simulation study is used to demonstrate that augmenting runs in regions of maximum prediction variance (MPV), where the predicted probability of either response category is 50%, most reliably eliminates separation. However, it is also shown that MPV augmentation tends to yield augmented designs with lower D-efficiencies.

The final portion of this research proposes a novel compound optimality criterion, DMP, that is used to construct locally optimal and robust compromise designs. A two-phase coordinate exchange algorithm is implemented to construct exact locally DMP-optimal designs. To address design dependence issues, a maximin strategy is proposed for designating a robust DMP-optimal design. A case study demonstrates that the maximin DMP-optimal design maintains comparable D-efficiencies to a corresponding Bayesian D-optimal design while offering significantly improved separation performance.
Contributors: Park, Anson Robert (Author) / Montgomery, Douglas C. (Thesis advisor) / Mancenido, Michelle V (Thesis advisor) / Escobedo, Adolfo R. (Committee member) / Pan, Rong (Committee member) / Arizona State University (Publisher)
Created: 2019