Matching Items (9)

Description
Product reliability has become a top concern of manufacturers, and customers prefer products that perform well over long periods. Because most products last for years, even decades, accelerated life testing (ALT) is used to estimate product lifetime. Much research has been done in the ALT area, and optimal design for ALT is a major topic. This dissertation consists of three main studies. First, a methodology for finding optimal designs for ALT with right censoring and interval censoring is developed; it employs the proportional hazards (PH) model and the generalized linear model (GLM) to simplify the computational process. A sensitivity study is also given to show the effects of the parameters on the designs. Second, an extended version of the I-optimal design for ALT is discussed, and a dual-objective design criterion is defined and demonstrated with several examples. Several graphical tools are also developed to evaluate competing candidate designs. Finally, for situations where more than one model is available, different model-checking designs are discussed.
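
Under interval censoring, the PH/GLM formulation turns each inspection interval into a binomial observation with a complementary log-log link, so a design's D-criterion reduces to the determinant of a weighted information matrix. Below is a minimal sketch of that computation; it is not code from the dissertation, and the planning values and candidate designs are hypothetical.

```python
import numpy as np

def cloglog_weights(eta):
    # GLM weights for a binomial response with a complementary log-log link:
    # mu = 1 - exp(-exp(eta)); w = (dmu/deta)^2 / (mu * (1 - mu))
    mu = 1.0 - np.exp(-np.exp(eta))
    dmu = np.exp(eta - np.exp(eta))          # dmu/deta = exp(eta) * exp(-exp(eta))
    return dmu**2 / (mu * (1.0 - mu))

def log_d_criterion(X, beta):
    # log-determinant of the GLM information matrix X' W X at planning values beta
    W = np.diag(cloglog_weights(X @ beta))
    sign, logdet = np.linalg.slogdet(X.T @ W @ X)
    return logdet if sign > 0 else -np.inf

# toy comparison of two candidate one-factor designs (standardized stress in [0, 1])
beta = np.array([-2.0, 1.5])                 # hypothetical planning values
for levels in ([0.0, 0.5, 1.0], [0.0, 1.0, 1.0]):
    X = np.column_stack([np.ones(3), levels])
    print(levels, round(log_d_criterion(X, beta), 3))
```
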
Contributors: Yang, Tao (Author) / Pan, Rong (Thesis advisor) / Montgomery, Douglas C. (Committee member) / Borror, Connie (Committee member) / Rigdon, Steve (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Accelerated life testing (ALT) is the process of subjecting a product to stress conditions (temperature, voltage, pressure, etc.) in excess of its normal operating levels to accelerate failures. Product failure typically results from multiple stresses acting on it simultaneously. Multi-stress-factor ALTs are challenging because the number of stress factor-level combinations, and hence the number of experiments, grows with the number of factors. Chapter 2 provides an approach for designing ALT plans with multiple stresses using Latin hypercube designs, which reduces the simulation cost without loss of statistical efficiency. A comparison to full-grid and large-sample approximation methods illustrates the approach's computational cost gain and its flexibility in determining optimal stress settings with fewer assumptions and more intuitive unit allocations.
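
Part of the appeal of Latin hypercube designs here is how cheaply they cover a multi-factor stress region: each factor's range is stratified and sampled exactly once per stratum. A sketch using SciPy's quasi-Monte Carlo module; the stress factors and ranges are illustrative, not taken from the dissertation.

```python
import numpy as np
from scipy.stats import qmc

# Latin hypercube candidates over two standardized stress factors, scaled to
# physical ranges (hypothetical temperature and voltage limits for illustration)
sampler = qmc.LatinHypercube(d=2, seed=42)
unit = sampler.random(n=20)                              # 20 points stratified over [0, 1]^2
candidates = qmc.scale(unit, [85.0, 3.3], [125.0, 5.0])  # temperature (C), voltage (V)
print(candidates[:3])
```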

Implicit in the design criteria of current ALT designs is the assumption that the form of the acceleration model is correct, which is unrealistic in many real-world problems. Chapter 3 provides an approach to optimal ALT design for model discrimination that utilizes the Hellinger distance between predictive distributions. The optimal ALT plan at three stress levels was determined, and its performance was compared to a good compromise plan, the best traditional plan, and the well-known 4:2:1 compromise test plan. In the case of linear versus quadratic ALT models, the proposed method increased the test plan's ability to distinguish among competing models and provided better guidance as to which model is appropriate for the experiment.
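
The Hellinger distance has a convenient closed form when the two predictive distributions are normal, as they are on the log scale for lognormal lifetime models; a discrimination design pushes this distance as high as possible. A sketch under that normality assumption, with illustrative numbers.

```python
import numpy as np

def hellinger2_normal(mu1, s1, mu2, s2):
    # closed-form squared Hellinger distance between N(mu1, s1^2) and N(mu2, s2^2)
    v1, v2 = s1**2, s2**2
    bc = np.sqrt(2.0 * s1 * s2 / (v1 + v2)) * np.exp(-((mu1 - mu2) ** 2) / (4.0 * (v1 + v2)))
    return 1.0 - bc          # lies in [0, 1]; larger means easier to discriminate

# e.g., log-lifetime predictions at a use condition from two rival acceleration models
print(hellinger2_normal(mu1=8.2, s1=0.6, mu2=7.5, s2=0.8))
```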

Chapter 4 extends the approach of Chapter 3 to sequential model discrimination for ALT. An initial experiment is conducted to provide the maximum possible information for model discrimination, and the follow-on experiment is planned by leveraging the most current information, allowing for Bayesian model comparison through posterior model probability ratios. Results showed that the plan's performance is adversely impacted by the amount of censoring in the data. In the case of a linear versus quadratic model form at three levels of constant stress, sequential testing can improve the model recovery rate by approximately 8% when the data are complete; however, no apparent advantage to sequential testing was found for right-censored data once censoring exceeds a certain amount.
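
The Bayesian comparison driving the follow-on stage reduces to posterior model probabilities computed from marginal likelihoods. A minimal sketch; the log marginal likelihood values below are made up.

```python
import numpy as np

def posterior_model_probs(log_marglik, prior=None):
    # Bayes' rule over a finite model set: P(M_k | data) is proportional to
    # P(data | M_k) * P(M_k), computed stably on the log scale
    lm = np.asarray(log_marglik, dtype=float)
    prior = np.full(lm.size, 1.0 / lm.size) if prior is None else np.asarray(prior)
    logp = lm + np.log(prior)
    logp -= logp.max()                       # guard against underflow
    p = np.exp(logp)
    return p / p.sum()

# hypothetical log marginal likelihoods for linear vs. quadratic acceleration models;
# the ratio of the resulting probabilities drives the follow-on design
print(posterior_model_probs([-102.4, -105.1]))
```
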
Contributors: Nasir, Ehab (Author) / Pan, Rong (Thesis advisor) / Runger, George C. (Committee member) / Gel, Esma (Committee member) / Kao, Ming-Hung (Committee member) / Montgomery, Douglas C. (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Supply chains are increasingly complex as companies branch out into newer products and markets. In many cases, multiple products with moderate differences in performance and price compete for the same unit of demand. Simultaneous occurrences of multiple scenarios (competitive, disruptive, regulatory, economic, etc.), coupled with business decisions (pricing, product introduction, etc.), can drastically change demand structures within a short period of time. Furthermore, product obsolescence and cannibalization are real concerns due to short product life cycles. Analytical tools that can handle this complexity are important for quantifying the impact of business scenarios and decisions on supply chain performance. Traditional analysis methods struggle in this environment, where large, complex datasets with hundreds of features are becoming the norm in supply chains.

We present an empirical analysis framework termed Scenario Trees that provides a novel representation for impulse and delayed scenario events and a direction for modeling multivariate constrained responses. Among potential learners, supervised learners and feature extraction strategies based on tree-based ensembles are employed to extract the most impactful scenarios and predict their outcome on metrics at different product hierarchies. These models provide accurate predictions in modeling environments characterized by incomplete datasets due to product substitution, missing values, outliers, redundant features, mixed variables, and nonlinear interaction effects. Graphical model summaries are generated to aid model understanding.

Models in complex environments benefit from feature selection methods that extract non-redundant feature subsets from the data. Additional model simplification can be achieved by extracting the specific levels/values that contribute to variable importance. We propose and evaluate new analytical methods to address this problem of feature value selection and study their comparative performance using simulated datasets. We show that supply chain surveillance can be structured as a feature value selection problem. For situations such as new product introduction, a bottom-up approach to scenario analysis is designed using an agent-based simulation and data mining framework. This simulation engine incorporates utility theory, discrete choice models, and diffusion theory and acts as a test bed for enacting different business scenarios. We demonstrate the use of machine learning algorithms to analyze scenarios and generate graphical summaries to aid decision making.
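
As a flavor of the tree-ensemble step, the sketch below fits a random forest on synthetic data (not the dissertation's supply chain datasets) and ranks features by importance, the same mechanism used to surface the most impactful scenarios.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# the ensemble tolerates noisy, interacting features and exposes
# impurity-based importances for ranking scenario/decision variables
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                        # six synthetic features
y = 2.0 * X[:, 0] - X[:, 2] * X[:, 3] + rng.normal(scale=0.1, size=500)

forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
order = np.argsort(forest.feature_importances_)[::-1]
print(order, forest.feature_importances_[order].round(3))  # feature 0 should rank high
```
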
Contributors: Shinde, Amit (Author) / Runger, George C. (Thesis advisor) / Montgomery, Douglas C. (Committee member) / Villalobos, Rene (Committee member) / Janakiram, Mani (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Functional or dynamic responses are prevalent in experiments in engineering, medicine, and the sciences, but proposals for optimal designs are still sparse for this type of response. Experiments with dynamic responses yield multiple responses taken over a spectrum variable, so the design matrix for a dynamic response has a more complicated structure. In the literature, the optimal design problem for some functional responses has been solved using genetic algorithms (GA) and approximate design methods. The goal of this dissertation is to develop fast computer algorithms for calculating exact D-optimal designs.

First, we demonstrated how traditional exchange methods can be improved to yield a computationally efficient algorithm for finding G-optimal designs. The proposed two-stage algorithm, called the cCEA, uses a clustering-based approach to restrict the set of candidate points for a point exchange algorithm (PEA), and then improves the G-efficiency using a coordinate exchange algorithm (CEA).
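
The clustering stage can be pictured as collapsing a dense candidate grid to a small set of representative points before any exchanges are attempted. A rough sketch of that idea, assuming k-means as the clustering step; the dissertation's exact clustering choice is not specified here.

```python
import numpy as np
from sklearn.cluster import KMeans

# Stage 1 of a cCEA-style search (a sketch): cluster a dense candidate grid and
# keep only the cluster centers as the restricted candidate set for the point
# exchange stage; stage 2 would then polish the design with coordinate exchange.
rng = np.random.default_rng(1)
grid = rng.uniform(-1.0, 1.0, size=(5000, 3))        # dense candidates, 3 factors
centers = KMeans(n_clusters=50, n_init=10, random_state=1).fit(grid).cluster_centers_
print(centers.shape)                                 # (50, 3): a much smaller search space
```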

The second major contribution of this dissertation is the development of fast algorithms for constructing D-optimal designs that determine the optimal sequence of stimuli in fMRI studies. The update formula for the determinant of the information matrix was improved by exploiting the sparseness of the information matrix, leading to faster computation times. The proposed algorithm outperforms the genetic algorithm in both computational efficiency and D-efficiency.
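
The determinant update in question is an application of the matrix determinant lemma; exploiting it, together with sparsity, is what makes searching over long stimulus sequences tractable. A generic sketch of the rank-one version, illustrative rather than the dissertation's exact update.

```python
import numpy as np

def logdet_after_adding_run(M_inv, logdet_M, f):
    # Matrix determinant lemma: det(M + f f') = det(M) * (1 + f' M^{-1} f).
    # Adding or swapping one run perturbs the information matrix by rank-one
    # terms, so the update costs O(p^2) instead of an O(p^3) refactorization.
    return logdet_M + np.log(1.0 + f @ M_inv @ f)

# quick self-check against a direct determinant
rng = np.random.default_rng(2)
A = rng.normal(size=(5, 5))
M = A @ A.T + np.eye(5)                  # a positive definite "information matrix"
f = rng.normal(size=5)                   # model expansion of one new run
print(logdet_after_adding_run(np.linalg.inv(M), np.linalg.slogdet(M)[1], f))
print(np.linalg.slogdet(M + np.outer(f, f))[1])   # should match
```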

The third contribution is a study of optimal experimental designs for more general functional response models. First, a B-spline basis is proposed as the non-parametric smoother of the response function, and an algorithm is developed to determine D-optimal sampling points of the spectrum variable. Second, we propose a two-step algorithm for finding the optimal design over both sampling points and experimental settings. In the first step, the matrix of experimental settings is held fixed while the algorithm optimizes the determinant of the information matrix of a mixed-effects model to find the optimal sampling times. In the second step, the optimal sampling times obtained from the first step are held fixed while the algorithm iterates on the information matrix to find the optimal experimental settings. The designs constructed by this approach yield superior performance over other designs found in the literature.
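
To make the first step concrete: with a B-spline smoother, each candidate sampling point contributes a row of basis evaluations, and D-optimality scores the determinant of the resulting information matrix. A sketch assuming SciPy 1.8+ for `BSpline.design_matrix`; the knots, degree, and candidate grid are illustrative.

```python
import numpy as np
from scipy.interpolate import BSpline

# evaluate a clamped cubic B-spline basis at candidate sampling points of the
# spectrum variable and score the fixed-effects information matrix B'B
k = 3
t = np.r_[[0.0] * (k + 1), [0.25, 0.5, 0.75], [1.0] * (k + 1)]   # clamped knot vector
x = np.linspace(0.0, 1.0, 21)                                    # candidate sampling points
B = BSpline.design_matrix(x, t, k).toarray()                     # 21 x 7 basis matrix
sign, logdet = np.linalg.slogdet(B.T @ B)
print(sign, round(logdet, 3))   # an exchange algorithm would maximize this over subsets of x
```
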
Contributors: Saleh, Moein (Author) / Pan, Rong (Thesis advisor) / Montgomery, Douglas C. (Committee member) / Runger, George C. (Committee member) / Kao, Ming-Hung (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Millennials make up the newest generation of the world's population; they are constantly surrounded by technology and are known for having different values than previous generations. Marketers have to adopt new ways to appeal to millennials and secure their loyalty: because millennials are always on the lookout for the next best thing and will "trade up for brands that matter, but trade down when brand value is weak," they pose a challenge for the marketing departments of companies (Fromm, J. & Parks, J.). The airline industry is one of the fastest-growing sectors, as "the total number of people flying on U.S. airlines will increase from 745.5 million in 2014 and grow to 1.15 billion in 2034," which shows that airlines have a wider population to market to and will need to improve their marketing strategies to differentiate themselves from competitors (Power). The financial sector also has a difficult time reaching millennials because "millennials are hesitant to take financial risks," and because they are drowning in college debt while not making as much money as previous generations did (Fromm, J. & Parks, J.). By looking into the marketing strategies of these two industries, specifically their use of social media platforms, an understanding can be gathered of what millennials are attracted to. Along with the marketing strategies of the financial and airline industries, I looked at these industries in different countries to see whether the values of millennials vary across cultures; the countries chosen for this research were the United States and England. The main form of marketing examined was the companies' social media accounts and how they used social networking platforms to reach and engage their consumers, especially those of the millennial generation. The airline companies chosen for further research from England were British Airways, EasyJet, and Virgin Atlantic, while Delta Air Lines, Southwest Airlines, and United were chosen for the U.S. The finance companies chosen from England were Barclays, HSBC, and Lloyds Bank, while Bank of America, JPMorgan Chase, and Wells Fargo were selected for the U.S. These companies were chosen because they are among the top five in their industries and because I have had previous interactions with all of them. The aim was to see what the companies at the top of each industry were doing that set them apart from their competitors in terms of social media marketing content, and whether there were features they lacked or improvements they could make. A survey was also conducted to get a better idea of the attitudes and behaviors of millennials toward the airline and finance industries, as well as toward social media marketing practices.
Contributors: Pathak, Krisha Hemanshu (Author) / Kumar, Ajith (Thesis director) / Arora, Hina (Committee member) / W. P. Carey School of Business (Contributor) / Department of Information Systems (Contributor) / Department of Marketing (Contributor) / Hugh Downs School of Human Communication (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
This thesis, through a thorough literature and content review, discusses the various ways that data analytics and supply chain management intersect. Both fields have been around for a while, but both are greatly aided by the information age we live in today. Today's ERP systems and supply chain software packages use advanced analytic techniques and algorithms to optimize every aspect of supply chain management, including inventory optimization, portfolio management, network design, production scheduling, fleet planning, supplier evaluation, and others. The benefit of these analytic techniques is a reduction in costs as well as an improvement in overall supply chain performance and efficiency. The paper begins with a short historical context on business analytics and optimization, then moves on to the impact and application of analytics in the supply chain today. Following that, the implications of big data are explored, along with how a company might begin to take advantage of big data and what challenges a firm may face along the way. The tools currently used by supply chain professionals are then discussed, followed by a section on the most up-and-coming technologies (the Internet of Things, blockchain technology, additive manufacturing or 3D printing, and machine learning) and how those technologies may further enable the successful use of analytics to improve supply chain management. Companies that take advantage of analytics in their supply chains are sure to maintain a competitive advantage over firms that fail to do so.
Contributors: Cotton, Ryan Aaron (Author) / Taylor, Todd (Thesis director) / Arora, Hina (Committee member) / Department of Information Systems (Contributor) / Department of Supply Chain Management (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-12
Description
The majority of research in experimental design has, to date, focused on designs with only one type of response variable under consideration. In a decision-making process, however, relying on only one objective or criterion can lead to oversimplified, sub-optimal decisions that ignore important considerations. Incorporating multiple, and likely competing, objectives is critical during the decision-making process in order to balance the tradeoffs of all potential solutions. Consequently, the problem of constructing a design for an experiment when multiple types of responses are of interest does not have a clear answer, particularly when the response variables have different distributions. Responses with different distributions place different requirements on the design.

Computer-generated optimal designs are popular design choices for less standard scenarios where classical designs are not ideal. This work presents a new approach to experimental designs for dual-response systems. The normal, binomial, and Poisson distributions are considered for the potential responses. Using the D-criterion for the linear model and the Bayesian D-criterion for the nonlinear models, a weighted criterion is implemented in a coordinate-exchange algorithm. The designs are evaluated and compared across different weights. The sensitivity of the designs to the priors supplied in the Bayesian D-criterion is explored in the third chapter of this work.
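
The flavor of the weighted criterion can be sketched with two of the named distributions: a D-criterion for the normal linear model combined, via a weight w, with a D-criterion for a Poisson GLM. For brevity the sketch below substitutes a locally optimal criterion (fixed planning values) for the Bayesian D-criterion; all numbers are illustrative.

```python
import numpy as np

def weighted_criterion(X, w, beta_pois):
    # weighted D-style criterion (a sketch): w * log det for the linear model
    # plus (1 - w) * log det for a Poisson GLM with log link at planning
    # values beta_pois; a coordinate-exchange search would maximize this
    d_lin = np.linalg.slogdet(X.T @ X)[1]
    W = np.diag(np.exp(X @ beta_pois))          # Poisson GLM weights: w_i = mu_i
    d_pois = np.linalg.slogdet(X.T @ W @ X)[1]
    return w * d_lin + (1.0 - w) * d_pois

# compare two candidate four-run designs in one factor
beta = np.array([0.5, 1.0])                     # hypothetical planning values
for levels in ([-1, -1, 1, 1], [-1, 0, 0, 1]):
    X = np.column_stack([np.ones(4), levels])
    print(levels, round(weighted_criterion(X, w=0.5, beta_pois=beta), 3))
```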

The final section of this work presents a method for a decision-making process involving multiple objectives. There are situations where a decision-maker is interested in several optimal solutions, not just one. These types of decision processes fall into one of two scenarios: 1) wanting to identify the best N solutions to accomplish a goal or specific task, or 2) evaluating a decision based on several primary quantitative objectives along with secondary qualitative priorities. Design-of-experiments selection often involves the second scenario, where the goal is to identify several contending solutions using the primary quantitative objectives, and then use the secondary qualitative objectives to guide the final decision. Layered Pareto Fronts can help identify a richer class of contenders to examine more closely. The method is illustrated with a supersaturated screening design example.
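
A minimal sketch of the underlying computation: a Pareto front is the set of solutions not dominated on any objective, and the layers come from removing the front and recomputing on what remains. The candidate scores below are made up.

```python
import numpy as np

def pareto_front(scores):
    # indices of non-dominated rows, assuming larger is better on every
    # objective; peeling this front off and repeating yields layered fronts
    scores = np.asarray(scores, dtype=float)
    keep = []
    for i, s in enumerate(scores):
        dominated = any(np.all(t >= s) and np.any(t > s) for t in scores)
        if not dominated:
            keep.append(i)
    return keep

# four candidate designs scored on two objectives (e.g., D- and G-efficiency)
print(pareto_front([[0.9, 0.6], [0.7, 0.8], [0.6, 0.5], [0.85, 0.75]]))  # -> [0, 1, 3]
```
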
Contributors: Burke, Sarah Ellen (Author) / Montgomery, Douglas C. (Thesis advisor) / Borror, Connie M. (Thesis advisor) / Anderson-Cook, Christine M. (Committee member) / Pan, Rong (Committee member) / Silvestrini, Rachel (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
In mixture-process variable experiments, it is common for the number of runs to be greater than in mixture-only or process-variable experiments, because the parameters for the mixture components, the process variables, and the interactions of both must all be estimated. In some of these experiments there are variables that are hard to change or that cannot be controlled under normal operating conditions, which often prohibits complete randomization of the experimental runs for practical and economic reasons. Furthermore, the process variables can be categorized into two types: variables that are controllable and directly affect the response, and variables that are uncontrollable and primarily affect the variability of the response. The uncontrollable variables are called noise factors and are assumed controllable in a laboratory environment for the purpose of conducting experiments. A model containing both noise variables and control factors can be used to determine settings of the control factors that make the response "robust" to the variability transmitted from the noise factors. These experiments can be analyzed with a model for the mean response and a model for the slope of the response within a split-plot structure. When considering such designs, low prediction variances for the mean and slope models are desirable. Methods for mixture-process variable designs with noise variables under restricted randomization are demonstrated, and several mixture-process variable designs that are robust to the coefficients of interaction with noise variables are evaluated using fraction of design space plots with respect to their prediction variance properties. Finally, a G-optimal design that minimizes the maximum prediction variance over the entire design region is created using a genetic algorithm.
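
The G-criterion minimized by the genetic algorithm has a simple computational core: evaluate the scaled prediction variance over a grid approximating the design region and take the maximum. A sketch on a toy one-factor design rather than a mixture-process example.

```python
import numpy as np

def max_spv(X, F_grid):
    # G-criterion (a sketch): maximum scaled prediction variance
    # n * f(x)' (X'X)^{-1} f(x) over model vectors F_grid approximating the
    # design region; a genetic algorithm would minimize this over designs
    n = X.shape[0]
    XtX_inv = np.linalg.inv(X.T @ X)
    spv = n * np.einsum('ij,jk,ik->i', F_grid, XtX_inv, F_grid)
    return spv.max()

# first-order model in one factor, evaluated over a grid of the region [-1, 1]
X = np.column_stack([np.ones(4), [-1.0, -1.0, 1.0, 1.0]])
grid = np.linspace(-1, 1, 201)
F = np.column_stack([np.ones_like(grid), grid])
print(max_spv(X, F))   # 2.0, attained at the extremes for this balanced design
```
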
Contributors: Cho, Tae Yeon (Author) / Montgomery, Douglas C. (Thesis advisor) / Borror, Connie M. (Thesis advisor) / Shunk, Dan L. (Committee member) / Gel, Esma S. (Committee member) / Kulahci, Murat (Committee member) / Arizona State University (Publisher)
Created: 2010
Description
Optimal design theory provides a general framework for the construction of experimental designs for categorical responses. For a binary response, where the possible result is one of two outcomes, the logistic regression model is widely used to relate a set of experimental factors with the probability of a positive (or negative) outcome. This research investigates and proposes alternative designs to alleviate the problem of separation in small-sample D-optimal designs for the logistic regression model. Separation causes the non-existence of maximum likelihood parameter estimates and presents a serious problem for model fitting purposes.

First, it is shown that exact, multi-factor D-optimal designs for the logistic regression model can be susceptible to separation. Several logistic regression models are specified, and exact D-optimal designs of fixed sizes are constructed for each model. Sets of simulated response data are generated to estimate the probability of separation in each design. This study demonstrates through simulation that small-sample D-optimal designs are prone to separation and that separation risk depends on the specified model. Additionally, it is demonstrated that exact designs of equal size constructed for the same model may have significantly different chances of encountering separation.
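
One way such separation probabilities can be estimated is to simulate response vectors for a design and run a linear-programming check for separation on each. The sketch below uses the LP characterization of separation in the spirit of Konis's detection method; it is not necessarily the procedure used in this dissertation.

```python
import numpy as np
from scipy.optimize import linprog

def has_separation(X, y):
    # separation (complete or quasi-complete) exists iff some b != 0 satisfies
    # (2y - 1) * (X @ b) >= 0 componentwise with at least one strict inequality,
    # i.e., a direction along which the likelihood increases without bound
    Z = (2.0 * np.asarray(y) - 1.0)[:, None] * np.asarray(X, dtype=float)
    res = linprog(c=-Z.sum(axis=0), A_ub=-Z, b_ub=np.zeros(Z.shape[0]),
                  bounds=[(-1.0, 1.0)] * Z.shape[1], method="highs")
    return res.status == 0 and -res.fun > 1e-9

X = np.column_stack([np.ones(4), [-1.0, -0.5, 0.5, 1.0]])
print(has_separation(X, [0, 0, 1, 1]))   # True: a cut at x = 0 separates perfectly
print(has_separation(X, [0, 1, 0, 1]))   # False: overlapped responses
```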

The second portion of this research establishes an effective strategy for augmentation, where additional design runs are judiciously added to eliminate separation that has occurred in an initial design. A simulation study is used to demonstrate that augmenting runs in regions of maximum prediction variance (MPV), where the predicted probability of either response category is 50%, most reliably eliminates separation. However, it is also shown that MPV augmentation tends to yield augmented designs with lower D-efficiencies.
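
Following the abstract's description of MPV regions, a simple way to pick augmentation runs is to rank candidate points by how close their fitted linear predictor is to zero, i.e., a predicted probability of 50%. A sketch with hypothetical fitted coefficients.

```python
import numpy as np

def mpv_candidates(beta_hat, F_grid, top=4):
    # MPV augmentation (a sketch): per the abstract, prediction variance peaks
    # where the predicted probability is 50%, i.e. f(x)' beta = 0, so rank
    # candidate points by how close their linear predictor is to zero
    eta = F_grid @ beta_hat
    return np.argsort(np.abs(eta))[:top]

beta_hat = np.array([0.4, 1.2])             # hypothetical fitted coefficients
grid = np.linspace(-1, 1, 9)
F = np.column_stack([np.ones_like(grid), grid])
print(grid[mpv_candidates(beta_hat, F)])    # points nearest eta = 0 (x near -1/3)
```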

The final portion of this research proposes a novel compound optimality criterion, DMP, that is used to construct locally optimal and robust compromise designs. A two-phase coordinate exchange algorithm is implemented to construct exact locally DMP-optimal designs. To address design dependence issues, a maximin strategy is proposed for designating a robust DMP-optimal design. A case study demonstrates that the maximin DMP-optimal design maintains comparable D-efficiencies to a corresponding Bayesian D-optimal design while offering significantly improved separation performance.
Contributors: Park, Anson Robert (Author) / Montgomery, Douglas C. (Thesis advisor) / Mancenido, Michelle V. (Thesis advisor) / Escobedo, Adolfo R. (Committee member) / Pan, Rong (Committee member) / Arizona State University (Publisher)
Created: 2019