Matching Items (207)
Description
Supply chains are increasingly complex as companies branch out into newer products and markets. In many cases, multiple products with moderate differences in performance and price compete for the same unit of demand. Simultaneous occurrences of multiple scenarios (competitive, disruptive, regulatory, economic, etc.), coupled with business decisions (pricing, product introduction, etc.), can drastically change demand structures within a short period of time. Furthermore, product obsolescence and cannibalization are real concerns due to short product life cycles. Analytical tools that can handle this complexity are important for quantifying the impact of business scenarios and decisions on supply chain performance. Traditional analysis methods struggle in this environment, where large, complex datasets with hundreds of features are becoming the norm in supply chains. We present an empirical analysis framework termed Scenario Trees that provides a novel representation for impulse and delayed scenario events and a direction for modeling multivariate constrained responses. Among potential learners, supervised learners and feature extraction strategies based on tree-based ensembles are employed to extract the most impactful scenarios and predict their outcomes on metrics at different product hierarchies. These models provide accurate predictions in modeling environments characterized by incomplete datasets due to product substitution, missing values, outliers, redundant features, mixed variables, and nonlinear interaction effects. Graphical model summaries are generated to aid model understanding. Models in complex environments benefit from feature selection methods that extract non-redundant feature subsets from the data. Additional model simplification can be achieved by extracting the specific levels/values that contribute to variable importance. We propose and evaluate new analytical methods to address this problem of feature value selection and study their comparative performance using simulated datasets. We show that supply chain surveillance can be structured as a feature value selection problem. For situations such as new product introduction, a bottom-up approach to scenario analysis is designed using an agent-based simulation and data mining framework. This simulation engine encompasses utility theory, discrete choice models, and diffusion theory, and acts as a test bed for enacting different business scenarios. We demonstrate the use of machine learning algorithms to analyze scenarios and generate graphical summaries to aid decision making.
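The abstract does not include code; as a rough illustration of the tree-ensemble feature-importance idea it describes (not the dissertation's Scenario Trees implementation), the sketch below fits a random forest to simulated scenario indicators and ranks them by importance. The data, column names, and use of scikit-learn are illustrative assumptions only.

```python
# Illustrative sketch: ranking simulated "scenario" features by importance
# with a tree-based ensemble (hypothetical data, not the dissertation's code).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
# Hypothetical impulse/delayed scenario indicators and a demand response.
X = pd.DataFrame({
    "price_cut": rng.integers(0, 2, n),          # impulse event
    "competitor_launch": rng.integers(0, 2, n),  # delayed event
    "regulation": rng.integers(0, 2, n),
    "noise": rng.normal(size=n),
})
y = 100 - 15 * X["price_cut"] - 25 * X["competitor_launch"] + rng.normal(0, 5, n)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
for name, imp in sorted(zip(X.columns, model.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name:>18s}: {imp:.3f}")
```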
Contributors: Shinde, Amit (Author) / Runger, George C. (Thesis advisor) / Montgomery, Douglas C. (Committee member) / Villalobos, Rene (Committee member) / Janakiram, Mani (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The ever-changing economic landscape has forced many companies to re-examine their supply chains. Global resourcing and outsourcing of processes has been a strategy many organizations have adopted to reduce cost and to increase their global footprint. This has, however, resulted in increased process complexity and reduced customer satisfaction. In order to meet and exceed customer expectations, many companies are forced to improve quality and on-time delivery, and have looked towards Lean Six Sigma as an approach to enable process improvement. The Lean Six Sigma literature is rich in deployment strategies; however, there is a general lack of a mathematical approach to deploying Lean Six Sigma in a global enterprise, including both project identification and prioritization. The research presented here is two-fold. First, a process characterization framework is presented to evaluate processes based on eight characteristics. An unsupervised learning technique, using clustering algorithms, is then utilized to group processes that are Lean Six Sigma conducive. The approach helps Lean Six Sigma deployment champions identify key areas within the business on which to focus a Lean Six Sigma deployment. A case study is presented in which 33% of the processes were found to be Lean Six Sigma conducive. Second, having identified the parts of the business that are Lean Six Sigma conducive, the next steps are to formulate and prioritize a portfolio of projects. Very often the deployment champion is faced with the decision of selecting a portfolio of Lean Six Sigma projects that meets multiple objectives, which could include maximizing productivity, customer satisfaction, or return on investment, while meeting certain budgetary constraints. A multi-period 0-1 knapsack problem is presented that maximizes the expected net savings of the Lean Six Sigma portfolio over the life cycle of the deployment. Finally, a case study is presented that demonstrates the application of the model in a large multinational company. Traditionally, Lean Six Sigma found its roots in manufacturing. The research presented in this dissertation also emphasizes the applicability of the methodology to the non-manufacturing space. Additionally, a comparison is conducted between manufacturing and non-manufacturing processes to highlight the challenges in deploying the methodology in both spaces.
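As a hedged illustration of a multi-period 0-1 knapsack formulation of the kind described above (not the dissertation's model), the sketch below selects a project portfolio that maximizes expected net savings subject to a per-period budget using the PuLP solver; the project data, costs, and period structure are made up.

```python
# Illustrative multi-period 0-1 knapsack sketch (hypothetical data),
# not the formulation from the dissertation.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary, value

savings = [120, 90, 60, 150]                      # expected net savings per project
cost = [[40, 20], [30, 30], [10, 25], [60, 40]]   # cost of each project in each period
budget = [100, 80]                                # budget available per period
n, T = len(savings), len(budget)

x = LpVariable.dicts("select", range(n), cat=LpBinary)
prob = LpProblem("lss_portfolio", LpMaximize)
prob += lpSum(savings[i] * x[i] for i in range(n))          # objective: total savings
for t in range(T):                                           # budget constraint per period
    prob += lpSum(cost[i][t] * x[i] for i in range(n)) <= budget[t]

prob.solve()
chosen = [i for i in range(n) if x[i].value() == 1]
print("selected projects:", chosen, "expected savings:", value(prob.objective))
```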
Contributors: Duarte, Brett Marc (Author) / Fowler, John W. (Thesis advisor) / Montgomery, Douglas C. (Thesis advisor) / Shunk, Dan (Committee member) / Borror, Connie (Committee member) / Konopka, John (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
This dissertation presents methods for the evaluation of ocular surface protection during natural blink function. The evaluation of ocular surface protection is especially important in the diagnosis of dry eye and the evaluation of dry eye severity in clinical trials. Dry eye is a highly prevalent disease affecting a large fraction (between 11% and 22%) of an aging population. There is only one approved therapy, with limited efficacy, which results in a huge unmet need. The reason so few drugs have reached approval is the lack of a recognized therapeutic pathway with reproducible endpoints. While the interplay between blink function and ocular surface protection has long been recognized, all currently used evaluation techniques have addressed blink function in isolation from tear film stability, the gold standard of which is Tear Film Break-Up Time (TFBUT). In the first part of this research, a manual technique for calculating ocular surface protection during natural blink function through video analysis is developed and evaluated for its ability to differentiate between dry eye and normal subjects, and the results are compared with those of TFBUT. In the second part of this research, the technique is improved in precision and automated through the use of video analysis algorithms. This software, called the OPI 2.0 System, is evaluated for accuracy and precision, and comparisons are made between the OPI 2.0 System and other currently recognized dry eye diagnostic techniques (e.g., TFBUT). In the third part of this research, the OPI 2.0 System is deployed for the evaluation of subjects before, immediately after, and 30 minutes after exposure to a controlled adverse environment (CAE), and once again the results are compared and contrasted against commonly used dry eye endpoints. The results demonstrate that the evaluation of ocular surface protection using the OPI 2.0 System offers superior accuracy to the current standard, TFBUT.
Contributors: Abelson, Richard (Author) / Montgomery, Douglas C. (Thesis advisor) / Borror, Connie (Committee member) / Shunk, Dan (Committee member) / Pan, Rong (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Within the vast area of study in Organizational Change lies the industrial application of Change Management, which includes the understanding of both resisters and facilitators of organizational change. This dissertation presents an approach for gauging levels of change as they relate to both external and internal organizational factors. The arena for such a test is provided by introducing the same initiative change model, which attempts to improve transparency and accountability, across six different organizations in which the varying results of change are measured. The change model itself consists of an interdisciplinary approach that emphasizes education in advanced organizational measurement techniques as a fundamental driver of converging change. The observations are documented in real-time observed case studies of six organizations as they progressed through the change process. This research also introduces a scaled metric for determining preliminary levels of change and endeavors to test both internal and external, or environmental, factors of change. A key contribution of the work is the analysis between observed and surveyed data, where a grounded theory analysis is used to help answer the question of what the factors of change in organizations are. This work is considered foundational in real-time observational studies but holds promise for future contributions that would further elaborate on the phenomenon of prescribed organizational change.
Contributors: Stone, Brian (Author) / Sullivan, Kenneth T. (Thesis advisor) / Verdini, William (Committee member) / Badger, William (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Yield is a key process performance characteristic in the capital-intensive semiconductor fabrication process. In an industry where machines cost millions of dollars and cycle times span several months, predicting and optimizing yield are critical to process improvement, customer satisfaction, and financial success. Semiconductor yield modeling is essential to identifying processing issues, improving quality, and meeting customer demand in the industry. However, the complicated fabrication process, the massive amount of data collected, and the number of models available make yield modeling a complex and challenging task. This work presents modeling strategies to forecast yield using generalized linear models (GLMs) based on defect metrology data. The research is divided into three main parts. First, the data integration and aggregation necessary for model building are described, and GLMs are constructed for yield forecasting. This technique yields results at both the die and the wafer levels, outperforms existing models found in the literature based on prediction errors, and identifies significant factors that can drive process improvement. This method also allows the nested structure of the process to be considered in the model, improving predictive capabilities and violating fewer assumptions. To account for the random sampling typically used in fabrication, the work is extended by using generalized linear mixed models (GLMMs) and a larger dataset to show the differences between batch-specific and population-averaged models in this application and how they compare to GLMs. These results show some additional improvements in forecasting ability under certain conditions and highlight the differences between the significant effects identified by the GLM and GLMM models. The effects of link functions and sample size are also examined at the die and wafer levels. The third part of this research describes a methodology for integrating classification and regression trees (CART) with GLMs. This technique uses the terminal nodes identified in the classification tree to add predictors to a GLM. This method enables the model to consider important interaction terms in a simpler way than with the GLM alone, and provides valuable insight into the fabrication process through the combination of the tree structure and the statistical analysis of the GLM.
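The abstract summarizes but does not show the models; a minimal sketch of a defect-based binomial GLM for die-level pass/fail yield, fit with statsmodels on simulated data, is below. The defect features, the logit link, and the data are assumptions for illustration only, not the dissertation's dataset or model.

```python
# Minimal illustrative GLM for die-level yield vs. defect counts
# (simulated data; not the dissertation's dataset or model).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
defects = rng.poisson(lam=2.0, size=n)          # hypothetical defect count per die
layer = rng.integers(0, 2, size=n)              # hypothetical process-layer flag
logit = 2.0 - 0.8 * defects - 0.5 * layer       # assumed true relationship
pass_fail = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([defects, layer]))
model = sm.GLM(pass_fail, X, family=sm.families.Binomial()).fit()
print(model.summary())
```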
Contributors: Krueger, Dana Cheree (Author) / Montgomery, Douglas C. (Thesis advisor) / Fowler, John (Committee member) / Pan, Rong (Committee member) / Pfund, Michele (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
There has been much research involving the simultaneous monitoring of several correlated quality characteristics that relies on the assumptions of multivariate normality and independence. In real-world applications, these assumptions are not always met, particularly when small counts are of interest. In general, the use of the normal approximation to the Poisson distribution seems to be justified when the Poisson means are large enough. A new two-sided Multivariate Poisson Exponentially Weighted Moving Average (MPEWMA) control chart is proposed, and the control limits are derived directly from the multivariate Poisson distribution. The MPEWMA and the conventional Multivariate Exponentially Weighted Moving Average (MEWMA) charts are evaluated using the multivariate Poisson framework. The MPEWMA chart outperforms the MEWMA with normal-theory limits in terms of in-control average run lengths. An extension of the two-sided MPEWMA to a one-sided version is developed; this is useful for detecting an increase in the count means. The results of the comparison with the one-sided MEWMA chart are quite similar to the two-sided case. The implementation of the MPEWMA scheme for multiple count data is illustrated, with step-by-step guidelines and several examples. In addition, the method is compared to other model-based control charts that monitor residual values, such as regression adjustment. The MPEWMA scheme shows better performance in detecting mean shifts in count data when positive correlation exists among the variables.
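As a hedged sketch of the EWMA smoothing that underlies both the MEWMA and the proposed MPEWMA charts, the code below applies the standard multivariate EWMA recursion Z_t = lambda*X_t + (1 - lambda)*Z_{t-1} to correlated count vectors and flags points whose T²-type statistic exceeds a placeholder limit h. The distribution-based MPEWMA limits from the abstract are not reproduced here; the data, target means, and limit are assumptions for illustration.

```python
# Generic multivariate EWMA recursion on correlated count data (illustration
# only; the MPEWMA limits derived from the multivariate Poisson are not shown).
import numpy as np

rng = np.random.default_rng(2)
lam, h = 0.2, 12.0                       # smoothing constant and placeholder limit
common = rng.poisson(2.0, size=200)      # shared term induces positive correlation
X = np.column_stack([rng.poisson(2.0, 200) + common,
                     rng.poisson(4.0, 200) + common])
X[100:] += 3                             # simulated upward shift after t = 100

mu = np.array([4.0, 6.0])                # assumed in-control count means
sigma = np.cov(X[:100].T)                # covariance estimated from phase-I data
sigma_z = (lam / (2 - lam)) * sigma      # asymptotic EWMA covariance
z = np.zeros(2)
for t, x in enumerate(X):
    z = lam * (x - mu) + (1 - lam) * z   # EWMA of deviations from target
    t2 = z @ np.linalg.solve(sigma_z, z)
    if t2 > h:
        print(f"signal at t={t}, statistic={t2:.1f}")
        break
```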
Contributors: Laungrungrong, Busaba (Author) / Montgomery, Douglas C. (Thesis advisor) / Borror, Connie (Thesis advisor) / Fowler, John (Committee member) / Young, Dennis (Committee member) / Arizona State University (Publisher)
Created: 2010
Description
The emergence of new technologies, as well as a fresh look at analyzing existing processes, has given rise to a new type of response characteristic known as a profile. Profiles are useful when a quality variable is functionally dependent on one or more explanatory, or independent, variables. So, instead of observing a single measurement on each unit or product, a set of values is obtained over a range which, when plotted, takes the shape of a curve. Traditional multivariate monitoring schemes are inadequate for monitoring profiles due to high dimensionality and poor use of the information stored in functional form, leading to very large variance-covariance matrices. Profile monitoring has become an important area of study in statistical process control and is being actively addressed by researchers across the globe. This research explores the area in three parts. A comparative analysis is conducted of two linear profile-monitoring techniques based on the probability of false alarm and the average run length (ARL) under shifts in the model parameters. The two techniques studied are a control chart based on the classical calibration statistic and a control chart based on the parameters of a linear model. The research demonstrates that a profile characterized by a parametric model is a more efficient monitoring scheme than one based on monitoring only the individual features of the profile. A likelihood-ratio-based changepoint control chart is proposed for detecting a sustained step shift in low-order polynomial profiles. The test statistic is plotted on a Shewhart-like chart with control limits derived from asymptotic distribution theory. The statistic is factored to reflect the variation due to the parameters in order to aid in interpreting an out-of-control signal. The research also looks at the robust parameter design study of profiles, also referred to as signal-response systems. Such experiments are often necessary for understanding and reducing the common-cause variation in systems. A split-plot approach is proposed to analyze the profiles. It is demonstrated that explicit modeling of variance components using a generalized linear mixed models approach gives more precise point estimates and tighter confidence intervals.
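As a rough sketch of the parametric approach the abstract favors (monitoring fitted model coefficients rather than individual profile points), the code below fits a straight line to each simulated profile and charts the estimated slope against simple 3-sigma limits computed from an in-control set. The data and limits are illustrative only; this is not the calibration-statistic or changepoint chart studied in the dissertation.

```python
# Illustrative linear-profile monitoring: chart fitted slopes against
# simple 3-sigma limits (toy data, not the dissertation's charts).
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 20)                      # common explanatory values

def simulate_profile(intercept=5.0, slope=2.0, sd=0.5):
    return intercept + slope * x + rng.normal(0, sd, x.size)

# Phase I: estimate in-control distribution of the fitted slope.
phase1 = [np.polyfit(x, simulate_profile(), 1)[0] for _ in range(50)]
center, spread = np.mean(phase1), np.std(phase1, ddof=1)
ucl, lcl = center + 3 * spread, center - 3 * spread

new_profile = simulate_profile(slope=2.3)       # sustained shift in slope
slope_hat = np.polyfit(x, new_profile, 1)[0]
print(f"slope={slope_hat:.2f}, limits=({lcl:.2f}, {ucl:.2f}),",
      "signal" if not (lcl <= slope_hat <= ucl) else "in control")
```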
Contributors: Gupta, Shilpa (Author) / Montgomery, Douglas C. (Thesis advisor) / Borror, Connie M. (Thesis advisor) / Fowler, John (Committee member) / Prewitt, Kathy (Committee member) / Kulahci, Murat (Committee member) / Arizona State University (Publisher)
Created: 2010
Description
In mixture-process variable experiments, it is common that the number of runs is greater than in mixture-only or process-variable experiments. These experiments have to estimate the parameters for the mixture components, the process variables, and the interactions between both types of variables. In some of these experiments there are variables that are hard to change or cannot be controlled under normal operating conditions. These situations often prohibit complete randomization of the experimental runs due to practical and economic considerations. Furthermore, the process variables can be categorized into two types: variables that are controllable and directly affect the response, and variables that are uncontrollable and primarily affect the variability of the response. These uncontrollable variables are called noise factors and are assumed controllable in a laboratory environment for the purpose of conducting experiments. The model containing both noise variables and control factors can be used to determine settings for the control factors that make the response "robust" to the variability transmitted from the noise factors. These types of experiments can be analyzed with a model for the mean response and a model for the slope of the response within a split-plot structure. When considering the experimental designs, low prediction variances for the mean and slope models are desirable. Methods for mixture-process variable designs with noise variables under restricted randomization are demonstrated, and mixture-process variable designs that are robust to the coefficients of interaction with the noise variables are evaluated using fraction of design space plots with respect to their prediction variance properties. Finally, a G-optimal design that minimizes the maximum prediction variance over the entire design region is created using a genetic algorithm.
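As a hedged illustration of the G-criterion mentioned above (maximum scaled prediction variance over the design region), the sketch below evaluates N x'(X'X)^{-1}x on a grid for a simple two-factor model. The real designs in the dissertation involve mixture components, noise variables, and a split-plot structure, none of which are captured here; a genetic algorithm would search over candidate designs to minimize this maximum.

```python
# Toy evaluation of the G-criterion: maximum scaled prediction variance
# N * x'(X'X)^{-1} x over a grid (simple two-factor model, not a
# mixture-process split-plot design).
import numpy as np

def model_terms(a, b):
    return np.array([1.0, a, b, a * b])          # intercept, main effects, interaction

design = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1], [0, 0]])  # candidate design
X = np.array([model_terms(a, b) for a, b in design])
N = len(design)
xtx_inv = np.linalg.inv(X.T @ X)

grid = np.linspace(-1, 1, 41)
spv = [N * model_terms(a, b) @ xtx_inv @ model_terms(a, b)
       for a in grid for b in grid]
print(f"max scaled prediction variance over region: {max(spv):.2f}")
```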
Contributors: Cho, Tae Yeon (Author) / Montgomery, Douglas C. (Thesis advisor) / Borror, Connie M. (Thesis advisor) / Shunk, Dan L. (Committee member) / Gel, Esma S. (Committee member) / Kulahci, Murat (Committee member) / Arizona State University (Publisher)
Created: 2010
Description
Public health surveillance is a special case of the general problem where counts (or rates) of events are monitored for changes. Modern data complements event counts with many additional measurements (such as geographic, demographic, and others) that comprise high-dimensional covariates. This leads to an important challenge to detect a change that only occurs within a region, initially unspecified, defined by these covariates. Current methods are typically limited to spatial and/or temporal covariate information and often fail to use all the information available in modern data that can be paramount in unveiling these subtle changes. Additional complexities associated with modern health data that are often not accounted for by traditional methods include: covariates of mixed type, missing values, and high-order interactions among covariates. This work proposes a transform of public health surveillance to supervised learning, so that an appropriate learner can inherently address all the complexities described previously. At the same time, quantitative measures from the learner can be used to define signal criteria to detect changes in rates of events. A Feature Selection (FS) method is used to identify covariates that contribute to a model and to generate a signal. A measure of statistical significance is included to control false alarms. An alternative Percentile method identifies the specific cases that lead to changes using class probability estimates from tree-based ensembles. This second method is intended to be less computationally intensive and significantly simpler to implement. Finally, a third method labeled Rule-Based Feature Value Selection (RBFVS) is proposed for identifying the specific regions in high-dimensional space where the changes are occurring. Results on simulated examples are used to compare the FS method and the Percentile method. Note this work emphasizes the application of the proposed methods on public health surveillance. Nonetheless, these methods can easily be extended to a variety of applications where counts (or rates) of events are monitored for changes. Such problems commonly occur in domains such as manufacturing, economics, environmental systems, engineering, as well as in public health.
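The transform from surveillance to supervised learning described above can be illustrated, very loosely, by labeling a recent window of cases against a baseline window and asking a tree ensemble which covariates separate them. The sketch below does this on simulated data with scikit-learn; it is not the FS, Percentile, or RBFVS method of the dissertation, and the covariates and subgroup shift are invented.

```python
# Loose illustration of recasting surveillance as supervised learning:
# label baseline vs. recent cases and inspect which covariates separate them.
# (Simulated data; not the FS/Percentile/RBFVS methods of the dissertation.)
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)

def simulate(n, elevated=False):
    df = pd.DataFrame({
        "age_group": rng.integers(0, 5, n),
        "region": rng.integers(0, 10, n),
        "sex": rng.integers(0, 2, n),
    })
    # In the "recent" window, one region/age subgroup contributes extra cases.
    if elevated:
        extra = df.sample(frac=0.1, random_state=0).copy()
        extra["region"], extra["age_group"] = 3, 2
        df = pd.concat([df, extra], ignore_index=True)
    return df

baseline, recent = simulate(2000), simulate(2000, elevated=True)
data = pd.concat([baseline, recent], ignore_index=True)
labels = np.r_[np.zeros(len(baseline)), np.ones(len(recent))]

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(data, labels)
print(dict(zip(data.columns, clf.feature_importances_.round(3))))
```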
Contributors: Davila, Saylisse (Author) / Runger, George C. (Thesis advisor) / Montgomery, Douglas C. (Committee member) / Young, Dennis (Committee member) / Gel, Esma (Committee member) / Arizona State University (Publisher)
Created: 2010
Description
This thesis outlines a hand-held memory characterization testing system that is to be realized as a PCB (printed circuit board). The circuit is designed to apply voltages diagonally through an RRAM cell (32x32 memory array). The purpose of this sweep across the RRAM is to measure and calculate the high and low resistance state values over a specified number of test cycles. Because each cell produces a unique pattern of high and low resistance states, a unique characterization of each RRAM cell can be developed. Once the memory is characterized, the specific RRAM cell that was tested can then be used in a variety of applications based on its uniqueness. Due to an inability to procure a packaged RRAM cell, a Mock-RRAM was instead designed to emulate the same behavior found in an RRAM cell.
The final testing circuit and Mock-RRAM are varied and complex but come together to produce measured values of the high and low resistance states. This is done by the Arduino autonomously digitizing the anode voltage, cathode voltage, and output voltage. A ramp voltage that sweeps from 1V to -1V is applied to the Mock-RRAM as an input. This ramp voltage serves as the anode voltage, one of the two nodes connected to the Mock-RRAM; the cathode voltage is measured at the other node, so the voltage drop across the Mock-RRAM is the difference between the two. Using these three voltages as inputs to the Arduino, the Mock-RRAM path resistance can be calculated at any given point in time. Conducting many test cycles and calculating the high and low resistance values allows a graph of the chaotic variation of resistance state values over time to be developed. This chaotic variation can then be analyzed further in the future in order to better predict trends and characterize the RRAM cell that was tested.
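The thesis contains the actual Arduino firmware; as a hedged, host-side sketch of the resistance calculation described above, the code below assumes the output voltage is taken across a known series sense resistor (a plausible but assumed circuit topology) and computes the Mock-RRAM path resistance from digitized anode, cathode, and output samples. The sense-resistor value and the sample values are hypothetical.

```python
# Host-side sketch of the resistance calculation (assumed topology: the
# output voltage is measured across a known series sense resistor R_SENSE).
# Not the thesis firmware; values and the circuit assumption are hypothetical.
R_SENSE = 10_000.0  # ohms, assumed sense-resistor value

def path_resistance(v_anode, v_cathode, v_out):
    """Estimate the Mock-RRAM path resistance from one digitized sample."""
    current = v_out / R_SENSE          # current through the series path
    if abs(current) < 1e-9:            # guard against dividing by ~zero current
        return float("inf")
    return (v_anode - v_cathode) / current

# Hypothetical digitized samples (anode, cathode, output) from the ramp sweep.
samples = [(1.0, 0.55, 0.45), (0.5, 0.28, 0.22), (-1.0, -0.60, -0.40)]
for v_a, v_c, v_o in samples:
    print(f"V_a={v_a:+.2f} V  ->  R={path_resistance(v_a, v_c, v_o):,.0f} ohm")
```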
Furthermore, the interchangeability of many devices on the PCB allows the testing system to do more in the future. Ports have been added to the final PCB in order to connect a packaged RRAM cell. This will allow for the characterization of a real RRAM memory cell later down the line, rather than a Mock-RRAM emulation. Because the testing is autonomous, very little human intervention is needed, which makes this board a good baseline for others in the future looking to add to it and collect larger pools of data.
Contributors: Dobrin, Ryan Christopher (Co-author) / Halden, Matthew (Co-author) / Hall, Tanner (Co-author) / Barnaby, Hugh (Thesis director) / Kitchen, Jennifer (Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05