Matching Items (4)
Description
Many products undergo several stages of testing ranging from tests on individual components to end-item tests. Additionally, these products may be further "tested" via customer or field use. The later failure of a delivered product may in some cases be due to circumstances that have no correlation with the product's inherent quality. However, at times, there may be cues in the upstream test data that, if detected, could serve to predict the likelihood of downstream failure or performance degradation induced by product use or environmental stresses. This study explores the use of downstream factory test data or product field reliability data to infer data mining or pattern recognition criteria onto manufacturing process or upstream test data by means of support vector machines (SVM) in order to provide reliability prediction models. In concert with a risk/benefit analysis, these models can be utilized to drive improvement of the product or, at least, via screening to improve the reliability of the product delivered to the customer. Such models can be used to aid in reliability risk assessment based on detectable correlations between the product test performance and the sources of supply, test stands, or other factors related to product manufacture. As an enhancement to the usefulness of the SVM or hyperplane classifier within this context, L-moments and the Western Electric Company (WECO) Rules are used to augment or replace the native process or test data used as inputs to the classifier. As part of this research, a generalizable binary classification methodology was developed that can be used to design and implement predictors of end-item field failure or downstream product performance based on upstream test data that may be composed of single-parameter, time-series, or multivariate real-valued data. Additionally, the methodology provides input parameter weighting factors that have proved useful in failure analysis and root cause investigations as indicators of which of several upstream product parameters have the greater influence on the downstream failure outcomes.
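A minimal sketch of the kind of predictor described above, assuming Python with NumPy and scikit-learn; the synthetic upstream traces, the pass/fail labels, and the choice of a linear kernel are illustrative assumptions, not the study's actual data or settings.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def l_moments(x):
    """First four sample L-moments via probability-weighted moments."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    b = np.zeros(4)
    b[0] = x.mean()
    for r in range(1, 4):
        j = np.arange(r, n)              # order-statistic indices with nonzero weight
        w = np.ones(n - r)
        for k in range(r):
            w *= (j - k) / (n - 1 - k)
        b[r] = (w * x[r:]).sum() / n
    return np.array([b[0],
                     2 * b[1] - b[0],
                     6 * b[2] - 6 * b[1] + b[0],
                     20 * b[3] - 30 * b[2] + 12 * b[1] - b[0]])

rng = np.random.default_rng(0)
traces = rng.normal(size=(200, 50))           # hypothetical upstream test time series, one row per unit
labels = rng.integers(0, 2, size=200)         # hypothetical downstream field pass/fail outcomes
X = np.array([l_moments(t) for t in traces])  # L-moments replace the raw traces as classifier inputs

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
clf.fit(X, labels)
# With a linear kernel, the fitted weights indicate which input features most
# influence the predicted failure outcome, akin to the weighting factors noted above.
print(clf.named_steps["svc"].coef_)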
ContributorsMosley, James (Author) / Morrell, Darryl (Committee member) / Cochran, Douglas (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Roberts, Chell (Committee member) / Spanias, Andreas (Committee member) / Arizona State University (Publisher)
Created2011
Description
During the summer of 2016 I had an internship in the Fab Materials Planning group (FMP) at Intel Corporation. FMP generates long-range (6-24 months) forecasts for chemical and gas materials used in the chip fabrication process. These forecasts are sent to Commodity Managers (CMs) in a separate department, who communicate the forecast and any constraints to Intel suppliers. The intern manager of the group, Scott Keithley, created a prototype of a model to redefine how FMP determines which materials require a forecast update (the forecasting cadence). However, the model prototype was complex to use, not intuitive, and did not receive positive feedback from the rest of the team or external stakeholders. This thesis will detail the steps I took in identifying the main problem the model was intended to address, how I approached the problem, and some of the major iterations I made to the model. It will also go over the final model dashboard and the results of the model's use and integration. An improvement analysis and the intended and unintended consequences of the model will also be included. The results of this model demonstrate that statistical process control, a traditionally operational analysis, can be used to generate a forecasting cadence. It will also verify that an intuitive user interface is vital to end-user adoption and integration of an analytics-based model into an established process flow. This model will generate an estimated time savings of 900 hours per year as well as give FMP the ability to be more proactive in its forecasting approach.
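A minimal sketch of the statistical-process-control idea behind such a forecasting cadence, assuming Python with NumPy; the material names, consumption histories, and 3-sigma rule are hypothetical illustrations, not the actual Intel model.

import numpy as np

def needs_forecast_update(history, recent, k=3.0):
    """Flag a material if any recent observation breaches +/- k sigma limits
    estimated from its stable history (moving-range estimate of sigma)."""
    history = np.asarray(history, dtype=float)
    recent = np.asarray(recent, dtype=float)
    center = history.mean()
    sigma = np.abs(np.diff(history)).mean() / 1.128   # d2 constant for subgroups of 2
    ucl, lcl = center + k * sigma, center - k * sigma
    return bool(np.any((recent > ucl) | (recent < lcl)))

# Hypothetical monthly consumption per material: (stable history, latest observations).
materials = {
    "slurry_A": ([100, 103, 98, 101, 99, 102, 100, 97], [101, 100]),
    "gas_B":    ([50, 52, 49, 51, 50, 48, 51, 50],       [63, 65]),   # demand shift
}
to_update = [m for m, (hist, rec) in materials.items()
             if needs_forecast_update(hist, rec)]
print(to_update)   # only gas_B is queued for a forecast refresh this cycle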
ContributorsMatson, Rilee Nicole (Author) / Kellso, James (Thesis director) / Keithley, Scott (Committee member) / Department of Supply Chain Management (Contributor) / Department of Information Systems (Contributor) / Barrett, The Honors College (Contributor)
Created2016-12
Description
The complexity of supply chains (SC) has grown rapidly in recent years, resulting in increased difficulty in evaluating and visualizing performance. Consequently, analytical approaches to evaluate SC performance in near real time relative to targets and plans are important to detect and react to deviations in order to prevent major disruptions.

Manufacturing anomalies, inaccurate forecasts, and other problems can lead to SC disruptions. Traditional monitoring methods are not sufficient in this respect, because complex SCs feature changes in manufacturing tasks (dynamic complexity) and carry a large number of stock keeping units (detail complexity). Problems are easily confounded with normal system variations.

Motivated by these real challenges faced by modern SCs, new surveillance solutions are proposed to detect system deviations that could lead to disruptions in a complex SC. To address supply-side deviations, the fitness of different statistics that can be extracted from the enterprise resource planning system is evaluated. A monitoring strategy is first proposed for SCs featuring high levels of dynamic complexity. This presents an opportunity for monitoring methods to be applied in a new, rich domain of SC management. Then a monitoring strategy, called Heat Map Contrasts (HMC), which converts monitoring into a series of classification problems, is used to monitor SCs with high levels of both dynamic and detail complexity. Data from a semiconductor SC simulator are used to compare the methods with other alternatives under various failure cases, and the results illustrate the viability of our methods.
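A minimal sketch of classification-based monitoring in the spirit of the contrast approach described above (not the exact Heat Map Contrasts algorithm), assuming Python with NumPy and scikit-learn; the window sizes, random-forest classifier, and simulated mean shift are illustrative assumptions.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
reference = rng.normal(0.0, 1.0, size=(100, 10))   # in-control window of SC measurements
recent = rng.normal(0.6, 1.0, size=(100, 10))      # recent window containing a mean shift

# Monitoring becomes a classification problem: can recent data be told apart
# from the in-control reference?
X = np.vstack([reference, recent])
y = np.r_[np.zeros(len(reference)), np.ones(len(recent))]

rf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
rf.fit(X, y)
# If the two windows were indistinguishable, out-of-bag accuracy would hover
# near 0.5; a clearly higher value signals a systematic change worth investigating.
print(f"OOB accuracy: {rf.oob_score_:.2f}")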

To address demand-side deviations, a new method of quantifying forecast uncertainties using the progression of forecast updates is presented. It is illustrated that a rich amount of information is available in rolling horizon forecasts. Two proactive indicators of future forecast errors are extracted from the forecast stream. This quantitative method requires no knowledge of the forecasting model itself and has shown promising results when applied to two datasets consisting of real forecast updates.
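A minimal sketch of mining a rolling-horizon forecast stream, assuming Python with NumPy; the two indicators shown (revision volatility and cumulative revision drift) are plausible examples rather than the exact indicators derived in the study, and the forecast values are hypothetical.

import numpy as np

# Successive forecasts for the same target period, issued 6, 5, ..., 1 months ahead.
updates = np.array([1200.0, 1180.0, 1215.0, 1150.0, 1100.0, 1060.0])

revisions = np.diff(updates)              # change introduced by each forecast update
volatility = np.abs(revisions).mean()     # how unstable the forecast has been
drift = revisions.sum()                   # persistent one-directional bias in the updates

# Large volatility or a strong drift suggests the final forecast is still likely
# to miss, flagging this item for demand-side attention before the error occurs.
print(f"volatility={volatility:.1f}, drift={drift:+.1f}")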
ContributorsLiu, Lei (Author) / Runger, George C. (Thesis advisor) / Gel, Esma (Committee member) / Pan, Rong (Committee member) / Janakiram, Mani (Committee member) / Arizona State University (Publisher)
Created2015
Description
Statistical process control (SPC) and predictive analytics have been used in industrial manufacturing and design, but up until now have not been applied to threshold data of vital sign monitoring in remote care settings. In this study of 20 elders with COPD and/or CHF, extended months of peak flow monitoring (FEV1) using telemedicine are examined to determine when an earlier or later clinical intervention may have been advised. This study demonstrated that SPC may bring less than a 2.0% increase in clinician workload while providing more robust statistically derived thresholds than clinician-derived thresholds. Using a random K-fold model, FEV1 output was predictively validated to a 0.80 generalized R-square, demonstrating adequate learning of a threshold classifier. Disease severity also impacted the model. Forecasting future FEV1 data points is possible with a complex ARIMA(45, 0, 49) model, but variation and sources of error require tight control. Validation was above average and encouraging for clinician acceptance. These statistical algorithms allow the patient's own data to drive reduction in variability and potentially increase clinician efficiency, improve patient outcomes, and reduce the cost burden to the health care ecosystem.
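A minimal sketch of the statistically derived thresholds described above, assuming Python with NumPy; the FEV1 readings, the moving-range limit estimate, and the 80%-of-baseline clinician rule are hypothetical illustrations, not the study's clinical data or protocol.

import numpy as np

fev1 = np.array([2.10, 2.05, 2.12, 2.08, 1.98, 2.03, 2.11, 2.00,
                 2.06, 2.04, 1.95, 1.88, 1.82, 1.79])   # hypothetical daily FEV1 (L)

baseline = fev1[:10]                       # stable period used as the patient's own reference
center = baseline.mean()
sigma = np.abs(np.diff(baseline)).mean() / 1.128   # moving-range estimate of sigma
lcl = center - 3 * sigma                   # statistically derived lower alert limit

clinician_threshold = 0.80 * center        # e.g., "alert below 80% of baseline"

alerts_spc = np.where(fev1 < lcl)[0]
alerts_clinician = np.where(fev1 < clinician_threshold)[0]
# The SPC limit adapts to the patient's own variability and can flag a decline
# earlier (or avoid false alerts) relative to a one-size-fits-all threshold.
print("SPC alerts at days:", alerts_spc, "clinician alerts at days:", alerts_clinician)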
ContributorsFralick, Celeste (Author) / Muthuswamy, Jitendran (Thesis advisor) / O'Shea, Terrance (Thesis advisor) / LaBelle, Jeffrey (Committee member) / Pizziconi, Vincent (Committee member) / Shea, Kimberly (Committee member) / Arizona State University (Publisher)
Created2013