Matching Items (217)
Description

Product reliability is now a top concern of manufacturers, and customers prefer products that perform well over long periods of use. Because most products last for years or even decades, accelerated life testing (ALT) is used to estimate product lifetime. Much research has been done in the ALT area, and optimal design for ALT is a major topic. This dissertation consists of three main studies. First, a methodology for finding optimal ALT designs under right censoring and interval censoring is developed; it employs the proportional hazards (PH) model and the generalized linear model (GLM) to simplify the computational process. A sensitivity study is also given to show the effects of the model parameters on the designs. Second, an extended version of the I-optimal design for ALT is discussed, and a dual-objective design criterion is defined and illustrated with several examples. In addition, several graphical tools are developed for evaluating different candidate designs. Finally, designs for model checking are discussed for situations in which more than one candidate model is available.
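The dissertation's own method combines the PH model with GLMs, details the abstract does not reproduce. As generic background on what an ALT extrapolation does, here is a minimal sketch of the classical Arrhenius temperature-acceleration model (a different, standard technique; the activation energy, temperatures, and test life below are hypothetical):

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def acceleration_factor(ea_ev, t_use_c, t_stress_c):
    """Arrhenius acceleration factor between a use temperature and a
    higher test (stress) temperature, for activation energy ea_ev (eV)."""
    t_use_k = t_use_c + 273.15
    t_stress_k = t_stress_c + 273.15
    return math.exp(ea_ev / BOLTZMANN_EV * (1.0 / t_use_k - 1.0 / t_stress_k))

def projected_use_life(stress_life_hours, ea_ev, t_use_c, t_stress_c):
    # Life at use conditions = life observed under stress x acceleration factor.
    return stress_life_hours * acceleration_factor(ea_ev, t_use_c, t_stress_c)

# A 2,000-hour life observed at 125 C projects to hundreds of times
# longer at a 25 C use condition (for Ea = 0.7 eV).
use_life = projected_use_life(2000, 0.7, 25, 125)
```

This is why ALT is feasible at all: a test lasting weeks at elevated stress can stand in for years at use conditions, and the optimal-design question is where to place the stress levels and how to allocate test units among them.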
ContributorsYang, Tao (Author) / Pan, Rong (Thesis advisor) / Montgomery, Douglas C. (Committee member) / Borror, Connie (Committee member) / Rigdon, Steve (Committee member) / Arizona State University (Publisher)
Created2013
Description

During the initial stages of experimentation, there are usually a large number of factors to be investigated. Fractional factorial (2^(k-p)) designs are particularly useful during this initial phase of experimental work. These experiments, often referred to as screening experiments, help reduce the large number of factors to a smaller set. The 16-run regular fractional factorial designs for six, seven, and eight factors are in common use. These designs allow clear estimation of all main effects when the three-factor and higher-order interactions are negligible, but all two-factor interactions are aliased with each other, making estimation of these effects problematic without additional runs. Alternatively, certain nonregular designs, called no-confounding (NC) designs by Jones and Montgomery (2010), partially confound the main effects with the two-factor interactions but do not completely confound any two-factor interactions with each other. The NC designs are useful for independently estimating main effects and two-factor interactions without additional runs. While several methods have been suggested for the analysis of data from nonregular designs, stepwise regression is familiar to practitioners, available in commercial software, and widely used in practice. Given that an NC design has been run, the performance of stepwise regression for model selection is unknown. In this dissertation I present a comprehensive simulation study evaluating stepwise regression for analyzing both regular fractional factorial and NC designs. Next, the projection properties of the six-, seven-, and eight-factor NC designs are studied; these projection properties support the development of methods for analyzing the designs. Lastly, the designs and the projections of 9- to 14-factor NC designs onto three and four factors are presented, along with recommendations on analysis methods for these designs.
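The 16-run regular fractions discussed above are easy to construct directly. A minimal sketch (the 2^(6-2) generators E = ABC and F = BCD are a common textbook choice, not necessarily the ones studied here) that also verifies the two-factor-interaction aliasing NC designs are built to avoid:

```python
import math
from itertools import product

def regular_fraction(k_base, generators):
    """Regular 2^(k-p) fractional factorial in coded (-1/+1) units.
    k_base basic factors give the full 2^k_base runs; each generator is a
    tuple of basic-factor column indices whose product defines an added column."""
    runs = []
    for base in product((-1, 1), repeat=k_base):
        added = [math.prod(base[i] for i in g) for g in generators]
        runs.append(list(base) + added)
    return runs

# Six factors in 16 runs: A-D basic, E = ABC, F = BCD.
design = regular_fraction(4, [(0, 1, 2), (1, 2, 3)])

# In this regular fraction the AB and CE interaction columns coincide in
# every run, so the two effects cannot be separated without more runs.
ab_ce_agreement = sum(r[0] * r[1] * r[2] * r[4] for r in design)
```

Because E = ABC, the product of the AB and CE columns is (ABC)^2 = +1 in every run, so the agreement count equals the run count: the complete aliasing that motivates the NC alternatives.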
ContributorsShinde, Shilpa (Author) / Montgomery, Douglas C. (Thesis advisor) / Borror, Connie (Committee member) / Fowler, John (Committee member) / Jones, Bradley (Committee member) / Arizona State University (Publisher)
Created2012
Description

A P-value-based method is proposed for statistical monitoring of various types of profiles in Phase II. The performance of the proposed method is evaluated by the average run length criterion under various shifts in the intercept, slope, and error standard deviation of the model. In the proposed approach, P-values are computed at each level within a sample; if at least one of the P-values is less than a pre-specified significance level, the chart signals an out-of-control condition. The primary advantage of this approach is that only one control chart is required to monitor several parameters simultaneously: the intercept, slope(s), and error standard deviation. A comprehensive comparison of the proposed method and the existing KMW-Shewhart method for monitoring linear profiles is conducted. In addition, the effect of the number of observations within a sample on the performance of the proposed method is investigated. The proposed method is also compared to the T^2 method of Kang and Albin (2000) for multivariate, polynomial, and nonlinear profiles. A simulation study shows that, overall, the proposed P-value method performs satisfactorily for different profile types.
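As an illustration of the signaling rule described above, here is a minimal sketch for a simple linear profile (the in-control coefficients, error standard deviation, and significance level are hypothetical; the dissertation's method covers more general profile types):

```python
import math

def normal_sf(z):
    # Upper-tail probability of the standard normal distribution.
    return 0.5 * math.erfc(z / math.sqrt(2))

def profile_pvalues(xs, ys, beta0, beta1, sigma):
    """Two-sided P-value at each level of a profile sample, assuming the
    in-control model y = beta0 + beta1*x with error sd sigma."""
    return [2 * normal_sf(abs(y - (beta0 + beta1 * x)) / sigma)
            for x, y in zip(xs, ys)]

def signals_out_of_control(pvals, alpha=0.005):
    # One chart, one rule: signal if any level's P-value drops below alpha.
    return min(pvals) < alpha
```

For example, an in-control sample such as ys = [1.1, 2.0, 2.8, 4.2] at xs = [1, 2, 3, 4] (with beta0 = 0, beta1 = 1, sigma = 1) yields no signal, while shifting the last observation to 9 does.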
ContributorsAdibi, Azadeh (Author) / Montgomery, Douglas C. (Thesis advisor) / Borror, Connie (Thesis advisor) / Li, Jing (Committee member) / Zhang, Muhong (Committee member) / Arizona State University (Publisher)
Created2013
Description

The ever-changing economic landscape has forced many companies to re-examine their supply chains. Global resourcing and outsourcing of processes is a strategy many organizations have adopted to reduce cost and increase their global footprint. This has, however, resulted in increased process complexity and reduced customer satisfaction. To meet and exceed customer expectations, many companies are forced to improve quality and on-time delivery, and have looked to Lean Six Sigma as an approach to enable process improvement. The Lean Six Sigma literature is rich in deployment strategies; however, there is a general lack of a mathematical approach to deploying Lean Six Sigma in a global enterprise, including both project identification and prioritization. The research presented here is two-fold. First, a process characterization framework is presented to evaluate processes based on eight characteristics, and an unsupervised learning technique using clustering algorithms is utilized to group processes that are Lean Six Sigma conducive. The approach helps Lean Six Sigma deployment champions identify key areas of the business on which to focus a deployment. A case study is presented in which 33% of the processes were found to be Lean Six Sigma conducive. Second, having identified the parts of the business that are Lean Six Sigma conducive, the next steps are to formulate and prioritize a portfolio of projects. Very often the deployment champion faces the decision of selecting a portfolio of Lean Six Sigma projects that meets multiple objectives, which could include maximizing productivity, customer satisfaction, or return on investment, while meeting certain budgetary constraints. A multi-period 0-1 knapsack problem is presented that maximizes the expected net savings of the Lean Six Sigma portfolio over the life cycle of the deployment.
Finally, a case study demonstrates the application of the model in a large multinational company. Traditionally, Lean Six Sigma has its roots in manufacturing; the research presented in this dissertation also emphasizes the applicability of the methodology to the non-manufacturing space. Additionally, a comparison is conducted between manufacturing and non-manufacturing processes to highlight the challenges in deploying the methodology in both spaces.
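The multi-period 0-1 knapsack formulation can be illustrated with a small brute-force sketch (the project savings, costs, and budgets below are hypothetical; a deployment-scale portfolio would use an integer-programming solver rather than enumeration):

```python
from itertools import combinations

# Hypothetical candidate projects: expected net savings and the resources
# each project consumes in each deployment period (units are notional).
projects = {
    "A": {"savings": 120, "cost": (40, 10)},
    "B": {"savings": 90,  "cost": (30, 30)},
    "C": {"savings": 150, "cost": (50, 40)},
    "D": {"savings": 60,  "cost": (10, 20)},
}
budget = (80, 60)  # budget available in each of the two periods

def best_portfolio(projects, budget):
    """Exhaustive search over 0-1 selections: a portfolio is feasible only
    if its summed cost in every period stays within that period's budget."""
    names = sorted(projects)
    best, best_val = (), 0
    for r in range(len(names) + 1):
        for combo in combinations(names, r):
            feasible = all(
                sum(projects[p]["cost"][t] for p in combo) <= b
                for t, b in enumerate(budget)
            )
            if feasible:
                val = sum(projects[p]["savings"] for p in combo)
                if val > best_val:
                    best, best_val = combo, val
    return best, best_val
```

With these numbers the highest-savings project C is excluded: it crowds out the combination A + B + D, which fits both period budgets exactly and saves more in total. That interaction between periods is what makes the multi-period constraint set non-trivial.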
ContributorsDuarte, Brett Marc (Author) / Fowler, John W (Thesis advisor) / Montgomery, Douglas C. (Thesis advisor) / Shunk, Dan (Committee member) / Borror, Connie (Committee member) / Konopka, John (Committee member) / Arizona State University (Publisher)
Created2011
Description

This dissertation presents methods for the evaluation of ocular surface protection during natural blink function. The evaluation of ocular surface protection is especially important in the diagnosis of dry eye and the evaluation of dry eye severity in clinical trials. Dry eye is a highly prevalent disease affecting a large fraction (between 11% and 22%) of an aging population. There is only one approved therapy, with limited efficacy, which results in a huge unmet need. The reason so few drugs have reached approval is the lack of a recognized therapeutic pathway with reproducible endpoints. While the interplay between blink function and ocular surface protection has long been recognized, all currently used evaluation techniques have addressed blink function in isolation from tear film stability, the gold standard of which is Tear Film Break-Up Time (TFBUT). In the first part of this research, a manual technique for calculating ocular surface protection during natural blink function through video analysis is developed and evaluated for its ability to differentiate between dry eye and normal subjects; the results are compared with those of TFBUT. In the second part, the technique is improved in precision and automated through the use of video analysis algorithms. This software, called the OPI 2.0 System, is evaluated for accuracy and precision, and comparisons are made between the OPI 2.0 System and other currently recognized dry eye diagnostic techniques (e.g., TFBUT). In the third part, the OPI 2.0 System is deployed to evaluate subjects before, immediately after, and 30 minutes after exposure to a controlled adverse environment (CAE); once again the results are compared and contrasted against commonly used dry eye endpoints. The results demonstrate that the evaluation of ocular surface protection using the OPI 2.0 System offers superior accuracy to the current standard, TFBUT.
ContributorsAbelson, Richard (Author) / Montgomery, Douglas C. (Thesis advisor) / Borror, Connie (Committee member) / Shunk, Dan (Committee member) / Pan, Rong (Committee member) / Arizona State University (Publisher)
Created2012
Description

The widespread use of statistical analysis in sports, particularly baseball, has made it increasingly necessary for small- and mid-market teams to find ways to maintain their analytical advantages over large-market clubs. In baseball, an opportunity exists for teams with limited financial resources to sign players under team control to long-term contracts before other teams can bid for their services in free agency. If small- and mid-market clubs can successfully identify talented players early, they can save money, achieve cost certainty, and remain competitive for longer periods of time. These deals are also advantageous to players, since they receive job security and greater financial dividends earlier in their careers. The objective of this paper is to develop a regression-based predictive model that teams can use to forecast the performance of young baseball players with limited Major League experience. Several tasks were conducted to achieve this goal: (1) data were obtained from Major League Baseball and Lahman's Baseball Database and sorted using Excel macros for easier analysis; (2) players were separated into three positional groups with similar fielding requirements and offensive profiles: Group I comprises first and third basemen, Group II comprises second basemen, shortstops, and center fielders, and Group III comprises left and right fielders; (3) based on the context of baseball and the nature of offensive performance metrics, only players who achieved more than 200 plate appearances within the first two years of their major league debut are included in the analysis; (4) the statistical software package JMP was used to create regression models for each group and to analyze the residuals for irregularities or normality violations. Once the models were developed, slight adjustments were made to improve the accuracy of the forecasts and identify opportunities for future work.
Group I and Group III proved the easiest player groupings to forecast, while Group II required several attempts to improve the model.
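The JMP models themselves are not reproduced in the abstract; as a minimal illustration of the least-squares machinery underlying step (4), here is a pure-Python simple-OLS sketch (the early-career vs. later-career data pairing and the numbers are hypothetical):

```python
def fit_simple_ols(xs, ys):
    """Ordinary least squares for the line y = b0 + b1*x."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    b1 = sxy / sxx
    return mean_y - b1 * mean_x, b1

def predict(b0, b1, x):
    return b0 + b1 * x

# Hypothetical pairs: a player's OPS over his first two seasons (x) vs.
# his third-season OPS (y), for five players.
xs = [0.650, 0.700, 0.720, 0.780, 0.810]
ys = [0.660, 0.705, 0.730, 0.790, 0.815]
b0, b1 = fit_simple_ols(xs, ys)
```

A full model would use multiple predictors per positional group; JMP's stepwise selection and residual diagnostics sit on top of exactly this kind of fit.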
ContributorsJack, Nathan Scott (Author) / Shunk, Dan (Thesis director) / Montgomery, Douglas (Committee member) / Borror, Connie (Committee member) / Industrial, Systems (Contributor) / Barrett, The Honors College (Contributor)
Created2013-05
Description

Background: Extreme heat is a public health challenge. The scarcity of directly comparable studies on the association of heat with morbidity and mortality and the inconsistent identification of threshold temperatures for severe impacts hampers the development of comprehensive strategies aimed at reducing adverse heat-health events.

Objectives: This quantitative study was designed to link temperature with mortality and morbidity events in Maricopa County, Arizona, USA, with a focus on the summer season.

Methods: Using Poisson regression models that controlled for temporal confounders, we assessed daily temperature–health associations for a suite of mortality and morbidity events, diagnoses, and temperature metrics. Minimum risk temperatures, increasing risk temperatures, and excess risk temperatures were statistically identified to represent different “trigger points” at which heat-health intervention measures might be activated.

Results: We found significant and consistent associations of high environmental temperature with all-cause mortality, cardiovascular mortality, heat-related mortality, and mortality resulting from conditions that are consequences of heat and dehydration. Hospitalizations and emergency department visits due to heat-related conditions and conditions associated with consequences of heat and dehydration were also strongly associated with high temperatures, and there were several times more of those events than there were deaths. For each temperature metric, we observed large contrasts in trigger points (up to 22°C) across multiple health events and diagnoses.

Conclusion: Consideration of multiple health events and diagnoses together with a comprehensive approach to identifying threshold temperatures revealed large differences in trigger points for possible interventions related to heat. Providing an array of heat trigger points applicable for different end-users may improve the public health response to a problem that is projected to worsen in the coming decades.
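The trigger-point idea can be made concrete with a toy version of such a model (the quadratic temperature-response coefficients below are invented for the sketch, not the paper's fitted estimates):

```python
import math

# Illustrative log-linear Poisson model: log(expected daily events) is a
# quadratic in daily temperature; coefficients are hypothetical.
B0, B1, B2 = 1.0, -0.08, 0.0012

def expected_events(temp_c):
    return math.exp(B0 + B1 * temp_c + B2 * temp_c ** 2)

def minimum_risk_temperature():
    # Vertex of the quadratic in the linear predictor.
    return -B1 / (2 * B2)

def trigger_point(relative_risk, step=0.1, t_max=55.0):
    """Lowest temperature at or above the minimum-risk temperature where
    risk relative to the minimum reaches the given threshold."""
    t_min = minimum_risk_temperature()
    base = expected_events(t_min)
    t = t_min
    while t <= t_max:
        if expected_events(t) / base >= relative_risk:
            return round(t, 1)
        t += step
    return None
```

Each health event and diagnosis would have its own fitted curve and hence its own trigger point, which is the contrast (up to 22°C across events) the study quantifies.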

ContributorsPetitti, Diana B. (Author) / Hondula, David M. (Author) / Yang, Shuo (Author) / Harlan, Sharon L. (Author) / Chowell, Gerardo (Author)
Created2016-02-01
Description

Maricopa County, Arizona, anchor to the fastest growing megapolitan area in the United States, is located in a hot desert climate where extreme temperatures are associated with elevated risk of mortality. Continued urbanization in the region will impact atmospheric temperatures and, as a result, potentially affect human health. We aimed to quantify the number of excess deaths attributable to heat in Maricopa County based on three future urbanization and adaptation scenarios and multiple exposure variables.

Two scenarios (low- and high-growth projections) represent the maximum possible uncertainty range associated with urbanization in central Arizona, and a third represents the adaptation of high-albedo cool-roof technology. Using a Poisson regression model, we related temperature to mortality using data spanning 1983–2007. Regional climate model simulations based on 2050-projected urbanization scenarios for Maricopa County generated distributions of temperature change, and from these predicted changes future excess heat-related mortality was estimated. Depending on the urbanization scenario and exposure variable utilized, projections of heat-related mortality ranged from a decrease of 46 deaths per year (−95%) to an increase of 339 deaths per year (+359%).

Projections based on minimum temperature showed the greatest increase for all expansion and adaptation scenarios and were substantially higher than those for daily mean temperature. Projections based on maximum temperature were largely associated with declining mortality. Low-growth and adaptation scenarios led to the smallest increase in predicted heat-related mortality based on mean temperature projections. Use of only one exposure variable to project future heat-related deaths may therefore be misrepresentative in terms of direction of change and magnitude of effects. Because urbanization-induced impacts can vary across the diurnal cycle, projections of heat-related health outcomes that do not consider place-based, time-varying urban heat island effects are neglecting essential elements for policy relevant decision-making.

ContributorsHondula, David M. (Author) / Georgescu, Matei (Author) / Balling, Jr., Robert C. (Author)
Created2014-04-28
Description

Preventing heat-associated morbidity and mortality is a public health priority in Maricopa County, Arizona (United States). The objective of this project was to evaluate Maricopa County cooling centers and gain insight into their capacity to provide relief for the public during extreme heat events. During the summer of 2014, 53 cooling centers were evaluated to assess facility and visitor characteristics. Maricopa County staff collected data by directly observing daily operations and by surveying managers and visitors. The cooling centers in Maricopa County were often housed within community, senior, or religious centers, which offered various services for at least 1500 individuals daily. Many visitors were unemployed and/or homeless. Many learned about a cooling center by word of mouth or by having seen the cooling center’s location. The cooling centers provide a valuable service and reach some of the region’s most vulnerable populations. This project is among the first to systematically evaluate cooling centers from a public health perspective and provides helpful insight to community leaders who are implementing or improving their own network of cooling centers.

ContributorsBerisha, Vjollca (Author) / Hondula, David M. (Author) / Roach, Matthew (Author) / White, Jessica R. (Author) / McKinney, Benita (Author) / Bentz, Darcie (Author) / Mohamed, Ahmed (Author) / Uebelherr, Joshua (Author) / Goodin, Kate (Author)
Created2016-09-23
Description

Critical flicker fusion thresholds (CFFTs) describe when quick amplitude modulations of a light source become undetectable as the frequency of the modulation increases and are thought to underlie a number of visual processing skills, including reading. Here, we compare the impact of two vision-training approaches, one involving contrast sensitivity training and the other directional dot-motion training, compared to an active control group trained on Sudoku. The three training paradigms were compared on their effectiveness for altering CFFT. Directional dot-motion and contrast sensitivity training resulted in significant improvement in CFFT, while the Sudoku group did not yield significant improvement. This finding indicates that dot-motion and contrast sensitivity training similarly transfer to effect changes in CFFT. The results, combined with prior research linking CFFT to high-order cognitive processes such as reading ability, and studies showing positive impact of both dot-motion and contrast sensitivity training in reading, provide a possible mechanistic link of how these different training approaches impact reading abilities.

ContributorsZhou, Tianyou (Author) / Nanez, Jose (Author) / Zimmerman, Daniel (Author) / Holloway, Steven (Author) / Seitz, Aaron (Author) / New College of Interdisciplinary Arts and Sciences (Contributor)
Created2016-10-26