Matching Items (1,695)

Description
Product reliability has become a top concern of manufacturers, and customers prefer products that perform well over long periods. Because most products can last years or even decades, accelerated life testing (ALT) is used to estimate product lifetime. Much research has been done in the ALT area, and optimal design for ALT is a major topic. This dissertation consists of three main studies. First, a methodology for finding optimal ALT designs with right censoring and interval censoring is developed; it employs the proportional hazards (PH) model and the generalized linear model (GLM) to simplify the computation. A sensitivity study is also given to show how the model parameters affect the designs. Second, an extended version of the I-optimal design for ALT is discussed, and a dual-objective design criterion is defined and illustrated with several examples. Several graphical tools are also developed to evaluate different candidate designs. Finally, model-checking designs are discussed for situations where more than one model is available.
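
As a rough, hedged illustration of the kind of computation involved in scoring a candidate ALT plan under a GLM formulation (not the dissertation's actual PH/GLM derivation), the sketch below evaluates a D-optimality criterion, log det(X'WX), for hypothetical three-stress-level plans; the stress levels, unit allocations, and censoring-derived weights are all made-up placeholders.

```python
import numpy as np

def d_optimality(stress_levels, allocations, weights):
    """Score a candidate ALT plan by log det of the GLM information matrix X'WX.

    stress_levels : standardized stress values of the test points (hypothetical)
    allocations   : number of units assigned to each stress level
    weights       : per-unit GLM weights; in an ALT/GLM formulation these would
                    come from the censoring scheme and assumed model, so the
                    values used here are placeholders
    """
    X = np.column_stack([np.ones_like(stress_levels), stress_levels])  # intercept + stress
    W = np.diag(np.asarray(allocations) * np.asarray(weights))         # total weight per level
    info = X.T @ W @ X                                                 # Fisher information (up to scale)
    sign, logdet = np.linalg.slogdet(info)
    return logdet if sign > 0 else -np.inf

# Compare two hypothetical three-level plans that each allocate 100 test units.
plan_a = d_optimality(np.array([0.0, 0.5, 1.0]), [40, 20, 40], [0.3, 0.5, 0.8])
plan_b = d_optimality(np.array([0.0, 0.5, 1.0]), [34, 33, 33], [0.3, 0.5, 0.8])
print(plan_a, plan_b)  # larger log-determinant = more informative plan
```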
Contributors: Yang, Tao (Author) / Pan, Rong (Thesis advisor) / Montgomery, Douglas C. (Committee member) / Borror, Connie (Committee member) / Rigdon, Steve (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
During the initial stages of experimentation, there are usually a large number of factors to be investigated. Fractional factorial (2^(k-p)) designs are particularly useful during this initial phase of experimental work. These experiments, often referred to as screening experiments, help reduce the large number of factors to a smaller set. The 16-run regular fractional factorial designs for six, seven, and eight factors are in common use. These designs allow clear estimation of all main effects when the three-factor and higher-order interactions are negligible, but all two-factor interactions are aliased with each other, making estimation of these effects problematic without additional runs. Alternatively, certain nonregular designs, called no-confounding (NC) designs by Jones and Montgomery (Jones & Montgomery, Alternatives to resolution IV screening designs in 16 runs, 2010), partially confound the main effects with the two-factor interactions but do not completely confound any two-factor interactions with each other. The NC designs are useful for independently estimating main effects and two-factor interactions without additional runs. While several methods have been suggested for the analysis of data from nonregular designs, stepwise regression is familiar to practitioners, available in commercial software, and widely used in practice. Given that an NC design has been run, the performance of stepwise regression for model selection is unknown. In this dissertation I present a comprehensive simulation study evaluating stepwise regression for analyzing both regular fractional factorial and NC designs. Next, the projection properties of the six-, seven-, and eight-factor NC designs are studied; understanding these projection properties allows the development of methods for analyzing the designs. Lastly, the designs and the projection properties of the 9- to 14-factor NC designs onto three and four factors are presented, along with recommendations on analysis methods for these designs.
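
For readers unfamiliar with the analysis method being evaluated, the hedged sketch below shows a minimal forward-selection variant of stepwise regression applied to a 16-run screening experiment; the random ±1 design matrix merely stands in for a real regular or NC design, and the entry threshold and simulated effects are illustrative only.

```python
import itertools
import numpy as np
import pandas as pd
import statsmodels.api as sm

def forward_stepwise(X, y, alpha_in=0.05):
    """Forward-selection stepwise regression: repeatedly add the candidate term
    with the smallest p-value, as long as that p-value is below alpha_in."""
    selected, remaining = [], list(X.columns)
    while remaining:
        pvals = {}
        for term in remaining:
            model = sm.OLS(y, sm.add_constant(X[selected + [term]])).fit()
            pvals[term] = model.pvalues[term]
        best = min(pvals, key=pvals.get)
        if pvals[best] > alpha_in:
            break
        selected.append(best)
        remaining.remove(best)
    return selected

# Hypothetical 16-run matrix in coded (+/-1) units with 6 factors,
# plus all two-factor interaction columns as candidate terms.
rng = np.random.default_rng(1)
factors = pd.DataFrame(rng.choice([-1, 1], size=(16, 6)),
                       columns=[f"x{i}" for i in range(1, 7)])
for a, b in itertools.combinations(list(factors.columns), 2):
    factors[f"{a}:{b}"] = factors[a] * factors[b]
y = 3 * factors["x1"] - 2 * factors["x2"] + 1.5 * factors["x1:x3"] + rng.normal(0, 1, 16)
print(forward_stepwise(factors, y))  # terms retained by the stepwise search
```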
Contributors: Shinde, Shilpa (Author) / Montgomery, Douglas C. (Thesis advisor) / Borror, Connie (Committee member) / Fowler, John (Committee member) / Jones, Bradley (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
A P-value based method is proposed for statistical monitoring of various types of profiles in phase II. The performance of the proposed method is evaluated by the average run length criterion under various shifts in the intercept, slope, and error standard deviation of the model. In our proposed approach, P-values are computed at each level within a sample. If at least one of the P-values is less than a pre-specified significance level, the chart signals an out-of-control condition. The primary advantage of our approach is that only one control chart is required to monitor several parameters simultaneously: the intercept, slope(s), and error standard deviation. A comprehensive comparison of the proposed method and the existing KMW-Shewhart method for monitoring linear profiles is conducted. In addition, the effect that the number of observations within a sample has on the performance of the proposed method is investigated. The proposed method is also compared to the T^2 method discussed in Kang and Albin (2000) for multivariate, polynomial, and nonlinear profiles. A simulation study shows that, overall, the proposed P-value method performs satisfactorily for different profile types.
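
The abstract states the signaling rule but not the underlying test; a minimal sketch of that rule is shown below, assuming a simple linear profile with known in-control intercept, slope, and error standard deviation and a plain two-sided z-test at each level. These assumptions are illustrative and not necessarily the test statistic derived in the dissertation.

```python
import numpy as np
from scipy import stats

def profile_pvalues(x, y, beta0, beta1, sigma):
    """Two-sided p-values for deviations of each observation from the
    in-control line beta0 + beta1 * x (parameters assumed known)."""
    z = (y - (beta0 + beta1 * x)) / sigma
    return 2 * (1 - stats.norm.cdf(np.abs(z)))

def chart_signal(x, y, beta0, beta1, sigma, alpha=0.005):
    """Signal out of control if at least one level's p-value is below alpha."""
    return np.any(profile_pvalues(x, y, beta0, beta1, sigma) < alpha)

# In-control profile y = 3 + 2x with sigma = 1, monitored at five x-levels.
x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
rng = np.random.default_rng(0)
in_control = 3 + 2 * x + rng.normal(0, 1, x.size)
shifted = 6 + 2 * x + rng.normal(0, 1, x.size)   # large intercept shift
print(chart_signal(x, in_control, 3, 2, 1), chart_signal(x, shifted, 3, 2, 1))
```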
Contributors: Adibi, Azadeh (Author) / Montgomery, Douglas C. (Thesis advisor) / Borror, Connie (Thesis advisor) / Li, Jing (Committee member) / Zhang, Muhong (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
No-confounding (NC) designs in 16 runs for 6, 7, and 8 factors are non-regular fractional factorial designs that have been suggested as attractive alternatives to the regular minimum aberration resolution IV designs because they do not completely confound any two-factor interactions with each other. These designs allow for potential estimation of main effects and a few two-factor interactions without the need for follow-up experimentation. Analysis of non-regular designs is an area of ongoing research, because standard variable selection techniques such as stepwise regression may not always be the best approach. The current work investigates the use of the Dantzig selector for analyzing no-confounding designs. Through a series of examples, it shows that this technique is very effective for identifying the set of active factors in no-confounding designs when there are three or four active main effects and up to two active two-factor interactions.
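
The Dantzig selector itself solves min ||beta||_1 subject to ||X'(y - X beta)||_inf <= delta, which can be cast as a linear program. The sketch below is a generic implementation of that formulation, not code from the dissertation; the 16x8 test matrix, simulated effects, and choice of delta are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(X, y, delta):
    """Dantzig selector via linear programming:
    minimize ||beta||_1 subject to ||X.T @ (y - X @ beta)||_inf <= delta.
    Split beta = u - v with u, v >= 0 to obtain a standard-form LP."""
    n, p = X.shape
    G = X.T @ X
    Xty = X.T @ y
    c = np.ones(2 * p)                            # objective: sum(u) + sum(v) = ||beta||_1
    A_ub = np.vstack([np.hstack([G, -G]),         #  G(u - v) <= X'y + delta
                      np.hstack([-G, G])])        # -G(u - v) <= delta - X'y
    b_ub = np.concatenate([Xty + delta, delta - Xty])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    u, v = res.x[:p], res.x[p:]
    return u - v

# Hypothetical 16-run screening matrix with two truly active effects.
rng = np.random.default_rng(2)
X = rng.choice([-1.0, 1.0], size=(16, 8))
y = 4 * X[:, 0] - 3 * X[:, 4] + rng.normal(0, 0.5, 16)
beta = dantzig_selector(X, y, delta=8.0)
print(np.round(beta, 2))   # columns with clearly nonzero estimates are flagged as active
```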

To evaluate the performance of the Dantzig selector, a simulation study was conducted and the results, based on the percentage of type II errors, are analyzed. In addition, an alternative to the six-factor NC design, called the alternate no-confounding design in six factors, is introduced in this study. The performance of this alternate NC design is then evaluated using the Dantzig selector as the analysis method. Lastly, a section is dedicated to comparing the performance of the NC-6 and alternate NC-6 designs.
Contributors: Krishnamoorthy, Archana (Author) / Montgomery, Douglas C. (Thesis advisor) / Borror, Connie (Thesis advisor) / Pan, Rong (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Data imbalance and data noise often coexist in real-world datasets. Data imbalance affects a learning classifier by degrading its recognition power on the minority class, while data noise affects the classifier by providing inaccurate information that misleads it. Because of these differences, data imbalance and data noise have been treated separately in the data mining field. Yet such an approach ignores their mutual effects and, as a result, may lead to new problems. A desirable solution is to tackle these two issues jointly. Noting the complementary nature of generative and discriminative models, this research proposes a unified model-fusion-based framework to handle imbalanced classification on noisy datasets.

The phase I study focuses on the imbalanced classification problem. A generative classifier, the Gaussian mixture model (GMM), is studied; it can learn the distribution of the imbalanced data to improve the discrimination power on imbalanced classes. By fusing this knowledge into a cost SVM (cSVM), a method called CSG is proposed. Experimental results show the effectiveness of CSG in dealing with imbalanced classification problems.
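
The abstract names the CSG method without detailing the fusion step, so the sketch below is only one plausible reading of the idea: a GMM fit to the minority class supplies per-sample density weights that are passed to a class-weighted SVM. The synthetic data, the weighting scheme, and the use of scikit-learn estimators are assumptions made for illustration, not the CSG formulation itself.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

rng = np.random.default_rng(3)
# Hypothetical imbalanced two-class data: 500 majority vs. 40 minority samples.
X_maj = rng.normal(0.0, 1.0, size=(500, 2))
X_min = rng.normal(2.5, 0.8, size=(40, 2))
X = np.vstack([X_maj, X_min])
y = np.array([0] * 500 + [1] * 40)

# Generative step: model the minority class with a Gaussian mixture.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X_min)

# Fusion step (illustrative only): up-weight samples the minority-class GMM
# considers likely, on top of the usual class-imbalance cost weighting.
density = np.exp(gmm.score_samples(X))
sample_weight = 1.0 + density / density.max()

clf = SVC(kernel="rbf", class_weight="balanced")
clf.fit(X, y, sample_weight=sample_weight)
print((clf.predict(X_min) == 1).mean())   # recall on the minority class
```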

The phase II study expands the research scope to include noisy data in the imbalanced classification problem. A model-fusion-based framework, K Nearest Gaussian (KNG), is proposed. KNG employs a generative modeling method, the GMM, to model the training data as Gaussian mixtures and form adjustable confidence regions that are less sensitive to data imbalance and noise. Motivated by the K-nearest-neighbor algorithm, the neighboring Gaussians are used to classify the testing instances. Experimental results show that the KNG method greatly outperforms traditional classification methods in dealing with imbalanced classification problems on noisy datasets.
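
A rough sketch of the nearest-Gaussian idea follows, assuming class-wise GMMs, Mahalanobis distance to each mixture component, and a simple majority vote among the k nearest components; these specific choices are illustrative and may differ from the KNG algorithm as defined in the dissertation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_class_gaussians(X, y, n_components=2):
    """Fit a GMM per class and return a flat list of (mean, covariance, label)."""
    components = []
    for label in np.unique(y):
        gmm = GaussianMixture(n_components=n_components, random_state=0).fit(X[y == label])
        for mean, cov in zip(gmm.means_, gmm.covariances_):
            components.append((mean, cov, label))
    return components

def kng_predict(components, x, k=3):
    """Classify x by majority vote among its k nearest Gaussian components,
    using Mahalanobis distance to each component."""
    dists = []
    for mean, cov, label in components:
        diff = x - mean
        d2 = diff @ np.linalg.inv(cov) @ diff
        dists.append((float(d2), label))
    labels = [label for _, label in sorted(dists)[:k]]
    return max(set(labels), key=labels.count)

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(3, 1, (30, 2))])
y = np.array([0] * 200 + [1] * 30)
components = fit_class_gaussians(X, y)
print(kng_predict(components, np.array([2.8, 3.1])))
```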

The phase III study addresses feature selection and parameter tuning for the KNG algorithm. To further improve the performance of KNG, a particle swarm optimization based method (PSO-KNG) is proposed. PSO-KNG formulates the model parameters and the data features into the same particle vector and can thus search for the best feature and parameter combination jointly. The experimental results show that PSO can greatly improve the performance of KNG, with better accuracy and much lower computational cost.
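
As a hedged illustration of encoding features and a model parameter in one particle vector, the sketch below runs a plain PSO over a vector whose first entries are thresholded into a feature mask and whose last entry is rounded into a neighbor count. A scikit-learn KNeighborsClassifier stands in for the KNG classifier, and the synthetic dataset, swarm size, and PSO constants are arbitrary choices, not the dissertation's settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, n_features=10, n_informative=4,
                           weights=[0.9, 0.1], random_state=0)
n_features = X.shape[1]
dim = n_features + 1            # feature bits + one model parameter (k)

def decode(particle):
    mask = particle[:n_features] > 0.5             # feature-selection bits
    k = int(np.clip(round(float(particle[-1])), 1, 15))  # classifier parameter
    return mask, k

def fitness(particle):
    mask, k = decode(particle)
    if not mask.any():
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=k)      # stand-in for the KNG classifier
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

rng = np.random.default_rng(5)
n_particles, iters = 20, 30
pos = rng.uniform(0, 1, (n_particles, dim))
pos[:, -1] = rng.uniform(1, 15, n_particles)
vel = np.zeros((n_particles, dim))
pbest = pos.copy()
pbest_fit = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.uniform(size=(2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved] = pos[improved]
    pbest_fit[improved] = fit[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

print(decode(gbest), pbest_fit.max())   # best feature mask, k, and CV score found
```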
Contributors: He, Miao (Author) / Wu, Teresa (Thesis advisor) / Li, Jing (Committee member) / Silva, Alvin (Committee member) / Borror, Connie (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
The ever-changing economic landscape has forced many companies to re-examine their supply chains. Global resourcing and outsourcing of processes is a strategy many organizations have adopted to reduce cost and increase their global footprint. This has, however, resulted in increased process complexity and reduced customer satisfaction. In order to meet and exceed customer expectations, many companies are forced to improve quality and on-time delivery, and have looked to Lean Six Sigma as an approach to enable process improvement. The Lean Six Sigma literature is rich in deployment strategies; however, there is a general lack of a mathematical approach to deploying Lean Six Sigma in a global enterprise, including both project identification and prioritization. The research presented here is two-fold. First, a process characterization framework is presented to evaluate processes based on eight characteristics. An unsupervised learning technique, using clustering algorithms, is then utilized to group processes that are Lean Six Sigma conducive. The approach helps Lean Six Sigma deployment champions identify key areas within the business on which to focus a Lean Six Sigma deployment. A case study is presented in which 33% of the processes were found to be Lean Six Sigma conducive. Second, having identified the parts of the business that are Lean Six Sigma conducive, the next steps are to formulate and prioritize a portfolio of projects. Very often the deployment champion faces the decision of selecting a portfolio of Lean Six Sigma projects that meet multiple objectives, which could include maximizing productivity, customer satisfaction, or return on investment, while meeting certain budgetary constraints. A multi-period 0-1 knapsack problem is presented that maximizes the expected net savings of the Lean Six Sigma portfolio over the life cycle of the deployment. Finally, a case study demonstrates the application of the model in a large multinational company. Traditionally, Lean Six Sigma found its roots in manufacturing; the research presented in this dissertation also emphasizes the applicability of the methodology to the non-manufacturing space. Additionally, a comparison is conducted between manufacturing and non-manufacturing processes to highlight the challenges in deploying the methodology in both spaces.
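
The multi-period 0-1 knapsack model is not reproduced in the abstract; the sketch below shows the generic form such a model could take (maximize expected net savings of the selected projects subject to a budget in each period of the deployment), using PuLP with entirely hypothetical project data.

```python
import pulp

# Hypothetical Lean Six Sigma project portfolio data (all figures made up).
projects = ["P1", "P2", "P3", "P4", "P5"]
expected_savings = {"P1": 120, "P2": 90, "P3": 200, "P4": 60, "P5": 150}   # $k over the deployment
periods = [1, 2, 3]
cost = {  # $k of resources each project consumes in each period
    "P1": {1: 40, 2: 30, 3: 10}, "P2": {1: 20, 2: 20, 3: 20},
    "P3": {1: 60, 2: 50, 3: 40}, "P4": {1: 10, 2: 10, 3: 10},
    "P5": {1: 30, 2: 40, 3: 30},
}
budget = {1: 100, 2: 90, 3: 80}

model = pulp.LpProblem("lss_portfolio", pulp.LpMaximize)
x = pulp.LpVariable.dicts("select", projects, cat="Binary")   # 1 if project enters the portfolio

# Objective: maximize total expected net savings of the selected portfolio.
model += pulp.lpSum(expected_savings[p] * x[p] for p in projects)

# Budget constraint in every period of the deployment.
for t in periods:
    model += pulp.lpSum(cost[p][t] * x[p] for p in projects) <= budget[t]

model.solve(pulp.PULP_CBC_CMD(msg=False))
print([p for p in projects if x[p].value() == 1])   # selected projects
```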
Contributors: Duarte, Brett Marc (Author) / Fowler, John W (Thesis advisor) / Montgomery, Douglas C. (Thesis advisor) / Shunk, Dan (Committee member) / Borror, Connie (Committee member) / Konopka, John (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
This dissertation presents methods for the evaluation of ocular surface protection during natural blink function. The evaluation of ocular surface protection is especially important in the diagnosis of dry eye and in evaluating dry eye severity in clinical trials. Dry eye is a highly prevalent disease, affecting a large share (between 11% and 22%) of an aging population. There is only one approved therapy, with limited efficacy, which results in a huge unmet need. The reason so few drugs have reached approval is the lack of a recognized therapeutic pathway with reproducible endpoints. While the interplay between blink function and ocular surface protection has long been recognized, all currently used evaluation techniques have addressed blink function in isolation from tear film stability, the gold standard of which is Tear Film Break-Up Time (TFBUT). In the first part of this research, a manual technique for calculating ocular surface protection during natural blink function through video analysis is developed and evaluated for its ability to differentiate between dry eye and normal subjects; the results are compared with those of TFBUT. In the second part, the technique is improved in precision and automated through the use of video analysis algorithms. This software, called the OPI 2.0 System, is evaluated for accuracy and precision, and comparisons are made between the OPI 2.0 System and other currently recognized dry eye diagnostic techniques (e.g., TFBUT). In the third part, the OPI 2.0 System is deployed to evaluate subjects before, immediately after, and 30 minutes after exposure to a controlled adverse environment (CAE); once again the results are compared and contrasted against commonly used dry eye endpoints. The results demonstrate that the evaluation of ocular surface protection using the OPI 2.0 System offers superior accuracy to the current standard, TFBUT.
Contributors: Abelson, Richard (Author) / Montgomery, Douglas C. (Thesis advisor) / Borror, Connie (Committee member) / Shunk, Dan (Committee member) / Pan, Rong (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The inherent risk in testing drugs has been hotly debated since the government first started regulating the drug industry in the early 1900s. Who can assume the risks associated with trying new pharmaceuticals is unclear when viewed through society's lens. In the mid-twentieth century, the US Food and Drug Administration (FDA) published several guidance documents encouraging researchers to exclude women from early clinical drug research. The motivation to publish those documents, and the subsequent guidance documents in which the FDA and other regulatory offices established their standpoints on women in drug research, may have been connected to current events at the time. The question of whether women should be involved in drug research is a question of who can assume risk and who is responsible for disseminating specific kinds of information. The problem tends to be framed as one that juxtaposes the health of women and the health of fetuses and sets the two in opposition. That opposition, coupled with the inherent uncertainty in testing drugs, produces a complex set of issues surrounding consent and access to information.
Contributors: Meek, Caroline Jane (Author) / Maienschein, Jane (Thesis director) / Brian, Jennifer (Committee member) / School of Life Sciences (Contributor) / Sanford School of Social and Family Dynamics (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
About one in ten refugees from the American Revolution was African-descended, and unlike many white Loyalists fleeing war in the thirteen mainland North American colonies, black Loyalists were people without a country. Most were fleeing slavery in Virginia or the Carolinas, yet they were not fully able to claim to be British subjects, despite many heeding the call to join British forces. Among the 40,000 Loyalists who departed, around 3,500 black Loyalists evacuated from the newly founded United States between 1776 and 1785. I aim to evaluate the movement patterns and thinking of this particular group, and the choices they ultimately had after the war, as they used Dunmore's Proclamation as a means to freedom. These black Loyalists faced a difficult decision about which identity to side with once they left: these former slaves ultimately had to choose between becoming forced migrants with the losing side of the war or staying with the winning side as people bound by chains. Although a multitude of fascinating tales could be told through the lens of these black Loyalists, one particular family caught my eye during my research. This is the journey of the Fortune family, who chose to run away from American slavery and migrate to Nova Scotia. Their story allows me to analyze the extreme discrimination families met as they fled, the contempt the new colonies felt toward them, as well as the evolution of their societal roles as some of these immigrants integrated into their new country and became accepted as respected individuals. Furthermore, their tale helped me understand what caused some emigrant black Loyalists to stay in Nova Scotia despite the hardships they faced as outsiders unwelcome in the eyes of native white Nova Scotians.
Contributors: Nanez-Krause, Michael L (Author) / Schermerhorn, Calvin J. (Thesis director) / Barnes, Andrew (Committee member) / Historical, Philosophical & Religious Studies (Contributor, Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
Social-emotional learning (SEL) methods are beginning to receive global attention in primary school education, yet the dominant emphasis on implementing these curricula is in high-income, urbanized areas. Consequently, the unique features of developing and integrating such methods in middle- or low-income rural areas are unclear. Past studies suggest that students exposed to SEL programs show an increase in academic performance, improved ability to cope with stress, and better attitudes about themselves, others, and school, but these curricula are designed with an urban focus. The purpose of this study was to conduct a needs-based analysis to investigate components specific to an SEL curriculum contextualized to rural primary schools. A promising organization committed to rural educational development is Barefoot College, located in Tilonia, Rajasthan, India. In partnership with Barefoot, we designed an ethnographic study to identify and describe what teachers and school leaders consider the highest needs related to their students' social and emotional education. To do so, we interviewed 14 teachers and school leaders individually or in a focus group to explore their present understanding of "social-emotional learning" and their perception of their students' social and emotional intelligence. Analysis of this data uncovered common themes among classroom behaviors and prevalent opportunities to address social and emotional well-being among students. These themes translated into the three overarching topics and eight sub-topics explored throughout the curriculum, and these opportunities guided the creation of the 21 modules within it. Through a design-based research methodology, we developed a 40-hour curriculum by implementing its various modules within seven Barefoot classrooms, alongside continuous iteration based on teacher feedback and participant observation. Through this process, we found that student engagement increased during contextualized SEL lessons as opposed to traditional methods. In addition, we found that teachers and students preferred and performed better with an activities-based approach. These findings suggest that rural educators must employ particular teaching strategies when addressing SEL, including localized content and an experiential-learning approach. Teachers reported that as their approach to SEL shifted, they began to unlock the potential to build self-aware, globally-minded students. This study concludes that social and emotional education cannot be treated in a generalized manner, as curriculum development is central to the teaching-learning process.
Contributors: Bucker, Delaney Sue (Author) / Carrese, Susan (Thesis director) / Barab, Sasha (Committee member) / School of Life Sciences (Contributor, Contributor) / School of Civic & Economic Thought and Leadership (Contributor) / School of International Letters and Cultures (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05