Description
Accelerated life testing (ALT) is the process of subjecting a product to stress conditions (temperature, voltage, pressure, etc.) in excess of its normal operating levels in order to accelerate failures. Product failure typically results from multiple stresses acting on the product simultaneously. Multi-stress-factor ALTs are challenging because the number of experiments grows with the number of stress factor-level combinations. Chapter 2 provides an approach for designing ALT plans with multiple stresses that utilizes Latin hypercube designs to reduce the simulation cost without loss of statistical efficiency. A comparison to full grid and large-sample approximation methods illustrates the approach's computational cost savings and its flexibility in determining optimal stress settings with fewer assumptions and more intuitive unit allocations.
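As an illustration of the core idea (this sketch is not taken from the thesis, and the run size and factor count are arbitrary), a Latin hypercube design spreads n runs over k stress factors so that each factor's range is covered in every one of n equal strata exactly once:

```python
import random

def latin_hypercube(n, k, seed=0):
    # Split each of the k dimensions into n equal strata and sample each
    # stratum exactly once; shuffling each column decouples the dimensions.
    rng = random.Random(seed)
    cols = []
    for _ in range(k):
        col = [(i + rng.random()) / n for i in range(n)]
        rng.shuffle(col)
        cols.append(col)
    return list(zip(*cols))

# 10 runs over 2 normalized stress factors (illustrative sizes only)
design = latin_hypercube(n=10, k=2)
```

Each point can then be rescaled to the actual stress ranges of the test; the one-point-per-stratum property is what keeps the design space-filling at a small run size.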

Implicit in the design criteria of current ALT designs is the assumption that the form of the acceleration model is correct. This is an unrealistic assumption in many real-world problems. Chapter 3 provides an approach to optimum ALT design for model discrimination. We utilize the Hellinger distance measure between predictive distributions. The optimal ALT plan at three stress levels was determined and its performance was compared to a good compromise plan, the best traditional plan, and the well-known 4:2:1 compromise test plan. In the case of linear versus quadratic ALT models, the proposed method increased the test plan's ability to distinguish among competing models and provided better guidance as to which model is appropriate for the experiment.
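For discrete predictive distributions, the Hellinger distance has a simple closed form; the thesis works with predictive distributions of lifetimes, but the measure itself can be sketched as follows (illustrative, not the thesis code):

```python
import math

def hellinger(p, q):
    # Hellinger distance between two discrete distributions given as
    # probability vectors over the same support; ranges from 0 (identical)
    # to 1 (disjoint support).
    return math.sqrt(0.5 * sum((math.sqrt(a) - math.sqrt(b)) ** 2
                               for a, b in zip(p, q)))
```

A design that maximizes this distance between the competing models' predictive distributions makes the data most informative about which model generated it.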

Chapter 4 extends the approach of Chapter 3 to sequential ALT model discrimination. An initial experiment is conducted to provide the maximum possible information with respect to model discrimination. The follow-on experiment is then planned by leveraging the most current information to allow for Bayesian model comparison through posterior model probability ratios. Results showed that the performance of the plan is adversely impacted by the amount of censoring in the data. In the case of a linear versus quadratic model form at three levels of constant stress, sequential testing can improve the model recovery rate by approximately 8% when the data are complete, but no apparent advantage to sequential testing was found for right-censored data once censoring exceeds a certain amount.
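The posterior model probability ratio used for the comparison follows directly from Bayes' theorem; a minimal sketch (the log-likelihood values below are hypothetical placeholders, not thesis results):

```python
import math

def posterior_odds(loglik_m1, loglik_m2, prior_m1=0.5, prior_m2=0.5):
    # Posterior model probability ratio P(M1 | data) / P(M2 | data):
    # the Bayes factor (marginal likelihood ratio) times the prior odds.
    bayes_factor = math.exp(loglik_m1 - loglik_m2)
    return bayes_factor * (prior_m1 / prior_m2)

# Hypothetical marginal log-likelihoods for the two competing models:
odds = posterior_odds(-10.0, -12.0)   # > 1 favours model 1
```

A ratio well above 1 after the follow-on experiment indicates the data have discriminated in favour of the first model.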
ContributorsNasir, Ehab (Author) / Pan, Rong (Thesis advisor) / Runger, George C. (Committee member) / Gel, Esma (Committee member) / Kao, Ming-Hung (Committee member) / Montgomery, Douglas C. (Committee member) / Arizona State University (Publisher)
Created2014
Description
No-confounding (NC) designs in 16 runs for 6, 7, and 8 factors are non-regular fractional factorial designs that have been suggested as attractive alternatives to the regular minimum aberration resolution IV designs because they do not completely confound any two-factor interactions with each other. These designs allow for potential estimation of main effects and a few two-factor interactions without the need for follow-up experimentation. Analysis methods for non-regular designs are an area of ongoing research, because standard variable selection techniques such as stepwise regression may not always be the best approach. The current work investigates the use of the Dantzig selector for analyzing no-confounding designs. Through a series of examples, it shows that this technique is very effective for identifying the set of active factors in no-confounding designs when there are three or four active main effects and up to two active two-factor interactions.
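In practice the Dantzig selector is solved as a linear program, but its formulation can be illustrated by brute force on a tiny hypothetical problem (the design matrix and grid below are illustrative, not from the thesis): minimise the l1 norm of the coefficients subject to a bound on the maximum absolute correlation between the predictors and the residuals.

```python
import itertools

def dantzig_selector_grid(X, y, lam, grid):
    # Brute-force the Dantzig selector on a small grid of coefficient values:
    # minimise sum_j |beta_j| subject to max_j |X^T (y - X beta)|_j <= lam.
    n, p = len(X), len(X[0])
    best, best_l1 = None, float("inf")
    for beta in itertools.product(grid, repeat=p):
        resid = [y[i] - sum(X[i][j] * beta[j] for j in range(p))
                 for i in range(n)]
        corr = [abs(sum(X[i][j] * resid[i] for i in range(n)))
                for j in range(p)]
        if max(corr) <= lam:
            l1 = sum(abs(b) for b in beta)
            if l1 < best_l1:
                best, best_l1 = beta, l1
    return best

# Toy 2-predictor problem: identity design, only the first effect active.
X = [[1.0, 0.0], [0.0, 1.0]]
y = [1.0, 0.0]
beta_hat = dantzig_selector_grid(X, y, lam=0.2, grid=[0.0, 0.25, 0.5, 0.75, 1.0])
```

The l1 objective is what drives inactive coefficients to zero, which is why the method suits screening designs where only a few factors are expected to be active.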

To evaluate the performance of the Dantzig selector, a simulation study was conducted and the results were analyzed based on the percentage of type II errors. In addition, an alternative to the 6-factor NC design, called the alternate no-confounding design in six factors, is introduced in this study. The performance of this alternate NC design is then evaluated using the Dantzig selector as the analysis method. Lastly, a section is dedicated to comparing the performance of the NC-6 and alternate NC-6 designs.
ContributorsKrishnamoorthy, Archana (Author) / Montgomery, Douglas C. (Thesis advisor) / Borror, Connie (Thesis advisor) / Pan, Rong (Committee member) / Arizona State University (Publisher)
Created2014
Description
Manufacturing tolerance charts are widely used for manufacturing tolerance transfer, but they are limited to one dimension. Some research has been undertaken on three-dimensional geometric tolerances, but it remains too theoretical and is not yet ready for operator-level usage. In this research, a new three-dimensional model for tolerance transfer in manufacturing process planning is presented that is user friendly in the sense that it is built upon Coordinate Measuring Machine (CMM) readings, which are readily available in any decent manufacturing facility. This model can handle datum reference changes between non-orthogonal datums (squeezed datums), non-linearly oriented datums (twisted datums), etc. A graph-theoretic approach based on ACIS, C++, and MFC is laid out to facilitate implementation and automation of the model. A totally new approach to determining dimensions and tolerances for the manufacturing process plan is also presented. Secondly, a new statistical model for tolerance analysis based upon the joint probability distribution of trivariate normally distributed variables is presented. 4-D probability maps have been developed in which the probability value of a point in space is represented by the size of the marker and its associated color. Points inside the part map represent the pass percentage for manufactured parts. The effect of refinement with form and orientation tolerances is highlighted by comparing the resulting pass percentage with the pass percentage for size tolerance only. Delaunay triangulation and ray tracing algorithms have been used to automate the process of identifying the points inside and outside the part map. Proof-of-concept software has been implemented to demonstrate this model and to determine pass percentages for various cases.
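The pass-percentage idea can be sketched with a much-simplified Monte Carlo stand-in (the thesis uses the full trivariate joint distribution with Delaunay triangulation and ray tracing; this sketch assumes independent deviations and a purely cuboidal size-tolerance zone, and all numbers are illustrative):

```python
import random

def pass_percentage(n, sigma, tol, seed=1):
    # Sample three independent normal deviations per simulated part and
    # count the parts whose deviations all fall inside the symmetric
    # tolerance zone +/- tol. (Illustrative simplification: the thesis
    # model allows correlated trivariate deviations and non-cuboid maps.)
    rng = random.Random(seed)
    passed = 0
    for _ in range(n):
        if all(abs(rng.gauss(0.0, sigma)) <= tol for _ in range(3)):
            passed += 1
    return passed / n

rate = pass_percentage(n=20000, sigma=1.0, tol=3.0)
```

Tightening `tol` (refinement with form and orientation tolerances) lowers the estimated pass percentage, which is the comparison the abstract describes.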
The model is further extended to assemblies by employing convolution algorithms on two trivariate statistical distributions to arrive at the statistical distribution of the assembly. A map generated using Minkowski sum techniques on the individual part maps is superimposed on the probability point cloud resulting from the convolution. Delaunay triangulation and ray tracing algorithms are then employed to determine the assembleability percentages for the assembly.
ContributorsKhan, M Nadeem Shafi (Author) / Phelan, Patrick E (Thesis advisor) / Montgomery, Douglas C. (Committee member) / Farin, Gerald (Committee member) / Roberts, Chell (Committee member) / Henderson, Mark (Committee member) / Arizona State University (Publisher)
Created2011
Description

Most asteroids originated in larger parent bodies that underwent accretion and heating during the first few million years of the solar system. We investigated the parent body of the S-type asteroid 25143 Itokawa by developing a computational model that approximates the thermal evolution of an early solar system body. We compared known constraints on Itokawa’s thermal history to simulations of its parent body and constrained its time of formation to between 1.6 and 2.5 million years after the beginning of the solar system, though certain details could allow for even earlier or later formation. These results stress the importance of precise data on the material properties of asteroids and meteorites for placing better constraints on the histories of their parent bodies. Additional mathematical and computational details are discussed, and the full code and data are made available online.
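The general shape of such a model can be sketched with a toy explicit finite-difference scheme: conductive cooling of a small rocky body heated internally by a decaying radiogenic source with a 26Al-like half-life. Every parameter value below is an illustrative placeholder, not Itokawa-specific data, and the body is treated as a 1-D slab rather than a sphere for simplicity.

```python
import math

def thermal_history(nr=30, dt_yr=100.0, t_end_yr=2.0e6, kappa=1.0e-6,
                    radius=5000.0, q0=3.0e-11, half_life_yr=7.17e5,
                    t_surface=250.0):
    # Toy model: 1-D conduction with a decaying internal heat source.
    # All parameter values are illustrative placeholders, not thesis data.
    yr = 3.156e7                      # seconds per year
    dr = radius / nr
    dt = dt_yr * yr
    alpha = kappa * dt / dr ** 2      # explicit scheme needs alpha <= 0.5
    temp = [t_surface] * (nr + 1)
    history, t = [], 0.0
    for _ in range(int(t_end_yr / dt_yr)):
        heat = q0 * math.exp(-math.log(2.0) * t / (half_life_yr * yr))  # K/s
        new = temp[:]
        for i in range(1, nr):
            new[i] = (temp[i] + alpha * (temp[i + 1] - 2 * temp[i] + temp[i - 1])
                      + heat * dt)
        new[0] = new[1]               # symmetry (insulated) at the centre
        new[nr] = t_surface           # fixed surface temperature
        temp = new
        t += dt
        history.append(temp[0])       # track the centre temperature
    return history
```

Comparing the simulated peak interior temperature against meteoritic constraints, over a sweep of formation times, is the kind of comparison the abstract describes.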

ContributorsHallstrom, Jonas (Author) / Bose, Maitrayee (Thesis director) / Beckstein, Oliver (Committee member) / Barrett, The Honors College (Contributor) / Department of Physics (Contributor) / Materials Science and Engineering Program (Contributor)
Created2023-05
Description

This creative project develops an environment in which three species inhabit a shared land and models the movement of the creatures to determine their survival rates over time under specific conditions. The three species modelled include a predator species and a prey species, both with movement capabilities, as well as a stagnant fruit species. A variety of configurable variables can be used to modify and control the simulation to observe how the resulting population charts change. The big difference between this project and the usual approach to simulating a predation relationship is that the creatures themselves are created and their movement is simulated in the virtual environment, with population counts following from that, rather than integrating differential equations that relate the population sizes of the two species and track the populations but not the creatures themselves. Because of this difference, my simulation is not meant to handle all the complexities of real-world life; instead, it is intended as a simplified approach to simulating creatures' lives that conveys the idea of a real predation relationship. Thus, the main objective of my simulation is to produce data representative of real-world predator-prey relationships, with the overall cyclical pattern observed in nature achieved through simulating creature movement and life itself rather than estimating population size changes.
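The agent-based (rather than differential-equation) approach can be sketched as follows. These rules and rates are invented for illustration and are not the project's actual mechanics: creatures random-walk on a torus grid, a predator sharing a cell with prey eats it and may reproduce, unfed predators may starve, and prey reproduce at a fixed rate.

```python
import random

def simulate(steps=50, size=20, n_pred=15, n_prey=60, seed=2):
    # Minimal agent-based predation sketch; rules and rates are illustrative.
    rng = random.Random(seed)
    pred = [(rng.randrange(size), rng.randrange(size)) for _ in range(n_pred)]
    prey = [(rng.randrange(size), rng.randrange(size)) for _ in range(n_prey)]
    def walk(p):
        x, y = p
        return ((x + rng.choice([-1, 0, 1])) % size,
                (y + rng.choice([-1, 0, 1])) % size)
    history = []
    for _ in range(steps):
        pred = [walk(p) for p in pred]
        prey = [walk(p) for p in prey]
        cells = {}
        for i, p in enumerate(prey):
            cells.setdefault(p, []).append(i)
        eaten, survivors = set(), []
        for p in pred:
            targets = [i for i in cells.get(p, []) if i not in eaten]
            if targets:                         # predator eats one prey
                eaten.add(targets[0])
                survivors.append(p)
                if rng.random() < 0.3:          # fed predators may reproduce
                    survivors.append(p)
            elif rng.random() < 0.85:           # unfed predators may starve
                survivors.append(p)
        pred = survivors
        prey = [p for i, p in enumerate(prey) if i not in eaten]
        prey += [p for p in prey if rng.random() < 0.15]  # prey reproduce
        history.append((len(pred), len(prey)))
    return history
```

Plotting the two counts per step is what produces the cyclical population charts the project aims for: prey booms feed predator booms, which then crash the prey.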

ContributorsPerry, Jordan (Author) / Burger, Kevin (Thesis director) / Miller, Phillip (Committee member) / Barrett, The Honors College (Contributor) / Department of Physics (Contributor) / Computer Science and Engineering Program (Contributor)
Created2023-05
Description

We implemented the well-known Ising model in one dimension as a computer program and simulated its behavior with four algorithms: (i) the seminal Metropolis algorithm; (ii) the microcanonical algorithm described by Creutz in 1983; (iii) a variation on Creutz’s time-reversible algorithm allowing for bonds between spins to change dynamically; and (iv) a combination of the latter two algorithms in a manner reflecting the different timescales on which these two processes occur (“freezing” the bonds in place for part of the simulation). All variations on Creutz’s algorithm were symmetrical in time, and thus reversible. The first three algorithms all favored low-energy states of the spin lattice and generated the Boltzmann energy distribution after reaching thermal equilibrium, as expected, while the last algorithm broke from the Boltzmann distribution while the bonds were “frozen.” The interpretation of this result as a net increase to the system’s total entropy is consistent with the second law of thermodynamics, which leads to the relationship between maximum entropy and the Boltzmann distribution.
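Algorithm (i) can be sketched compactly. This is a generic 1-D Metropolis implementation, not the project's code; the chain length, temperature, and sweep count are illustrative.

```python
import math
import random

def metropolis_ising_1d(n=100, beta=1.0, sweeps=200, seed=3):
    # Metropolis sampling of the 1-D Ising chain with periodic boundaries,
    # E = -J * sum_i s_i * s_{i+1} with J = 1; returns energy per spin.
    rng = random.Random(seed)
    s = [rng.choice([-1, 1]) for _ in range(n)]
    for _ in range(sweeps * n):
        i = rng.randrange(n)
        # Energy cost of flipping spin i (periodic neighbours):
        dE = 2 * s[i] * (s[(i - 1) % n] + s[(i + 1) % n])
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            s[i] = -s[i]
    return sum(-s[i] * s[(i + 1) % n] for i in range(n)) / n

energy = metropolis_ising_1d()
```

The accept-or-reject rule with probability exp(-beta*dE) is what makes the chain converge to the Boltzmann distribution, the equilibrium behaviour the first three algorithms share.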

ContributorsLewis, Aiden (Author) / Chamberlin, Ralph (Thesis director) / Beckstein, Oliver (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Physics (Contributor)
Created2023-05
Description
In this project, we created a code that simulates the dynamics of a three-site Hubbard model ring connected to an infinite dissipative bath and driven by an electric field. We utilized the master equation approach, which may one day be implemented efficiently on a quantum computer. For now, we used classical computing to model one of the simplest nontrivial driven dissipative systems. This serves as a verification of the master equation method and a baseline to test against once we are able to implement it on a quantum computer. In this report, we mainly focus on characterizing the DC component of the current around the ring. We observe several expected characteristics of this DC current, including an inverse-square tail at large values of the electric field and a linear response region at small values of the electric field.
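The steady-state structure of a master equation can be illustrated with a much-simplified classical analogue (the thesis uses a quantum master equation; this toy three-state ring with field-biased hop rates is purely illustrative): solve W p = 0 for the stationary occupations, then read off the DC current across a bond.

```python
def solve_linear(A, b):
    # Gaussian elimination with partial pivoting for small dense systems.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0.0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def steady_state(W):
    # Steady state of dp/dt = W p: solve W p = 0 with the normalisation
    # sum(p) = 1 replacing one (redundant) balance equation.
    n = len(W)
    A = [row[:] for row in W]
    A[-1] = [1.0] * n
    b = [0.0] * (n - 1) + [1.0]
    return solve_linear(A, b)

# Toy classical three-site ring; rates are illustrative, not thesis values.
kp, km = 2.0, 1.0                 # hop rates with / against the field
W = [[0.0] * 3 for _ in range(3)]
for j in range(3):
    W[(j + 1) % 3][j] += kp       # hop j -> j+1
    W[(j - 1) % 3][j] += km       # hop j -> j-1
    W[j][j] -= kp + km
p = steady_state(W)
current = p[0] * kp - p[1] * km   # net DC current across one bond
```

Sweeping the rate bias and recording the steady-state current is the classical counterpart of mapping the ring's DC current versus electric field.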
ContributorsJohnson, Michael (Author) / Chamberlin, Ralph (Thesis director) / Ritchie, Barry (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Department of Physics (Contributor) / Barrett, The Honors College (Contributor)
Created2020-05
Description

The goal of this project was to develop a prototype for an educational tool that helps users understand how the voting system deployed by a government can affect the outcomes of elections. The tool was developed in Java SE and consists of a model for simulating elections under various voting systems, a variety of fairness measures, and educational and explanatory material. While a completed version of this tool would ideally be fully self-contained, easily accessible in-browser, and provide detailed visualizations of the simulated elections, the current prototype consists of a GitHub repository containing the code, with the educational material and explanations contained within the thesis paper. Ultimately, the goal of this project was to be a stepping stone on the path to a tool that instills a measure of systemic skepticism in the user: to give them cause to question why our systems are built the way they are, and reasons to believe that they could be changed for the better. In undertaking this project, I hope to help provide people with the political education needed to make informed decisions about how they want their government to function. The GitHub repository containing all the code can be found at https://github.com/SpencerDiamond/Votes_that_Count
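The central point, that the same ballots can produce different winners under different voting systems, can be sketched in a few lines (in Python here rather than the project's Java; the ballot profile is a standard textbook construction, not data from the project):

```python
from collections import Counter

def plurality(ballots):
    # Winner = candidate with the most first-choice votes.
    return Counter(b[0] for b in ballots).most_common(1)[0][0]

def instant_runoff(ballots):
    # Repeatedly eliminate the candidate with the fewest first choices,
    # transferring those ballots to each voter's next surviving preference,
    # until some candidate holds a majority of the ballots.
    remaining = {c for b in ballots for c in b}
    while True:
        tally = Counter({c: 0 for c in remaining})
        for b in ballots:
            tally[next(c for c in b if c in remaining)] += 1
        top, votes = tally.most_common(1)[0]
        if 2 * votes > len(ballots) or len(remaining) == 1:
            return top
        remaining.discard(min(tally, key=tally.get))

# 20 ranked ballots: A leads on first choices, but B's supporters prefer C.
ballots = ([["A", "B", "C"]] * 8 +
           [["B", "C", "A"]] * 5 +
           [["C", "B", "A"]] * 7)
```

Here plurality elects A while instant-runoff elects C from identical ballots, which is exactly the kind of divergence a fairness measure in the tool would flag.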

ContributorsDiamond, Spencer (Author) / Sarjoughian, Hessam (Thesis director) / Hines, Taylor (Committee member) / Barrett, The Honors College (Contributor) / Department of Physics (Contributor) / Department of English (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created2022-05