Matching Items (8)
Description
Random Forests is a statistical learning method that has been proposed for propensity score estimation models involving complex interactions among the covariates, nonlinear relationships, or both. In this dissertation I conducted a simulation study to examine the effects of three Random Forests model specifications in propensity score analysis. The results suggested that, depending on the nature of the data, optimal specification of (1) the decision rule used to select the covariate and its split value in a classification tree, (2) the number of covariates randomly sampled for selection, and (3) the method of estimating Random Forests propensity scores could potentially produce an unbiased average treatment effect estimate after propensity score weighting by the odds. Compared to a logistic regression estimation model using the true propensity score model, Random Forests had the additional advantage of producing an unbiased estimated standard error and correct statistical inference for the average treatment effect. The relationship between balance on the covariates' means and the bias of the average treatment effect estimate was examined both within and between conditions of the simulation. Within conditions, across repeated samples there was no noticeable correlation between the covariates' mean differences and the magnitude of bias in the average treatment effect estimate for covariates that were imbalanced before adjustment. Between conditions, small mean differences of covariates after propensity score adjustment were not sensitive enough to identify the optimal Random Forests model specification for propensity score analysis.
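
For intuition, the weighting-by-the-odds adjustment described above can be sketched as follows, assuming scikit-learn; the function and variable names (att_weighting_by_odds, X, treat, y) are hypothetical, and this is an illustration of the estimator, not the dissertation's simulation code. The three model specifications studied here map roughly onto RandomForestClassifier arguments such as criterion and max_features.

```python
# A minimal sketch, assuming scikit-learn: Random Forests propensity
# scores followed by weighting by the odds for an ATT-style estimate.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def att_weighting_by_odds(X, treat, y, n_trees=500, seed=0):
    """treat is a 0/1 array of treatment indicators, y the outcome."""
    rf = RandomForestClassifier(n_estimators=n_trees, random_state=seed)
    rf.fit(X, treat)
    e = np.clip(rf.predict_proba(X)[:, 1], 0.01, 0.99)  # propensity scores
    # Weighting by the odds: treated units get weight 1,
    # control units get weight e / (1 - e).
    w = np.where(treat == 1, 1.0, e / (1.0 - e))
    treated, control = treat == 1, treat == 0
    return y[treated].mean() - np.average(y[control], weights=w[control])
```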
Contributors: Cham, Hei Ning (Author) / Tein, Jenn-Yun (Thesis advisor) / Enders, Stephen G (Thesis advisor) / Enders, Craig K. (Committee member) / Mackinnon, David P (Committee member) / Arizona State University (Publisher)
Created: 2013

Description
New technologies enable the exploration of space, high-fidelity defense systems, lightning-fast intercontinental communication systems, and medical technologies that extend and improve patient lives. The basis for these technologies is high-reliability electronics devised to meet stringent design goals and to operate consistently for many years deployed in the field. An ongoing concern for engineers is the consequence of ionizing radiation exposure, specifically total dose effects. For many of these applications there is a likelihood of exposure to radiation, which can result in device degradation and potentially failure. While total dose effects and the resulting degradation are a well-studied field, and methodologies to help mitigate degradation have been developed, there is still a need for simulation techniques that help designers understand total dose effects within their designs. To that end, the work presented here details simulation techniques to analyze as well as predict the total dose response of a circuit. In this dissertation, total dose effects in CMOS technology are broken into two sub-categories: intra-device effects, which degrade the performance of both n-channel and p-channel transistors, and inter-device effects, which result in loss of device isolation. Multiple case studies are presented for which total dose degradation is of concern. Through the simulation techniques, the individual device and circuit responses are modeled post-irradiation. The use of these simulation techniques allows circuit designers to predict total dose effects, enabling focused design changes that increase the radiation tolerance of high-reliability electronics.
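
As a first-order illustration of an intra-device total dose effect, the textbook threshold-voltage shift caused by oxide-trapped charge can be evaluated directly. This is a classic approximation, not a technique taken from this dissertation, and the numbers below are assumed example values.

```python
# First-order intra-device TID illustration: dVth = -q * Not / Cox.
# All numeric inputs are assumed example values, not dissertation data.
EPS_OX = 3.45e-13   # F/cm, permittivity of SiO2
Q = 1.602e-19       # C, elementary charge

def delta_vth(n_ot_cm2, t_ox_cm):
    """Threshold shift (V) for areal trapped-charge density n_ot
    (cm^-2) in an oxide of thickness t_ox (cm)."""
    c_ox = EPS_OX / t_ox_cm            # oxide capacitance per unit area
    return -Q * n_ot_cm2 / c_ox

print(delta_vth(1e12, 10e-7))  # ~ -0.46 V for a 10 nm oxide
```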
Contributors: Schlenvogt, Garrett (Author) / Barnaby, Hugh (Thesis advisor) / Goodnick, Stephen (Committee member) / Vasileska, Dragica (Committee member) / Holbert, Keith E. (Committee member) / Arizona State University (Publisher)
Created: 2014

Description
Accelerated life testing (ALT) is the process of subjecting a product to stress conditions (temperature, voltage, pressure, etc.) in excess of its normal operating levels in order to accelerate failures. Product failure typically results from multiple stresses acting simultaneously. Multi-stress-factor ALTs are challenging because the stress factor-level combinations that result from the increased number of factors inflate the number of experiments. Chapter 2 provides an approach for designing ALT plans with multiple stresses utilizing Latin hypercube designs, which reduces the simulation cost without loss of statistical efficiency. A comparison with full-grid and large-sample approximation methods illustrates the approach's computational cost gain and its flexibility in determining optimal stress settings with fewer assumptions and more intuitive unit allocations.
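
A minimal sketch of the Latin hypercube idea follows, assuming SciPy's qmc module (SciPy >= 1.7): sample candidate multi-stress test conditions in the unit cube and scale them to stress ranges. The ranges and sample size below are hypothetical, not those used in Chapter 2.

```python
# Sketch: Latin hypercube sampling of candidate multi-stress ALT settings.
from scipy.stats import qmc

ranges = {"temperature_C": (60.0, 120.0),   # hypothetical stress ranges
          "voltage_V":     (10.0, 20.0),
          "pressure_kPa":  (100.0, 300.0)}

sampler = qmc.LatinHypercube(d=len(ranges), seed=1)
unit_points = sampler.random(n=20)            # 20 candidate test conditions
lows = [lo for lo, hi in ranges.values()]
highs = [hi for lo, hi in ranges.values()]
plan = qmc.scale(unit_points, lows, highs)    # map to the stress ranges
print(plan[:3])
```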

Implicit in the design criteria of current ALT designs is the assumption that the form of the acceleration model is correct. This is an unrealistic assumption in many real-world problems. Chapter 3 provides an approach to optimum ALT design for model discrimination. We utilize the Hellinger distance measure between predictive distributions. The optimal ALT plan at three stress levels was determined, and its performance was compared with a good compromise plan, the best traditional plan, and the well-known 4:2:1 compromise test plan. In the case of linear versus quadratic ALT models, the proposed method increased the test plan's ability to distinguish among competing models and provided better guidance as to which model is appropriate for the experiment.
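
For intuition, the squared Hellinger distance has a closed form when the two predictive distributions are normal; whether Chapter 3 works with normal predictive distributions is an assumption of this sketch.

```python
# Squared Hellinger distance between N(mu1, s1^2) and N(mu2, s2^2);
# assumes normal predictive distributions for illustration only.
import math

def hellinger2_normal(mu1, s1, mu2, s2):
    v = s1**2 + s2**2
    return 1.0 - math.sqrt(2.0 * s1 * s2 / v) * math.exp(
        -0.25 * (mu1 - mu2)**2 / v)

# Identical predictions give 0; strongly diverging predictions approach 1.
print(hellinger2_normal(5.0, 1.0, 5.0, 1.0))   # 0.0
print(hellinger2_normal(5.0, 1.0, 9.0, 1.0))   # ~0.86
```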

Chapter 4 extends the approach of Chapter 3 to sequential model discrimination in ALT. An initial experiment is conducted to provide the maximum possible information with respect to model discrimination. The follow-on experiment is then planned by leveraging the most current information, allowing Bayesian model comparison through posterior model probability ratios. Results showed that plan performance is adversely impacted by the amount of censoring in the data. In the case of a linear versus quadratic model form at three levels of constant stress, sequential testing can improve the model recovery rate by approximately 8% when the data are complete, but no apparent advantage to sequential testing was found for right-censored data once censoring exceeds a certain amount.
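
The Bayesian comparison step can be sketched as follows: given (log) marginal likelihoods and prior model probabilities, the posterior model probabilities, and hence their ratios, follow directly. The numeric inputs below are hypothetical.

```python
# Posterior model probabilities from log marginal likelihoods and priors.
import math

def posterior_model_probs(log_ml, priors):
    logs = [lm + math.log(p) for lm, p in zip(log_ml, priors)]
    m = max(logs)                               # stabilize the exponentials
    w = [math.exp(l - m) for l in logs]
    total = sum(w)
    return [x / total for x in w]

# Linear vs. quadratic acceleration model with equal priors (made-up values):
print(posterior_model_probs([-102.3, -104.9], [0.5, 0.5]))  # ~[0.93, 0.07]
```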
Contributors: Nasir, Ehab (Author) / Pan, Rong (Thesis advisor) / Runger, George C. (Committee member) / Gel, Esma (Committee member) / Kao, Ming-Hung (Committee member) / Montgomery, Douglas C. (Committee member) / Arizona State University (Publisher)
Created: 2014

Description
No-confounding (NC) designs in 16 runs for 6, 7, and 8 factors are non-regular fractional factorial designs that have been suggested as attractive alternatives to the regular minimum aberration resolution IV designs because they do not completely confound any two-factor interactions with each other. These designs allow for potential estimation of main effects and a few two-factor interactions without the need for follow-up experimentation. Analysis methods for non-regular designs are an area of ongoing research, because standard variable selection techniques such as stepwise regression may not always be the best approach. The current work investigates the use of the Dantzig selector for analyzing no-confounding designs. Through a series of examples it shows that this technique is very effective for identifying the set of active factors in no-confounding designs when there are three or four active main effects and up to two active two-factor interactions.
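
The Dantzig selector solves min ||b||_1 subject to ||X'(y - Xb)||_inf <= delta, which is a linear program. Below is a minimal sketch of that standard formulation using SciPy; it illustrates the estimator itself and is not the analysis code used in this work.

```python
# Dantzig selector as a linear program, assuming numpy and scipy.
# Solves: min ||b||_1  s.t.  ||X'(y - X b)||_inf <= delta.
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(X, y, delta):
    n, p = X.shape
    XtX, Xty = X.T @ X, X.T @ y
    I, Z = np.eye(p), np.zeros((p, p))
    # Variables z = [b, u]; minimize sum(u) with |b_j| <= u_j.
    c = np.concatenate([np.zeros(p), np.ones(p)])
    A = np.block([[ I,  -I],     # b - u <= 0
                  [-I,  -I],     # -b - u <= 0
                  [ XtX, Z],     # X'X b <= X'y + delta
                  [-XtX, Z]])    # -X'X b <= -X'y + delta
    b_ub = np.concatenate([np.zeros(2 * p), Xty + delta, -Xty + delta])
    bounds = [(None, None)] * p + [(0, None)] * p
    res = linprog(c, A_ub=A, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p]                # coefficient estimates
```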

To evaluate the performance of the Dantzig selector, a simulation study was conducted and the results were analyzed based on the percentage of type II errors. In addition, an alternative to the 6-factor NC design, called the Alternate No-confounding design in six factors, is introduced in this study. The performance of this Alternate NC design is then evaluated using the Dantzig selector as the analysis method. Lastly, a section is dedicated to comparing the performance of the NC-6 and Alternate NC-6 designs.
Contributors: Krishnamoorthy, Archana (Author) / Montgomery, Douglas C. (Thesis advisor) / Borror, Connie (Thesis advisor) / Pan, Rong (Committee member) / Arizona State University (Publisher)
Created: 2014

Description
Thermal effects in nano-scale devices were reviewed, and modeling methodologies to deal with this issue were discussed. The phonon energy balance equations model, one of the important prior works on modeling heating effects in nano-scale devices, was derived. A detailed description was then given of the Monte Carlo (MC) solution of the phonon Boltzmann transport equation. A phonon MC solver was developed as part of this thesis. Simulation results for the thermal conductivity of bulk Si show good agreement with theoretical and experimental values from the literature.
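
As a back-of-the-envelope cross-check of a bulk Si result, kinetic theory gives kappa = (1/3) C v L. The inputs below are representative room-temperature literature values assumed for illustration, not parameters extracted from this thesis.

```python
# Kinetic-theory estimate of bulk Si thermal conductivity at ~300 K.
# All three inputs are assumed representative literature values.
C = 1.66e6    # J/(m^3 K), volumetric heat capacity of Si
v = 6.4e3     # m/s, average phonon group (sound) velocity
L = 43e-9     # m, effective phonon mean free path

kappa = C * v * L / 3.0
print(f"kappa ~ {kappa:.0f} W/(m K)")   # ~150, close to measured bulk Si
```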
Contributors: Yoo, Seung Kyung (Author) / Vasileska, Dragica (Thesis advisor) / Ferry, David (Committee member) / Goodnick, Stephen (Committee member) / Arizona State University (Publisher)
Created: 2015

Description
In recent years, there has been increased interest in the Indium Gallium Nitride (InGaN) material system for photovoltaic (PV) applications. The InGaN alloy system has demonstrated high performance for high-frequency power devices as well as for optical light emitters. This material system is also promising for photovoltaic applications due to the broad range of bandgaps of InxGa1-xN alloys, from 0.65 eV (InN) to 3.42 eV (GaN), which covers most of the electromagnetic spectrum from ultraviolet to infrared wavelengths. InGaN's high absorption coefficient, radiation resistance, and thermal stability (operation at temperatures > 450 ℃) make it a suitable PV candidate for hybrid concentrating solar thermal systems as well as other high-temperature applications. This work proposes, via numerical simulation, a high-efficiency InGaN-based two-junction (2J) tandem cell for high-temperature (450 ℃), high-concentration (200X) hybrid concentrating solar thermal (CSP) applications. To address the polarization and band-offset issues of GaN/InGaN hetero-solar cells, band-engineering techniques are adopted and a simple interlayer is proposed at the hetero-interface, rather than an indium-composition grading layer, which is not practical in fabrication. The base absorber thickness and doping were optimized for 1J cell performance, and current matching was achieved for the 2J tandem cell design. The simulations also suggest that the crystalline quality of the nitride material system to date (i.e., short SRH lifetime) is a crucial factor limiting the performance of the designed 2J cell at high temperature. Three pathways to achieve ~25% efficiency are proposed under 450 ℃ and 200X. An anti-reflection coating (ARC) for InGaN solar cell optical management was designed. Finally, an effective mobility model for quantum well solar cells was developed for efficient quasi-bulk simulation.
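
For reference, the alloy bandgap across composition is commonly interpolated from the endpoint values quoted above with a quadratic bowing term; the bowing parameter used below is an assumed literature value, not one taken from this work.

```python
# InxGa1-xN bandgap vs. In fraction x, using the endpoint bandgaps
# quoted in the abstract; the bowing parameter b is an assumed value.
EG_GAN, EG_INN, BOWING = 3.42, 0.65, 1.43   # eV

def eg_ingan(x):
    """Bandgap (eV) of InxGa1-xN for In fraction x in [0, 1]."""
    return x * EG_INN + (1.0 - x) * EG_GAN - BOWING * x * (1.0 - x)

for x in (0.0, 0.2, 0.5, 1.0):
    print(f"x = {x:.1f}: Eg = {eg_ingan(x):.2f} eV")
```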
Contributors: Fang, Yi, Ph.D (Author) / Vasileska, Dragica (Thesis advisor) / Goodnick, Stephen (Thesis advisor) / Ponce, Fernando (Committee member) / Nemanich, Robert (Committee member) / Arizona State University (Publisher)
Created: 2017

Description
Gallium Nitride (GaN) based Current Aperture Vertical Electron Transistors (CAVETs) present many appealing qualities for high-power, high-frequency devices. The wide bandgap and high carrier velocity of GaN make it ideal for withstanding high electric fields and supporting large currents. The vertical topology of the CAVET allows for more efficient die-area utilization, breakdown scaling with the height of the device, and burying high electric fields in the bulk, where they will not charge interface states that can lead to current collapse at higher frequencies.

Though GaN CAVETs are promising new devices, they are expensive to develop due to new or exotic materials and processing steps. As a result, accurate simulation of GaN CAVETs has become critical to the development of new devices. Using Silvaco Atlas 5.24.1.R, best practices were developed for GaN CAVET simulation by recreating the structure and results of the pGaN insulated-gate CAVET presented in chapter 3 of [8].

From the results it was concluded that the best simulation setup for transfer characteristics, output characteristics, and breakdown included the following: for numerical methods, Gummel, Block, Newton, and Trap; for physical models, SRH, Fermi, and Auger, together with Selberherr impact ionization (impact selb); and for mobility, the GANSAT model with manually specified saturation velocity and doping-dependent mobility. Additionally, parametric sweeps showed that, of the parameters tested, the critical CAVET parameters included channel mobility (and thus channel doping), channel thickness, Current Blocking Layer (CBL) doping, gate overlap, and aperture width in rectangular devices or aperture diameter in cylindrical devices.
Contributors: Warren, Andrew (Author) / Vasileska, Dragica (Thesis advisor) / Goodnick, Stephen (Committee member) / Zhao, Yuji (Committee member) / Arizona State University (Publisher)
Created: 2019

Description
The Partition of Variance (POV) method is a simple way to identify large sources of variation in manufacturing systems. The method partitions the variance by estimating the variance of the means (the between variance) and the mean of the variances (the within variance). This project shows that the method correctly identifies the variance source when compared to the ANOVA method. Although the variance estimators deteriorate when varying degrees of non-normality are introduced through simulation, the POV method is shown to be a more stable measure of variance in the aggregate. The POV method also provides non-negative, stable estimates for interaction when compared to the ANOVA method, and it is more stable particularly in small-sample situations. Based on these findings, it is suggested that POV is not a replacement for more complex analysis methods but rather a supplement to them. POV is ideal for preliminary analysis due to its ease of implementation, its simplicity of interpretation, and its lack of dependency on statistical analysis packages or statistical knowledge.
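
A minimal sketch of the POV computation as described above, assuming numpy: the between variance is the variance of the group means, and the within variance is the mean of the group variances. The grouping (lots) and data below are hypothetical.

```python
# Sketch of the Partition of Variance idea on hypothetical data.
import numpy as np

def partition_of_variance(groups):
    """groups: list of 1-D arrays, one per source level (e.g., lots)."""
    means = np.array([g.mean() for g in groups])
    variances = np.array([g.var(ddof=1) for g in groups])
    between = means.var(ddof=1)       # variance of the means
    within = variances.mean()         # mean of the variances
    return between, within

rng = np.random.default_rng(0)
lots = [rng.normal(10 + shift, 1.0, size=30) for shift in (0.0, 0.5, 1.0)]
print(partition_of_variance(lots))
```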
Contributors: Little, David John (Author) / Borror, Connie (Thesis advisor) / Montgomery, Douglas C. (Committee member) / Broatch, Jennifer (Committee member) / Arizona State University (Publisher)
Created: 2015