This collection includes most of the ASU Theses and Dissertations from 2011 to the present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Each record includes degree information, committee members, an abstract, and any supporting data or media.

In addition to the electronic theses in the ASU Digital Repository, ASU Theses and Dissertations are also available in the ASU Library Catalog.

Dissertations and theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.

Displaying 1 - 10 of 651
Description
Recently, the use of zinc oxide (ZnO) nanowires as an interphase in composite materials has been demonstrated to increase the interfacial shear strength between carbon fiber and an epoxy matrix. In this research work, the strong adhesion between ZnO and carbon fiber is investigated to elucidate the interactions at the interface that result in high interfacial strength. First, molecular dynamics (MD) simulations are performed to calculate the adhesive energy between bare carbon and ZnO. Because the carbon fiber surface carries oxygen functional groups, these were modeled as well; the MD simulations showed that ketones interact strongly with ZnO, whereas hydroxyl and carboxylic acid groups do not. It was also found that the ketone molecules' ability to change orientation facilitated their interactions with the ZnO surface. Experimentally, an atomic force microscope (AFM) was used to measure the adhesive energy between ZnO and carbon through a lift-off test employing a highly oriented pyrolytic graphite (HOPG) substrate and a ZnO-covered AFM tip. Oxygen functionalization of the HOPG surface was shown to increase the adhesive energy. Additionally, the surface of the ZnO was modified to hold a negative charge, which also increased the adhesive energy. This increase in adhesion resulted from increased induction forces, given the relatively high polarizability of HOPG and the preservation of the charge on the ZnO surface. The additional negative charge can be preserved on the ZnO surface because carbon and ZnO form a Schottky contact, which presents an energy barrier. Other materials with the same ionic properties as ZnO but with higher polarizability also demonstrated good adhesion to carbon. This result substantiates that the induced interaction can be facilitated not only by the polarizability of carbon but also by that of the other material at the interface.
The ability to modify the magnitude of the induced interaction between carbon and an ionic material provides a new route to creating interfaces with controlled interfacial strength.
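The lift-off measurement described above converts a pull-off force into an adhesive energy. The abstract does not specify the contact model used, but as an illustration only, the standard DMT approximation for a spherical tip of radius R gives W = F_pull / (2*pi*R):

```python
import math

def work_of_adhesion_dmt(pull_off_force_n, tip_radius_m):
    """DMT contact model: work of adhesion W = F_pull / (2*pi*R),
    for a rigid spherical tip of radius R on a flat substrate."""
    return pull_off_force_n / (2.0 * math.pi * tip_radius_m)

# Hypothetical values: a 10 nN pull-off force on a 20 nm tip.
w = work_of_adhesion_dmt(10e-9, 20e-9)  # J/m^2
```

The force and radius above are hypothetical, chosen only to show the order of magnitude of such measurements.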
Contributors: Galan Vera, Magdian Ulises (Author) / Sodano, Henry A (Thesis advisor) / Jiang, Hanqing (Committee member) / Solanki, Kiran (Committee member) / Oswald, Jay (Committee member) / Speyer, Gil (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Wind measurements are fundamental inputs for the evaluation of potential energy yield and performance of wind farms. Three-dimensional scanning coherent Doppler lidar (CDL) may provide a new basis for wind farm site selection, design, and control. In this research, CDL measurements obtained from multiple wind energy developments are analyzed and a novel wind farm control approach is modeled. The possibility of using lidar measurements to more fully characterize the wind field is discussed, specifically terrain effects, spatial variation of winds, power density, and the effect of shear at different layers within the rotor-swept area. Various vector retrieval methods have been applied to the lidar data, and results are presented on an elevated terrain-following surface at hub height. The vector retrieval estimates are compared with tower measurements after interpolation to the appropriate level. CDL data are used to estimate the spatial power density at hub height. Since CDL can measure winds at different vertical levels, an approach for estimating wind power density over the rotor-swept area is explored. Sample optimized wind farm layouts based on lidar data and global optimization algorithms, accounting for wake interaction effects, have been explored. An approach to evaluate spatial wind speed and direction estimates from a standard nested Coupled Ocean and Atmosphere Mesoscale Prediction System (COAMPS) model and CDL is presented, and the magnitude of the spatial differences between observations and simulation for wind energy assessment is investigated. Diurnal effects and ramp events as estimated by CDL and COAMPS were intercompared. Novel wind farm control based on incoming wind speed and direction input from CDLs is developed; both yaw and pitch control using scanning CDL for efficient wind farm control are analyzed.
The wind farm control optimizes power production and reduces loads on wind turbines for various lidar wind speed and direction inputs, accounting for wind farm wake losses and wind speed evolution. Several wind farm control configurations were developed for enhanced integrability into the electrical grid. Finally, the value proposition of CDL for a wind farm development, based on uncertainty reduction and return on investment, is analyzed.
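Estimating power density over the rotor-swept area, as described above, is commonly handled by cube-averaging the wind speed over the vertical layers the lidar measures. A minimal sketch of that standard practice (not the thesis's exact method; the layer profile is hypothetical):

```python
def rotor_equivalent_speed(layers):
    """Cube-weighted mean wind speed over the rotor disk.
    layers: list of (area_fraction, wind_speed_m_s); fractions sum to 1."""
    return sum(f * v ** 3 for f, v in layers) ** (1.0 / 3.0)

def power_density(v, rho=1.225):
    """Kinetic power flux of the wind, in W/m^2, at air density rho."""
    return 0.5 * rho * v ** 3

# Hypothetical sheared profile across three rotor layers (bottom to top).
v_eq = rotor_equivalent_speed([(0.25, 7.0), (0.5, 8.0), (0.25, 9.0)])
```

Because power scales with the cube of speed, the sheared profile yields a slightly higher equivalent speed than the 8 m/s hub-height value alone would suggest.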
Contributors: Krishnamurthy, Raghavendra (Author) / Calhoun, Ronald J (Thesis advisor) / Chen, Kangping (Committee member) / Huang, Huei-Ping (Committee member) / Fraser, Matthew (Committee member) / Phelan, Patrick (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Many longitudinal studies, especially clinical trials, suffer from missing data. Most estimation procedures assume that the missing values are ignorable, or missing at random (MAR). However, this assumption is an unrealistic simplification and is implausible in many cases. For example, suppose an investigator is examining the effect of a treatment on depression: subjects meet with doctors on a regular basis and answer questions about recent emotional situations, but patients experiencing severe depression are more likely to miss an appointment, leaving the data missing for that visit. Such data are not missing at random, because the missing mechanism is related to the unobserved responses, and results may be biased if the mechanism is not taken into account. Data are said to be non-ignorably missing if the probabilities of missingness depend on quantities that might not be included in the model. Classical pattern-mixture models for non-ignorable missing values are widely used for longitudinal data analysis because they do not require explicit specification of the missing mechanism: the data are stratified according to the missing patterns and a model is specified for each stratum. However, this usually results in under-identifiability, because many stratum-specific parameters must be estimated even though the eventual interest is usually in the marginal parameters; pattern-mixture models also have the drawback of usually requiring a large sample. In this thesis, two studies are presented. The first study is motivated by an open problem in pattern-mixture models. Simulation studies in this part show that the information in the missing data indicators can be well summarized by a simple continuous latent structure, indicating that a large number of missing data patterns may be accounted for by a simple latent factor.
The simulation findings from the first study lead to a novel model, the continuous latent factor model (CLFM). The second study develops the CLFM, which models the joint distribution of missing values and longitudinal outcomes and remains feasible even for small-sample applications. The detailed estimation theory, including estimation techniques from both frequentist and Bayesian perspectives, is presented. Model performance is evaluated through designed simulations and three applications. The simulation and application settings range from a correctly specified missing data mechanism to a misspecified one and include different sample sizes from longitudinal studies. Among the three applications, an AIDS study includes non-ignorable missing values; the Peabody Picture Vocabulary Test data give no indication of the missing data mechanism and are used for a sensitivity analysis; and the Growth of Language and Early Literacy Skills in Preschoolers with Developmental Speech and Language Impairment study has complete data and is used for a robustness analysis. The CLFM is shown to provide more precise estimators, specifically for intercept- and slope-related parameters, than Roy's latent class model and the classic linear mixed model. This advantage is most pronounced for small samples, where Roy's model has difficulty with estimation convergence. The proposed CLFM is also robust when missing data are ignorable, as demonstrated through the Growth of Language and Early Literacy Skills in Preschoolers study.
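The depression example above can be made concrete with a small simulation (illustrative only; not data or code from the thesis): when the probability of a missed visit grows with the unobserved severity, the complete-case mean understates the true severity.

```python
import random
from statistics import mean

random.seed(7)
true_scores = [random.gauss(50.0, 10.0) for _ in range(20000)]

def miss_prob(y):
    """Probability of a missed visit rises with depression severity (MNAR)."""
    return min(1.0, max(0.0, (y - 50.0) / 40.0))

# Keep only the visits that were not missed.
observed = [y for y in true_scores if random.random() >= miss_prob(y)]
bias = mean(observed) - mean(true_scores)  # negative: severity is underestimated
```

The severe cases drop out of the observed sample, so any analysis that ignores the missingness mechanism is biased toward milder depression.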
Contributors: Zhang, Jun (Author) / Reiser, Mark R. (Thesis advisor) / Barber, Jarrett (Thesis advisor) / Kao, Ming-Hung (Committee member) / Wilson, Jeffrey (Committee member) / St Louis, Robert D. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Product reliability is now a top concern for manufacturers, and customers prefer products that perform well over long periods. Because most products last for years or even decades, accelerated life testing (ALT) is used to estimate product lifetime. Much research has been done in the ALT area, and optimal design for ALT is a major topic. This dissertation consists of three main studies. First, a methodology for finding optimal ALT designs with right censoring and interval censoring is developed; it employs the proportional hazards (PH) model and the generalized linear model (GLM) to simplify the computation. A sensitivity study shows the effects of the parameters on the designs. Second, an extended version of the I-optimal design for ALT is discussed, and a dual-objective design criterion is defined and illustrated with several examples; several graphical tools are also developed for evaluating candidate designs. Finally, model checking designs for the case where more than one model is available are discussed.
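The abstract does not tie itself to a specific stress model, but the standard Arrhenius temperature-acceleration factor illustrates why ALT works at all: testing at elevated temperature compresses years of use-level life into a short test. A minimal sketch, with a hypothetical activation energy and temperatures:

```python
import math

BOLTZMANN_EV_PER_K = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_af(t_use_c, t_stress_c, activation_energy_ev):
    """Arrhenius acceleration factor between use and stress temperatures
    (both given in degrees Celsius)."""
    t_use_k = t_use_c + 273.15
    t_stress_k = t_stress_c + 273.15
    return math.exp(activation_energy_ev / BOLTZMANN_EV_PER_K
                    * (1.0 / t_use_k - 1.0 / t_stress_k))

# Hypothetical: Ea = 0.7 eV, use at 25 C, accelerated test at 85 C.
af = arrhenius_af(25.0, 85.0, 0.7)
```

With these assumed values the factor is on the order of 100, i.e., one hour at 85 C stands in for roughly 100 hours at 25 C.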
Contributors: Yang, Tao (Author) / Pan, Rong (Thesis advisor) / Montgomery, Douglas C. (Committee member) / Borror, Connie (Committee member) / Rigdon, Steve (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Production from a high-pressure gas well at a high production rate risks operating near the choking condition for compressible flow in porous media. The unbounded gas pressure gradient near the point of choking, which is located near the wellbore, generates an effective tensile stress on the porous rock frame. This tensile stress almost always exceeds the tensile strength of the rock, causing tensile failure and leading to wellbore instability. In a porous rock, not all pores are choked at the same flow rate; when even one pore is choked, the flow through the entire porous medium should be considered choked, as the gas pressure gradient at the point of choking becomes singular. This thesis investigates the choking condition for compressible gas flow in a single microscopic pore. Quasi-one-dimensional analysis and axisymmetric numerical simulations of compressible gas flow in a pore-scale varicose tube with a number of bumps are carried out, and the local Mach number and pressure along the tube are computed for flow near the choking condition. The effects of tube length, inlet-to-outlet pressure ratio, and the number and amplitude of the bumps on the choking condition are obtained. These critical values provide guidance for avoiding the choking condition in practice.
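The quasi-one-dimensional analysis above rests on the isentropic area-Mach relation, in which the local area ratio A/A* fixes the Mach number and choking occurs when the narrowest section reaches A = A* (M = 1). A minimal sketch of the subsonic branch (textbook gas dynamics, not the thesis's code):

```python
def area_ratio(mach, gamma=1.4):
    """Isentropic area-Mach relation A/A* for quasi-1D compressible flow."""
    term = (2.0 / (gamma + 1.0)) * (1.0 + (gamma - 1.0) / 2.0 * mach ** 2)
    return term ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))) / mach

def subsonic_mach(a_over_astar, gamma=1.4):
    """Invert the relation on the subsonic branch by bisection."""
    lo, hi = 1e-6, 1.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        # On the subsonic branch, A/A* decreases toward 1 as M -> 1.
        if area_ratio(mid, gamma) > a_over_astar:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For a bump constriction with twice the throat area (A/A* = 2), the subsonic solution is M of roughly 0.3; as the constriction tightens toward A/A* = 1 the local Mach number climbs to 1 and the pore chokes.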
Contributors: Yuan, Jing (Author) / Chen, Kangping (Thesis advisor) / Wang, Liping (Committee member) / Huang, Huei-Ping (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The objective of this thesis was to compare various approaches for classifying `good' and `bad' parts via non-destructive resonance testing methods by collecting and analyzing experimental data in the frequency and time domains. A laser scanning vibrometer was employed to measure the vibrations of samples in order to determine spectral characteristics such as natural frequencies and amplitudes. Statistical pattern recognition tools such as the Hilbert-Huang transform, Fisher's discriminant, and neural networks were used to classify unknown samples as defective or not. In this work, finite element analysis software packages (ANSYS 13.0 and NASTRAN NX8.0) were used to obtain estimates of the resonance frequencies of `good' and `bad' samples. Furthermore, a system identification approach was used to generate Auto-Regressive Moving Average with exogenous input, Box-Jenkins, and Output Error models from experimental data that can be used for classification.
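As one concrete example of the pattern recognition tools named above, Fisher's discriminant for a single spectral feature reduces to the ratio of between-class separation to within-class scatter. A minimal sketch with hypothetical resonance-frequency data (not the thesis's measurements):

```python
def fisher_score(class_a, class_b):
    """1-D Fisher discriminant ratio: squared mean separation divided by
    total within-class scatter. Larger values mean easier separation."""
    def mean(xs):
        return sum(xs) / len(xs)

    def scatter(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs)

    sep = (mean(class_a) - mean(class_b)) ** 2
    return sep / (scatter(class_a) + scatter(class_b))

# Hypothetical natural frequencies (Hz) for good vs. defective parts.
good = [1001.0, 999.5, 1000.5, 1000.0]
bad = [951.0, 949.0, 950.5, 950.0]
```

A defect that shifts the natural frequency well outside the good parts' spread produces a large score, which is exactly the situation a resonance test exploits.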
Contributors: Jameel, Osama (Author) / Redkar, Sangram (Thesis advisor) / Arizona State University (Publisher)
Created: 2013
Description
This work presents two complementary studies that propose heuristic methods to capture characteristics of data using the ensemble learning method of random forests. The first study is motivated by the problem in education of determining teacher effectiveness in student achievement. Value-added models (VAMs), constructed as linear mixed models, use students' test scores as outcome variables and teachers' contributions as random effects to ascribe changes in student performance to the teachers who have taught them. The VAM teacher score is the empirical best linear unbiased predictor (EBLUP). This approach is limited by the adequacy of the assumed model specification with respect to the unknown underlying model. In that regard, this study proposes alternative ways to rank teacher effects that do not depend on a given model, introducing two variable importance measures (VIMs): the node-proportion and the covariate-proportion. These VIMs are novel because they take into account the final configuration of the terminal nodes in the constituent trees of a random forest. In a simulation study, under a variety of conditions, true rankings of teacher effects are compared with estimated rankings obtained from three sources: the newly proposed VIMs, existing VIMs, and EBLUPs from the assumed linear model specification. The newly proposed VIMs outperform all others in various scenarios where the model is misspecified. The second study develops two novel interaction measures, which could be used within, but are not restricted to, the VAM framework. The distribution-based measure is constructed to identify interactions in a general setting where a model specification is not assumed in advance. In turn, the mean-based measure is built to estimate interactions when the model specification is assumed to be linear.
Both measures are unique in their construction; they take into account not only the outcome values, but also the internal structure of the trees in a random forest. In a separate simulation study, under a variety of conditions, the proposed measures are found to identify and estimate second-order interactions.
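The node-proportion and covariate-proportion measures are the thesis's own contribution, but the general idea behind any VIM, scoring a covariate by how much predictions degrade when its information is destroyed, can be sketched with a classical permutation importance on a toy model (an illustrative stand-in only; every name and value here is hypothetical):

```python
import random
from statistics import mean

random.seed(1)

# Toy data: the outcome depends strongly on x1 and only weakly on x2.
data = [(random.random(), random.random()) for _ in range(500)]
y = [3.0 * x1 + 0.3 * x2 + random.gauss(0.0, 0.1) for x1, x2 in data]

def predict(x1, x2):
    """Stand-in for a fitted random forest: here, the generating model."""
    return 3.0 * x1 + 0.3 * x2

def mse(perm_index=None):
    """Mean squared error, optionally after permuting one covariate."""
    rows = list(data)
    if perm_index is not None:
        col = [row[perm_index] for row in rows]
        random.shuffle(col)
        rows = [(c, r[1]) if perm_index == 0 else (r[0], c)
                for c, r in zip(col, rows)]
    return mean((predict(a, b) - t) ** 2 for (a, b), t in zip(rows, y))

base = mse()
imp_x1 = mse(0) - base  # importance = MSE increase when x1 is scrambled
imp_x2 = mse(1) - base
```

Scrambling the strong covariate ruins the predictions far more than scrambling the weak one, which is the ranking behavior a VIM is built to deliver.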
Contributors: Valdivia, Arturo (Author) / Eubank, Randall (Thesis advisor) / Young, Dennis (Committee member) / Reiser, Mark R. (Committee member) / Kao, Ming-Hung (Committee member) / Broatch, Jennifer (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Statistics is taught at every level of education, yet teachers often must assume their students have no knowledge of statistics and start from scratch each time they set out to teach it. The motivation for this experimental study comes from interest in exploring educational applications of augmented reality (AR) delivered via mobile technology, which could potentially provide rich, contextualized learning of concepts in statistics education. This study examined the effects of AR experiences on learning basic statistical concepts. Using a 3 x 2 research design, the study compared the learning gains of 252 undergraduate and graduate students, measured by a pretest and posttest given before and after interacting with one of three types of augmented reality experience: a high-AR experience (interacting with three-dimensional images coupled with movement through a physical space), a low-AR experience (interacting with three-dimensional images without movement), or a no-AR experience (two-dimensional images without movement). Two levels of collaboration (pairs and no pairs) were also included. Additionally, student perceptions of collaboration opportunities and engagement were compared across the six treatment conditions. Other demographic information collected included the students' previous statistics experience and their comfort level with mobile devices. The moderating variable was prior knowledge (high, average, or low) as measured by the pretest score. Taking prior knowledge into account, students with low prior knowledge assigned to either the high- or low-AR experience had statistically significantly higher learning gains than those assigned to the no-AR experience. On the other hand, the results showed no statistically significant difference between students who worked individually and those who worked in pairs.
Students assigned to both the high- and low-AR experiences perceived a statistically significantly higher level of engagement than their no-AR counterparts, and students with low prior knowledge benefited most from the high-AR condition in terms of learning gains. Overall, the AR application provided an effective hands-on experience for working with statistical data. Further research on AR and its relationship to spatial cognition, situated learning, higher-order skill development, performance support, and other classroom applications for learning is still needed.
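The abstract does not state how the pre/post gains were scored, but a common way to compare improvement across students with different starting points is Hake's normalized gain, shown here purely as an illustration:

```python
def normalized_gain(pre, post, max_score=100.0):
    """Hake's normalized gain: the fraction of the possible improvement
    a student actually realized. pre and post are scores on a 0..max test."""
    return (post - pre) / (max_score - pre)

# A student moving from 40 to 70 realizes half the available improvement.
g = normalized_gain(40.0, 70.0)
```

The measure is useful precisely because a 30-point raw gain from 40 and a 10-point raw gain from 80 represent the same fraction of the improvement each student had available.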
Contributors: Conley, Quincy (Author) / Atkinson, Robert K (Thesis advisor) / Nguyen, Frank (Committee member) / Nelson, Brian C (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Smoke entering the flight deck has been an issue for commercial aircraft for many years; the problem for a flight crew is how to mitigate the smoke so that they can safely fly the aircraft. This thesis studies the feasibility of a Negative Pressure System that uses the difference between cabin altitude pressure and outside altitude pressure to remove smoke from the flight deck. Existing procedures call for the crew to descend to a safe level and depressurize the aircraft before taking further action. This process consumes crucial time, degrades the crew's situational awareness, and demands coordination and quick thinking to take manual control of the aircraft, which has become increasingly difficult with the advancement of aircraft automation. Unfortunately, this is the only accepted procedure available to a flight crew. Other products merely displace the smoke, and only after the crew has taken time to set up the smoke displacement unit, with no guarantee that the crew will be able to see or use all of the aircraft's controls. The Negative Pressure System would work automatically, use components similar to those already found on the aircraft, and operate in conjunction with the smoke detection and pressurization systems, so smoke removal can begin without descending to a lower altitude. For this system to work correctly, many factors must be considered. Because flight deck size varies from aircraft to aircraft, the system's ability to efficiently remove smoke is taken into consideration. For the system to be feasible, its cost and weight must also be considered, as the added fuel consumption due to the system's weight may be the limiting factor for installing it on commercial aircraft.
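The pressure differential such a system would exploit is easy to estimate from the International Standard Atmosphere barometric formula. A back-of-the-envelope sketch (the altitudes are typical cruise values chosen for illustration, not the thesis's design point):

```python
def isa_pressure_pa(altitude_m):
    """ISA troposphere pressure (valid below 11 km):
    p = p0 * (1 - L*h/T0) ** (g*M/(R*L))."""
    return 101325.0 * (1.0 - 0.0065 * altitude_m / 288.15) ** 5.2561

cabin_pa = isa_pressure_pa(2438.0)     # ~8,000 ft typical cabin altitude
outside_pa = isa_pressure_pa(10668.0)  # ~35,000 ft typical cruise altitude
delta_pa = cabin_pa - outside_pa       # differential available to drive smoke out
```

At these assumed altitudes the cabin sits roughly 50 kPa above ambient, which is the differential the crew currently gives up by descending and depressurizing first.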
Contributors: Davies, Russell (Author) / Rogers, Bradley (Thesis advisor) / Palmgren, Dale (Committee member) / Rajadas, John (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The friction condition is an important factor in controlling the compression process in metal forming. Friction calibration maps (FCMs) are widely used to estimate friction between the workpiece and the die. However, in standard FEA the friction condition is defined by a friction coefficient (µ), while the FCM uses a constant shear friction factor (m) to describe the friction condition. The purpose of this research is to find a method to convert the m factor to a µ factor, so that FEA can be used to simulate ring tests with µ. The research is carried out with FEA and design of experiments (DOE). FEA is used to simulate the ring compression test; a 2D quarter model is adopted as the geometry, and a bilinear material model is used in the nonlinear FEA. After the model is established, it is validated by examining the influence of Poisson's ratio on the ring compression test; the FEA model is shown to be valid, especially when Poisson's ratio is set close to 0.5. Material folding is present in this model, and µ factors are applied to all surfaces of the ring. It is also found that the reduction ratio of the ring and the slopes of the FCM can be used to describe the deformation of the ring specimen. With the baseline FEA model, formulas relating the deformation parameters, the material's mechanical properties, and the µ factors are generated through statistical analysis of the simulation results for the ring compression test. Based on these formulas, a method is found to substitute the m factor with µ factors for a particular material by selecting and applying the µ factors in time sequence. By converting the m factor into µ factors, cold forging can be simulated.
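The deformation parameters used to read a friction calibration map are simple ratios of the measured ring geometry. A minimal sketch (the 6:3:2 OD:ID:height proportion is the conventional ring-test geometry, and the measurements below are hypothetical; neither is stated in the abstract):

```python
def ring_test_ratios(h0, h, id0, id_final):
    """Percent height reduction and percent inner-diameter decrease for a
    ring compression test. A shrinking inner diameter indicates high friction;
    a growing one indicates low friction."""
    height_reduction = (h0 - h) / h0 * 100.0
    id_decrease = (id0 - id_final) / id0 * 100.0
    return height_reduction, id_decrease

# Hypothetical measurement on a 6:3:2 ring (OD 30, ID 15, height 10 mm).
hr, idd = ring_test_ratios(10.0, 7.0, 15.0, 14.0)
```

Plotting the inner-diameter change against the height reduction on the FCM, and reading off which constant-m curve the point falls on, is how the shear friction factor is calibrated in practice.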
Contributors: Kexiang (Author) / Shah, Jami (Thesis advisor) / Davidson, Joseph (Committee member) / Trimble, Steve (Committee member) / Arizona State University (Publisher)
Created: 2013