This collection includes most ASU Theses and Dissertations from 2011 to the present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about each dissertation/thesis includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic theses found in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.

Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.


Description

In this thesis, we present the study of several physical properties of relativistic matter under extreme conditions. We start by deriving the rate of the nonleptonic weak processes and the bulk viscosity in several spin-one color superconducting phases of quark matter. We also calculate the bulk viscosity in the nonlinear and anharmonic regime in the normal phase of strange quark matter. We point out several qualitative effects due to the anharmonicity, although quantitatively they appear to be relatively small. In the corresponding study, we take into account the interplay between the nonleptonic and semileptonic weak processes. The results can be important in order to relate accessible observables of compact stars to their internal composition. We also use quantum field theoretical methods to study the transport properties in monolayer graphene in a strong magnetic field. The corresponding quasi-relativistic system reveals an anomalous quantum Hall effect, whose features are directly connected with the spontaneous flavor symmetry breaking. We study the microscopic origin of Faraday rotation and magneto-optical transmission in graphene and show that their main features are in agreement with the experimental data.
Contributors: Wang, Xinyang, Ph.D (Author) / Shovkovy, Igor (Thesis advisor) / Belitsky, Andrei (Committee member) / Easson, Damien (Committee member) / Peng, Xihong (Committee member) / Vachaspati, Tanmay (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

In this thesis I model the thermal and structural evolution of Kuiper Belt Objects (KBOs) and explore their ability to retain undifferentiated crusts of rock and ice over geologic timescales. Previous calculations by Desch et al. (2009) predicted that initially homogeneous KBOs comparable in size to Charon (R ~ 600 km) have surfaces too cold to permit the separation of rock and ice, and should always retain thick (~ 85 km) crusts, despite the partial differentiation of rock and ice inside the body. The retention of a thermally insulating, undifferentiated crust is favorable to the maintenance of subsurface liquid and potentially cryovolcanism on the KBO surface. A potential objection to these models is that the dense crust of rock and ice overlying an ice mantle represents a gravitationally unstable configuration that should overturn by Rayleigh-Taylor (RT) instabilities. I have calculated the growth rate of RT instabilities at the ice-crust interface, including the effect of rock on the viscosity. I have identified a critical ice viscosity for the instability to grow significantly over the age of the solar system. I have calculated the viscosity as a function of temperature for conditions relevant to marginal instability. I find that RT instabilities on a Charon-sized KBO require temperatures T > 143 K. Including this effect in thermal evolution models of KBOs, I find that the undifferentiated crust on KBOs is thinner than previously calculated, only ~ 50 km. While thinner, this crustal thickness is still significant, representing ~ 25% of the KBO mass, and helps to maintain subsurface liquid throughout most of the KBO's history.
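
The critical-temperature argument above can be illustrated with a rough numerical sketch. This is not the thesis's calculation: the Arrhenius-type viscosity law and every parameter below (melting-point viscosity, activation constant, density contrast, gravity, wavelength) are assumed round numbers for illustration only.

```python
import math

ETA_MELT = 1e14   # Pa s, assumed viscosity of ice near its melting point
A = 25.0          # assumed dimensionless activation parameter
T_MELT = 273.0    # K

def ice_viscosity(temp_k):
    """Arrhenius-type law: eta(T) = eta_melt * exp(A * (T_melt/T - 1))."""
    return ETA_MELT * math.exp(A * (T_MELT / temp_k - 1.0))

def rt_growth_time(temp_k, delta_rho=500.0, g=0.3, wavelength=50e3):
    """Order-of-magnitude RT growth time, tau ~ eta / (delta_rho * g * lambda)."""
    return ice_viscosity(temp_k) / (delta_rho * g * wavelength)

AGE_SOLAR_SYSTEM_S = 4.5e9 * 3.15e7  # ~4.5 Gyr in seconds

for temp in (120.0, 143.0, 160.0):
    tau = rt_growth_time(temp)
    grows = tau < AGE_SOLAR_SYSTEM_S
    print(f"T = {temp:.0f} K: tau ~ {tau:.1e} s, grows within solar system age: {grows}")
```

Because viscosity falls exponentially with temperature, the growth time drops by many orders of magnitude between 120 K and 160 K, which is why a fairly sharp critical temperature emerges in the full calculation.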
Contributors: Rubin, Mark (Author) / Desch, Steven J (Thesis advisor) / Sharp, Thomas (Committee member) / Christensen, Philip R. (Philip Russel) (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Many longitudinal studies, especially in clinical trials, suffer from missing data issues. Most estimation procedures assume that the missing values are ignorable or missing at random (MAR). However, this assumption leads to unrealistic simplification and is implausible for many cases. For example, suppose an investigator is examining the effect of treatment on depression. Subjects are scheduled with doctors on a regular basis and asked questions about recent emotional situations. Patients who are experiencing severe depression are more likely to miss an appointment and leave the data missing for that particular visit. Data that are not missing at random may produce bias in results if the missing mechanism is not taken into account; in other words, the missing mechanism is related to the unobserved responses. Data are said to have non-ignorable missingness if the probabilities of missingness depend on quantities that might not be included in the model. Classical pattern-mixture models for non-ignorable missing values are widely used for longitudinal data analysis because they do not require explicit specification of the missing mechanism: the data are stratified according to a variety of missing patterns and a model is specified for each stratum. However, this usually results in under-identifiability, because many stratum-specific parameters must be estimated even though the eventual interest is usually in the marginal parameters. Pattern-mixture models also have the drawback that a large sample is usually required. In this thesis, two studies are presented. The first study is motivated by an open problem from pattern-mixture models. Simulation studies from this part show that information in the missing data indicators can be well summarized by a simple continuous latent structure, indicating that a large number of missing data patterns may be accounted for by a simple latent factor.
The simulation findings from the first study lead to a novel model, the continuous latent factor model (CLFM). The second study develops the CLFM, which models the joint distribution of missing values and longitudinal outcomes and remains feasible even for small-sample applications. Detailed estimation theory, including techniques from both frequentist and Bayesian perspectives, is presented. Model performance is evaluated through designed simulations and three applications. The simulation and application settings range from a correctly specified missing data mechanism to a misspecified one, and include different sample sizes from longitudinal studies. Among the three applications, an AIDS study includes non-ignorable missing values; the Peabody Picture Vocabulary Test data give no indication of the missing data mechanism and are used in a sensitivity analysis; and the Growth of Language and Early Literacy Skills in Preschoolers with Developmental Speech and Language Impairment study has fully complete data and is used for a robustness analysis. The CLFM is shown to provide more precise estimators, particularly for intercept- and slope-related parameters, than Roy's latent class model and the classic linear mixed model. This advantage is more pronounced for small sample sizes, where Roy's model has difficulty with estimation convergence. The CLFM is also robust when missing data are ignorable, as demonstrated through the Growth of Language and Early Literacy Skills in Preschoolers study.
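
The bias from non-ignorable missingness described in this abstract can be demonstrated with a small simulation. This is a generic illustration, not the thesis's model: the depression scores and missingness probabilities are invented, and missingness depends on the unobserved score itself (MNAR).

```python
import random

random.seed(42)

# Severity scores for 10,000 hypothetical patient visits.
true_scores = [random.gauss(50, 10) for _ in range(10_000)]

def is_observed(score):
    # Severely depressed patients (score > 60) miss 80% of visits;
    # everyone else misses only 10%. Because missingness depends on the
    # unobserved value, the data are MNAR.
    p_miss = 0.8 if score > 60 else 0.1
    return random.random() > p_miss

complete_cases = [s for s in true_scores if is_observed(s)]
true_mean = sum(true_scores) / len(true_scores)
cc_mean = sum(complete_cases) / len(complete_cases)
print(f"true mean: {true_mean:.1f}, complete-case mean: {cc_mean:.1f}")
```

The complete-case mean comes out biased low because the most severe visits are preferentially missing; any analysis that ignores the missing mechanism inherits this bias.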
Contributors: Zhang, Jun (Author) / Reiser, Mark R. (Thesis advisor) / Barber, Jarrett (Thesis advisor) / Kao, Ming-Hung (Committee member) / Wilson, Jeffrey (Committee member) / St Louis, Robert D. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Product reliability is now a top concern for manufacturers, and customers prefer products that perform well over long periods. Because most products can last for years or even decades, accelerated life testing (ALT) is used to estimate product lifetime. Much research has been done in the ALT area, and optimal design for ALT is a major topic. This dissertation consists of three main studies. First, a methodology for finding optimal ALT designs with right censoring and interval censoring is developed; it employs the proportional hazards (PH) model and the generalized linear model (GLM) to simplify the computational process. A sensitivity study is also given to show how the parameters affect the designs. Second, an extended version of the I-optimal design for ALT is discussed, and a dual-objective design criterion is defined and illustrated with several examples; several graphical tools are also developed to evaluate candidate designs. Finally, model-checking designs for situations where more than one model is available are discussed.
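
As a much simpler stand-in for the censored-data machinery this abstract describes (this is not the dissertation's PH/GLM formulation), the sketch below shows the classic maximum-likelihood estimate for an exponential lifetime model under right censoring: number of failures divided by total time on test. The failure rate and censoring time are invented values.

```python
import random

random.seed(1)

TRUE_RATE = 0.01      # assumed failures per hour
CENSOR_TIME = 200.0   # test is stopped at 200 hours (right censoring)

lifetimes = [random.expovariate(TRUE_RATE) for _ in range(5_000)]

# Units still alive at 200 h contribute exposure time but no failure event.
failures = sum(1 for t in lifetimes if t <= CENSOR_TIME)
exposure = sum(min(t, CENSOR_TIME) for t in lifetimes)

rate_hat = failures / exposure   # MLE: events / total time on test
print(f"estimated rate: {rate_hat:.4f} per hour (true: {TRUE_RATE})")
```

Censored units still carry information through their accumulated exposure, which is why the estimator divides by total time on test rather than by the number of observed failures alone.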
Contributors: Yang, Tao (Author) / Pan, Rong (Thesis advisor) / Montgomery, Douglas C. (Committee member) / Borror, Connie (Committee member) / Rigdon, Steve (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

A significant portion of stars occur as binary systems, in which two stellar components orbit a common center of mass. As the number of known exoplanet systems continues to grow, some binary systems are now known to harbor planets around one or both stellar components. As a first look into the composition of these planetary systems, I investigate the chemical compositions of four binary star systems, each of which is known to contain at least one planet. Stars are known to vary significantly in their composition, and their overall metallicity (represented by iron abundance, [Fe/H]) has been shown to correlate with the likelihood of hosting a planetary system. Furthermore, the detailed chemical composition of a system can give insight into the possible properties of the system's known exoplanets. Using high-resolution spectra, I quantify the abundances of up to 28 elements in each stellar component of the binary systems 16 Cyg, 83 Leo, HD 109749, and HD 195019. A direct comparison is made between each star and its binary companion to give a differential composition for each system. For each star, a comparison of elemental abundance vs. condensation temperature is made, which may be a good diagnostic of refractory-rich terrestrial planets in a system. The elemental ratios C/O and Mg/Si, crucial in determining the atmospheric composition and mineralogy of planets, are calculated and discussed for each star. Finally, the compositions and diagnostics of each binary system are discussed in terms of the known planetary and stellar parameters for each system.
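
The C/O and Mg/Si ratios mentioned above are number ratios derived from logarithmic abundances, log ε(X) = log10(N_X/N_H) + 12. A minimal sketch using approximate solar photospheric values purely for illustration; these are not measurements from the four systems studied:

```python
# Approximate solar log-epsilon abundances (illustrative values only).
log_eps = {"C": 8.43, "O": 8.69, "Mg": 7.60, "Si": 7.51}

def number_ratio(x, y):
    """N_x / N_y from logarithmic abundances: 10**(log_eps_x - log_eps_y)."""
    return 10 ** (log_eps[x] - log_eps[y])

c_o = number_ratio("C", "O")     # < 1 implies oxygen-dominated chemistry
mg_si = number_ratio("Mg", "Si")
print(f"C/O   ~ {c_o:.2f}")
print(f"Mg/Si ~ {mg_si:.2f}")
```

A star with C/O > 1 would instead favor carbide-rich planetary mineralogy, which is why these two ratios are singled out in the abstract as diagnostics of planet composition.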
Contributors: Carande, Bryce (Author) / Young, Patrick (Thesis advisor) / Patience, Jennifer L (Thesis advisor) / Anbar, Ariel D (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

This work presents two complementary studies that propose heuristic methods to capture characteristics of data using the ensemble learning method of random forest. The first study is motivated by the problem in education of determining teacher effectiveness in student achievement. Value-added models (VAMs), constructed as linear mixed models, use students’ test scores as outcome variables and teachers’ contributions as random effects to ascribe changes in student performance to the teachers who have taught them. The VAM teacher score is the empirical best linear unbiased predictor (EBLUP). This approach is limited by the adequacy of the assumed model specification with respect to the unknown underlying model. In that regard, this study proposes alternative ways to rank teacher effects that are not dependent on a given model by introducing two variable importance measures (VIMs), the node-proportion and the covariate-proportion. These VIMs are novel because they take into account the final configuration of the terminal nodes in the constitutive trees in a random forest. In a simulation study, under a variety of conditions, true rankings of teacher effects are compared with estimated rankings obtained using three sources: the newly proposed VIMs, existing VIMs, and EBLUPs from the assumed linear model specification. The newly proposed VIMs outperform all others in various scenarios where the model was misspecified. The second study develops two novel interaction measures. These measures could be used within but are not restricted to the VAM framework. The distribution-based measure is constructed to identify interactions in a general setting where a model specification is not assumed in advance. In turn, the mean-based measure is built to estimate interactions when the model specification is assumed to be linear.
Both measures are unique in their construction; they take into account not only the outcome values, but also the internal structure of the trees in a random forest. In a separate simulation study, under a variety of conditions, the proposed measures are found to identify and estimate second-order interactions.
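
The idea of an importance measure built from the terminal-node configuration can be sketched on a single hand-built tree. The thesis's node-proportion and covariate-proportion VIMs operate over all the trees of a random forest; the tiny tree and the exact proportion used here are simplified illustrations of the concept, not the thesis's definitions.

```python
# A decision tree as nested tuples: (split_variable, left_child, right_child);
# plain strings are terminal nodes.
tree = ("x1",
        ("x2", "leafA", "leafB"),
        ("x3", "leafC", ("x2", "leafD", "leafE")))

def leaf_split_sets(node, path=()):
    """Yield, for each terminal node, the set of variables split on along its path."""
    if isinstance(node, str):
        yield set(path)
        return
    var, left, right = node
    yield from leaf_split_sets(left, path + (var,))
    yield from leaf_split_sets(right, path + (var,))

paths = list(leaf_split_sets(tree))
n_leaves = len(paths)

# Node-proportion-style importance: the fraction of terminal nodes whose
# root-to-leaf path involves the variable.
importance = {v: sum(v in p for p in paths) / n_leaves for v in ("x1", "x2", "x3")}
print(importance)
```

Unlike split-count or impurity-based importances, a measure of this form depends only on how the terminal nodes end up configured, which is the property the abstract highlights as novel.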
Contributors: Valdivia, Arturo (Author) / Eubank, Randall (Thesis advisor) / Young, Dennis (Committee member) / Reiser, Mark R. (Committee member) / Kao, Ming-Hung (Committee member) / Broatch, Jennifer (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Statistics is taught at every level of education, yet teachers often have to assume their students have no knowledge of statistics and start from scratch each time they set out to teach statistics. The motivation for this experimental study comes from interest in exploring educational applications of augmented reality (AR) delivered via mobile technology that could potentially provide rich, contextualized learning for understanding concepts related to statistics education. This study examined the effects of AR experiences for learning basic statistical concepts. Using a 3 x 2 research design, this study compared learning gains of 252 undergraduate and graduate students from a pre- and posttest given before and after interacting with one of three types of augmented reality experiences: a high AR experience (interacting with three-dimensional images coupled with movement through a physical space), a low AR experience (interacting with three-dimensional images without movement), or no AR experience (two-dimensional images without movement). Two levels of collaboration (pairs and no pairs) were also included. Additionally, student perceptions toward collaboration opportunities and engagement were compared across the six treatment conditions. Other demographic information collected included the students' previous statistics experience, as well as their comfort level in using mobile devices. The moderating variables included prior knowledge (high, average, and low) as measured by the student's pretest score. Taking into account prior knowledge, students with low prior knowledge assigned to either high or low AR experience had statistically significant higher learning gains than those assigned to a no AR experience. On the other hand, the results showed no statistical significance between students assigned to work individually versus in pairs.
Students assigned to both high and low AR experience perceived a statistically significant higher level of engagement than their no AR counterparts. Students with low prior knowledge benefited the most from the high AR condition in learning gains. Overall, the AR application did well for providing a hands-on experience working with statistical data. Further research on AR and its relationship to spatial cognition, situated learning, high order skill development, performance support, and other classroom applications for learning is still needed.
Contributors: Conley, Quincy (Author) / Atkinson, Robert K (Thesis advisor) / Nguyen, Frank (Committee member) / Nelson, Brian C (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Solar system orbital dynamics can offer unique challenges. Impacts of interplanetary dust particles can significantly alter the surfaces of icy satellites and minor planets. Impact heating from these particles can anneal away radiation damage to the crystalline structure of surface water ice. This effect is enhanced by gravitational focusing for giant planet satellites. In addition, impacts of interplanetary dust particles on the small satellites of the Pluto system can eject into the system significant amounts of secondary intra-satellite dust. This dust is primarily swept up by Pluto and Charon, and could explain the observed albedo features on Pluto's surface. In addition to Pluto, a large fraction of trans-neptunian objects (TNOs) are binary or multiple systems. The mutual orbits of these TNO binaries can range from very wide (periods of several years) to near-contact systems (less than a day period). No single formation mechanism can explain this distribution. However, if the systems generally formed wide, a combination of solar and body tides (commonly called Kozai Cycles-Tidal Friction, KCTF) can cause most systems to tighten sufficiently to explain the observed distributions. This KCTF process can also be used to describe the orbital evolution of a terrestrial-class exoplanet after being captured as a satellite of a habitable-zone giant exoplanet. The resulting exomoon would be both potentially habitable and potentially detectable in the full Kepler data set.
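
The gravitational-focusing enhancement mentioned above has a simple analytic form: the flux of dust onto a body is boosted by roughly 1 + (v_esc/v_inf)^2, where v_esc is the escape speed at the encounter distance. The sketch below compares Pluto with a satellite deep in Jupiter's gravity well; the masses, distances, and encounter speeds are rounded illustrative values, not the thesis's inputs.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def focusing_factor(mass_kg, distance_m, v_inf_m_s):
    """Flux enhancement ~ 1 + (v_esc / v_inf)^2, with v_esc at the given distance."""
    v_esc = math.sqrt(2.0 * G * mass_kg / distance_m)
    return 1.0 + (v_esc / v_inf_m_s) ** 2

# Dust hitting Pluto directly (v_inf ~ 2 km/s in the outer solar system).
pluto = focusing_factor(1.3e22, 1.19e6, 2.0e3)

# Dust reaching a satellite at roughly Europa's distance from Jupiter
# (v_inf ~ 6 km/s near Jupiter): the planet's deep well dominates.
jovian_sat = focusing_factor(1.9e27, 6.7e8, 6.0e3)

print(f"Pluto: ~{pluto:.1f}x, Jovian satellite: ~{jovian_sat:.1f}x")
```

Even with rounded inputs, the order-unity enhancement at Pluto versus the order-of-ten enhancement deep in Jupiter's well shows why the annealing effect is strongest for giant planet satellites.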
Contributors: Porter, Simon Bernard (Author) / Desch, Steven (Thesis advisor) / Zolotov, Mikhail (Committee member) / Timmes, Francis (Committee member) / Scannapieco, Evan (Committee member) / Robinson, Mark (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Dimensionality assessment is an important component of evaluating item response data. Existing approaches to evaluating common assumptions of unidimensionality, such as DIMTEST (Nandakumar & Stout, 1993; Stout, 1987; Stout, Froelich, & Gao, 2001), have been shown to work well under large-scale assessment conditions (e.g., large sample sizes and item pools; see e.g., Froelich & Habing, 2007). It remains to be seen how such procedures perform in the context of small-scale assessments characterized by relatively small sample sizes and/or short tests. The fact that some procedures come with minimum allowable values for characteristics of the data, such as the number of items, may even render them unusable for some small-scale assessments. Other measures designed to assess dimensionality do not come with such limitations and, as such, may perform better under conditions that do not lend themselves to evaluation via statistics that rely on asymptotic theory. The current work aimed to evaluate the performance of one such metric, the standardized generalized dimensionality discrepancy measure (SGDDM; Levy & Svetina, 2011; Levy, Xu, Yel, & Svetina, 2012), under both large- and small-scale testing conditions. A Monte Carlo study was conducted to compare the performance of DIMTEST and the SGDDM statistic in terms of evaluating assumptions of unidimensionality in item response data under a variety of conditions, with an emphasis on the examination of these procedures in small-scale assessments. Similar to previous research, increases in either test length or sample size resulted in increased power. The DIMTEST procedure appeared to be a conservative test of the null hypothesis of unidimensionality. The SGDDM statistic exhibited rejection rates near the nominal rate of .05 under unidimensional conditions, though the reliability of these results may have been less than optimal due to high sampling variability resulting from a relatively limited number of replications. 
Power values were at or near 1.0 for many of the multidimensional conditions. It was only when the sample size was reduced to N = 100 that the two approaches diverged in performance. Results suggested that both procedures may be appropriate for sample sizes as low as N = 250 and tests as short as J = 12 (SGDDM) or J = 19 (DIMTEST). When used as a diagnostic tool, SGDDM may be appropriate with as few as N = 100 cases combined with J = 12 items. The study was somewhat limited in that it did not include any complex factorial designs, nor were the strength of item discrimination parameters or correlation between factors manipulated. It is recommended that further research be conducted with the inclusion of these factors, as well as an increase in the number of replications when using the SGDDM procedure.
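
The logic of the Monte Carlo comparison above (Type I error near the nominal .05 under the null, power under an alternative) can be sketched generically. A simple z-test on normal data stands in for DIMTEST/SGDDM here; the sample size, effect size, and replication count are arbitrary illustrative choices.

```python
import random
import statistics

random.seed(7)

def one_replication(n=250, shift=0.0):
    """Simulate one data set and test H0: mean = 0 at alpha = .05 (sigma known = 1)."""
    data = [random.gauss(shift, 1.0) for _ in range(n)]
    z = statistics.mean(data) * n ** 0.5
    return abs(z) > 1.96  # two-sided rejection

REPS = 2_000
type_i_rate = sum(one_replication() for _ in range(REPS)) / REPS
power = sum(one_replication(shift=0.2) for _ in range(REPS)) / REPS
print(f"Type I rate ~ {type_i_rate:.3f} (nominal .05); power at shift 0.2 ~ {power:.3f}")
```

With only a limited number of replications, the rejection-rate estimates are themselves noisy, which is exactly the sampling-variability caveat raised in the abstract.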
Contributors: Reichenberg, Ray E (Author) / Levy, Roy (Thesis advisor) / Thompson, Marilyn S. (Thesis advisor) / Green, Samuel B. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Galaxies represent a fundamental catalyst in the "lifecycle" of matter in the Universe, and the study of galaxy assembly and evolution provides unique insight into the physical processes governing the transformation of matter from atoms to gas to stars. With the Hubble Space Telescope, the astrophysical community is able to study the formation and evolution of galaxies, at an unrivaled spatial resolution, over more than 90% of cosmic time. Here, I present results from two complementary studies of galaxy evolution in the local and intermediate-redshift Universe which used new and archival HST images. First, I use archival broad-band HST WFPC2 optical images of local (d < 63 Mpc) Seyfert-type galaxies to test the observed correlation between visually-classified host galaxy dust morphology and AGN class. Using quantitative parameters for classifying galaxy morphology, I do not measure a strong correlation between the galaxy morphology and AGN class. This result could imply that the Unified Model of AGN provides a sufficient model for the observed diversity of AGN, but it could also indicate that the quantitative techniques are insufficient for characterizing the dust morphology of local galaxies. To address the latter, I develop a new automated method using an inverse unsharp masking technique coupled to Source Extractor to detect and measure dust morphology. I measure no strong trends between dust morphology and AGN class using this method, and conclude that the Unified Model remains sufficient to explain the diversity of AGN. Second, I use new UV-optical-near IR broad-band images obtained with the HST WFC3 in the Early Release Science (ERS) program to study the evolution of massive, early-type galaxies.
These galaxies were once considered to be "red and dead", as a class uniformly devoid of recent star formation, but observations of these galaxies in the local Universe at UV wavelengths have revealed that a significant fraction (30%) of ETGs have recently formed a small fraction (5-10%) of their stellar mass in young stars. I extend the study of recent star formation in ETGs to intermediate redshift (0.35 < z < 1.5) with the ERS data. Comparing the mass fraction and age of young stellar populations identified in these ETGs from two-component SED analysis with the morphology of the ETG and the frequency of companions, I find that at this redshift many ETGs are likely to have experienced a minor burst of recent star formation. The mechanisms driving this recent star formation are varied, and evidence is identified both for minor-merger-driven recent star formation and for the evolution of transitioning ETGs.
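
The unsharp masking idea behind the dust-detection method described above can be shown in one dimension: subtract a smoothed copy of the image from the original so that small-scale features stand out. The 3-pixel boxcar and the toy pixel values below are illustrative only; the actual method operates on 2-D images and feeds the residuals to Source Extractor.

```python
image = [10, 10, 10, 4, 10, 10, 10]  # a dark "dust lane" at index 3

def boxcar_smooth(img, half_width=1):
    """Smooth with a simple moving average (edges use a truncated window)."""
    out = []
    for i in range(len(img)):
        window = img[max(0, i - half_width): i + half_width + 1]
        out.append(sum(window) / len(window))
    return out

# The residual isolates small-scale structure: the dust lane shows up as
# the strongest negative value, while smooth regions stay near zero.
residual = [orig - sm for orig, sm in zip(image, boxcar_smooth(image))]
print(residual)
```

Thresholding such residuals is what lets an automated pipeline pick out dust lanes without the subjectivity of visual classification.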
Contributors: Rutkowski, Michael (Author) / Windhorst, Rogier A. (Thesis advisor) / Bowman, Judd (Committee member) / Butler, Nathaniel (Committee member) / Desch, Steven (Committee member) / Young, Patrick (Committee member) / Arizona State University (Publisher)
Created: 2013