This collection includes both ASU theses and dissertations submitted by graduate students and Barrett, The Honors College theses submitted by undergraduate students.

Description

This dissertation explores the use of bench-scale batch microcosms in the remedial design of contaminated aquifers, presents an alternative methodology for conducting such treatability studies, and examines, from technical, economic, and social perspectives, the real-world application of this new technology. In situ bioremediation (ISB) is an effective remedial approach for many contaminated groundwater sites. However, site-specific variability necessitates the performance of small-scale treatability studies prior to full-scale implementation. The most common methodology is the batch microcosm, whose potential limitations and suitable technical alternatives are explored in this thesis. In a critical literature review, I discuss how continuous-flow conditions stimulate microbial attachment and biofilm formation, and identify unique microbiological phenomena that are largely absent in batch bottles yet potentially relevant to contaminant fate. Following up on this theoretical evaluation, I experimentally produce pyrosequencing data and perform beta diversity analysis to demonstrate that batch and continuous-flow (column) microcosms foster distinctly different microbial communities. Next, I introduce the In Situ Microcosm Array (ISMA), which took approximately two years to design, develop, build and iteratively improve. The ISMA can be deployed down-hole in groundwater monitoring wells of contaminated aquifers for the purpose of autonomously conducting multiple parallel continuous-flow treatability experiments. The ISMA stores all samples generated in the course of each experiment, thereby preventing the release of chemicals into the environment. Detailed results are presented from an ISMA demonstration evaluating ISB for the treatment of hexavalent chromium and trichloroethene. In a technical and economic comparison to batch microcosms, I demonstrate that the ISMA is both effective in informing remedial design decisions and cost-competitive. Finally, I report on a participatory technology assessment (pTA) workshop attended by diverse stakeholders of the Phoenix 52nd Street Superfund Site, evaluating the ISMA's ability to address a real-world problem. In addition to receiving valuable feedback on perceived ISMA limitations, I conclude from the workshop that pTA can facilitate mutual learning even among entrenched stakeholders. In summary, my doctoral research (i) pinpointed limitations of current remedial design approaches, (ii) produced a novel alternative approach, and (iii) demonstrated the technical, economic, and social value of this novel remedial design tool, i.e., the In Situ Microcosm Array technology.
Contributors: Kalinowski, Tomasz (Author) / Halden, Rolf U. (Thesis advisor) / Johnson, Paul C (Committee member) / Krajmalnik-Brown, Rosa (Committee member) / Bennett, Ira (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Many longitudinal studies, especially clinical trials, suffer from missing data. Most estimation procedures assume that the missing values are ignorable, or missing at random (MAR). However, this assumption is an unrealistic simplification and is implausible in many cases. For example, suppose an investigator is examining the effect of a treatment on depression. Subjects are scheduled to see doctors on a regular basis and are asked questions about recent emotional situations. Patients experiencing severe depression are more likely to miss an appointment, leaving the data missing for that particular visit. In such cases the missing mechanism is related to the unobserved responses, and data that are not missing at random may produce biased results if the mechanism is not taken into account. Data are said to be non-ignorably missing if the probabilities of missingness depend on quantities that might not be included in the model. Classical pattern-mixture models for non-ignorable missing values are widely used for longitudinal data analysis because they do not require explicit specification of the missing mechanism: the data are stratified according to the observed missing patterns and a model is specified for each stratum. However, this usually results in under-identifiability, because many stratum-specific parameters must be estimated even though the eventual interest is usually in the marginal parameters; pattern-mixture models therefore typically require a large sample. In this thesis, two studies are presented. The first study is motivated by an open problem in pattern-mixture models. Simulations in this part show that the information in the missing data indicators can be well summarized by a simple continuous latent structure, indicating that a large number of missing data patterns may be accounted for by a simple latent factor. These findings lead to a novel model, the continuous latent factor model (CLFM). The second study develops the CLFM, which models the joint distribution of the missing-value indicators and the longitudinal outcomes. The proposed CLFM is feasible even for small-sample applications. Detailed estimation theory, including estimation techniques from both frequentist and Bayesian perspectives, is presented. Model performance is evaluated through designed simulations and three applications. The simulation and application settings range from a correctly specified missing data mechanism to a misspecified one, and include different sample sizes from longitudinal studies. Among the three applications, an AIDS study includes non-ignorable missing values; the Peabody Picture Vocabulary Test data give no indication of the missing data mechanism and are used for a sensitivity analysis; and the Growth of Language and Early Literacy Skills in Preschoolers with Developmental Speech and Language Impairment study has complete data and is used for a robustness analysis. The CLFM is shown to provide more precise estimators, specifically for intercept- and slope-related parameters, than Roy's latent class model and the classic linear mixed model. This advantage is more pronounced for small sample sizes, where Roy's model has difficulty reaching estimation convergence. The proposed CLFM is also shown to be robust when missing data are ignorable, as demonstrated through the Growth of Language and Early Literacy Skills in Preschoolers study.
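To make the shared-latent-factor idea concrete, a minimal sketch of one common formulation is shown below; the notation is illustrative only, and the exact CLFM specification in the thesis may differ. A single continuous factor $u_i$ links the longitudinal outcome model to the missingness model:

\[
\begin{aligned}
y_{ij} &= \mathbf{x}_{ij}^{\top}\boldsymbol{\beta} + \lambda_y u_i + \varepsilon_{ij}, \qquad \varepsilon_{ij} \sim N(0,\sigma^2),\\
\operatorname{logit} P(r_{ij}=1 \mid u_i) &= \mathbf{w}_{ij}^{\top}\boldsymbol{\gamma} + \lambda_r u_i, \qquad u_i \sim N(0,1),
\end{aligned}
\]

where $y_{ij}$ is the outcome for subject $i$ at visit $j$ and $r_{ij}$ indicates whether that visit is missing. The loadings $\lambda_y$ and $\lambda_r$ tie the two sub-models together, and missingness is non-ignorable whenever $\lambda_r \neq 0$; because one continuous factor summarizes all observed missing-data patterns, far fewer parameters are needed than in a classical pattern-mixture model.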
Contributors: Zhang, Jun (Author) / Reiser, Mark R. (Thesis advisor) / Barber, Jarrett (Thesis advisor) / Kao, Ming-Hung (Committee member) / Wilson, Jeffrey (Committee member) / St Louis, Robert D. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Product reliability is now a top concern of manufacturers, and customers prefer products that perform well over long periods. Because most products can last for years or even decades, accelerated life testing (ALT) is used to estimate product lifetime. Much research has been done in the ALT area, and optimal design for ALT is a major topic. This dissertation consists of three main studies. First, a methodology for finding optimal ALT designs with right censoring and interval censoring is developed; it employs the proportional hazards (PH) model and the generalized linear model (GLM) to simplify the computational process. A sensitivity study shows how the model parameters affect the resulting designs. Second, an extended version of the I-optimal design for ALT is discussed, and a dual-objective design criterion is defined and illustrated with several examples; several graphical tools are also developed for evaluating candidate designs. Finally, model-checking designs for situations where more than one candidate model is available are discussed.
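As background on how the PH/GLM connection simplifies the computation, the sketch below gives the standard grouped-data formulation for interval-censored ALT data; it is illustrative and not necessarily the exact parameterization used in the dissertation. Under a proportional hazards model with stress covariates $\mathbf{x}$, $\lambda(t;\mathbf{x}) = \lambda_0(t)\exp(\mathbf{x}^{\top}\boldsymbol{\beta})$, the probability that a unit surviving to inspection time $t_{i-1}$ fails in the interval $(t_{i-1}, t_i]$ is

\[
p_i(\mathbf{x}) = 1 - \exp\!\left(-\left[\Lambda_0(t_i) - \Lambda_0(t_{i-1})\right]\exp(\mathbf{x}^{\top}\boldsymbol{\beta})\right),
\]

which can be rewritten as a binomial GLM with a complementary log-log link,

\[
\log\!\left(-\log\left(1 - p_i(\mathbf{x})\right)\right) = \gamma_i + \mathbf{x}^{\top}\boldsymbol{\beta}, \qquad \gamma_i = \log\!\left(\Lambda_0(t_i) - \Lambda_0(t_{i-1})\right),
\]

so standard GLM machinery, including its information matrix, can be used when searching for optimal test plans.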
Contributors: Yang, Tao (Author) / Pan, Rong (Thesis advisor) / Montgomery, Douglas C. (Committee member) / Borror, Connie (Committee member) / Rigdon, Steve (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Current policies subsidizing or accelerating deployment of photovoltaics (PV) are typically motivated by claims of environmental benefit, such as the reduction of CO2 emissions generated by the fossil-fuel-fired power plants that PV is intended to displace. Existing practice is to assess these environmental benefits on a net life-cycle basis, where the CO2 benefits accrued during use of the PV panels are found to exceed the emissions generated during the manufacturing phase, including materials extraction and manufacture of the panels prior to installation. However, this approach fails to recognize that the environmental costs of CO2 release during manufacture are incurred early, while the environmental benefits accrue later. Where policy calls for meeting CO2 reduction targets by a certain date, rapid PV deployment may therefore have counter-intuitive, albeit temporary, undesired consequences, and on a cumulative radiative forcing (CRF) basis the environmental improvements attributable to PV might be realized much later than is currently understood. This phenomenon is particularly acute when PV manufacture occurs in areas using CO2-intensive energy sources (e.g., coal) but deployment occurs in areas with less CO2-intensive electricity sources (e.g., hydro). This thesis builds a dynamic CRF model to examine the inter-temporal warming impacts of PV deployments in three locations: California, Wyoming and Arizona. The model includes the following factors that affect CRF: PV deployment rate, choice of PV technology, pace of PV technology improvement, and the CO2 intensity of the electricity mix at the manufacturing and deployment locations. Wyoming and California show the highest and lowest CRF benefits, as they have the most and least CO2-intensive grids, respectively. CRF payback times are longer than CO2 payback times in all cases. Thin-film CdTe PV technologies have the lowest manufacturing CO2 emissions and therefore the shortest CRF payback times. This model can inform policies intended to fulfill time-sensitive CO2 mitigation goals while minimizing short-term radiative forcing.
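A toy calculation illustrates why CRF payback times exceed CO2 payback times. The numbers below are hypothetical, and the sketch deliberately treats emitted CO2 as remaining airborne with a constant forcing per unit mass; a full dynamic CRF model such as the one built in the thesis would use an atmospheric impulse-response function instead.

# Toy comparison of CO2 payback vs. cumulative radiative forcing (CRF) payback.
# Hypothetical numbers; forcing is taken as proportional to the CO2 mass
# resident in the atmosphere, with no decay (a deliberate simplification).
import numpy as np

E_manufacture = 1000.0   # kg CO2 emitted up front, before installation (year 0)
e_avoided = 50.0         # kg CO2 displaced per year of PV operation

co2_payback = E_manufacture / e_avoided   # years until avoided CO2 equals the up-front pulse

# CRF compares time-integrated forcing: the up-front pulse contributes forcing
# proportional to E_manufacture * t by year t, while avoided emissions contribute
# roughly e_avoided * t^2 / 2, since each year's avoided kilogram only counts
# from that year onward.
years = np.arange(0, 101)
crf_manufacture = E_manufacture * years
crf_avoided = e_avoided * years**2 / 2.0
crossed = (crf_avoided >= crf_manufacture) & (years > 0)
crf_payback = years[np.argmax(crossed)]

print(f"CO2 payback: {co2_payback:.0f} years; CRF payback: {crf_payback} years")
# Under these assumptions the CRF payback is roughly twice the CO2 payback (40 vs. 20 years).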
Contributors: Triplican Ravikumar, Dwarakanath (Author) / Seager, Thomas P (Thesis advisor) / Fraser, Matthew P (Thesis advisor) / Chester, Mikhail V (Committee member) / Sinha, Parikhit (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

This work presents two complementary studies that propose heuristic methods to capture characteristics of data using the ensemble learning method of random forests. The first study is motivated by the problem in education of determining teacher effectiveness in student achievement. Value-added models (VAMs), constructed as linear mixed models, use students' test scores as outcome variables and teachers' contributions as random effects to ascribe changes in student performance to the teachers who have taught them. The VAM teacher score is the empirical best linear unbiased predictor (EBLUP). This approach is limited by the adequacy of the assumed model specification with respect to the unknown underlying model. In that regard, this study proposes alternative ways to rank teacher effects that are not dependent on a given model, introducing two variable importance measures (VIMs): the node-proportion and the covariate-proportion. These VIMs are novel because they take into account the final configuration of the terminal nodes in the constituent trees of a random forest. In a simulation study, under a variety of conditions, true rankings of teacher effects are compared with estimated rankings obtained from three sources: the newly proposed VIMs, existing VIMs, and EBLUPs from the assumed linear model specification. The newly proposed VIMs outperform all others in various scenarios where the model is misspecified. The second study develops two novel interaction measures. These measures could be used within, but are not restricted to, the VAM framework. The distribution-based measure is constructed to identify interactions in a general setting where a model specification is not assumed in advance. In turn, the mean-based measure is built to estimate interactions when the model specification is assumed to be linear. Both measures are unique in their construction: they take into account not only the outcome values but also the internal structure of the trees in a random forest. In a separate simulation study, under a variety of conditions, the proposed measures are found to identify and estimate second-order interactions.
Contributors: Valdivia, Arturo (Author) / Eubank, Randall (Thesis advisor) / Young, Dennis (Committee member) / Reiser, Mark R. (Committee member) / Kao, Ming-Hung (Committee member) / Broatch, Jennifer (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Statistics is taught at every level of education, yet teachers often have to assume their students have no knowledge of statistics and start from scratch each time they set out to teach it. The motivation for this experimental study comes from interest in exploring educational applications of augmented reality (AR) delivered via mobile technology, which could potentially provide rich, contextualized learning for understanding concepts related to statistics education. This study examined the effects of AR experiences on learning basic statistical concepts. Using a 3 x 2 research design, the study compared the learning gains of 252 undergraduate and graduate students on a pretest and posttest given before and after interacting with one of three types of augmented reality experience: a high-AR experience (interacting with three-dimensional images coupled with movement through a physical space), a low-AR experience (interacting with three-dimensional images without movement), or a no-AR experience (two-dimensional images without movement). Two levels of collaboration (pairs and no pairs) were also included. Additionally, student perceptions of collaboration opportunities and engagement were compared across the six treatment conditions. Other demographic information collected included the students' previous statistics experience and their comfort level in using mobile devices. The moderating variables included prior knowledge (high, average, and low), as measured by the student's pretest score. Taking prior knowledge into account, students with low prior knowledge assigned to either the high- or low-AR experience had statistically significantly higher learning gains than those assigned to the no-AR experience. On the other hand, the results showed no statistically significant difference between students assigned to work individually and those working in pairs. Students assigned to both the high- and low-AR experiences perceived a statistically significantly higher level of engagement than their no-AR counterparts. Students with low prior knowledge benefited the most from the high-AR condition in terms of learning gains. Overall, the AR application did well at providing a hands-on experience for working with statistical data. Further research on AR and its relationship to spatial cognition, situated learning, higher-order skill development, performance support, and other classroom applications for learning is still needed.
Contributors: Conley, Quincy (Author) / Atkinson, Robert K (Thesis advisor) / Nguyen, Frank (Committee member) / Nelson, Brian C (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Many manmade chemicals used in consumer products are ultimately washed down the drain and collected in municipal sewers. Efficient chemical monitoring at wastewater treatment (WWT) plants thus may provide up-to-date information on chemical usage rates for epidemiological assessments. The objective of the present study was to extend this concept, termed 'sewage epidemiology', to include municipal sewage sludge (MSS) in identifying and prioritizing contaminants of emerging concern (CECs). To test this, the following specific aims were defined: i) to screen and identify CECs in nationally representative samples of MSS and to provide nationwide inventories of CECs in U.S. MSS; ii) to investigate the fate and persistence of sludge-borne hydrophobic CECs in MSS-amended soils; and iii) to develop an analytical tool relying on contaminant levels in MSS as an indicator for identifying and prioritizing hydrophobic CECs. Chemicals that are primarily discharged to sewage systems (alkylphenol surfactants) and widespread persistent organohalogen pollutants (perfluorochemicals and brominated flame retardants) were analyzed in nationally representative MSS samples. A meta-analysis showed that CECs contribute about 0.04-0.15% of the total dry mass of MSS, a mass equivalent of 2,700-7,900 metric tonnes of chemicals annually. An analysis of archived mesocosms from a sludge weathering study showed that 64 CECs persisted in MSS/soil mixtures over the course of the experiment, with half-lives ranging between 224 and >990 days; these results suggest an inherent persistence of CECs that accumulate in MSS. A comparison of the spectrum of chemicals (n=52) analyzed in nationally representative biological specimens from humans and in MSS revealed 70% overlap. This observed co-occurrence of contaminants in both matrices suggests that MSS may serve as an indicator of ongoing human exposures and body burdens of pollutants. In conclusion, I posit that this novel approach in sewage epidemiology may serve to pre-screen and prioritize the several thousand known or suspected CECs to identify those most likely to pose a risk to human health and the environment.
Contributors: Venkatesan, Arjunkrishna (Author) / Halden, Rolf U. (Thesis advisor) / Westerhoff, Paul (Committee member) / Fox, Peter (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Nitrate is the most prevalent water pollutant limiting the use of groundwater as a potable water source. The overarching goal of this dissertation was to leverage advances in nanotechnology to improve nitrate photocatalysis and transition the treatment to full scale. The research objectives were to (1) examine commercial and synthesized photocatalysts, (2) determine the effect of water quality parameters (e.g., pH), (3) conduct responsible engineering by ensuring detection methods were in place for the novel materials, and (4) develop a conceptual framework for designing nitrate-specific photocatalysts. The key issues for implementing photocatalysis in nitrate drinking water treatment were efficient nitrate removal at neutral pH and by-product selectivity toward nitrogen gases rather than by-products that pose a human health concern (e.g., nitrite). Photocatalytic nitrate reduction was found to follow a series of proton-coupled electron transfers. The nitrate reduction rate was limited by the electron-hole recombination rate, and the addition of an electron donor (e.g., formate) was necessary to reduce recombination and achieve efficient nitrate removal. Nano-sized photocatalysts with high surface areas mitigated the negative effects of competing aqueous anions. The key water quality parameter affecting by-product selectivity was pH. For pH < 4, the by-product selectivity was mostly N-gas with some NH4+, but this shifted to NO2- above pH = 4, which suggests that proton localization is needed to move beyond NO2-. Co-catalysts that form a Schottky barrier, allowing localization of electrons, were best for nitrate reduction. Silver was optimal in heterogeneous systems because of its ability to improve nitrate reduction activity and N-gas by-product selectivity, and graphene was optimal in two-electrode systems because of its ability to shuttle electrons to the working electrode. Environmentally responsible use of nanomaterials requires that detection methods be in place for the nanomaterials tested. While methods exist for the metals and metal oxides examined here, there are currently none for carbon nanotubes (CNTs) and graphene. Acknowledging that risk assessment encompasses both dose-response and exposure, new analytical methods were developed for extracting and detecting CNTs and graphene in complex organic environmental (e.g., urban air) and biological (e.g., rat lungs) matrices.
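For context on the proton dependence noted above, the overall half-reactions commonly written for nitrate reduction are shown below (standard stoichiometry; the stepwise surface mechanism examined in the dissertation is more detailed):

\[
\begin{aligned}
\mathrm{NO_3^-} + 2\,\mathrm{H^+} + 2\,e^- &\rightarrow \mathrm{NO_2^-} + \mathrm{H_2O}\\
2\,\mathrm{NO_3^-} + 12\,\mathrm{H^+} + 10\,e^- &\rightarrow \mathrm{N_2} + 6\,\mathrm{H_2O}\\
\mathrm{NO_3^-} + 10\,\mathrm{H^+} + 8\,e^- &\rightarrow \mathrm{NH_4^+} + 3\,\mathrm{H_2O}
\end{aligned}
\]

Reducing nitrate past nitrite consumes more protons per electron transferred, which is consistent with the observed need for proton localization to maintain N-gas selectivity as pH rises.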
Contributors: Doudrick, Kyle (Author) / Westerhoff, Paul (Thesis advisor) / Halden, Rolf (Committee member) / Hristovski, Kiril (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

There is growing concern over the future availability of water for electricity generation. Because of a rapidly growing population coupled with an arid climate, the Western United States faces a particularly acute water/energy challenge, as new electricity capacity is expected to be required in the areas with the most limited water availability. Electricity trading is anticipated to be an important strategy for avoiding further local water stress, especially during drought and in the areas with the most rapidly growing populations. Transfers of electricity imply transfers of "virtual water", the water required for the production of a product. Yet, as a result of sizable demand growth, there may not be excess capacity in the system to support trade as an adaptive response to long-lasting drought. As the grid inevitably expands capacity in response to higher demand, or adapts to anticipated climate change, capacity additions should be selected and sited to increase the system's resilience to drought. This paper explores the tradeoff between virtual water and local water/energy infrastructure development for the purpose of enhancing the Western US power grid's resilience to drought. A simple linear model is developed that estimates the economically optimal configuration of the Western US power grid given water constraints. The model indicates that natural gas combined cycle power plants, combined with increased interstate trade in power and virtual water, provide the greatest opportunity for cost-effective and water-efficient grid expansion. Such expansion, as well as drought conditions, may shift and increase virtual water trade patterns, as states with ample water resources and a competitive advantage in developing power sources become net exporters, and states with limited water or higher costs become importers.
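A minimal sketch of such a linear model is shown below, using hypothetical costs, demands, and water-use coefficients for just two states; the model in the thesis covers the full Western grid in far more detail. It chooses generation and interstate trade to minimize cost subject to demand in each state and a cap on water use in the water-scarce state.

# Minimal sketch of a cost-minimizing generation/trade model under a water constraint.
# All numbers are hypothetical; this is illustrative, not the model from the thesis.
from scipy.optimize import linprog

# Decision variables: x = [g_A, g_B, t]
#   g_A, g_B : generation in water-scarce state A and water-rich state B (GWh)
#   t        : power traded from B to A (GWh)
cost = [40.0, 45.0, 5.0]        # unit cost of A generation, B generation, and transmission

d_A, d_B = 100.0, 80.0          # electricity demand in each state (GWh)
w_A, W_A_cap = 2.0, 120.0       # water intensity of A's generation (ML/GWh) and A's water cap (ML)

A_ub = [
    [-1.0,  0.0, -1.0],         # g_A + t >= d_A   (A's demand met locally or by imports)
    [ 0.0, -1.0,  1.0],         # g_B - t >= d_B   (B serves its own demand plus exports)
    [ w_A,  0.0,  0.0],         # water used by A's generation stays under the cap
]
b_ub = [-d_A, -d_B, W_A_cap]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
g_A, g_B, t = res.x
print(f"A generates {g_A:.0f} GWh, B generates {g_B:.0f} GWh, "
      f"B exports {t:.0f} GWh (virtual water) to A")
# With these numbers the water cap binds: A generates only 60 GWh and imports the rest,
# so the drought-constrained state becomes a net importer of power and virtual water.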
Contributors: Herron, Seth (Author) / Ruddell, Benjamin L (Thesis advisor) / Ariaratnam, Samuel (Thesis advisor) / Allenby, Braden (Committee member) / Williams, Eric (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Dimensionality assessment is an important component of evaluating item response data. Existing approaches to evaluating common assumptions of unidimensionality, such as DIMTEST (Nandakumar & Stout, 1993; Stout, 1987; Stout, Froelich, & Gao, 2001), have been shown to work well under large-scale assessment conditions (e.g., large sample sizes and item pools; see e.g., Froelich & Habing, 2007). It remains to be seen how such procedures perform in the context of small-scale assessments characterized by relatively small sample sizes and/or short tests. The fact that some procedures come with minimum allowable values for characteristics of the data, such as the number of items, may even render them unusable for some small-scale assessments. Other measures designed to assess dimensionality do not come with such limitations and, as such, may perform better under conditions that do not lend themselves to evaluation via statistics that rely on asymptotic theory. The current work aimed to evaluate the performance of one such metric, the standardized generalized dimensionality discrepancy measure (SGDDM; Levy & Svetina, 2011; Levy, Xu, Yel, & Svetina, 2012), under both large- and small-scale testing conditions. A Monte Carlo study was conducted to compare the performance of DIMTEST and the SGDDM statistic in terms of evaluating assumptions of unidimensionality in item response data under a variety of conditions, with an emphasis on the examination of these procedures in small-scale assessments. Similar to previous research, increases in either test length or sample size resulted in increased power. The DIMTEST procedure appeared to be a conservative test of the null hypothesis of unidimensionality. The SGDDM statistic exhibited rejection rates near the nominal rate of .05 under unidimensional conditions, though the reliability of these results may have been less than optimal due to high sampling variability resulting from a relatively limited number of replications. Power values were at or near 1.0 for many of the multidimensional conditions. It was only when the sample size was reduced to N = 100 that the two approaches diverged in performance. Results suggested that both procedures may be appropriate for sample sizes as low as N = 250 and tests as short as J = 12 (SGDDM) or J = 19 (DIMTEST). When used as a diagnostic tool, SGDDM may be appropriate with as few as N = 100 cases combined with J = 12 items. The study was somewhat limited in that it did not include any complex factorial designs, nor were the strength of item discrimination parameters or correlation between factors manipulated. It is recommended that further research be conducted with the inclusion of these factors, as well as an increase in the number of replications when using the SGDDM procedure.
Contributors: Reichenberg, Ray E (Author) / Levy, Roy (Thesis advisor) / Thompson, Marilyn S. (Thesis advisor) / Green, Samuel B. (Committee member) / Arizona State University (Publisher)
Created: 2013