Matching Items (713)
Description
This dissertation explores the use of bench-scale batch microcosms in the remedial design of contaminated aquifers, presents an alternative methodology for conducting such treatability studies, and, from technical, economic, and social perspectives, examines real-world application of this new technology. In situ bioremediation (ISB) is an effective remedial approach for many contaminated groundwater sites. However, site-specific variability necessitates small-scale treatability studies prior to full-scale implementation. The most common methodology is the batch microcosm, whose potential limitations and suitable technical alternatives are explored in this thesis. In a critical literature review, I discuss how continuous-flow conditions stimulate microbial attachment and biofilm formation, and identify unique microbiological phenomena largely absent in batch bottles, yet potentially relevant to contaminant fate. Following up on this theoretical evaluation, I experimentally produce pyrosequencing data and perform beta diversity analysis to demonstrate that batch and continuous-flow (column) microcosms foster distinctly different microbial communities. Next, I introduce the In Situ Microcosm Array (ISMA), which took approximately two years to design, develop, build and iteratively improve. The ISMA can be deployed down-hole in groundwater monitoring wells of contaminated aquifers to autonomously conduct multiple parallel continuous-flow treatability experiments. The ISMA stores all samples generated in the course of each experiment, thereby preventing the release of chemicals into the environment. Detailed results are presented from an ISMA demonstration evaluating ISB for the treatment of hexavalent chromium and trichloroethene. In a technical and economic comparison to batch microcosms, I demonstrate that the ISMA is both effective in informing remedial design decisions and cost-competitive.
Finally, I report on a participatory technology assessment (pTA) workshop, attended by diverse stakeholders of the Phoenix 52nd Street Superfund Site, that evaluated the ISMA's ability to address a real-world problem. In addition to receiving valuable feedback on perceived ISMA limitations, I conclude from the workshop that pTA can facilitate mutual learning even among entrenched stakeholders. In summary, my doctoral research (i) pinpointed limitations of current remedial design approaches, (ii) produced a novel alternative approach, and (iii) demonstrated the technical, economic, and social value of this novel remedial design tool, i.e., the In Situ Microcosm Array technology.
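The beta diversity analysis mentioned above compares microbial community composition between batch and column microcosms. A minimal sketch of one common dissimilarity measure (Bray-Curtis) is shown below; the abundance counts are hypothetical illustrations, not data from the dissertation.

```python
def bray_curtis(u, v):
    """Bray-Curtis dissimilarity between two abundance vectors.

    0.0 means identical communities; 1.0 means no shared taxa.
    """
    num = sum(abs(a - b) for a, b in zip(u, v))
    den = sum(a + b for a, b in zip(u, v))
    return num / den if den else 0.0

# Hypothetical OTU abundance counts for a batch and a column microcosm
batch = [120, 30, 5, 0]
column = [10, 25, 80, 40]
d = bray_curtis(batch, column)
```

In a real analysis, pairwise dissimilarities like this would feed into an ordination (e.g., PCoA) to visualize whether batch and column communities cluster separately.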
Contributors: Kalinowski, Tomasz (Author) / Halden, Rolf U. (Thesis advisor) / Johnson, Paul C. (Committee member) / Krajmalnik-Brown, Rosa (Committee member) / Bennett, Ira (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The rapid escalation of technology and the widespread emergence of modern technological equipment have resulted in the generation of enormous amounts of digital data (in the form of images, videos and text). This has expanded the possibility of solving real-world problems using computational learning frameworks. However, while gathering a large amount of data is cheap and easy, annotating it with class labels is an expensive process in terms of time, labor and human expertise. This has paved the way for research in the field of active learning. Such algorithms automatically select the salient and exemplar instances from large quantities of unlabeled data and are effective in reducing human labeling effort in inducing classification models. To exploit the possible presence of multiple labeling agents, there have been attempts toward a batch mode form of active learning, where a batch of data instances is selected simultaneously for manual annotation. This dissertation is aimed at the development of novel batch mode active learning algorithms to reduce manual effort in training classification models in real-world multimedia pattern recognition applications.
Four major contributions are proposed in this work: (i) a framework for dynamic batch mode active learning, where the batch size and the specific data instances to be queried are selected adaptively through a single formulation, based on the complexity of the data stream in question; (ii) a batch mode active learning strategy for fuzzy label classification problems, where there is an inherent imprecision and vagueness in the class label definitions; (iii) batch mode active learning algorithms based on convex relaxations of an NP-hard integer quadratic programming (IQP) problem, with guaranteed bounds on the solution quality; and (iv) an active matrix completion algorithm and its application to solve several variants of the active learning problem (transductive active learning, multi-label active learning, active feature acquisition and active learning for regression). These contributions are validated on the face recognition and facial expression recognition problems (which are commonly encountered in real-world applications like robotics, security and assistive technology for the blind and the visually impaired) and also on collaborative filtering applications like movie recommendation.
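The core loop of batch mode active learning can be sketched simply: score each unlabeled instance by model uncertainty and query a batch of the most uncertain ones. This is a deliberately simplified illustration (plain entropy ranking with a fixed batch size), not the dissertation's adaptive or convex-relaxation formulations; the probability values are hypothetical.

```python
import math

def entropy(probs):
    """Shannon entropy of a predicted class distribution (higher = more uncertain)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_batch(unlabeled_probs, k):
    """Pick the k unlabeled instances the model is most uncertain about."""
    ranked = sorted(enumerate(unlabeled_probs),
                    key=lambda ip: entropy(ip[1]), reverse=True)
    return [i for i, _ in ranked[:k]]

# Hypothetical class-probability predictions for five unlabeled images
probs = [[0.9, 0.1], [0.5, 0.5], [0.6, 0.4], [0.99, 0.01], [0.55, 0.45]]
batch = select_batch(probs, 2)
```

The dissertation's contributions go further, e.g., choosing k adaptively and encouraging diversity within the batch so that the selected instances are not redundant.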
Contributors: Chakraborty, Shayok (Author) / Panchanathan, Sethuraman (Thesis advisor) / Balasubramanian, Vineeth N. (Committee member) / Li, Baoxin (Committee member) / Mittelmann, Hans (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The increasing popularity of Twitter makes improved trustworthiness and relevance assessment of tweets much more important for search. However, given the limited size of tweets, it is hard to extract ranking measures from a tweet's content alone. I propose a method of ranking tweets by generating a reputation score for each tweet that is based not just on content, but also on additional information from the Twitter ecosystem, which consists of users, tweets, and the web pages that tweets link to. This information is obtained by modeling the Twitter ecosystem as a three-layer graph. The reputation score is used to power two novel methods of ranking tweets by propagating the reputation over an agreement graph based on tweets' content similarity. Additionally, I show how the agreement graph helps counter tweet spam. An evaluation of my method on 16 million tweets from the TREC 2011 Microblog Dataset shows that it doubles the precision over baseline Twitter Search and achieves higher precision than the current state-of-the-art method. I present a detailed internal empirical evaluation of RAProp in comparison to several alternative approaches that I propose, as well as an external evaluation in comparison to the current state-of-the-art method.
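The idea of propagating reputation over an agreement graph can be sketched as a damped fixed-point iteration: each tweet keeps part of its own seed reputation and absorbs the rest from the tweets it agrees with, so isolated (spam-like) tweets cannot inherit support. This is a hypothetical simplification, not the RAProp algorithm itself; the graph, seed scores, and damping factor are illustrative.

```python
def propagate(adj, seed, alpha=0.85, iters=50):
    """Spread seed reputation over an agreement graph.

    Each node keeps (1 - alpha) of its seed score and receives alpha
    times the average score of its agreeing neighbors.
    """
    n = len(seed)
    r = seed[:]
    for _ in range(iters):
        nxt = []
        for i in range(n):
            nbrs = adj[i]
            pulled = sum(r[j] for j in nbrs) / len(nbrs) if nbrs else 0.0
            nxt.append((1 - alpha) * seed[i] + alpha * pulled)
        r = nxt
    return r

# Hypothetical 3-tweet graph: tweets 0 and 1 agree; tweet 2 is isolated spam
adj = [[1], [0], []]
seed = [0.8, 0.6, 0.9]
scores = propagate(adj, seed)
```

Note how tweet 2, despite a high seed score, ends up ranked lowest because nothing in the ecosystem agrees with it; this is the intuition behind using agreement to counter spam.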
Contributors: Ravikumar, Srijith (Author) / Kambhampati, Subbarao (Thesis advisor) / Davulcu, Hasan (Committee member) / Liu, Huan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Current policies subsidizing or accelerating deployment of photovoltaics (PV) are typically motivated by claims of environmental benefit, such as the reduction of CO2 emissions generated by the fossil-fuel-fired power plants that PV is intended to displace. Existing practice is to assess these environmental benefits on a net life-cycle basis, where the CO2 benefits accrued during use of the PV panels are found to exceed the emissions generated during the manufacturing phase, including materials extraction and panel manufacture prior to installation. However, this approach neglects the fact that the environmental costs of CO2 release during manufacture are incurred early, while the environmental benefits accrue later. Thus, where policy targets call for meeting CO2 reduction goals by a certain date, rapid PV deployment may have counter-intuitive, albeit temporary, undesired consequences, and on a cumulative radiative forcing (CRF) basis the environmental improvements attributable to PV might be realized much later than is currently understood. This phenomenon is particularly acute when PV manufacture occurs in areas using CO2-intensive energy sources (e.g., coal), but deployment occurs in areas with less CO2-intensive electricity sources (e.g., hydro). This thesis builds a dynamic CRF model to examine the inter-temporal warming impacts of PV deployments in three locations: California, Wyoming and Arizona. The model includes the following factors that impact CRF: PV deployment rate, choice of PV technology, pace of PV technology improvements, and CO2 intensity of the electricity mix at the manufacturing and deployment locations. Wyoming and California show the highest and lowest CRF benefits, as they have the most and least CO2-intensive grids, respectively. CRF payback times are longer than CO2 payback times in all cases.
Thin-film CdTe PV technologies have the lowest manufacturing CO2 emissions and therefore the shortest CRF payback times. This model can inform policies intended to fulfill time-sensitive CO2 mitigation goals while minimizing short-term radiative forcing.
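The distinction between CO2 payback and CRF payback can be illustrated with a toy calculation. The sketch below treats forcing as proportional to tonne-years of CO2 aloft and ignores atmospheric CO2 decay; the manufacturing-emission and avoided-emission figures are hypothetical, and the model is far simpler than the dynamic CRF model described above.

```python
def co2_payback(e_man, avoided_per_year):
    """Years until cumulative avoided CO2 equals manufacturing CO2."""
    t = 0
    while avoided_per_year * t < e_man:
        t += 1
    return t

def crf_payback(e_man, avoided_per_year, horizon=200):
    """Years until avoided forcing (tonne-years of CO2 kept out of the air)
    exceeds the forcing of the up-front manufacturing pulse."""
    for t in range(1, horizon + 1):
        forcing_man = e_man * t  # pulse at t=0, aloft for t years
        forcing_avoided = sum(avoided_per_year * (t - y) for y in range(1, t + 1))
        if forcing_avoided >= forcing_man:
            return t
    return None

# Hypothetical panel: 10 t CO2 to manufacture, 2 t CO2/yr avoided in use
e, a = 10.0, 2.0
```

Even in this toy version, the CRF payback (11 years here) is roughly double the CO2 payback (5 years), capturing why time-sensitive mitigation targets make the timing of emissions matter.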
Contributors: Triplican Ravikumar, Dwarakanath (Author) / Seager, Thomas P. (Thesis advisor) / Fraser, Matthew P. (Thesis advisor) / Chester, Mikhail V. (Committee member) / Sinha, Parikhit (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Many manmade chemicals used in consumer products are ultimately washed down the drain and collected in municipal sewers. Efficient chemical monitoring at wastewater treatment (WWT) plants thus may provide up-to-date information on chemical usage rates for epidemiological assessments. The objective of the present study was to extend this concept, termed 'sewage epidemiology', to include municipal sewage sludge (MSS) in identifying and prioritizing contaminants of emerging concern (CECs). To test this, the following specific aims were defined: (i) to screen and identify CECs in nationally representative samples of MSS and to provide nationwide inventories of CECs in U.S. MSS; (ii) to investigate the fate and persistence, in MSS-amended soils, of sludge-borne hydrophobic CECs; and (iii) to develop an analytical tool relying on contaminant levels in MSS as an indicator for identifying and prioritizing hydrophobic CECs. Chemicals that are primarily discharged to sewage systems (alkylphenol surfactants) and widespread persistent organohalogen pollutants (perfluorochemicals and brominated flame retardants) were analyzed in nationally representative MSS samples. A meta-analysis showed that CECs contribute about 0.04-0.15% of the total dry mass of MSS, a mass equivalent of 2,700-7,900 metric tonnes of chemicals annually. An analysis of archived mesocosms from a sludge weathering study showed that 64 CECs persisted in MSS/soil mixtures over the course of the experiment, with half-lives ranging between 224 and >990 days; these results suggest an inherent persistence of CECs that accumulate in MSS. A comparison of the spectrum of chemicals (n=52) analyzed in nationally representative biological specimens from humans and in MSS revealed a 70% overlap. This observed co-occurrence of contaminants in both matrices suggests that MSS may serve as an indicator of ongoing human exposures and body burdens of pollutants.
In conclusion, I posit that this novel approach to sewage epidemiology may serve to pre-screen and prioritize the several thousand known or suspected CECs to identify those most likely to pose a risk to human health and the environment.
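Half-lives like those reported above (224 to >990 days) are conventionally derived by fitting a first-order decay model to concentration time series. The sketch below shows the arithmetic for the two-point case; the concentrations are hypothetical and the real study would fit across many time points.

```python
import math

def first_order_half_life(c0, ct, t_days):
    """Half-life (days) assuming first-order decay C(t) = C0 * exp(-k t)."""
    k = math.log(c0 / ct) / t_days  # decay rate constant (1/day)
    return math.log(2) / k

# Hypothetical: a CEC falls from 100 to 50 ug/kg dry weight over 224 days
hl = first_order_half_life(100.0, 50.0, 224.0)
```

A half-life of hundreds of days, as computed here, is what marks a sludge-borne CEC as persistent in MSS-amended soil.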
Contributors: Venkatesan, Arjunkrishna (Author) / Halden, Rolf U. (Thesis advisor) / Westerhoff, Paul (Committee member) / Fox, Peter (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Nitrate is the most prevalent water pollutant limiting the use of groundwater as a potable water source. The overarching goal of this dissertation was to leverage advances in nanotechnology to improve nitrate photocatalysis and transition treatment to the full-scale. The research objectives were to (1) examine commercial and synthesized photocatalysts, (2) determine the effect of water quality parameters (e.g., pH), (3) conduct responsible engineering by ensuring detection methods were in place for novel materials, and (4) develop a conceptual framework for designing nitrate-specific photocatalysts. The key issues for implementing photocatalysis for nitrate drinking water treatment were efficient nitrate removal at neutral pH and by-product selectivity toward nitrogen gases, rather than by-products that pose a human health concern (e.g., nitrite). Photocatalytic nitrate reduction was found to follow a series of proton-coupled electron transfers. The nitrate reduction rate was limited by the electron-hole recombination rate, and the addition of an electron donor (e.g., formate) was necessary to reduce the recombination rate and achieve efficient nitrate removal. Nano-sized photocatalysts with high surface areas mitigated the negative effects of competing aqueous anions. The key water quality parameter impacting by-product selectivity was pH. For pH < 4, the by-product selectivity was mostly N-gas with some NH4+, but this shifted to NO2- above pH = 4, which suggests the need for proton localization to move beyond NO2-. Co-catalysts that form a Schottky barrier, allowing for localization of electrons, were best for nitrate reduction. Silver was optimal in heterogeneous systems because of its ability to improve nitrate reduction activity and N-gas by-product selectivity, and graphene was optimal in two-electrode systems because of its ability to shuttle electrons to the working electrode. 
"Environmentally responsible use of nanomaterials" is to ensure that detection methods are in place for the nanomaterials tested. While methods exist for the metals and metal oxides examined, there are currently none for carbon nanotubes (CNTs) and graphene. Acknowledging that risk assessment encompasses dose-response and exposure, new analytical methods were developed for extracting and detecting CNTs and graphene in complex organic environmental (e.g., urban air) and biological matrices (e.g. rat lungs).
Contributors: Doudrick, Kyle (Author) / Westerhoff, Paul (Thesis advisor) / Halden, Rolf (Committee member) / Hristovski, Kiril (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
There is growing concern over the future availability of water for electricity generation. Because of a rapidly growing population coupled with an arid climate, the Western United States faces a particularly acute water/energy challenge, as installation of new electricity capacity is expected to be required in the areas with the most limited water availability. Electricity trading is anticipated to be an important strategy for avoiding further local water stress, especially during drought and in the areas with the most rapidly growing populations. Transfers of electricity imply transfers of "virtual water," the water required for the production of a product. Yet, as a result of sizable demand growth, there may not be excess capacity in the system to support trade as an adaptive response to long-lasting drought. As the grid inevitably expands capacity due to higher demand, or adapts to anticipated climate change, capacity additions should be selected and sited to increase system resilience to drought. This paper explores the tradeoff between virtual water and local water/energy infrastructure development for the purpose of enhancing the Western US power grid's resilience to drought. A simple linear model is developed that estimates the economically optimal configuration of the Western US power grid given water constraints. The model indicates that natural gas combined-cycle power plants, combined with increased interstate trade in power and virtual water, provide the greatest opportunity for cost-effective and water-efficient grid expansion. Such expansion, as well as drought conditions, may shift and increase virtual water trade patterns, as states with ample water resources and a competitive advantage in developing power sources become net exporters, and states with limited water or higher costs become importers.
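The flavor of the linear model described above, minimizing generation cost subject to a water constraint, can be sketched with a tiny brute-force search (a real formulation would use a linear programming solver). All plant types, costs, and water intensities below are hypothetical placeholders, not figures from the thesis.

```python
# Hypothetical plant options: name -> (cost $/MWh, water use gal/MWh)
options = {"gas_cc": (50.0, 200.0), "coal": (40.0, 500.0), "dry_solar": (90.0, 5.0)}

def cheapest_mix(demand_mwh, water_cap_gal, step=100):
    """Brute-force a two-technology mix meeting demand under a water cap,
    minimizing total cost (a toy stand-in for the linear optimization)."""
    best = None
    techs = list(options)
    for a in techs:
        for b in techs:
            for x in range(0, demand_mwh + 1, step):
                y = demand_mwh - x
                cost = x * options[a][0] + y * options[b][0]
                water = x * options[a][1] + y * options[b][1]
                if water <= water_cap_gal and (best is None or cost < best[0]):
                    best = (cost, {a: x, b: y})
    return best
```

With a tight water cap, the cheapest feasible mix shifts away from the lowest-cost, highest-water option toward a blend, which is the basic tradeoff the thesis model explores at grid scale.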
Contributors: Herron, Seth (Author) / Ruddell, Benjamin L. (Thesis advisor) / Ariaratnam, Samuel (Thesis advisor) / Allenby, Braden (Committee member) / Williams, Eric (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The consumption of feedstocks from agriculture and forestry by current biofuel production has raised concerns about food security and land availability. In the meantime, intensive human activities have created a large amount of marginal land that requires management. This study investigated the viability of aligning land management with biofuel production on marginal lands. Biofuel crop production on two types of marginal land, namely urban vacant lots and abandoned mine lands (AMLs), was assessed. The investigation of biofuel production on urban marginal land was carried out in Pittsburgh between 2008 and 2011, using the sunflower gardens developed by a Pittsburgh non-profit as an example. Results showed that crops from urban marginal lands were safe for biofuel use. The crop yield was 20% of that on agricultural land when low-input agriculture was used in crop cultivation. The energy balance analysis demonstrated that the sunflower gardens could produce a net energy return even at the current low yield. Biofuel production on AMLs was assessed through greenhouse experiments on sunflower, soybean, corn, canola and camelina. The research successfully created an industrial symbiosis by using bauxite as a soil amendment to enable plant growth on very acidic mine refuse. Phytoremediation and soil amendments were found to effectively reduce contamination in the AML and its runoff. Results from this research support biofuel production on marginal lands as a unique and feasible option for cultivating biofuel feedstocks.
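The net energy return mentioned above is typically summarized as an energy ratio: energy content of the harvested crop divided by the cultivation energy invested. The sketch below shows the arithmetic; the yield, energy content, and input figures are hypothetical, not the study's measured values.

```python
def net_energy_ratio(yield_kg_per_ha, energy_mj_per_kg, input_mj_per_ha):
    """Energy returned per unit of energy invested (> 1 means a net gain)."""
    return (yield_kg_per_ha * energy_mj_per_kg) / input_mj_per_ha

# Hypothetical low-input urban sunflower plot:
# 500 kg/ha of seed at 26 MJ/kg, against 4,000 MJ/ha of cultivation inputs
ner = net_energy_ratio(500, 26.0, 4000.0)
```

A ratio above 1, as in this illustrative case, is what it means for a low-yield garden to still "produce a net energy return."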
Contributors: Zhao, Xi (Author) / Landis, Amy (Thesis advisor) / Fox, Peter (Committee member) / Chester, Mikhail (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The objective of this thesis project is to build a prototype that uses Linear Temporal Logic (LTL) specifications to generate a 2D motion plan commanding an iRobot to fulfill the specifications. This thesis project was created for the Cyber-Physical Systems Lab at Arizona State University. The end product is a software solution that can be used in academia and industry for research in cyber-physical systems applications. The major features of the project are: a modular system for motion planning, use of the Robot Operating System (ROS), use of triangulation for environment decomposition, and use of a StarGazer sensor for localization. The project is built on ROS, an open-source framework in which it is very easy to integrate different modules, whether software or hardware, on a Linux-based platform. The use of ROS means the project or its modules can be adapted quickly for different applications as the need arises. The final software package, created and tested, takes a data file as input containing the LTL specifications, a list of the symbols used in the LTL, and the environment polygon data, which holds real-world coordinates for all polygons as well as information on the neighbors and parents of each polygon. The software package successfully ran experiments in coverage, reachability with avoidance, and sequencing.
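The "reachability with avoidance" experiment above corresponds, at the discrete level, to finding a path through the triangulated environment's adjacency graph that reaches a goal triangle while never entering forbidden triangles. The sketch below is a hypothetical, simplified stand-in for that discrete planning step (a BFS, not the thesis's actual LTL synthesis pipeline); the triangulation is invented for illustration.

```python
from collections import deque

def reach_with_avoidance(neighbors, start, goal, avoid):
    """BFS over a triangulation adjacency graph: a discrete stand-in for
    the LTL spec 'eventually goal, and always not avoid'."""
    if start in avoid:
        return None
    queue, parent = deque([start]), {start: None}
    while queue:
        tri = queue.popleft()
        if tri == goal:
            path = []
            while tri is not None:  # walk parents back to the start
                path.append(tri)
                tri = parent[tri]
            return path[::-1]
        for nxt in neighbors[tri]:
            if nxt not in parent and nxt not in avoid:
                parent[nxt] = tri
                queue.append(nxt)
    return None  # no run satisfies the specification

# Hypothetical triangulation: triangles 0-1-2-3 form a strip, 1 is an
# obstacle, and triangle 4 offers a detour from 0 to 3
nbrs = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [0, 3]}
path = reach_with_avoidance(nbrs, 0, 3, avoid={1})
```

In the full system, a discrete plan like this would then be refined into continuous 2D motion commands within each triangle.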
Contributors: Pandya, Parth (Author) / Fainekos, Georgios (Thesis advisor) / Dasgupta, Partha (Committee member) / Lee, Yann-Hang (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
This work focuses on a generalized assessment of source zone natural attenuation (SZNA) at chlorinated aliphatic hydrocarbon (CAH) impacted sites. Given the number of such sites and the technical challenges of cleanup, there is a need for an SZNA assessment method at CAH-impacted sites. The method anticipates that decision makers will be interested in the following questions: (1) Is SZNA occurring, and what processes contribute? (2) What are the current SZNA rates? (3) What are the longer-term implications? The approach is macroscopic and uses multiple lines of evidence. An in-depth application of the generalized, non-site-specific method over multiple site events, with sampling refinement approaches applied to improve SZNA estimates, is presented for three CAH-impacted sites, with a focus on mass discharge rates for four events over approximately three years (Site 1: 2.9, 8.4, 4.9, 2.8 kg/yr as PCE; Site 2: 1.6, 2.2, 1.7, 1.1 kg/yr as PCE; Site 3: 570, 590, 250, 240 kg/yr as TCE). When applying the generalized CAH-SZNA method, different practitioners are unlikely to sample a site identically, especially with regard to sampling density on a groundwater transect. Calculation of SZNA rates is affected by contaminant spatial variability relative to transect sampling intervals and density, with variations in either resulting in different mass discharge estimates. The effects of varied sampling densities and spacings on discharge estimates were examined to develop heuristic sampling guidelines with practical site sampling densities; the guidelines aim to reduce the variability in discharge estimates due to different sampling approaches and to improve confidence in SZNA rates, allowing decision makers to place the rates in perspective and determine a course of action based on remedial goals. Finally, bench-scale testing was used to address the longer-term questions, specifically the nature and extent of source architecture. A rapid in situ disturbance method was developed using a bench-scale apparatus.
The approach allows rapid identification of the presence of DNAPL using several common pilot-scale technologies (ISCO, air sparging, water injection) and can identify relevant source architectural features (ganglia, pools, dissolved source). Understanding the source architecture and identifying DNAPL-containing regions greatly enhances site conceptual models, improving estimated time frames for SZNA and potentially improving the design of remedial systems.
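The transect-based mass discharge rates quoted above (kg/yr) are conventionally computed by summing, over the cells of a transect perpendicular to groundwater flow, the product of concentration, Darcy flux, and cell area. The sketch below shows that unit bookkeeping with hypothetical cell values, not data from the three study sites.

```python
def mass_discharge(cells):
    """Contaminant mass discharge (kg/yr) across a transect.

    Each cell is (concentration ug/L, Darcy flux m/yr, cell area m2);
    1000 L/m3 converts concentration to ug/m3, 1e-9 converts ug to kg.
    """
    kg_per_ug = 1e-9
    return sum(c_ug_per_l * 1000.0 * q_m_per_yr * a_m2 * kg_per_ug
               for c_ug_per_l, q_m_per_yr, a_m2 in cells)

# Hypothetical 3-cell transect: a high-concentration core cell flanked
# by progressively cleaner cells
cells = [(500.0, 10.0, 20.0), (50.0, 10.0, 20.0), (5.0, 10.0, 20.0)]
md = mass_discharge(cells)
```

Because concentration can vary by orders of magnitude across a few cells, as in this illustration, coarsening the sampling grid can easily miss the core and bias the discharge estimate, which is why the sampling-density guidelines matter.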
Contributors: Ekre, Ryan (Author) / Johnson, Paul Carr (Thesis advisor) / Rittmann, Bruce (Committee member) / Krajmalnik-Brown, Rosa (Committee member) / Arizona State University (Publisher)
Created: 2013