This collection includes most of the ASU Theses and Dissertations from 2011 to present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about the dissertations/theses includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic theses found in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.

Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.

Displaying 1 - 10 of 71
Description
This dissertation explores the use of bench-scale batch microcosms in remedial design of contaminated aquifers, presents an alternative methodology for conducting such treatability studies, and, from technical, economic, and social perspectives, examines real-world application of this new technology. In situ bioremediation (ISB) is an effective remedial approach for many contaminated groundwater sites. However, site-specific variability necessitates the performance of small-scale treatability studies prior to full-scale implementation. The most common methodology is the batch microcosm, whose potential limitations and suitable technical alternatives are explored in this thesis. In a critical literature review, I discuss how continuous-flow conditions stimulate microbial attachment and biofilm formation, and identify unique microbiological phenomena largely absent in batch bottles, yet potentially relevant to contaminant fate. Following up on this theoretical evaluation, I experimentally produce pyrosequencing data and perform beta diversity analysis to demonstrate that batch and continuous-flow (column) microcosms foster distinctly different microbial communities. Next, I introduce the In Situ Microcosm Array (ISMA), which took approximately two years to design, develop, build, and iteratively improve. The ISMA can be deployed down-hole in groundwater monitoring wells of contaminated aquifers for the purpose of autonomously conducting multiple parallel continuous-flow treatability experiments. The ISMA stores all samples generated in the course of each experiment, thereby preventing the release of chemicals into the environment. Detailed results are presented from an ISMA demonstration evaluating ISB for the treatment of hexavalent chromium and trichloroethene. In a technical and economic comparison to batch microcosms, I demonstrate the ISMA is both effective in informing remedial design decisions and cost-competitive.
Finally, I report on a participatory technology assessment (pTA) workshop attended by diverse stakeholders of the Phoenix 52nd Street Superfund Site evaluating the ISMA's ability to address a real-world problem. In addition to receiving valuable feedback on perceived ISMA limitations, I conclude from the workshop that pTA can facilitate mutual learning even among entrenched stakeholders. In summary, my doctoral research (i) pinpointed limitations of current remedial design approaches, (ii) produced a novel alternative approach, and (iii) demonstrated the technical, economic, and social value of this novel remedial design tool, i.e., the In Situ Microcosm Array technology.
Contributors: Kalinowski, Tomasz (Author) / Halden, Rolf U. (Thesis advisor) / Johnson, Paul C. (Committee member) / Krajmalnik-Brown, Rosa (Committee member) / Bennett, Ira (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
In recent years we have witnessed a shift towards multi-processor system-on-chips (MPSoCs) to address the demands of embedded devices (such as cell phones, GPS devices, luxury car features, etc.). Highly optimized MPSoCs are well-suited to tackle the complex application demands desired by the end user. These MPSoCs incorporate a constellation of heterogeneous processing elements (PEs) (general-purpose PEs and application-specific integrated circuits (ASICs)). A typical MPSoC will be composed of an application processor, such as an ARM Cortex-A9 with a cache-coherent memory hierarchy, and several application sub-systems. Each of these sub-systems is composed of highly optimized instruction processors, graphics/DSP processors, and custom hardware accelerators. Typically, these sub-systems utilize scratchpad memories (SPM) rather than support cache coherency. The overall architecture is an integration of the various sub-systems through a high-bandwidth system-level interconnect (such as a Network-on-Chip (NoC)). The shift to MPSoCs has been fueled by three major factors: demand for high performance, the use of component libraries, and short design turnaround time. As customers continue to desire more and more complex applications on their embedded devices, the performance demand for these devices continues to increase. Designers have turned to using MPSoCs to address this demand. By using pre-made IP libraries, designers can quickly piece together an MPSoC that will meet the application demands of the end user with minimal time spent designing new hardware. Additionally, the use of MPSoCs allows designers to generate new devices very quickly, thus reducing the time to market. In this work, a complete MPSoC synthesis design flow is presented. We first present a technique to address the synthesis of the interconnect architecture (particularly Network-on-Chip (NoC)).
We then address the synthesis of the memory architecture of an MPSoC sub-system. Lastly, we present a co-synthesis technique to generate the functional and memory architectures simultaneously. The validity and quality of each synthesis technique is demonstrated through extensive experimentation.
Contributors: Leary, Glenn (Author) / Chatha, Karamvir S. (Thesis advisor) / Vrudhula, Sarma (Committee member) / Shrivastava, Aviral (Committee member) / Beraha, Rudy (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
This work focuses on a generalized assessment of source zone natural attenuation (SZNA) at chlorinated aliphatic hydrocarbon (CAH) impacted sites. Given the number of sites and the technical challenges of cleanup, there is a need for a SZNA method at CAH-impacted sites. The method anticipates that decision makers will be interested in the following questions: (1) Is SZNA occurring, and what processes contribute? (2) What are the current SZNA rates? (3) What are the longer-term implications? The approach is macroscopic and uses multiple lines of evidence. An in-depth application of the generalized, non-site-specific method over multiple site events, with sampling refinement approaches applied for improving SZNA estimates, at three CAH-impacted sites is presented, with a focus on discharge rates for four events over approximately three years (Site 1: 2.9, 8.4, 4.9, 2.8 kg/yr as PCE; Site 2: 1.6, 2.2, 1.7, 1.1 kg/yr as PCE; Site 3: 570, 590, 250, 240 kg/yr as TCE). When applying the generalized CAH-SZNA method, it is likely that different practitioners will not sample a site similarly, especially regarding sampling density on a groundwater transect. Calculation of SZNA rates is affected by contaminant spatial variability with reference to transect sampling intervals and density, with variations in either resulting in different mass discharge estimates. The effects on discharge estimates from varied sampling densities and spacings were examined to develop heuristic sampling guidelines with practical site sampling densities; the guidelines aim to reduce the variability in discharge estimates due to different sampling approaches and to improve confidence in SZNA rates, allowing decision makers to place the rates in perspective and determine a course of action based on remedial goals. Finally, bench-scale testing was used to address longer-term questions, specifically the nature and extent of source architecture. A rapid in-situ disturbance method was developed using a bench-scale apparatus.
The approach allows for rapid identification of the presence of DNAPL using several common pilot-scale technologies (ISCO, air sparging, water injection) and can identify relevant source architectural features (ganglia, pools, dissolved source). Understanding of source architecture and identification of DNAPL-containing regions greatly enhances site conceptual models, improving estimated time frames for SZNA and potentially improving the design of remedial systems.
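The transect-based mass discharge estimates discussed above can be sketched numerically. The following is a minimal illustration (with invented numbers, not the sites' data) of how mass discharge across a groundwater transect is commonly summed from concentration, Darcy flux, and sampling-cell area; a sparser transect that misses a high-concentration cell yields a very different estimate:

```python
# Hypothetical sketch: transect-based contaminant mass discharge (kg/yr).
# Each transect cell contributes C * q * A, where C is concentration
# (mg/L -> kg/m^3), q is Darcy flux (m/yr), and A is cell area (m^2).

def mass_discharge(cells):
    """Sum contaminant mass discharge over transect cells.

    cells: list of (conc_mg_per_L, darcy_flux_m_per_yr, area_m2) tuples.
    Returns discharge in kg/yr.
    """
    total = 0.0
    for conc_mg_per_L, q_m_per_yr, area_m2 in cells:
        conc_kg_per_m3 = conc_mg_per_L * 1e-3  # 1 mg/L = 1 g/m^3 = 1e-3 kg/m^3
        total += conc_kg_per_m3 * q_m_per_yr * area_m2
    return total

# Dense transect resolves a high-concentration core cell; the sparse
# transect averages over larger cells and misses it.
dense = [(5.0, 10.0, 20.0), (50.0, 10.0, 20.0), (2.0, 10.0, 20.0)]
sparse = [(5.0, 10.0, 30.0), (2.0, 10.0, 30.0)]
print(round(mass_discharge(dense), 1))   # 11.4 kg/yr
print(round(mass_discharge(sparse), 1))  # 2.1 kg/yr
```

The gap between the two estimates illustrates why the thesis develops heuristic guidelines for transect sampling density.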
Contributors: Ekre, Ryan (Author) / Johnson, Paul Carr (Thesis advisor) / Rittmann, Bruce (Committee member) / Krajmalnik-Brown, Rosa (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
To further efforts to produce energy from renewable sources, microbial electrochemical cells (MXCs) can utilize anode-respiring bacteria (ARB) to couple the oxidation of an organic substrate to the delivery of electrons to the anode. Although ARB such as Geobacter and Shewanella have been well studied in terms of their microbiology and electrochemistry, much is still unknown about the mechanism of electron transfer to the anode. To this end, this thesis seeks to elucidate the complexities of electron transfer existing in Geobacter sulfurreducens biofilms by employing Electrochemical Impedance Spectroscopy (EIS) as the tool of choice. Experiments measuring EIS resistances as a function of growth were used to uncover the potential gradients that emerge in biofilms as they grow and become thicker. While a better understanding of this model ARB is sought, electrochemical characterization of a halophile, Geoalkalibacter subterraneus (Glk. subterraneus), revealed that this organism can function as an ARB and produce seemingly high current densities while consuming different organic substrates, including acetate, butyrate, and glycerol. The importance of identifying and studying novel ARB for broader MXC applications is stressed in this thesis as a potential avenue for tackling some of humanity's energy problems.
Contributors: Ajulo, Oluyomi (Author) / Torres, Cesar (Thesis advisor) / Nielsen, David (Committee member) / Krajmalnik-Brown, Rosa (Committee member) / Popat, Sudeep (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
In situ remediation of contaminated aquifers, specifically in situ bioremediation (ISB), has gained popularity over pump-and-treat operations. It represents a more sustainable approach that can also achieve complete mineralization of contaminants in the subsurface. However, the subsurface reality is very complex, characterized by hydrodynamic groundwater movement, geological heterogeneity, and mass-transfer phenomena governing contaminant transport and bioavailability. These phenomena cannot be properly studied using commonly conducted laboratory batch microcosms, which lack realistic representation of the processes named above. Instead, relevant processes are better understood by using flow-through systems (sediment columns). However, flow-through column studies are typically conducted without replicates. Due to additional sources of variability (e.g., flow rate variation between columns and over time), column studies are expected to be less reproducible than simple batch microcosms. This was assessed through a comprehensive statistical analysis of results from multiple batch and column studies. Anaerobic microbial biotransformations of trichloroethene and of perchlorate were chosen as case studies. Results revealed no statistically significant differences between the reproducibility of batch and column studies. It has further been recognized that laboratory studies cannot accurately reproduce many phenomena encountered in the field. To overcome this limitation, a down-hole diagnostic device (in situ microcosm array, ISMA) was developed that enables the autonomous operation of replicate flow-through sediment columns in a realistic aquifer setting. Computer-aided design (CAD), rapid prototyping, and computer numerical control (CNC) machining were used to create a tubular device enabling practitioners to conduct conventional sediment column studies in situ.
A case study is presented in which two remediation strategies, monitored natural attenuation and bioaugmentation with concomitant biostimulation, were evaluated in the laboratory and in situ at a perchlorate-contaminated site. Findings demonstrate the feasibility of evaluating anaerobic bioremediation in a moderately aerobic aquifer. They further highlight the possibility of mimicking in situ remediation strategies at small scale in situ. The ISMA is the first device offering autonomous in situ operation of conventional flow-through sediment microcosms and producing statistically significant data through the use of multiple replicates. With its sustainable approach to treatability testing and data gathering, the ISMA represents a versatile addition to the toolbox of scientists and engineers.
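A reproducibility comparison between replicate batch and column microcosms could, for instance, rest on a dispersion statistic such as the coefficient of variation across replicates. This sketch uses invented rate values and is not the thesis's actual statistical analysis:

```python
# Hedged sketch: comparing the reproducibility of replicate batch vs.
# column microcosms via the coefficient of variation (CV) of a measured
# endpoint (e.g., a contaminant removal rate). All values are invented.
import statistics

def cv(values):
    """Coefficient of variation: sample standard deviation / mean."""
    return statistics.stdev(values) / statistics.mean(values)

batch_rates = [0.42, 0.47, 0.44, 0.40]    # hypothetical removal rates, 1/d
column_rates = [0.38, 0.45, 0.41, 0.43]   # hypothetical replicate columns

print(f"batch CV:  {cv(batch_rates):.3f}")
print(f"column CV: {cv(column_rates):.3f}")
```

Similar CVs across the two systems would be consistent with the thesis's finding of no statistically significant difference in reproducibility.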
Contributors: McClellan, Kristin (Author) / Halden, Rolf U. (Thesis advisor) / Johnson, Paul C. (Committee member) / Krajmalnik-Brown, Rosa (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Uranium (U) contamination has been attracting public concern, and many researchers are investigating principles and applications of U remediation. The overall goal of my research is to understand the versatile roles of sulfate-reducing bacteria (SRB) in uranium bioremediation, including direct involvement (reducing U) and indirect involvement (protecting reduced U from reoxidation). I pursue this goal by studying Desulfovibrio vulgaris, a representative SRB. For direct involvement, I performed experiments on uranium bioreduction and uraninite (UO2) production in batch tests and in a H2-based membrane biofilm reactor (MBfR) inoculated with D. vulgaris. In summary, D. vulgaris was able to immobilize soluble U(VI) by enzymatically reducing it to insoluble U(IV), and the nanocrystalline UO2 was associated with the biomass. In the MBfR system, although D. vulgaris failed to form a biofilm, other microbial groups capable of U(VI) reduction formed a biofilm, and up to 95% U removal was achieved during long-term operation. For the indirect involvement, I studied the production and characterization of biogenic iron sulfide (FeS) in batch tests. In summary, D. vulgaris produced nanocrystalline FeS, a potential redox buffer to protect UO2 from remobilization by O2. My results demonstrate that a variety of controllable environmental parameters, including pH, free sulfide, and the types of Fe sources and electron donors, significantly determined the characteristics of both biogenic solids, and those characteristics should affect U-sequestering performance by SRB. Overall, my results provide a baseline for exploiting effective and sustainable approaches to U bioremediation, including the application of the novel MBfR technology to U sequestration from groundwater and of biogenic FeS for protecting sequestered U from remobilization, as well as microbe-relevant tools to optimize U sequestration in practice.
Contributors: Zhou, Chen (Author) / Rittmann, Bruce E. (Thesis advisor) / Krajmalnik-Brown, Rosa (Committee member) / Torres, César I. (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
In this thesis we deal with the problem of temporal logic robustness estimation. We present a dynamic programming algorithm for the robustness estimation problem of Metric Temporal Logic (MTL) formulas with respect to a finite timed state sequence. This algorithm not only tests whether the MTL specification is satisfied by the given input, a finite system trajectory, but also quantifies to what extent the sequence satisfies or violates the MTL specification. The implementation of the algorithm is the DP-TALIRO toolbox for MATLAB. Currently, it is used as the temporal logic robustness computation engine of S-TALIRO, a MATLAB tool that searches for trajectories of minimal robustness in Simulink/Stateflow. DP-TALIRO is expected to have near-linear running time and a constant memory requirement depending on the structure of the MTL formula. The DP-TALIRO toolbox also integrates new features not supported in its ancestor FW-TALIRO, such as parameter replacement, most related iteration, and most related predicate. A derivative of DP-TALIRO, DP-T-TALIRO, which applies the dynamic programming algorithm to time-robustness computation, is also addressed in this thesis. We test the running time of DP-TALIRO and compare it with FW-TALIRO. Finally, we present an application where DP-TALIRO is used as the robustness computation core of S-TALIRO for a parameter estimation problem.
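The dynamic-programming idea behind robustness computation can be illustrated for the simplest temporal operators. This toy sketch (not the DP-TALIRO API) computes the robustness of "always p" and "eventually p" over a finite trace with a backward running min/max, the same recursive structure a DP robustness engine generalizes to full MTL:

```python
# Toy illustration of quantitative (robustness) semantics over a finite
# trace. Predicate p is taken as x <= c, whose robustness at a sample is
# the signed distance c - x: positive margins mean satisfaction.

def predicate_robustness(trace, c):
    """Signed distance of each sample to the set {x : x <= c}."""
    return [c - x for x in trace]

def always(rob):
    """Backward DP for 'always p': out[i] = min(rob[i], out[i+1])."""
    out = rob[:]
    for i in range(len(out) - 2, -1, -1):
        out[i] = min(out[i], out[i + 1])
    return out

def eventually(rob):
    """Backward DP for 'eventually p': out[i] = max(rob[i], out[i+1])."""
    out = rob[:]
    for i in range(len(out) - 2, -1, -1):
        out[i] = max(out[i], out[i + 1])
    return out

trace = [0.2, 0.8, 1.5, 0.4]            # finite system trajectory
rob = predicate_robustness(trace, 1.0)  # p: x <= 1.0
print(always(rob)[0])      # -0.5: "always p" is violated by margin 0.5
print(eventually(rob)[0])  # 0.8: "eventually p" is satisfied by margin 0.8
```

Each operator is a single linear pass over the trace, which is consistent with the near-linear running time noted above.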
Contributors: Yang, Hengyi (Author) / Fainekos, Georgios (Thesis advisor) / Sarjoughian, Hessam S. (Committee member) / Shrivastava, Aviral (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Software has a great impact on the energy efficiency of any computing system; it can manage the components of a system efficiently or inefficiently. The impact of software is amplified in the context of a wearable computing system used for activity recognition. The design space this platform opens up is immense and encompasses sensors, feature calculations, activity classification algorithms, sleep schedules, and transmission protocols. Design choices in each of these areas impact energy use, overall accuracy, and usefulness of the system. This thesis explores methods by which software can influence the trade-off between energy consumption and system accuracy. In general, the more energy a system consumes, the more accurate it will be. We explore how finding the transitions between human activities can reduce the energy consumption of such systems without greatly reducing accuracy. We introduce the Log-likelihood Ratio Test as a method to detect transitions, and explore how choices of sensor, feature calculations, and parameters concerning time segmentation affect the accuracy of this method. We discovered that an approximately 5X increase in energy efficiency could be achieved with only a 5% decrease in accuracy. We also address how a system's sleep mode, in which the processor enters a low-power state and sensors are turned off, affects a wearable computing platform that does activity recognition. We discuss the energy trade-offs in each stage of the activity recognition process. We find that careful analysis of these parameters can result in great increases in energy efficiency if small compromises in overall accuracy can be tolerated. We call this the "Great Compromise." We found a 6X increase in efficiency with a 7% decrease in accuracy. We then consider how wireless transmission of data affects the overall energy efficiency of a wearable computing platform.
We find that design decisions such as feature calculations and grouping size have a great impact on the energy consumption of the system because of the amount of data that is stored and transmitted. For example, storing and transmitting vector-based features such as FFT or DCT does not compress the signal and would use more energy than storing and transmitting the raw signal. The effect of grouping size on energy consumption depends on the feature. For scalar features, energy consumption is inversely proportional to grouping size, so it falls as grouping size goes up. For features whose size depends on the grouping size, such as FFT, energy increases with the logarithm of grouping size, so energy consumption increases slowly as grouping size increases. We find that compressing data through activity classification and transition detection significantly reduces energy consumption and that the energy consumed for the classification overhead is negligible compared to the energy savings from data compression. We provide mathematical models of energy usage and data generation, and test our ideas using a mobile computing platform, the Texas Instruments Chronos watch.
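The storage-and-transmission side of this trade-off can be sketched with a toy model (invented byte sizes, not the thesis's measurements): a scalar feature such as a window mean sends one value per group, while raw samples or a same-length vector feature like an FFT send one value per sample, so only the scalar feature's per-sample cost shrinks as grouping size grows:

```python
# Toy model of transmitted data volume per raw sample for different
# feature choices. Computation energy (e.g., the log-factor cost of an
# FFT) is deliberately left out; this covers only storage/transmission.

BYTES_PER_VALUE = 2  # assumed 16-bit samples and feature values

def bytes_per_sample(feature, window):
    """Average transmitted bytes per raw sample for a given feature."""
    if feature in ("raw", "fft"):  # vector-length output: no compression
        values_per_window = window
    elif feature == "mean":        # scalar output: one value per window
        values_per_window = 1
    else:
        raise ValueError(f"unknown feature: {feature}")
    return BYTES_PER_VALUE * values_per_window / window

for w in (32, 128, 512):
    print(w, bytes_per_sample("mean", w), bytes_per_sample("fft", w))
```

The printout shows the scalar feature's cost falling as 1/window while the vector feature's stays flat, matching the qualitative behavior described above.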
Contributors: Boyd, Jeffrey Michael (Author) / Sundaram, Hari (Thesis advisor) / Li, Baoxin (Thesis advisor) / Shrivastava, Aviral (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Stream processing has emerged as an important model of computation especially in the context of multimedia and communication sub-systems of embedded System-on-Chip (SoC) architectures. The dataflow nature of streaming applications allows them to be most naturally expressed as a set of kernels iteratively operating on continuous streams of data. The kernels are computationally intensive and are mainly characterized by real-time constraints that demand high throughput and data bandwidth with limited global data reuse. Conventional architectures fail to meet these demands due to their poorly matched execution models and the overheads associated with instruction and data movements.

This work presents StreamWorks, a multi-core embedded architecture for energy-efficient stream computing. The basic processing element in the StreamWorks architecture is the StreamEngine (SE) which is responsible for iteratively executing a stream kernel. SE introduces an instruction locking mechanism that exploits the iterative nature of the kernels and enables fine-grain instruction reuse. Each instruction in a SE is locked to a Reservation Station (RS) and revitalizes itself after execution; thus never retiring from the RS. The entire kernel is hosted in RS Banks (RSBs) close to functional units for energy-efficient instruction delivery. The dataflow semantics of stream kernels are captured by a context-aware dataflow execution mode that efficiently exploits the Instruction Level Parallelism (ILP) and Data-level parallelism (DLP) within stream kernels.

Multiple SEs are grouped together to form a StreamCluster (SC) and communicate via a local interconnect. A novel software FIFO virtualization technique with split-join functionality is proposed for efficient and scalable stream communication across SEs. The proposed communication mechanism exploits the Task-level parallelism (TLP) of the stream application. The performance and scalability of the communication mechanism are evaluated against existing data movement schemes for scratchpad-based multi-core architectures. Further, overlay schemes and architectural support are proposed that allow hosting any number of kernels on the StreamWorks architecture. The proposed overlay schemes for code management support kernel (context) switching for the most common use cases and can be adapted to any multi-core architecture that uses software-managed local memories.

The performance and energy efficiency of the StreamWorks architecture are evaluated for stream kernel and application benchmarks by implementing the architecture in a 45 nm TSMC process and comparing it with a low-power RISC core and a contemporary accelerator.
Contributors: Panda, Amrit (Author) / Chatha, Karam S. (Thesis advisor) / Wu, Carole-Jean (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Shrivastava, Aviral (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Nitrate, a widespread contaminant in surface water, can cause eutrophication and toxicity to aquatic organisms. To augment the nitrate-removal capacity of constructed wetlands, I applied the H2-based Membrane Biofilm Reactor (MBfR) in a novel configuration called the in situ MBfR (isMBfR). The goal of my thesis is to evaluate and model the nitrate-removal performance of a bench-scale isMBfR system.

I operated the bench-scale isMBfR system under 7 different conditions to evaluate its nitrate-removal performance. When I supplied H2 with the isMBfR (stages 1 - 6), I observed at least 70% nitrate removal, and almost all of the denitrification occurred in the "MBfR zone." When I stopped the H2 supply in stage 7, the nitrate-removal percentage immediately dropped from 92% (stage 6) to 11% (stage 7). Denitrification raised the pH of the bulk liquid to ~ 9.0 for the first 6 stages, but the high pH did not impair the performance of the denitrifiers. Microbial community analyses indicated that denitrifying bacteria (DB) were the dominant bacteria in the "MBfR zone," while photosynthetic Cyanobacteria were dominant in the "photo-zone."

I derived stoichiometric relationships among COD, alkalinity, H2, Dissolved Oxygen (DO), and nitrate to model the nitrate-removal capacity of the "MBfR zone." The stoichiometric relationships corresponded well to the nitrate-removal capacity for all stages except stage 3, which was limited by the abundance of denitrifying bacteria (DB), so that the H2 supply capacity could not be completely used.

Finally, I analyzed two case studies for the real-world application of the isMBfR to constructed wetlands. Based on the characteristics of the wetlands and the stoichiometric relationships, I designed feasible operating conditions (membrane area and H2 pressure) for each wetland. In both cases, the amount of isMBfR surface area was modest, from 0.022 to 1.2 m2/m3 of wetland volume.
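The kind of stoichiometric sizing described here can be illustrated with textbook electron equivalents: reducing NO3- to N2 transfers 5 e- per N, and each H2 donates 2 e-. The sketch below ignores biomass synthesis and DO demand, and its flow and concentration numbers are illustrative, not the thesis's design values:

```python
# Hedged sketch: minimum H2 feed to denitrify a nitrate-N load, from
# electron equivalents only (no cell synthesis, no DO competition).

G_H2_PER_MOL = 2.016   # g H2 per mol
G_N_PER_MOL = 14.007   # g N per mol
EEQ_PER_N = 5          # N(+5) in nitrate -> N(0) in N2
EEQ_PER_H2 = 2         # electrons donated per H2

def h2_demand_g_per_day(flow_m3_per_day, delta_no3_n_mg_per_L):
    """Minimum H2 feed (g/d) to denitrify the given nitrate-N removal."""
    n_load_g_per_day = flow_m3_per_day * delta_no3_n_mg_per_L  # mg/L * m3/d = g/d
    g_h2_per_g_n = (EEQ_PER_N / G_N_PER_MOL) * (G_H2_PER_MOL / EEQ_PER_H2)
    return n_load_g_per_day * g_h2_per_g_n

# Example: 100 m3/d of wetland flow, removing 10 mg/L nitrate-N
print(round(h2_demand_g_per_day(100, 10), 1))  # ~360 g H2/d (0.36 g H2/g N)
```

A design like the one in the thesis would then translate an H2 demand of this kind into membrane area and H2 pressure via the membrane's H2 delivery flux.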
Contributors: Li, Yizhou (Author) / Rittmann, Bruce (Thesis advisor) / Vivoni, Enrique (Committee member) / Krajmalnik-Brown, Rosa (Committee member) / Arizona State University (Publisher)
Created: 2014