This collection includes both ASU Theses and Dissertations, submitted by graduate students, and Barrett, The Honors College theses, submitted by undergraduate students.

Description
Reductive dechlorination by members of the bacterial genus Dehalococcoides is a common and cost-effective avenue for in situ bioremediation of sites contaminated with the chlorinated solvents trichloroethene (TCE) and perchloroethene (PCE). The overarching goal of my research was to address some of the challenges associated with bioremediation timeframes by improving the rates of reductive dechlorination and the growth of Dehalococcoides in mixed communities. Biostimulation of contaminated sites or microcosms with electron donor fails to consistently promote dechlorination of PCE/TCE beyond cis-dichloroethene (cis-DCE), even when the presence of Dehalococcoides is confirmed. Supported by data from microcosm experiments, I showed that the stall at cis-DCE is due to competition for H2, in which components of the soil or sediment serve as electron acceptors for competing microorganisms. Once this competition was minimized through selective enrichment techniques, I demonstrated how to obtain both fast dechlorination rates and high Dehalococcoides densities using three distinct enrichment cultures. Having established how fierce the competition for electron donor is, I then identified bicarbonate (HCO3-) as a potential H2 sink during reductive dechlorination. HCO3- is the natural buffer in groundwater, but it is also the electron acceptor for hydrogenotrophic methanogens and homoacetogens, two microbial groups commonly encountered alongside Dehalococcoides. By testing a range of concentrations in batch experiments, I showed that methanogens are favored at low HCO3- and homoacetogens at high HCO3-. The high HCO3- concentrations increased the H2 demand, which negatively affected the rates and extent of dechlorination. Applying this knowledge of microbial community management, I ran the first successful continuous stirred-tank reactor (CSTR) at a 3-d hydraulic retention time for cultivation of dechlorinating cultures. I demonstrated that, under carefully selected conditions in a CSTR, cultivation of Dehalococcoides at short retention times is feasible, resulting in robust cultures capable of fast dechlorination. Lastly, I provide systematic insight into the effect of high ammonia on communities involved in dechlorination of chloroethenes. This work documents the potential use of landfill leachate as a substrate for dechlorination and an increased tolerance of Dehalococcoides to high ammonia concentrations (2 g L-1 NH4+-N) without loss of the ability to dechlorinate TCE to ethene.
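To make the H2 competition concrete, the stoichiometries below (standard textbook reactions, not reproduced from the thesis) show why methanogens and homoacetogens are such a potent electron-donor sink: full dechlorination of PCE to ethene consumes four H2, while every methane or acetate produced from bicarbonate also consumes four H2.

```latex
% Stepwise hydrogenolysis: each step consumes one H2 and releases one HCl.
\mathrm{C_2Cl_4\,(PCE)} \xrightarrow{+\mathrm{H_2},\,-\mathrm{HCl}}
\mathrm{C_2HCl_3\,(TCE)} \xrightarrow{+\mathrm{H_2},\,-\mathrm{HCl}}
\mathrm{C_2H_2Cl_2\,(\mathit{cis}\text{-}DCE)} \xrightarrow{+\mathrm{H_2},\,-\mathrm{HCl}}
\mathrm{C_2H_3Cl\,(VC)} \xrightarrow{+\mathrm{H_2},\,-\mathrm{HCl}}
\mathrm{C_2H_4\,(ethene)}

% Competing hydrogenotrophic sinks on the bicarbonate electron acceptor:
4\,\mathrm{H_2} + \mathrm{HCO_3^-} + \mathrm{H^+} \rightarrow \mathrm{CH_4} + 3\,\mathrm{H_2O}
\qquad \text{(methanogenesis)}
4\,\mathrm{H_2} + 2\,\mathrm{HCO_3^-} + \mathrm{H^+} \rightarrow \mathrm{CH_3COO^-} + 4\,\mathrm{H_2O}
\qquad \text{(homoacetogenesis)}
```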
Contributors: Delgado, Anca Georgiana (Author) / Krajmalnik-Brown, Rosa (Thesis advisor) / Cadillo-Quiroz, Hinsby (Committee member) / Halden, Rolf U. (Committee member) / Rittmann, Bruce E. (Committee member) / Stout, Valerie (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
ABSTRACT Developing new non-traditional device models is gaining popularity as silicon-based electrical devices approach their scaling limits. Membrane systems, also called P systems, are a new class of biological computation model inspired by the way cells process chemical signals. Spiking Neural P systems (SNP systems), a particular kind of membrane system, are inspired by the way neurons in the brain interact using electrical spikes. Compared to traditional Boolean logic, SNP systems not only perform similar functions but also provide a more promising solution for reliable computation. Two basic neuron types, Low Pass (LP) neurons and High Pass (HP) neurons, are introduced. These two basic types suffice to build an arbitrary SNP neuron, which implies that they are Turing complete, since SNP systems have been proved Turing complete. The two basic neuron types are further used as elements to construct general-purpose arithmetic circuits such as adders, subtractors, and comparators. In this thesis, erroneous behaviors of neurons are discussed. Transmission error (spike loss) is proved to be equivalent to threshold error, which makes the threshold-error analysis more universal. To improve reliability, a new structure called a motif is proposed. Compared to Triple Modular Redundancy, the motif design demonstrates its efficiency and effectiveness in both single-neuron and arithmetic-circuit analyses. DRAM-based CMOS circuits are used to implement the two basic neuron types, and their functionality is verified using SPICE simulations. The motif-improved adder and comparator, as compared to conventional Boolean logic designs, are much more reliable with lower leakage and smaller silicon area. This leads to the conclusion that SNP systems could provide a more promising solution for reliable computation than conventional Boolean logic.
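For intuition, here is a tiny sketch of how threshold neurons of this kind compose into logic. The firing semantics below (an HP neuron fires on at least `threshold` input spikes, an LP neuron on at most `threshold`) are an assumption for illustration and may differ from the thesis's exact definitions.

```python
# Hypothetical spike-count semantics, assumed for illustration only:
# an HP neuron fires iff (number of input spikes) >= threshold,
# an LP neuron fires iff (number of input spikes) <= threshold.

def hp_neuron(threshold, *inputs):
    """High Pass neuron: fires on at least `threshold` input spikes."""
    return sum(inputs) >= threshold

def lp_neuron(threshold, *inputs):
    """Low Pass neuron: fires on at most `threshold` input spikes."""
    return sum(inputs) <= threshold

# Under these assumptions, familiar Boolean gates fall out directly:
AND = lambda a, b: hp_neuron(2, a, b)   # fires only if both inputs spike
OR  = lambda a, b: hp_neuron(1, a, b)   # fires if either input spikes
NOT = lambda a:    lp_neuron(0, a)      # fires only if the input is silent

# Exhaustive check of the gate behavior over all spike combinations.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, AND(a, b), OR(a, b), NOT(a))
```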
Contributors: An, Pei (Author) / Cao, Yu (Thesis advisor) / Barnaby, Hugh (Committee member) / Chakrabarti, Chaitali (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
With increasing transistor counts and shrinking feature sizes, reducing power consumption has become a major design constraint. This has given rise to aggressive architectural changes for on-chip power management and rapid development of energy-efficient hardware accelerators. Accordingly, the objective of this research is to help software developers leverage these hardware techniques and improve the energy efficiency of the system. To achieve this, I propose two solutions for the Linux kernel. Optimal use of these architectural enhancements to achieve greater energy efficiency requires accurate modeling of processor power consumption. Though many models in the literature capture processor power consumption, few capture it at the task level. Task-level energy models are a requirement for an operating system (OS) to perform real-time power management, as the OS time-multiplexes tasks to enable sharing of hardware resources. I propose a detailed design methodology for constructing an architecture-agnostic task-level power model and incorporating it into a modern operating system to build an online task-level power profiler. The profiler is implemented inside the latest Linux kernel and validated on an Intel Sandy Bridge processor. It has a negligible overhead of less than 1% hardware resource consumption. The profiler's power prediction was demonstrated on application benchmarks from SPEC and PARSEC with less than 4% error. I also demonstrate the importance of the proposed profiler for emerging architectural techniques through use-case scenarios, including heterogeneous computing and fine-grained per-core DVFS. Along with architectural enhancements in general-purpose processors to improve energy efficiency, hardware accelerators like the Coarse-Grained Reconfigurable Architecture (CGRA) are gaining popularity. Unlike vector processors, which rely on data parallelism, a CGRA provides greater flexibility and compiler-level control, making it more suitable for the present SoC environment. To provide a streamlined development environment for CGRAs, I propose a flexible framework in Linux for CGRA design space exploration. With accurate and flexible hardware models, fine-grained integration with an accurate architectural simulator, and Linux memory management and DMA support, a user can carry out limitless experiments on a CGRA in a full-system environment.
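The thesis's model construction is not reproduced in the abstract; task-level power models of this kind are commonly linear combinations of per-task performance-counter rates, so the sketch below fits and applies such a model. The counter choices, data, and coefficients are all illustrative, not the thesis's.

```python
import numpy as np

# Training data: per-task performance-counter rates (events/sec) sampled
# while measured package power (watts) is known. Rows = samples,
# columns = [instructions, LLC_misses, branch_misses] -- illustrative events.
rates = np.array([
    [2.1e9, 1.2e6, 3.0e6],
    [0.8e9, 9.5e6, 1.1e6],
    [1.5e9, 4.0e6, 2.2e6],
    [2.8e9, 0.6e6, 4.1e6],
])
measured_power = np.array([18.2, 21.7, 19.5, 19.9])

# Fit P = c0 + c1*x1 + c2*x2 + c3*x3 by least squares.
X = np.hstack([np.ones((len(rates), 1)), rates])
coeffs, *_ = np.linalg.lstsq(X, measured_power, rcond=None)

def task_power(counter_rates):
    """Predict a task's power draw from its counter rates during its timeslice."""
    return coeffs[0] + counter_rates @ coeffs[1:]

print(task_power(np.array([1.0e9, 2.0e6, 1.5e6])))
```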
Contributors: Desai, Digant Pareshkumar (Author) / Vrudhula, Sarma (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Wu, Carole-Jean (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Multicore processors have proliferated in nearly all forms of computing, from servers and desktops to smartphones. The primary reason for this broad adoption is their ability to overcome the power wall by providing higher performance at lower power consumption. With multicores, the need for dynamic energy management (DEM) is far greater than for single-core processors: DEM is no longer just a mechanism to keep a processor under specified temperature limits, but a set of techniques that manage processor controls such as dynamic voltage and frequency scaling (DVFS), task migration, and fan speed to achieve a stated objective. The objectives span a wide range, from maximizing throughput, minimizing power consumption, reducing peak temperature, and maximizing energy efficiency to maximizing processor reliability, under equally diverse temperature, power, timing, and reliability constraints. DEM can thus be very complex and challenging to achieve, and since many DEM techniques often operate together on a single processor, there is a need to unify them. This dissertation addresses that need. In this work, a framework for DEM is proposed that provides a unifying processor model, including power, thermal, timing, and reliability models, and that supports various DEM control mechanisms and many different objective functions along with equally diverse constraint specifications. Using the framework, a range of novel solutions is derived for instances of DEM problems, including maximizing processor performance or energy efficiency and minimizing power consumption or peak temperature under constraints on maximum temperature, memory reliability, and task deadlines. Finally, a robust closed-loop controller that implements these solutions on a real processor platform with very low operational overhead is proposed. Along with the controller design, a model identification methodology for obtaining the required power and thermal models is also discussed. The controller is architecture-independent and hence easily portable across platforms. It has been successfully deployed on an Intel Sandy Bridge processor, where its use increased the energy efficiency of the processor by over 30%.
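The dissertation's controller is far more general; as a rough sketch of the closed-loop idea only, here is a proportional-integral loop that steers core frequency to hold a temperature setpoint against a toy first-order thermal model. All constants are invented for illustration.

```python
# A toy closed-loop DVFS sketch: a PI controller picks the core frequency
# so that die temperature tracks a setpoint. The "plant" is a first-order
# thermal model with made-up constants, not a real processor.
T_AMBIENT, SETPOINT = 40.0, 75.0       # degrees C
K_P, K_I = 0.02, 0.005                 # illustrative controller gains
F_MIN, F_MAX = 0.8, 3.4                # frequency range in GHz

def plant_step(temp, freq):
    """Power grows ~ f^3; heat leaks toward ambient (Newtonian cooling)."""
    power = 2.0 * freq ** 3
    return temp + 0.1 * (power - 0.5 * (temp - T_AMBIENT))

temp, integral = 60.0, 0.0
for _ in range(500):
    error = SETPOINT - temp            # positive error => thermal headroom
    integral += error
    freq = min(F_MAX, max(F_MIN, F_MIN + K_P * error + K_I * integral))
    temp = plant_step(temp, freq)

print(f"settled near {temp:.1f} C at {freq:.2f} GHz")
```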
Contributors: Hanumaiah, Vinay (Author) / Vrudhula, Sarma (Thesis advisor) / Chatha, Karamvir (Committee member) / Chakrabarti, Chaitali (Committee member) / Rodriguez, Armando (Committee member) / Askin, Ronald (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Microbial mat communities that inhabit hot springs in Yellowstone National Park have been studied for their biodiversity, energetics, and evolutionary history, yet little is known about how these communities cope with nutrient limitation. In the present study, changes in assimilatory gene expression levels for nitrogen (nrgA), phosphorus (phoA), and iron (yusV) were measured in various nutrient-enrichment experiments. While the results for nrgA and phoA were inconclusive, the results for yusV showed an increase in expression with the addition of N and Fe. This is the first dataset showing the impact of nutrients on siderophore uptake regulation in hot spring microbes.
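The abstract does not say how expression changes were quantified; if, as is common for such studies, the levels came from quantitative RT-PCR, relative expression is typically computed with the Livak 2^-ddCt method. A minimal sketch under that assumption follows; the Ct values and the 16S rRNA reference gene are invented for illustration.

```python
def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Livak 2^-ddCt relative expression: target gene vs. a reference gene,
    enriched sample vs. unenriched control."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    return 2 ** -(d_ct_treated - d_ct_control)

# Invented Ct values: yusV amplifying ~2 cycles earlier after N+Fe addition,
# relative to a 16S rRNA reference, means ~4-fold higher expression.
print(fold_change(ct_target_treated=24.1, ct_ref_treated=15.0,
                  ct_target_control=26.2, ct_ref_control=15.1))
```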
Contributors: Thorne, Michele (Author) / Elser, James J (Thesis advisor) / Touchman, Jeffrey (Committee member) / Stout, Valerie (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Adaptive processing and classification of electrocardiogram (ECG) signals are important in eliminating the strenuous process of manually annotating ECG recordings for clinical use. Such algorithms require robust models whose parameters can adequately describe the ECG signals. Although different dynamic statistical models describing ECG signals currently exist, they depend considerably on a priori information and user-specified model parameters. Moreover, ECG beat morphologies, which vary greatly across patients and disease states, cannot be uniquely characterized by a single model. In this work, sequential Bayesian methods are used to appropriately model ECG signals and adaptively select the corresponding model parameters. An adaptive framework based on a sequential Bayesian tracking method is proposed to select the cardiac parameters that minimize the estimation error, thus precluding the need for pre-processing. Simulations using real ECG data from the online PhysioNet database demonstrate the improved performance of the proposed algorithm in accurately estimating critical heart disease parameters. In addition, two new approaches to ECG modeling are presented using the interacting multiple model and the sequential Markov chain Monte Carlo technique with adaptive model selection. Both methods can adaptively choose between different models for various ECG beat morphologies without requiring prior ECG information, as demonstrated on real ECG signals. A supervised Bayesian maximum-likelihood (ML) classifier then uses the estimated model parameters to classify different types of cardiac arrhythmias. However, the lack of sufficient representative training data and the large inter-patient variability pose a challenge to existing supervised learning algorithms, resulting in poor classification performance. In addition, recently developed unsupervised learning methods require a priori knowledge of the number of diseases into which to cluster the ECG data, a number that often evolves over time. To address these issues, an adaptive learning ECG classification method that uses Dirichlet process Gaussian mixture models is proposed. This approach places no restriction on the number of disease classes and requires no training data. The algorithm is made patient-specific by labeling or identifying the generated mixtures using the Bayesian ML method, assuming the availability of labeled training data.
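The thesis's state-space and observation models are specific to ECG dynamics; purely to illustrate the sequential Bayesian tracking machinery, here is a generic bootstrap particle filter estimating one slowly drifting parameter from noisy observations. The random-walk dynamics and noise levels are invented, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000                                   # number of particles

# Toy state-space stand-in: a slowly drifting cardiac parameter theta,
# observed through noisy measurements y = theta + noise.
true_theta = 1.0
particles = rng.normal(0.0, 1.0, N)        # samples from the initial prior
weights = np.full(N, 1.0 / N)

for t in range(50):
    true_theta += rng.normal(0.0, 0.01)    # hidden drift
    y = true_theta + rng.normal(0.0, 0.2)  # noisy observation

    particles += rng.normal(0.0, 0.05, N)  # propagate: random-walk dynamics
    weights *= np.exp(-0.5 * ((y - particles) / 0.2) ** 2)  # likelihood
    weights /= weights.sum()

    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < N / 2:
        idx = rng.choice(N, size=N, p=weights)
        particles, weights = particles[idx], np.full(N, 1.0 / N)

print(f"estimate {particles @ weights:.3f} vs true {true_theta:.3f}")
```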
Contributors: Edla, Shwetha Reddy (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Kovvali, Narayan (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
We are expecting hundreds of cores per chip in the near future; however, scaling the memory architecture of manycore processors is a major challenge. Cache coherence provides all cores a single image of memory at any time in execution, yet coherent cache architectures are widely believed not to scale to hundreds or thousands of cores. In addition, caches and coherence logic already account for 20-50% of a processor's total power consumption and 30-60% of its die area, so a more scalable architecture is needed. Software Managed Manycore (SMM) architectures emerge as a solution. They have a scalable memory design in which each core has direct access only to its local scratchpad memory; any data transfer to or from other memories must be done explicitly in the application using Direct Memory Access (DMA) commands. The lack of automatic memory management in hardware makes such architectures extremely power-efficient, but it also makes them difficult to program. If the code or data of the task mapped onto a core cannot fit in the local scratchpad memory, DMA calls must be added to bring it in before it is required, and it may need to be evicted after use. Doing this adds considerable complexity to the programmer's job: programmers must now worry about data management on top of the already complex task of ensuring the program's functional correctness. This dissertation presents a comprehensive compiler and runtime integration to automatically manage the code and data of each task in the limited local memory of the core. We first developed a complete circular stack management scheme, which manages stack frames between the local memory and the main memory and also addresses the stack-pointer problem. Though it works, we found the management could be further optimized for most cases, so a Smart Stack Data Management (SSDM) scheme is provided; here we formulate the stack data management problem and propose a greedy algorithm for it. We then propose a general cost estimation algorithm, on which the CMSM heuristic for the code mapping problem is built. Finally, because heap data is dynamic in nature and therefore hard to manage, we provide two schemes to manage an unlimited amount of heap data in a constant-sized region of the local memory. In addition to these separate schemes for different kinds of data, we also provide a memory partition methodology.
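The actual schemes live in the compiler and runtime and are considerably more sophisticated; as a toy illustration of the circular stack idea only, the sketch below keeps a fixed number of frames in a "scratchpad" and evicts the oldest to "main memory" on overflow. The dma_put/dma_get functions are stand-in stubs, not a real DMA API.

```python
from collections import deque

LOCAL_STACK_SLOTS = 4          # pretend the scratchpad holds 4 stack frames

main_memory = []               # evicted frames live here
local_stack = deque()          # frames currently resident in the scratchpad

def dma_put(frame):            # stand-in for a DMA store to main memory
    main_memory.append(frame)

def dma_get():                 # stand-in for a DMA load from main memory
    return main_memory.pop()

def push_frame(frame):
    """On function entry: evict the oldest resident frame if the region is full."""
    if len(local_stack) == LOCAL_STACK_SLOTS:
        dma_put(local_stack.popleft())
    local_stack.append(frame)

def pop_frame():
    """On function return: refill from main memory once evicted frames resurface."""
    frame = local_stack.pop()
    if not local_stack and main_memory:
        local_stack.append(dma_get())
    return frame

for depth in range(6):         # a call chain deeper than the local region
    push_frame(f"frame{depth}")
print(main_memory)             # ['frame0', 'frame1'] were evicted
```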
Contributors: Bai, Ke (Author) / Shrivastava, Aviral (Thesis advisor) / Chatha, Karamvir (Committee member) / Xue, Guoliang (Committee member) / Chakrabarti, Chaitali (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
The prevalence of antibiotic-resistant bacterial pathogens has increased since the introduction of penicillin in the 1940s, and insufficient development of novel antibacterial agents is leaving us with a failing arsenal of therapies to combat these pathogenic organisms. We have identified a clay mineral mixture (designated CB) that exhibits in vitro antibacterial activity against a broad spectrum of bacterial pathogens, yet its antibacterial mechanism of action remains unknown. Antibacterial susceptibility testing of four different clay samples collected from the same source revealed that these natural clays had markedly different antibacterial activity. X-ray diffraction analyses revealed only minor mineralogical differences across the samples; however, ICP analyses demonstrated that the concentrations of many elements, Fe, Co, Cu, Ni, and Zn in particular, vary greatly across the four clay mixture leachates. Supplementing a non-antibacterial leachate with Fe, Co, Ni, Cu, and Zn, to final ion concentrations and a pH equivalent to those of the antibacterial leachate, produced antibacterial activity against E. coli and MRSA, confirming the role of these ions in the in vitro activity of the antibacterial clay mixture leachates. The prevailing hypothesis is that metal ions participate in redox cycling and produce reactive oxygen species (ROS), leading to oxidative damage to macromolecules and resulting in cellular death. However, E. coli cells showed no increase in DNA or protein oxidative lesions and only a slight increase in lipid peroxidation following exposure to the antibacterial CB leachate (CB-L). Supplementing CB-L with ROS scavengers eliminated oxidative damage in E. coli but did not rescue the cells from killing, indicating that in vitro killing is due to direct metal toxicity and not to indirect oxidative damage. Finally, we ion-exchanged non-antibacterial clays with Fe, Co, Cu, and Zn and established antibacterial activity in these samples. Treatment of MRSA skin infections with both natural and ion-exchanged clays significantly decreased the bacterial load after 7 days of treatment. We conclude that 1) in vitro clay-mediated killing is due to toxicity associated directly with released metal ions and not to indirect oxidative damage, and 2) in vivo killing is due to the physical properties of the clays rather than metal ion toxicity.
Contributors: Otto, Caitin Carol (Author) / Haydel, Shelley (Thesis advisor) / Stout, Valerie (Committee member) / Roberson, Robby (Committee member) / Sandrin, Todd (Committee member) / Rege, Kaushal (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Immunosignaturing is a medical test for assessing the health status of a patient: microarrays of random-sequence peptides determine the patient's immune fingerprint by associating antibodies from a biological sample with immune responses. Immunosignature measurements can potentially provide pre-symptomatic diagnosis of infectious diseases or detection of biological threats. Currently, traditional bioinformatics tools, such as data mining classification algorithms, are used to process the large amount of peptide microarray data. However, these methods generally require training data and do not adapt to changing immune conditions or additional patient information. This work proposes advanced processing techniques to improve the classification and identification of single and multiple underlying immune response states embedded in immunosignatures, making it possible to detect both known and previously unknown diseases or biothreat agents. Novel adaptive learning methodologies for unsupervised and semi-supervised clustering, integrated with immunosignature feature extraction approaches, are proposed. The techniques extract novel stochastic features from microarray binding intensities and use Dirichlet process Gaussian mixture models to adaptively cluster the immunosignatures in the feature space. This learning-while-clustering approach allows continuous discovery of antibody activity by adaptively detecting new disease states, with limited a priori disease or patient information. A beta process factor analysis model to determine underlying patient immune responses is also proposed to further improve the adaptive clustering performance by forming new relationships between patients and antibody activity. To extend the clustering methods to diagnosing multiple states in a patient, the adaptive hierarchical Dirichlet process is integrated with modified beta process factor analysis latent feature modeling to identify relationships between patients and infectious agents. The use of Bayesian nonparametric adaptive learning techniques allows further clustering as additional patient data are received. Significant improvements in feature identification and immune response clustering are demonstrated using samples from patients with different diseases.
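The thesis builds its own adaptive inference; to see why a Dirichlet process mixture needs no fixed cluster count, here is a minimal sketch using scikit-learn's truncated variational DP implementation on synthetic two-dimensional "features" (not immunosignature data; all values invented).

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(1)

# Synthetic stand-in for extracted immunosignature features:
# three latent "disease states," unknown to the model.
X = np.vstack([
    rng.normal([0, 0], 0.3, (40, 2)),
    rng.normal([3, 3], 0.3, (40, 2)),
    rng.normal([0, 4], 0.3, (40, 2)),
])

# A truncated Dirichlet process mixture: n_components is only an upper
# bound; the stick-breaking prior prunes unneeded components.
dpgmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    weight_concentration_prior=0.1,
    random_state=0,
).fit(X)

labels = dpgmm.predict(X)
print("clusters actually used:", np.unique(labels).size)  # ~3 expected
```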
Contributors: Malin, Anna (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Bliss, Daniel (Committee member) / Chakrabarti, Chaitali (Committee member) / Kovvali, Narayan (Committee member) / Lacroix, Zoé (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Stream computing has emerged as an important model of computation for embedded system applications, particularly in the multimedia and network processing domains. In the recent past, several programming languages and embedded multi-core processors have been proposed for streaming applications. This thesis examines the execution and dynamic scheduling of stream programs on embedded multi-core processors. It addresses the problem in the context of a multi-tasking environment with a time-varying allocation of processing elements for a particular streaming application. As a solution, the thesis proposes a two-step approach in which the stream program is first compiled to gather key application information and to generate re-targetable code. A lightweight dynamic scheduler forms the second stage of the approach: it uses the static information and the available resources to assign or partition the application across the multi-core architecture. The objective of the dynamic scheduler is to maximize the throughput of the application, and it is sensitive to the resource constraints (processing elements, scratchpad memory, DMA bandwidth) imposed by the target architecture. We evaluate the proposed approach by compiling and scheduling benchmark stream programs on a representative embedded multi-core processor, and we present experimental results that assess the quality of the generated solutions through comparisons with existing techniques.
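Setting aside the scratchpad and DMA constraints the thesis's scheduler also honors, the throughput objective reduces to balancing actor load across the currently allocated cores. A minimal greedy longest-processing-time sketch of that one aspect follows; the actor names and per-firing costs are invented.

```python
import heapq

def partition(actor_costs, n_cores):
    """Greedy LPT mapping: place each actor (heaviest first) on the
    currently least-loaded core. Throughput is limited by the max load."""
    cores = [(0.0, c, []) for c in range(n_cores)]   # (load, id, actors)
    heapq.heapify(cores)
    for actor, cost in sorted(actor_costs.items(), key=lambda kv: -kv[1]):
        load, cid, assigned = heapq.heappop(cores)
        heapq.heappush(cores, (load + cost, cid, assigned + [actor]))
    return cores

# Invented per-firing costs for the actors of a small stream graph.
costs = {"src": 2, "fir1": 9, "fir2": 9, "fft": 14, "sink": 3}

# The core allocation varies over time; re-partition for each allocation.
for n in (2, 3):
    mapping = partition(costs, n)
    bottleneck = max(load for load, _, _ in mapping)
    print(f"{n} cores -> bottleneck load {bottleneck}: {sorted(mapping)}")
```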
Contributors: Lee, Haeseung (Author) / Chatha, Karamvir (Thesis advisor) / Vrudhula, Sarma (Committee member) / Chakrabarti, Chaitali (Committee member) / Wu, Carole-Jean (Committee member) / Arizona State University (Publisher)
Created: 2013