This collection includes most of the ASU Theses and Dissertations from 2011 to the present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about the dissertations/theses includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic theses in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.

Dissertations and theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.


Description

Ultrasound imaging is one of the major medical imaging modalities: it is inexpensive, non-invasive, and has low power consumption. Doppler processing is an important part of many ultrasound imaging systems; it provides blood velocity information and is built on top of B-mode systems. We investigate the performance of two velocity estimation schemes used in Doppler processing systems, namely directional velocity estimation (DVE) and conventional velocity estimation (CVE). We find that DVE provides better estimation performance and is the only functioning method when the beam-to-flow angle is large. Unfortunately, DVE is computationally expensive and requires divisions and square root operations that are hard to implement. We propose two approximation techniques to replace these computations, and simulation results on cyst images show that the proposed approximations do not affect the estimation performance. We also study backend processing, which includes envelope detection, log compression, and scan conversion. Three different envelope detection methods are compared. Among them, the FIR-based Hilbert transform is the best choice when phase information is not needed, while quadrature demodulation is a better choice if phase information is necessary. Bilinear and Gaussian interpolation are considered for scan conversion. Through simulations of a cyst image, we show that bilinear interpolation provides contrast-to-noise ratio (CNR) performance comparable to Gaussian interpolation at lower computational complexity; thus, bilinear interpolation is chosen for our system.
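To make the scan-conversion comparison concrete, the sketch below maps a polar-coordinate B-mode frame onto a Cartesian grid using bilinear interpolation. It is a generic textbook formulation under assumed sector geometry and array sizes, not the thesis implementation:

```python
import numpy as np

def scan_convert_bilinear(polar_img, r_max, theta_span, out_size=256):
    """Map a polar (depth x beam-angle) frame onto a Cartesian grid using
    bilinear interpolation. Generic sketch; geometry parameters are assumed."""
    n_r, n_t = polar_img.shape
    half_width = r_max * np.sin(theta_span / 2)
    x = np.linspace(-half_width, half_width, out_size)   # lateral positions
    z = np.linspace(0.0, r_max, out_size)                # depth positions
    X, Z = np.meshgrid(x, z)
    R = np.hypot(X, Z)                                   # radius of each pixel
    T = np.arctan2(X, Z) + theta_span / 2                # angle in [0, span]
    ri = R / r_max * (n_r - 1)                           # fractional row index
    ti = T / theta_span * (n_t - 1)                      # fractional col index
    inside = (ri <= n_r - 1) & (ti >= 0) & (ti <= n_t - 1)
    r0 = np.clip(np.floor(ri).astype(int), 0, n_r - 2)
    t0 = np.clip(np.floor(ti).astype(int), 0, n_t - 2)
    fr, ft = ri - r0, ti - t0                            # interpolation weights
    out = ((1 - fr) * (1 - ft) * polar_img[r0, t0]
           + fr * (1 - ft) * polar_img[r0 + 1, t0]
           + (1 - fr) * ft * polar_img[r0, t0 + 1]
           + fr * ft * polar_img[r0 + 1, t0 + 1])
    return np.where(inside, out, 0.0)                    # zero outside sector

# Example: a 512-sample x 64-line sector spanning 60 degrees
frame = np.random.rand(512, 64)
cart = scan_convert_bilinear(frame, r_max=0.08, theta_span=np.deg2rad(60))
```

Gaussian interpolation would replace the four-tap weighting above with a normalized Gaussian kernel over a larger neighborhood, at a higher cost per output pixel, which is the complexity trade-off the abstract refers to.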
ContributorsWei, Siyuan (Author) / Chakrabarti, Chaitali (Thesis advisor) / Frakes, David (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created2013
Description

Developing non-traditional device models is gaining popularity as silicon-based electrical devices approach their scaling limits. Membrane systems, also called P systems, are a new class of biological computation models inspired by the way cells process chemical signals. Spiking Neural P systems (SNP systems), a particular kind of membrane system, are inspired by the way neurons in the brain interact using electrical spikes. Compared to traditional Boolean logic, SNP systems not only perform similar functions but also offer a more promising path to reliable computation. Two basic neuron types, Low Pass (LP) neurons and High Pass (HP) neurons, are introduced. These two basic types are sufficient to build an arbitrary SNP neuron, which implies that they are Turing complete, since SNP systems have been proved Turing complete. The two basic neuron types are further used as elements to construct general-purpose arithmetic circuits, such as adders, subtractors, and comparators. In this thesis, erroneous behaviors of neurons are discussed. Transmission error (spike loss) is proved to be equivalent to threshold error, which makes the threshold-error analysis more general. To improve reliability, a new structure called a motif is proposed. Compared to Triple Modular Redundancy, the motif design demonstrates its efficiency and effectiveness in both single-neuron and arithmetic-circuit analyses. DRAM-based CMOS circuits are used to implement the two basic neuron types, and their functionality is verified using SPICE simulations. The motif-improved adder and comparator, compared to conventional Boolean logic designs, are much more reliable, with lower leakage and smaller silicon area. This leads to the conclusion that SNP systems could provide a more promising solution for reliable computation than conventional Boolean logic.
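As a loose illustration of how threshold-based spiking neurons can reproduce Boolean behavior, the toy sketch below guesses at plausible LP/HP firing rules (fire when the incoming spike count is below, or at least, a threshold); the actual rules defined in the thesis may differ:

```python
def hp_neuron(spikes, threshold):
    """High-pass toy neuron: fires iff at least `threshold` input spikes
    arrive in the window. A guessed interpretation, for illustration only."""
    return 1 if sum(spikes) >= threshold else 0

def lp_neuron(spikes, threshold):
    """Low-pass toy neuron: fires iff fewer than `threshold` spikes arrive."""
    return 1 if sum(spikes) < threshold else 0

# An HP neuron with threshold 2 behaves like an AND gate on two inputs...
assert hp_neuron([1, 1], threshold=2) == 1
assert hp_neuron([1, 0], threshold=2) == 0
# ...and with threshold 1 like an OR gate.
assert hp_neuron([1, 0], threshold=1) == 1
# An LP neuron with threshold 1 fires only when no spike arrives (NOR-like).
assert lp_neuron([0, 0], threshold=1) == 1
```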
ContributorsAn, Pei (Author) / Cao, Yu (Thesis advisor) / Barnaby, Hugh (Committee member) / Chakrabarti, Chaitali (Committee member) / Arizona State University (Publisher)
Created2013
Description

In a laboratory setting, soil volume change behavior is best characterized by applying standard tests to undisturbed or remolded samples. Whenever possible, it is most accurate to use undisturbed samples to assess volume change behavior, but in the absence of undisturbed specimens, remolded samples can be used. In that case, the soil is compacted to the in-situ density and water content (or matric suction) that best represent the expansive profile in question. It is standard practice to subject the specimen to a wetting process at a particular net normal stress. Even though currently accepted laboratory testing procedures provide insight into how profile conditions change with time, they do not assess the long-term effects of climatic changes on the soil. In this experimental study, the effect of multiple wetting/drying cycles on the volume change behavior of two different naturally occurring soils was assessed and quantified. The changes between wetting and drying cycles were extreme in terms of the swings in matric suction: during the drying cycle, the expansive soil was subjected to extreme conditions that decreased the moisture content to below the shrinkage limit. Both soils were remolded at five different compacted conditions and loaded to five different net normal stresses, and each sample was subjected to six wetting and drying cycles. The results made it evident that the swell/collapse strain is highly non-linear at low stress levels. The strain-net normal stress relationship cannot be defined by a single function without transforming the data; the dataset must be fitted to a bi-modal logarithmic function, or the net normal stress must be log-transformed to permit a third-order polynomial fit (see the sketch after this abstract). It was also determined that the moisture content changes with time are best fit by non-linear functions. For the drying cycle, the radial strain was determined to have a constant rate of change with respect to the axial strain. For the wetting cycle, however, there was not enough radial strain data to develop correlations, so an assumption was made based on 55 different test measurements/observations. In general, after each subsequent cycle, higher swelling was exhibited at net normal stresses below a threshold value, while higher collapse potential was observed at net normal stresses above it. Furthermore, the swelling pressure underwent a reduction in all cases: the Anthem soil exhibited a reduction in swelling pressure of at least 20 percent after the first wetting/drying cycle, while the Colorado soil exhibited a reduction of 50 percent. After about the fourth cycle, the swelling pressure seemed to stabilize to an equilibrium value, corresponding to a total reduction of 46 percent for the Anthem soil and 68 percent for the Colorado soil. The impact of the initial compacted conditions on heave characteristics was also studied. Results indicated that materials compacted at higher densities exhibited greater swell potential. When comparing specimens compacted at the same density but at different moisture contents (matric suctions), specimens compacted at higher suction exhibited higher swelling potential when subjected to the same net normal stress.
The least amount of swelling strain was observed in specimens compacted at the lowest dry density and the lowest matric suction (highest water content). The results from the laboratory testing were used to develop ultimate heave profiles for both soils. This analysis showed that even though the swell pressure for each soil decreased with cycling, the amount of heave could increase or decrease depending upon the initial compaction condition. When the specimen was compacted at 110% of optimum moisture content and 90% of maximum dry density, the ultimate heave was reduced by 92 percent for the Anthem soil and 68 percent for the Colorado soil. On the other hand, when the soils were compacted at 90% of optimum moisture content and 100% of maximum dry density, Anthem specimens heaved 78% more, while Colorado specimen heave was reduced by 69%. Based on the results obtained, it is evident that current methods of estimating heave and swelling pressure do not consider the effect of wetting/drying cycles and fail to capture the free swell potential of the soil. Recommendations for improving current methods of practice are provided.
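A minimal sketch of the log-transform-then-cubic-fit idea described above, using hypothetical strain and net normal stress values:

```python
import numpy as np

# Hypothetical swell(+)/collapse(-) strains (%) at five net normal stresses (kPa)
stress = np.array([7.0, 25.0, 50.0, 100.0, 200.0])  # assumed values
strain = np.array([4.2, 2.1, 0.8, -0.5, -1.9])      # assumed values

# Third-order polynomial fit after log-transforming net normal stress
log_stress = np.log10(stress)
coeffs = np.polyfit(log_stress, strain, deg=3)

def predict_strain(s_kpa):
    """Interpolated strain at a given net normal stress (sketch only)."""
    return np.polyval(coeffs, np.log10(s_kpa))

print(predict_strain(60.0))
```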
ContributorsRosenbalm, Daniel Curtis (Author) / Zapata, Claudia E (Thesis advisor) / Houston, Sandra L. (Committee member) / Kavazanjian, Edward (Committee member) / Witczak, Mathew W (Committee member) / Arizona State University (Publisher)
Created2013
Description

The effect of earthquake-induced liquefaction on the local void ratio distribution of cohesionless soil is evaluated using x-ray computed tomography (CT) and an advanced image processing software package. Intact, relatively undisturbed specimens of cohesionless soil were recovered before and after liquefaction by freezing and coring soil deposits created by pluviation and by sedimentation through water. Pluviated soil deposits were liquefied in the small geotechnical centrifuge at the University of California at Davis shared-use National Science Foundation (NSF)-supported Network for Earthquake Engineering Simulation (NEES) facility. A soil deposit created by sedimentation through water was liquefied on a small shake table in the Arizona State University geotechnical laboratory. Initial centrifuge tests employed Ottawa 20-30 sand, but this material proved too coarse to liquefy in the centrifuge; subsequent centrifuge tests therefore employed Ottawa F60 sand. The shake table test employed Ottawa 20-30 sand. Recovered cores were stabilized by impregnation with optical-grade epoxy and sent to the NSF-supported facility for high-resolution CT scanning of geologic media at the University of Texas at Austin. The local void ratio distribution of a CT-scanned core of Ottawa 20-30 sand, evaluated using Avizo® Fire, a commercially available advanced image analysis program, was compared to the local void ratio distribution established on the same core by analysis of optical images, demonstrating that analysis of the CT scans gave results similar to optical methods. CT scans were subsequently conducted on liquefied and non-liquefied specimens of Ottawa 20-30 sand and Ottawa F60 sand. The resolution of the F60 scans was inadequate to establish the local void ratio distribution. Results of the analysis of the Ottawa 20-30 specimens recovered from the model built for the shake table test showed that liquefaction can substantially influence the variability in local void ratio, increasing the degree of non-homogeneity in the specimen.
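The sketch below illustrates the general kind of local void ratio computation described, applied to a binarized CT volume; it is not the Avizo® Fire workflow, and the subvolume size and synthetic data are assumptions:

```python
import numpy as np

def local_void_ratios(binary_ct, box=32):
    """Split a binarized CT volume (1 = solid grain, 0 = void) into cubic
    subvolumes and return the void ratio e = V_void / V_solid of each.
    A generic sketch of this type of analysis, not the Avizo(R) Fire workflow."""
    nz, ny, nx = (d // box for d in binary_ct.shape)
    ratios = []
    for k in range(nz):
        for j in range(ny):
            for i in range(nx):
                sub = binary_ct[k*box:(k+1)*box, j*box:(j+1)*box, i*box:(i+1)*box]
                solids = int(sub.sum())
                if solids > 0:                      # skip all-void boxes
                    ratios.append((sub.size - solids) / solids)
    return np.asarray(ratios)

# Example on a synthetic volume with ~60% solids by volume
rng = np.random.default_rng(0)
volume = (rng.random((128, 128, 128)) < 0.6).astype(np.uint8)
e = local_void_ratios(volume)
print(f"mean e = {e.mean():.3f}, std e = {e.std():.3f}")  # local variability
```

The spread of the returned values is one way to quantify the non-homogeneity the abstract refers to.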
ContributorsGutierrez, Angel (Author) / Kavazanjian, Edward (Thesis advisor) / Houston, Sandra (Committee member) / Zapata, Claudia (Committee member) / Arizona State University (Publisher)
Created2013
Description

With increasing transistor counts and shrinking feature sizes, reducing power consumption has become a major design constraint. This has given rise to aggressive architectural changes for on-chip power management and to rapid development of energy-efficient hardware accelerators. Accordingly, the objective of this research is to help software developers leverage these hardware techniques to improve the energy efficiency of the system. To achieve this, I propose two solutions for the Linux kernel. First, optimal use of these architectural enhancements requires accurate modeling of processor power consumption. Though many models of processor power consumption are available in the literature, models that capture power consumption at the task level are lacking. Task-level energy models are a requirement for an operating system (OS) to perform real-time power management, as the OS time-multiplexes tasks to enable sharing of hardware resources. I propose a detailed design methodology for constructing an architecture-agnostic task-level power model and incorporating it into a modern operating system to build an online task-level power profiler. The profiler is implemented inside the latest Linux kernel and validated on an Intel Sandy Bridge processor. It has a negligible overhead of less than 1% of hardware resource consumption, and its power prediction was demonstrated on various application benchmarks from SPEC and PARSEC with less than 4% error. I also demonstrate the importance of the proposed profiler for emerging architectural techniques through use-case scenarios, including heterogeneous computing and fine-grained per-core DVFS. Second, along with architectural enhancements in general-purpose processors, hardware accelerators such as the coarse-grained reconfigurable architecture (CGRA) are gaining popularity. Unlike vector processors, which rely on data parallelism, a CGRA provides greater flexibility and compiler-level control, making it better suited to the present SoC environment. To provide a streamlined development environment for CGRAs, I propose a flexible framework in Linux for CGRA design space exploration. With accurate and flexible hardware models, fine-grained integration with an accurate architectural simulator, and Linux memory management and DMA support, a user can carry out a wide range of experiments on a CGRA in a full-system environment.
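A minimal sketch of a counter-based task-level power model of the sort described: a linear least-squares fit from per-task performance-counter rates to measured power. The choice of events and all numbers are assumptions for illustration, not the thesis methodology:

```python
import numpy as np

# Hypothetical training data: per-interval performance-counter rates for tasks
# (columns: instructions/s, last-level-cache misses/s, bus accesses/s) against
# measured CPU power in watts. Event choice and all values are assumptions.
counters = np.array([
    [2.1e9, 1.2e6, 3.4e7],
    [1.4e9, 8.0e5, 2.1e7],
    [3.0e9, 2.5e6, 5.0e7],
    [0.6e9, 2.0e5, 0.9e7],
    [2.5e9, 1.8e6, 4.1e7],
])
power_w = np.array([18.5, 14.2, 24.1, 9.8, 20.7])

# Least-squares fit of a linear task-level model: P ~ w . x + b
A = np.hstack([counters, np.ones((len(counters), 1))])
w, *_ = np.linalg.lstsq(A, power_w, rcond=None)

def estimate_task_power(rates):
    """Predict a task's power draw from its counter rates (sketch only)."""
    return float(np.dot(np.append(rates, 1.0), w))

print(estimate_task_power([1.8e9, 1.0e6, 2.8e7]))
```

Because the model is evaluated per task at scheduler granularity, an OS can attribute package power to individual tasks even when they share cores, which is what makes task-level DEM decisions possible.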
ContributorsDesai, Digant Pareshkumar (Author) / Vrudhula, Sarma (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Wu, Carole-Jean (Committee member) / Arizona State University (Publisher)
Created2013
Description

Electrical neural activity detection and tracking have many applications in medical research and brain-computer interface technologies. In this thesis, we focus on the development of advanced signal processing algorithms to track neural activity and on the mapping of these algorithms onto hardware to enable real-time tracking. At the heart of these algorithms is particle filtering (PF), a sequential Monte Carlo technique used to estimate the unknown parameters of dynamic systems. First, we analyze the bottlenecks in existing PF algorithms and propose a new parallel PF (PPF) algorithm based on the independent Metropolis-Hastings (IMH) algorithm. We show that the proposed PPF-IMH algorithm improves the root mean-squared error (RMSE) estimation performance, and we demonstrate that a parallel implementation of the algorithm significantly reduces inter-processor communication. We implement the algorithm on a Xilinx Virtex-5 field programmable gate array (FPGA) platform to demonstrate that, for a one-dimensional problem, the PPF-IMH architecture with four processing elements and 1,000 particles can process input samples at 170 kHz while using less than 5% of the FPGA resources. We also apply the proposed PPF-IMH to waveform-agile sensing to achieve real-time tracking of dynamic targets with high RMSE tracking performance. We next integrate the PPF-IMH algorithm to track the dynamic parameters in neural sensing when the number of neural dipole sources is known. We analyze the computational complexity of a PF-based method and propose the use of multiple particle filtering (MPF) to reduce the complexity. We demonstrate the improved performance of MPF using numerical simulations with both synthetic and real data, and we propose an FPGA implementation of the MPF algorithm that supports real-time tracking. For the more realistic scenario of automatically estimating an unknown number of time-varying neural dipole sources, we propose a new approach based on the probability hypothesis density filtering (PHDF) algorithm. The PHDF is implemented using particle filtering (PF-PHDF) and applied in a closed loop to first estimate the number of dipole sources and then their corresponding amplitude, location, and orientation parameters. We demonstrate the improved tracking performance of the proposed PF-PHDF algorithm and map it onto a Xilinx Virtex-5 FPGA platform to show its real-time implementation potential. Finally, we propose the use of sensor scheduling and compressive sensing techniques to reduce the number of active sensors, and thus the overall power consumption, of electroencephalography (EEG) systems. We propose an efficient sensor scheduling algorithm that adaptively configures EEG sensors at each measurement time interval to reduce the number of sensors needed for accurate tracking. We combine the sensor scheduling method with PF-PHDF and implement the system on an FPGA platform to achieve real-time tracking. We also investigate the sparsity of EEG signals and integrate compressive sensing with PF to estimate neural activity. Simulation results show that both the sensor scheduling and compressive sensing based methods achieve comparable tracking performance with a significantly reduced number of sensors.
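For reference, a minimal bootstrap particle filter for a one-dimensional random-walk state is sketched below; the PPF-IMH scheme, roughly speaking, replaces the multinomial resampling step here with an independent Metropolis-Hastings kernel to ease parallelization. Noise levels and the test signal are assumed:

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_pf(observations, n_particles=1000, q=0.1, r=0.5):
    """Minimal bootstrap particle filter for a 1-D random-walk state with
    Gaussian observation noise. q/r are assumed process/observation noise."""
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for z in observations:
        particles = particles + rng.normal(0.0, q, n_particles)  # propagate
        w = np.exp(-0.5 * ((z - particles) / r) ** 2)             # weight
        w /= w.sum()
        estimates.append(np.dot(w, particles))                    # MMSE estimate
        idx = rng.choice(n_particles, size=n_particles, p=w)      # resample
        particles = particles[idx]
    return np.array(estimates)

# Track a synthetic, slowly drifting signal
truth = np.cumsum(rng.normal(0.0, 0.1, 200))
obs = truth + rng.normal(0.0, 0.5, 200)
est = bootstrap_pf(obs)
print("RMSE:", np.sqrt(np.mean((est - truth) ** 2)))
```

The resampling step is the main hardware bottleneck, since it requires global knowledge of all weights; that is the dependency the IMH-based variant is designed to break.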
ContributorsMiao, Lifeng (Author) / Chakrabarti, Chaitali (Thesis advisor) / Papandreou-Suppappola, Antonia (Thesis advisor) / Zhang, Junshan (Committee member) / Bliss, Daniel (Committee member) / Kovvali, Narayan (Committee member) / Arizona State University (Publisher)
Created2013
Description

Multicore processors have proliferated in nearly all forms of computing, from servers and desktops to smartphones. The primary reason for this broad adoption is their ability to overcome the power wall by providing higher performance at lower power consumption. With multicores there is an increased need for dynamic energy management (DEM), much more than for single-core processors, as DEM for multicores is no longer just a mechanism to keep a processor under specified temperature limits, but a set of techniques that manage various processor controls, such as dynamic voltage and frequency scaling (DVFS), task migration, and fan speed, to achieve a stated objective. The objectives span a wide range, from maximizing throughput, minimizing power consumption, reducing peak temperature, maximizing energy efficiency, and maximizing processor reliability, subject to diverse temperature, power, timing, and reliability constraints. DEM can thus be very complex and challenging to achieve. Since many DEM techniques often operate together on a single processor, there is a need to unify them; this dissertation addresses that need. In this work, a framework for DEM is proposed that provides a unifying processor model encompassing power, thermal, timing, and reliability models, and that supports various DEM control mechanisms and many different objective functions along with equally diverse constraint specifications. Using the framework, a range of novel solutions is derived for instances of the DEM problem, including maximizing processor performance or energy efficiency, and minimizing power consumption or peak temperature, under constraints of maximum temperature, memory reliability, and task deadlines. Finally, a robust closed-loop controller that implements these solutions on a real processor platform with very low operational overhead is proposed. Along with the controller design, a model identification methodology for obtaining the power and thermal models required by the controller is discussed. The controller is architecture-independent and hence easily portable across platforms; it has been successfully deployed on an Intel Sandy Bridge processor, where it increased the energy efficiency of the processor by over 30%.
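As a toy illustration of the closed-loop idea, the sketch below implements a proportional controller that trims core frequency against a temperature limit. The dissertation's controller is model-based and far more general; all gains, limits, and sensor readings here are assumptions:

```python
def dvfs_step(temp_c, temp_limit_c, freq_ghz, f_min=0.8, f_max=3.4, gain=0.05):
    """One step of a toy proportional controller: raise frequency when there
    is thermal headroom, cut it when over the limit. All constants assumed."""
    error = temp_limit_c - temp_c            # positive => thermal headroom
    new_freq = freq_ghz + gain * error
    return min(f_max, max(f_min, new_freq))  # clamp to the DVFS range

freq = 2.0
for temp in [65, 72, 81, 88, 84, 79]:        # hypothetical sensor readings (C)
    freq = dvfs_step(temp, temp_limit_c=80, freq_ghz=freq)
    print(f"temp={temp}C -> freq={freq:.2f} GHz")
```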
ContributorsHanumaiah, Vinay (Author) / Vrudhula, Sarma (Thesis advisor) / Chatha, Karamvir (Committee member) / Chakrabarti, Chaitali (Committee member) / Rodriguez, Armando (Committee member) / Askin, Ronald (Committee member) / Arizona State University (Publisher)
Created2013
Description

Unsaturated soil mechanics is becoming a part of geotechnical engineering practice, particularly in applications to moisture-sensitive soils, such as expansive and collapsible soils, and in geoenvironmental applications. The soil water characteristic curve, which describes the amount of water in a soil versus soil suction, is perhaps the most important soil property function for the application of unsaturated soil mechanics. The soil water characteristic curve has been used extensively for estimating unsaturated soil properties, and a number of fitting equations for developing soil water characteristic curves from laboratory data have been proposed by researchers. Although not always stated, the underlying assumption of these fitting equations is that the soil is sufficiently stiff that there is no change in total volume while the curve is measured in the laboratory, and researchers rarely take soil volume change into account when generating or using the curve. Further, little attention has been paid to the net normal stress applied during laboratory measurement; often zero or only token net normal stress is applied, even though the applied net normal stress also affects the volume change of the specimen during suction change. When a soil changes volume in response to suction change, failing to consider that volume change leads to errors in the estimated air-entry value and in the slope of the soil water characteristic curve between the air-entry value and the residual moisture state. Inaccuracies in the soil water characteristic curve may in turn lead to inaccuracies in estimated soil property functions such as unsaturated hydraulic conductivity. A number of researchers have recently recognized the importance of considering soil volume change in soil water characteristic curves. The primary focus of this study was the correct measurement and determination of soil water characteristic curves considering soil volume change, and the impacts on the unsaturated hydraulic conductivity function. Emphasis was placed on the effect of volume change consideration on soil water characteristic curves for expansive clays and other high-volume-change soils. The research involved an extensive literature review and laboratory soil water characteristic curve testing on expansive soils. The effect of the initial state of the specimen (slurry versus compacted) on the curves, with regard to volume change effects, and the effect of net normal stress on volume change were studied for expansive clays. Hysteresis effects were included in the laboratory measurements, as both wetting and drying paths were used. The impacts of volume change considerations on fluid flow computations and associated suction-change-induced soil deformations were studied through numerical simulations. The study includes both coupled and uncoupled flow and stress-deformation analyses, demonstrating that the impact of volume change consideration on the soil water characteristic curve and the estimated unsaturated hydraulic conductivity function can be quite substantial for high-volume-change soils.
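As an example of fitting a soil water characteristic curve to laboratory data, the sketch below uses the van Genuchten equation, one common fitting form among those the abstract alludes to (the thesis may use others); the suction/water-content pairs are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

def van_genuchten(psi, theta_s, theta_r, alpha, n):
    """van Genuchten (1980) SWCC: volumetric water content vs. suction (kPa)."""
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * psi) ** n) ** (1.0 - 1.0 / n)

# Hypothetical drying-path data: suction (kPa) vs. volumetric water content
suction = np.array([1.0, 10.0, 50.0, 100.0, 500.0, 1500.0, 10000.0])
theta = np.array([0.48, 0.45, 0.38, 0.33, 0.22, 0.15, 0.08])

params, _ = curve_fit(
    van_genuchten, suction, theta,
    p0=[0.48, 0.05, 0.05, 1.5],                      # initial guess
    bounds=([0.2, 0.0, 1e-4, 1.05], [0.7, 0.2, 1.0, 4.0]),
)
print(dict(zip(["theta_s", "theta_r", "alpha", "n"], params.round(4))))
```

If the specimen shrinks or swells during the test, the water contents fed into such a fit must first be corrected for the measured volume change; otherwise the fitted air-entry value and slope are biased, which is the study's central point.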
ContributorsBani Hashem, Elham (Author) / Houston, Sandra L. (Thesis advisor) / Kavazanjian, Edward (Committee member) / Zapata, Claudia (Committee member) / Arizona State University (Publisher)
Created2013
Description

Adaptive processing and classification of electrocardiogram (ECG) signals are important in eliminating the strenuous process of manually annotating ECG recordings for clinical use. Such algorithms require robust models whose parameters can adequately describe the ECG signals. Although different dynamic statistical models describing ECG signals currently exist, they depend considerably on a priori information and user-specified model parameters. Also, ECG beat morphologies, which vary greatly across patients and disease states, cannot be uniquely characterized by a single model. In this work, sequential Bayesian methods are used to appropriately model ECG signals and adaptively select the corresponding model parameters. An adaptive framework based on a sequential Bayesian tracking method is proposed to adaptively select the cardiac parameters that minimize the estimation error, thus precluding the need for pre-processing. Simulations using real ECG data from the online PhysioNet database demonstrate the improvement in performance of the proposed algorithm in accurately estimating critical heart disease parameters. In addition, two new approaches to ECG modeling are presented using the interacting multiple model and the sequential Markov chain Monte Carlo technique with adaptive model selection. Both methods can adaptively choose between different models for various ECG beat morphologies without requiring prior ECG information, as demonstrated using real ECG signals. A supervised Bayesian maximum-likelihood (ML) classifier uses the estimated model parameters to classify different types of cardiac arrhythmias. However, the unavailability of sufficient representative training data and the large inter-patient variability pose a challenge to existing supervised learning algorithms, resulting in poor classification performance. In addition, recently developed unsupervised learning methods require a priori knowledge of the number of diseases in order to cluster the ECG data, a number which often evolves over time. To address these issues, an adaptive learning ECG classification method that uses Dirichlet process Gaussian mixture models is proposed. This approach places no restriction on the number of disease classes, nor does it require any training data. The algorithm is adapted to be patient-specific by labeling or identifying the generated mixtures using the Bayesian ML method, assuming the availability of labeled training data.
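A minimal sketch of the Dirichlet process Gaussian mixture idea, using scikit-learn's truncated variational implementation; the feature vectors and cluster structure below are synthetic stand-ins for per-beat ECG model parameters, not the thesis data or code:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Synthetic per-beat feature vectors standing in for estimated ECG model
# parameters; three morphology clusters simulate distinct arrhythmia classes.
beats = np.vstack([
    rng.normal([0.0, 0.0], 0.3, (100, 2)),
    rng.normal([3.0, 1.0], 0.3, (100, 2)),
    rng.normal([1.0, 4.0], 0.3, (100, 2)),
])

# Dirichlet-process mixture: the number of occupied components is inferred
# from the data, so no fixed disease-class count needs to be specified.
dpgmm = BayesianGaussianMixture(
    n_components=10,                     # truncation level, not a class count
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(beats)

labels = dpgmm.predict(beats)
print(len(np.unique(labels)), "clusters found")
```

Because unused components receive negligible weight, new beat morphologies can occupy previously empty components as they appear, which is what allows the class count to evolve over time.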
ContributorsEdla, Shwetha Reddy (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Kovvali, Narayan (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created2012
Description

Microbially induced calcium carbonate precipitation (MICP) is attracting increasing attention as a sustainable means of soil improvement. While there are several possible MICP mechanisms, microbial denitrification has the potential to become one of the preferred methods for MICP because complete denitrification does not produce toxic byproducts, readily occurs under anoxic conditions, and potentially has a greater carbonate yield per mole of organic electron donor than other MICP processes. Denitrification may be preferable to ureolytic hydrolysis, the MICP process explored most extensively to date, as the byproduct of denitrification is benign nitrogen gas, while the chemical pathways involved in hydrolytic ureolysis processes produce undesirable and potentially toxic byproducts such as ammonium (NH4+). This thesis focuses on bacterial denitrification and presents preliminary results of bench-scale laboratory experiments on denitrification as a candidate calcium carbonate precipitation mechanism. The bench-scale bioreactor and column tests, conducted using the facultative anaerobic bacterium Pseudomonas denitrificans, show that calcite can be precipitated from calcium-rich pore water using denitrification. Experiments also explore the potential for reducing environmental impacts and lowering costs associated with denitrification by reducing the total dissolved solids in the reactors and columns, optimizing the chemical matrix, and addressing the loss of free calcium in the form of calcium phosphate precipitate from the pore fluid. The potential for using MICP to sequester radionuclides and metal contaminants that are migrating in groundwater is also investigated. In the sequestration process, divalent cations and radionuclides are incorporated into the calcite structure via substitution, forming low-strontium calcium carbonate minerals that resist dissolution at a level similar to that of calcite. Work by others using the bacterium Sporosarcina pasteurii has suggested that in-situ sequestration of radionuclides and metal contaminants can be achieved through MICP via hydrolytic ureolysis. MICP through bacterial denitrification seems particularly promising as a means for sequestering radionuclides and metal contaminants in anoxic environments due to the anaerobic nature of the process and the ubiquity of denitrifying bacteria in the subsurface.
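For concreteness, one balanced overall reaction for complete heterotrophic denitrification, assuming acetate as the organic electron donor (an assumption for illustration; the experiments may use a different donor):

```latex
% Complete heterotrophic denitrification with acetate as the electron donor
% (illustrative; balanced for mass and charge):
5\,\mathrm{CH_3COO^-} + 8\,\mathrm{NO_3^-} + 13\,\mathrm{H^+}
  \;\longrightarrow\; 10\,\mathrm{CO_2} + 4\,\mathrm{N_2} + 14\,\mathrm{H_2O}
```

Each mole of acetate thus yields two moles of inorganic carbon that, in the presence of dissolved calcium and sufficient alkalinity, can precipitate as calcite, while the nitrogen leaves as benign N2 gas, consistent with the carbonate-yield argument above.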
ContributorsHamdan, Nasser (Author) / Kavazanjian, Edward (Thesis advisor) / Rittmann, Bruce E. (Thesis advisor) / Shock, Everett (Committee member) / Arizona State University (Publisher)
Created2013