Description
Ultrasound imaging is one of the major medical imaging modalities. It is cheap, non-invasive, and has low power consumption. Doppler processing is an important part of many ultrasound imaging systems. It is used to provide blood velocity information and is built on top of B-mode systems. We investigate the performance of two velocity estimation schemes used in Doppler processing systems, namely directional velocity estimation (DVE) and conventional velocity estimation (CVE). We find that DVE provides better estimation performance and is the only functioning method when the beam-to-flow angle is large. Unfortunately, DVE is computationally expensive and also requires division and square-root operations that are hard to implement. We propose two approximation techniques to replace these computations. Simulation results on cyst images show that the proposed approximations do not affect estimation performance. We also study backend processing, which includes envelope detection, log compression, and scan conversion. Three different envelope detection methods are compared. Among them, the FIR-based Hilbert transform is considered the best choice when phase information is not needed, while quadrature demodulation is a better choice if phase information is necessary. Bilinear and Gaussian interpolation are considered for scan conversion. Through simulations of a cyst image, we show that bilinear interpolation provides contrast-to-noise ratio (CNR) performance comparable to Gaussian interpolation at lower computational complexity. Thus, bilinear interpolation is chosen for our system.
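The scan conversion step described above can be sketched in a few lines: each Cartesian output pixel is mapped back to polar (radius, angle) coordinates and the B-mode sample is bilinearly interpolated. This is an illustrative sketch only, not the thesis implementation; the grid geometry (a half-plane sector) and index mapping are assumptions.

```python
import math

def bilerp(img, x, y):
    """Bilinearly interpolate a 2-D list `img` at fractional (x, y)."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(img[0]) - 1)
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bot * fy

def scan_convert(polar, n_out):
    """Resample a polar image polar[r][theta] (theta spanning 0..pi)
    onto an n_out x n_out Cartesian grid via bilinear interpolation."""
    n_r, n_t = len(polar), len(polar[0])
    out = [[0.0] * n_out for _ in range(n_out)]
    for row in range(n_out):
        for col in range(n_out):
            # Cartesian pixel -> (radius, angle); probe centered at top edge.
            dx = col - (n_out - 1) / 2
            dy = row
            r = math.hypot(dx, dy) * (n_r - 1) / (n_out - 1)
            theta = math.atan2(dy, dx) / math.pi * (n_t - 1)
            if r <= n_r - 1:  # inside the imaged sector
                out[row][col] = bilerp(polar, theta, r)
    return out
```

Gaussian interpolation would instead weight a neighborhood of samples by a Gaussian kernel, which costs more arithmetic per pixel; bilinear needs only four samples.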
ContributorsWei, Siyuan (Author) / Chakrabarti, Chaitali (Thesis advisor) / Frakes, David (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created2013
Description
Controlled release formulations for local, in vivo drug delivery are of growing interest to device manufacturers, research scientists, and clinicians; however, most research characterizing controlled release formulations occurs in vitro because the spatial and temporal distribution of drug delivery is difficult to measure in vivo. In this work, in vivo magnetic resonance imaging (MRI) of local drug delivery is performed to visualize and quantify the time-resolved distribution of MRI contrast agents. I find it is possible to visualize contrast agent distributions in near real time from local delivery vehicles using MRI. Three-dimensional T1 maps are processed to produce in vivo concentration maps of contrast agent for individual animal models. The method for obtaining concentration maps is analyzed to estimate errors introduced at various steps in the process. The method is used to evaluate different controlled release vehicles, vehicle placement, and type of surgical wound in rabbits as a model for antimicrobial delivery to orthopaedic infection sites. I am able to see differences between all these factors; however, all images show that contrast agent remains fairly local to the wound site and does not distribute to tissues far from the implant in therapeutic concentrations. I also produce a mathematical model that investigates important mechanisms in the transport of antimicrobials in a wound environment. Both the images and the mathematical model indicate that antimicrobial distribution in an orthopaedic wound depends on both diffusive and convective mechanisms. Furthermore, I have begun development of MRI-visible therapeutic agents to examine active drug distributions. I hypothesize that this work can be developed into a non-invasive, patient-specific clinical tool to evaluate the success of interventional procedures using local drug delivery vehicles.
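The step from T1 maps to concentration maps is commonly based on the fast-exchange relaxivity relation 1/T1_post = 1/T1_pre + r1·C. A minimal sketch follows; the exact relation and the relaxivity value used in the dissertation are not stated in the abstract, so treat the function and its parameters as assumptions for illustration.

```python
def concentration_map(t1_post, t1_pre, r1):
    """Voxel-wise contrast agent concentration (mM) from pre- and
    post-contrast T1 maps (seconds), assuming the linear relaxivity
    model 1/T1_post = 1/T1_pre + r1 * C, i.e.
    C = (1/T1_post - 1/T1_pre) / r1."""
    return [[(1.0 / post - 1.0 / pre) / r1
             for post, pre in zip(row_post, row_pre)]
            for row_post, row_pre in zip(t1_post, t1_pre)]
```

For example, with an assumed relaxivity r1 = 4.0 s⁻¹mM⁻¹ (typical of gadolinium chelates), a voxel whose T1 shortens from 2.0 s to 1.0 s maps to 0.125 mM.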
ContributorsGiers, Morgan (Author) / Caplan, Michael R (Thesis advisor) / Massia, Stephen P (Committee member) / Frakes, David (Committee member) / McLaren, Alex C. (Committee member) / Vernon, Brent L (Committee member) / Arizona State University (Publisher)
Created2013
Description
Stable isotopes were measured in the groundwaters of the Salt River Valley basin in central Arizona to explore the utility of stable isotopes for sourcing recharge waters and engineering better well designs. Delta values for the sampled groundwaters range from -7.6‰ to -10‰ in 18O and -60‰ to -91‰ in D and display displacements off the global meteoric water line indicative of surficial evaporation during river transport into the area. Groundwater in the basin is derived entirely from top-down river recharge; there is no evidence of ancient playa waters, even in the playa deposits. The Salt and Verde Rivers are the dominant sources of groundwater for the East Salt River Valley; the Agua Fria River also contributes significantly to the West Salt River Valley. Groundwater isotopic compositions are generally more depleted in 18O and D with depth, indicating past recharge in cooler climates, and vary within subsurface aquifer layers as sampled during well drilling. When isotopic data were evaluated together with geologic and chemical analyses and compared with data from the final well production water, it was often possible to identify: 1) which horizons are the primary producers of groundwater flow and how that might change with time, 2) the chemical exchange of cations and anions via water-rock interaction during top-down mixing of recharge water with older waters, 3) how much well production might be lost if arsenic-contributing horizons were sealed off, and 4) the extent to which replacement wells tap different subsurface water sources. In addition to identifying sources of recharge, stable isotopes offer a new and powerful approach for engineering better and more productive water wells.
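The "displacement off the global meteoric water line" mentioned above is conventionally quantified as deuterium excess relative to Craig's (1961) line δD = 8·δ18O + 10. A small sketch, using one endpoint of the reported range as an example (this calculation is illustrative, not taken from the thesis):

```python
def d_excess(delta_d, delta_18o):
    """Deuterium excess (per mil) relative to the global meteoric
    water line, dD = 8 * d18O + 10 (Craig, 1961)."""
    return delta_d - 8.0 * delta_18o

def shows_evaporation(delta_d, delta_18o):
    """Waters evaporated during surface transport plot below the
    GMWL, i.e. have a deuterium excess below 10 per mil."""
    return d_excess(delta_d, delta_18o) < 10.0
```

For the reported endpoint (δD = -60‰, δ18O = -7.6‰), the d-excess is 0.8‰, well below the meteoric value of 10‰, consistent with evaporated river water as the recharge source.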
ContributorsBond, Angela Nicole (Author) / Knauth, Paul (Thesis advisor) / Hartnett, Hilairy (Committee member) / Shock, Everett (Committee member) / Arizona State University (Publisher)
Created2010
Description
Magnetic Resonance Imaging using spiral trajectories has many advantages in speed, efficiency of data acquisition, and robustness to motion- and flow-related artifacts. The increase in sampling speed, however, requires high performance from the gradient system. Hardware inaccuracies from system delays and eddy currents can cause spatial and temporal distortions in the encoding gradient waveforms. This causes sampling discrepancies between the actual and the ideal k-space trajectory. Reconstruction assuming an ideal trajectory can result in shading and blurring artifacts in spiral images. Current methods to estimate such hardware errors require many modifications to the pulse sequence, phantom measurements, or specialized hardware. This work presents a new method to estimate time-varying system delays for spiral-based trajectories. It requires a minor modification of a conventional stack-of-spirals sequence and analyzes data collected on three orthogonal cylinders. The method is fast, robust to off-resonance effects, requires no phantom measurements or specialized hardware, and estimates variable system delays for the three gradient channels over the data-sampling period. Initial results are presented for acquired phantom and in-vivo data, which show a substantial reduction in artifacts and improvement in image quality.
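To make the notion of a "system delay" concrete: a lag between the commanded and the actually played gradient waveform can be found by cross-correlation. The sketch below recovers an integer-sample lag only, purely for illustration; the thesis method estimates continuous, time-varying delays per gradient channel from cylinder data, which is not what this toy does.

```python
import math

def estimate_delay(ideal, measured, max_lag):
    """Return the integer-sample lag (0..max_lag) that best aligns
    the measured gradient waveform with the commanded one, by
    maximizing the cross-correlation between the two."""
    n = len(ideal)
    best_lag, best_score = 0, float("-inf")
    for lag in range(max_lag + 1):
        score = sum(ideal[i] * measured[i + lag] for i in range(n - max_lag))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag
```

Reconstruction would then resample the k-space trajectory with this delay applied, instead of assuming the ideal timing.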
ContributorsBhavsar, Payal (Author) / Pipe, James G (Thesis advisor) / Frakes, David (Committee member) / Kodibagkar, Vikram (Committee member) / Arizona State University (Publisher)
Created2013
Description
Coronary computed tomography angiography (CTA) has a high negative predictive value for ruling out coronary artery disease with non-invasive evaluation of the coronary arteries. My work has attempted to provide metrics that could increase the positive predictive value of coronary CTA through the use of dual energy CTA imaging. After an algorithm was developed for obtaining calcium scores from a CTA exam, a dual energy CTA exam was performed on patients at dose levels equivalent to those of a single energy CTA with a calcium scoring exam. Calcium Agatston scores obtained from the dual energy CTA exam were within ±11% of scores obtained with conventional calcium scoring exams. In the presence of highly attenuating coronary calcium plaques, the virtual non-calcium images obtained with dual energy CTA successfully measured percent coronary stenosis within 5% of known stenosis values, which is not possible with single energy CTA images due to the calcium blooming artifact. After an anthropomorphic beating heart phantom with coronary plaques was fabricated, characterization of soft plaque vulnerability to rupture or erosion was demonstrated with measurements of the distance from soft plaque to the aortic ostium, percent stenosis, and percent lipid volume in soft plaque. A classification model was developed, with training data from the beating heart phantom and plaques, that uses support vector machines to classify coronary soft plaque pixels as lipid or fibrous. Lipid-versus-fibrous classification with single energy CTA images exhibited a 17% error, while dual energy CTA images in the classification model developed here exhibited only a 4% error. Combining the calcium blooming correction and the percent lipid volume methods developed in this work will provide physicians with metrics for increasing the positive predictive value of coronary CTA, as well as expanding the use of coronary CTA to patients with highly attenuating calcium plaques.
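The Agatston score referenced above has a well-known definition: each calcified lesion contributes its area (mm², on 3 mm slices) weighted by a factor determined by its peak attenuation in Hounsfield units. A minimal sketch of that standard scoring rule (not the thesis's dual-energy algorithm, whose details are not given in the abstract):

```python
def agatston_weight(peak_hu):
    """Standard Agatston density weight from a lesion's peak HU."""
    if peak_hu < 130:
        return 0  # below the calcium threshold
    if peak_hu < 200:
        return 1
    if peak_hu < 300:
        return 2
    if peak_hu < 400:
        return 3
    return 4

def agatston_score(lesions):
    """Total score over (area_mm2, peak_hu) lesion tuples; lesions
    smaller than 1 mm^2 are conventionally excluded as noise."""
    return sum(area * agatston_weight(hu)
               for area, hu in lesions
               if area >= 1.0)
```

A ±11% agreement between dual energy CTA scores and this conventional scoring is what allows the calcium scoring exam, and its extra dose, to be dropped.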
ContributorsBoltz, Thomas (Author) / Frakes, David (Thesis advisor) / Towe, Bruce (Committee member) / Kodibagkar, Vikram (Committee member) / Pavlicek, William (Committee member) / Bouman, Charles (Committee member) / Arizona State University (Publisher)
Created2013
Description
Image understanding plays an increasingly crucial role in vision applications. Sparse models form an important component of image understanding, since the statistics of natural images reveal the presence of sparse structure. Sparse methods lead to parsimonious models, in addition to being efficient for large-scale learning. In sparse modeling, data is represented as a sparse linear combination of atoms from a "dictionary" matrix. This dissertation focuses on understanding different aspects of sparse learning, thereby enhancing the use of sparse methods by incorporating tools from machine learning. With the growing need to adapt models for large-scale data, it is important to design dictionaries that can model the entire data space and not just the samples considered. By exploiting the relation of dictionary learning to 1-D subspace clustering, a multilevel dictionary learning algorithm is developed and shown to outperform conventional sparse models in compressed recovery and image denoising. Theoretical aspects of learning, such as algorithmic stability and generalization, are considered, and ensemble learning is incorporated for effective large-scale learning. In addition to building strategies for efficiently implementing 1-D subspace clustering, a discriminative clustering approach is designed to estimate the unknown mixing process in blind source separation. By exploiting the non-linear relation between image descriptors and allowing the use of multiple features, sparse methods can be made more effective in recognition problems. The idea of multiple kernel sparse representations is developed, and algorithms for learning dictionaries in the feature space are presented. Object recognition experiments on standard datasets show that the proposed approaches outperform other sparse coding-based recognition frameworks. Furthermore, a segmentation technique based on multiple kernel sparse representations is developed and successfully applied to automated brain tumor identification. Using sparse codes to define the relation between data samples can lead to a more robust graph embedding for unsupervised clustering. By performing discriminative embedding using sparse coding-based graphs, an algorithm for measuring the glomerular number in kidney MRI images is developed. Finally, approaches to building dictionaries for local sparse coding of image descriptors are presented and applied to object recognition and image retrieval.
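The core sparse coding step, representing a sample as a sparse combination of dictionary atoms, is commonly solved greedily with orthogonal matching pursuit (OMP). The sketch below is a generic OMP, shown only to make the representation concrete; the dissertation's multilevel and kernel variants are not reproduced here.

```python
import numpy as np

def omp(D, x, k):
    """Greedy orthogonal matching pursuit: approximate x with at most
    k atoms (columns of D, assumed unit-norm). Returns the sparse
    coefficient vector over all atoms."""
    residual = x.astype(float).copy()
    support = []
    sol = np.zeros(0)
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit all selected atoms jointly (the "orthogonal" step).
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ sol
    coeffs = np.zeros(D.shape[1])
    coeffs[support] = sol
    return coeffs
```

In the kernel variants described above, the inner products D.T @ residual are replaced by kernel evaluations, so the same greedy selection runs in feature space.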
ContributorsJayaraman Thiagarajan, Jayaraman (Author) / Spanias, Andreas (Thesis advisor) / Frakes, David (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created2013
Description
Over the past fifty years, the development of sensors for biological applications has increased dramatically. This rapid growth can be attributed in part to the reduction in feature size that the electronics industry has pioneered over the same period. The decrease in feature size has led to the production of microscale sensors used in applications ranging from whole-body monitoring down to molecular sensing. Unfortunately, sensors are often developed without regard to how they will be integrated into biological systems, and the complexities of integration are underappreciated. Integration involves more than simply making electrical connections: interfacing microscale sensors with biological environments requires numerous considerations with respect to the creation of compatible packaging, the management of biological reagents, and the act of combining technologies with different dimensions and material properties. Recent advances in microfluidics, especially the proliferation of soft lithography manufacturing methods, have established the groundwork for creating systems that may solve many of the problems inherent to sensor-fluidic interaction. The adaptation of microelectronics manufacturing methods, such as Complementary Metal-Oxide-Semiconductor (CMOS) and Microelectromechanical Systems (MEMS) processes, allows the creation of a complete biological sensing system with integrated sensors and readout circuits; yet combining these technologies remains an obstacle to forming complete sensor systems. This dissertation presents new approaches for the design, fabrication, and integration of microscale sensors and microelectronics with microfluidics. The work addresses specific challenges, such as incorporating commercial manufacturing processes into biological systems and developing microscale sensors in these processes. This work is exemplified through a feedback-controlled microfluidic pH system that demonstrates the integration capabilities of microscale sensors for autonomous microenvironment control.
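The feedback loop in such a pH system can be sketched as a simple sense-decide-dose cycle. The bang-bang controller and the one-line chamber model below are assumptions made for illustration; the dissertation's actual control law and plant dynamics are not described in the abstract.

```python
def dosing_action(ph, target=7.0, band=0.1):
    """Bang-bang decision for a feedback pH loop: dose base when the
    sensor reads below the dead band, acid when above, else hold."""
    if ph < target - band:
        return "base"   # raise pH
    if ph > target + band:
        return "acid"   # lower pH
    return "hold"

def simulate(ph, steps=20, dose=0.05):
    """Toy closed loop: each dose shifts the chamber pH by a fixed
    increment (a crude stand-in for the real microfluidic plant)."""
    for _ in range(steps):
        action = dosing_action(ph)
        if action == "base":
            ph += dose
        elif action == "acid":
            ph -= dose
    return ph
```

The dead band keeps the controller from chattering between acid and base when the reading sits near the target, a standard choice for on/off actuation.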
ContributorsWelch, David (Author) / Blain Christen, Jennifer (Thesis advisor) / Muthuswamy, Jitendran (Committee member) / Frakes, David (Committee member) / LaBelle, Jeffrey (Committee member) / Goryll, Michael (Committee member) / Arizona State University (Publisher)
Created2012
Description
Microbially induced calcium carbonate precipitation (MICP) is attracting increasing attention as a sustainable means of soil improvement. While there are several possible MICP mechanisms, microbial denitrification has the potential to become one of the preferred methods for MICP because complete denitrification does not produce toxic byproducts, readily occurs under anoxic conditions, and potentially has a greater carbonate yield per mole of organic electron donor than other MICP processes. Denitrification may be preferable to ureolytic hydrolysis, the MICP process explored most extensively to date, as the byproduct of denitrification is benign nitrogen gas, while the chemical pathways involved in ureolytic hydrolysis produce undesirable and potentially toxic byproducts such as ammonium (NH4+). This thesis focuses on bacterial denitrification and presents preliminary results of bench-scale laboratory experiments on denitrification as a candidate calcium carbonate precipitation mechanism. The bench-scale bioreactor and column tests, conducted using the facultative anaerobic bacterium Pseudomonas denitrificans, show that calcite can be precipitated from calcium-rich pore water using denitrification. Experiments also explore the potential for reducing environmental impacts and lowering costs associated with denitrification by reducing the total dissolved solids in the reactors and columns, optimizing the chemical matrix, and addressing the loss of free calcium in the form of calcium phosphate precipitate from the pore fluid. The potential for using MICP to sequester radionuclides and metal contaminants migrating in groundwater is also investigated. In the sequestration process, divalent cations and radionuclides are incorporated into the calcite structure via substitution, forming low-strontium calcium carbonate minerals that resist dissolution at a level similar to that of calcite. Work by others using the bacterium Sporosarcina pasteurii has suggested that in-situ sequestration of radionuclides and metal contaminants can be achieved through MICP via ureolytic hydrolysis. MICP through bacterial denitrification seems particularly promising as a means of sequestering radionuclides and metal contaminants in anoxic environments due to the anaerobic nature of the process and the ubiquity of denitrifying bacteria in the subsurface.
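The "greater carbonate yield per mole of organic electron donor" claim can be made concrete with electron bookkeeping. The sketch below assumes acetate as the donor, a common choice in denitrification-MICP studies; the abstract does not specify the thesis's substrate, so the numbers are illustrative, not the thesis's values.

```python
# Electron bookkeeping for denitrification-driven MICP (illustrative).
E_DONATED_PER_ACETATE = 8   # CH3COO- fully oxidized to 2 CO2 releases 8 e-
E_ACCEPTED_PER_NITRATE = 5  # NO3- reduced to 1/2 N2 accepts 5 e-

def nitrate_demand_per_acetate():
    """Moles of nitrate reduced per mole of acetate fully oxidized,
    by balancing electrons donated against electrons accepted."""
    return E_DONATED_PER_ACETATE / E_ACCEPTED_PER_NITRATE

# Inorganic carbon available for CaCO3 per mole of substrate:
# acetate carries two carbons, urea hydrolysis yields one carbonate.
CARBONATE_YIELD = {"acetate (denitrification)": 2, "urea (ureolysis)": 1}
```

On this accounting, complete denitrification of acetate consumes 1.6 mol nitrate per mol acetate and makes two moles of inorganic carbon available, versus one carbonate per mole of urea in ureolytic MICP, which is the sense in which the carbonate yield per mole of donor can be greater.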
ContributorsHamdan, Nasser (Author) / Kavazanjian, Edward (Thesis advisor) / Rittmann, Bruce E. (Thesis advisor) / Shock, Everett (Committee member) / Arizona State University (Publisher)
Created2013
Description
A cerebral aneurysm is a bulging of a blood vessel in the brain. Aneurysmal rupture affects 25,000 people each year and is associated with a 45% mortality rate. Therefore, it is critically important to treat cerebral aneurysms effectively before they rupture. Endovascular coiling is the most effective treatment for cerebral aneurysms. During the coiling process, a series of metallic coils is deployed into the aneurysmal sack with the intent of reaching a sufficient packing density (PD). Coil packing can facilitate thrombus formation and help seal off the aneurysm from circulation over time. While coiling is effective, high rates of treatment failure have been associated with basilar tip aneurysms (BTAs). Treatment failure may be related to geometrical features of the aneurysm. The purpose of this study was to investigate the influence of dome size, parent vessel (PV) angle, and PD on post-treatment aneurysmal hemodynamics using both computational fluid dynamics (CFD) and particle image velocimetry (PIV). Flows in four idealized BTA models with a combination of dome sizes and two different PV angles were simulated using CFD and then validated against PIV data. Percent reductions in post-treatment aneurysmal velocity and cross-neck (CN) flow, as well as percent coverage of low wall shear stress (WSS) area, were analyzed. In all models, aneurysmal velocity and CN flow decreased after coiling, while low WSS area increased. With increasing PD, further reductions were observed in aneurysmal velocity and CN flow, but minimal changes were observed in low WSS area. Overall, coil PD had the greatest impact on aneurysmal hemodynamics, and dome size had a greater impact than PV angle. These findings lead to the conclusion that combinations of treatment goals and geometric factors may play key roles in coil embolization treatment outcomes, and suggest that treatment timing may be a critical factor in treatment optimization.
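Packing density, the key treatment variable above, is conventionally the ratio of total deployed coil volume to aneurysm volume, with each coil modeled as a cylinder of wire. The sketch below uses that standard definition; the coil dimensions in the test are hypothetical examples, not values from the study.

```python
import math

def coil_volume(diameter_mm, length_mm):
    """Volume of a single coil modeled as a cylindrical wire (mm^3)."""
    return math.pi * (diameter_mm / 2) ** 2 * length_mm

def packing_density(coil_dims, aneurysm_volume_mm3):
    """PD = total coil volume / aneurysm volume, for a list of
    (wire diameter, deployed length) pairs in millimeters."""
    total = sum(coil_volume(d, length) for d, length in coil_dims)
    return total / aneurysm_volume_mm3
```

For instance, a single hypothetical 0.25 mm wire deployed to 100 mm in a 50 mm³ dome gives a PD of about 9.8%, and adding coils raises PD toward the 20-30% range often targeted clinically.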
ContributorsIndahlastari, Aprinda (Author) / Frakes, David (Thesis advisor) / Chong, Brian (Committee member) / Muthuswamy, Jitendran (Committee member) / Arizona State University (Publisher)
Created2013
Description
Laboratory automation systems have seen many technological advances in recent times. As a result, the software written for them is becoming increasingly sophisticated. Existing software architectures and standards target a wider domain of software development and need to be customized before they can be used to develop software for laboratory automation systems. This thesis proposes an architecture that is based on existing software architectural paradigms and is specifically tailored to developing software for a laboratory automation system. The architecture is based on fairly autonomous software components that can be distributed across multiple computers. The components in the architecture communicate asynchronously by passing messages to one another. The architecture can be used to develop software that is distributed, responsive, and thread-safe. The thesis also proposes a framework developed to implement the ideas of the architecture. The framework is used to develop software that is scalable, distributed, responsive, and thread-safe. The framework currently has components to control commonly used laboratory automation devices, such as mechanical stages and cameras, and to perform common laboratory automation functions, such as imaging.
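The autonomous, message-passing components described above can be sketched with a thread and a queue per component: callers enqueue messages and never block on the device. This is a minimal illustrative sketch, not the thesis framework; the `Component` class and its handler interface are assumptions.

```python
import queue
import threading

class Component:
    """Autonomous component that asynchronously processes messages
    from its own inbox on a dedicated worker thread."""

    def __init__(self, name, handler):
        self.name = name
        self.handler = handler          # called once per message
        self.inbox = queue.Queue()      # thread-safe mailbox
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def send(self, msg):
        """Non-blocking: enqueue a message for this component."""
        self.inbox.put(msg)

    def _run(self):
        while True:
            msg = self.inbox.get()
            if msg is None:             # sentinel: shut down
                break
            self.handler(msg)

    def stop(self):
        """Drain remaining messages, then join the worker thread."""
        self.inbox.put(None)
        self._thread.join()
```

A stage controller and a camera would each be one `Component`; because every interaction is an enqueued message, callers stay responsive and the per-component thread gives thread safety without shared locks.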
ContributorsKuppuswamy, Venkataramanan (Author) / Meldrum, Deirdre (Thesis advisor) / Collofello, James (Thesis advisor) / Sarjoughian, Hessam S. (Committee member) / Johnson, Roger (Committee member) / Arizona State University (Publisher)
Created2012