Matching Items (63)
Description
Computed tomography (CT) is one of the essential imaging modalities for medical diagnosis. Since its introduction in 1972, CT technology has improved dramatically, especially in acquisition speed. However, the underlying principle of CT, acquiring only density information, did not change until recently. Different materials may have the same CT number, which can lead to uncertainty or misdiagnosis. Dual-energy CT (DECT) was recently reintroduced to address this problem: by using the additional spectral information in X-ray attenuation, it aims for accurate density measurement and material differentiation. However, the spectral information lies in the difference between the low- and high-energy images or measurements, so it is difficult to acquire accurate spectral information because pixel noise is amplified in the resulting difference image. In this work, a new model and an image enhancement technique for DECT are proposed, based on the fact that the attenuation of a high-density material decreases more rapidly as X-ray energy increases, a fact that most DECT image enhancement techniques have ignored. The proposed technique consists of offset correction, spectral error correction, and adaptive noise suppression. It reduced noise, improved contrast effectively, and showed better material differentiation in real patient images as well as phantom studies.
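For intuition, here is a minimal Python sketch (not the dissertation's algorithm) of the two facts the abstract relies on: the spectral image is a difference of low- and high-energy images, which amplifies pixel noise, and a locally adaptive filter can suppress that noise while preserving structure. The filter is a standard local-statistics (Lee/Wiener-style) filter; the window size and noise estimate are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spectral_difference(low_kvp, high_kvp):
    """Spectral image as the difference of low- and high-energy CT images.

    If each input carries independent noise of variance s^2, the difference
    carries 2*s^2 -- the noise amplification the abstract refers to."""
    return low_kvp.astype(float) - high_kvp.astype(float)

def adaptive_suppress(img, noise_var, window=5):
    """Locally adaptive (Lee/Wiener-style) suppression: smooth flat regions
    strongly, preserve regions whose variance exceeds the noise floor."""
    mean = uniform_filter(img, window)
    mean_sq = uniform_filter(img * img, window)
    local_var = np.maximum(mean_sq - mean * mean, 0.0)
    # Shrink each pixel toward its local mean by the estimated noise fraction.
    gain = np.maximum(local_var - noise_var, 0.0) / np.maximum(local_var, 1e-12)
    return mean + gain * (img - mean)
```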
Contributors: Park, Kyung Kook (Author) / Akay, Metin (Thesis advisor) / Pavlicek, William (Committee member) / Akay, Yasemin (Committee member) / Towe, Bruce (Committee member) / Muthuswamy, Jitendran (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Phase contrast magnetic resonance angiography (PCMRA) is a non-invasive imaging modality that is capable of producing quantitative vascular flow velocity information. The encoding of velocity information can significantly increase the imaging acquisition and reconstruction durations associated with this technique. The purpose of this work is to provide mechanisms for reducing the scan time of a 3D phase contrast exam, so that hemodynamic velocity data may be acquired robustly and with high sensitivity. The methods developed in this work focus on reducing the scan duration and reconstruction computation of a neurovascular PCMRA exam. The reductions in scan duration are made through a combination of advances in imaging and velocity encoding methods. The imaging improvements are explored using rapid 3D imaging techniques such as spiral projection imaging (SPI), Fermat looped orthogonally encoded trajectories (FLORET), stack-of-spirals, and stack-of-cones trajectories. Scan durations are also shortened through the use and development of a novel parallel imaging technique called Pretty Easy Parallel Imaging (PEPI). Improvements in the computational efficiency of PEPI and of general MRI reconstruction are made in the area of sample density estimation and correction of 3D trajectories. A new method of velocity encoding is demonstrated to provide more efficient signal-to-noise ratio (SNR) gains than current state-of-the-art methods. The proposed velocity encoding achieves improved SNR through the use of high gradient moments and by resolving phase aliasing through the use of measurement geometry and non-linear constraints.
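For context, here is a minimal sketch of conventional phase-contrast velocity reconstruction, the baseline that the proposed encoding improves on. The function and variable names are illustrative; this is the textbook method, not the dissertation's proposed encoding.

```python
import numpy as np

def pc_velocity(img_plus, img_minus, venc_cm_s):
    """Velocity map from two flow-encoded complex images.

    The phase difference between the two acquisitions is proportional to
    velocity; VENC is the velocity that maps to a phase of +/- pi, so any
    true speed above VENC wraps (aliases), which is the problem the
    abstract's non-linear constraints address."""
    dphi = np.angle(img_plus * np.conj(img_minus))  # wrapped to (-pi, pi]
    return dphi / np.pi * venc_cm_s
```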
Contributors: Zwart, Nicholas R (Author) / Frakes, David H (Thesis advisor) / Pipe, James G (Thesis advisor) / Bennett, Kevin M (Committee member) / Debbins, Josef P (Committee member) / Towe, Bruce (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Ionizing radiation used in patient diagnosis or therapy has short- and long-term negative effects on the patient's body, depending on the amount of exposure. More than 700,000 examinations are performed every day on interventional radiology modalities [1]; however, no patient-centric information about the organ dose received is available to the patient or to quality assurance. In this study, we explore methodologies to systematically reduce the absorbed radiation dose in fluoroscopically guided interventional radiology procedures. In the first part of this study, we developed a mathematical model that determines a set of geometry settings for the equipment and an energy level for use during a patient exam. The goal is to minimize the absorbed dose in the critical organs while maintaining the image quality required for diagnosis. The model is a large-scale mixed-integer program. We performed polyhedral analysis and derived several sets of strong inequalities to improve the computational speed and the quality of the solution. Results show that the absorbed dose in the critical organ can be reduced by up to 99% for a specific set of angles. In the second part, we apply an approximate gradient method to simultaneously optimize angle and table location, minimizing dose in the critical organs subject to image quality. In each iteration, we solve a sub-problem as a MIP to determine the radiation field size and the corresponding X-ray tube energy. In computational experiments, results show a further reduction (up to 80%) of the absorbed dose compared with the previous method. Last, there are uncertainties in medical procedures that make the absorbed dose imprecise. We propose a robust formulation that hedges against the worst-case absorbed dose while ensuring feasibility. In this part, we investigate a robust approach to organ motion within a radiology procedure. We minimize the absorbed dose for the critical organs across all input data scenarios, which correspond to the positioning and size of the organs. The computational results indicate up to a 26% increase in the absorbed dose calculated for the robust approach, which ensures feasibility across scenarios.
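As an illustration of the kind of model involved, here is a toy mixed-integer program (using the PuLP library) that picks one gantry angle and a tube energy to minimize dose subject to an image-quality threshold. All coefficients are invented for illustration; the dissertation's large-scale model and its polyhedral strengthening are not represented.

```python
import pulp

angles = ["a0", "a45", "a90"]
dose = {"a0": 9.0, "a45": 4.0, "a90": 6.5}      # organ dose per angle (toy)
quality = {"a0": 0.9, "a45": 0.6, "a90": 0.8}   # image quality per angle (toy)

prob = pulp.LpProblem("dose_min", pulp.LpMinimize)
x = pulp.LpVariable.dicts("use", angles, cat="Binary")
e = pulp.LpVariable("energy", lowBound=70, upBound=140)  # kVp, continuous

# Dose grows with energy; 0.02 per kVp is an illustrative coefficient.
prob += pulp.lpSum(dose[a] * x[a] for a in angles) + 0.02 * e
prob += pulp.lpSum(x[a] for a in angles) == 1            # pick exactly one angle
# Quality from geometry plus energy must clear a diagnostic threshold.
prob += pulp.lpSum(quality[a] * x[a] for a in angles) + 0.005 * e >= 1.2

prob.solve()
print({a: x[a].value() for a in angles}, e.value())  # picks a45 at 120 kVp
```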
Contributors: Khodadadegan, Yasaman (Author) / Zhang, Muhong (Thesis advisor) / Pavlicek, William (Thesis advisor) / Fowler, John (Committee member) / Wu, Tong (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Image resolution limits the extent to which zooming enhances clarity, restricts the size at which digital photographs can be printed, and, in the context of medical images, can prevent a diagnosis. Interpolation is the supplementing of known data with estimated values based on a function or model involving some or all of the known samples. The selection of the contributing data points, and the specifics of how they are used to define the interpolated values, influence how effectively the interpolation algorithm estimates the underlying continuous signal. The main contributions of this dissertation are threefold: 1) reframing edge-directed interpolation of a single image as an intensity-based registration problem; 2) providing an analytical framework for intensity-based registration using control grid constraints; and 3) quantitative assessment of the new single-image enlargement algorithm based on analytical intensity-based registration. In addition to single-image resizing, the new methods and analytical approaches were extended to address a wide range of applications, including volumetric (multi-slice) image interpolation, video deinterlacing, motion detection, and atmospheric distortion correction. Overall, the new approaches generate results that reflect the underlying signals more accurately than less computationally demanding approaches, with lower processing requirements and fewer restrictions than methods of comparable accuracy.
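A minimal sketch of the core idea, assuming the registration step (which the dissertation solves with control grid constraints) has already produced a displacement field: the in-between slice is synthesized along correspondences rather than by averaging pixels straight through, which blurs moving edges. The flow convention below is an illustrative assumption.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def midpoint_slice(a, b, u, v):
    """Slice halfway between a and b, given a flow field (u, v) that takes
    a feature at (r, c) in a to (r + v, c + u) in b.

    Each midpoint pixel is traced half a correspondence back into a and half
    a correspondence forward into b, then the two samples are averaged."""
    rows, cols = np.indices(a.shape).astype(float)
    half_a = map_coordinates(a, [rows - 0.5 * v, cols - 0.5 * u], order=1)
    half_b = map_coordinates(b, [rows + 0.5 * v, cols + 0.5 * u], order=1)
    return 0.5 * (half_a + half_b)
```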
Contributors: Zwart, Christine M. (Author) / Frakes, David H (Thesis advisor) / Karam, Lina (Committee member) / Kodibagkar, Vikram (Committee member) / Spanias, Andreas (Committee member) / Towe, Bruce (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Coronary heart disease (CHD) is the most prevalent cause of death worldwide. Atherosclerosis, the condition of plaque buildup on the inside of the coronary artery wall, is the main cause of CHD. Rupture of unstable atherosclerotic coronary plaque is known to be the cause of acute coronary syndrome. The composition of plaque is important for detection of plaque vulnerability. Because of the prognostic importance of early-stage identification, non-invasive assessment of plaque characteristics is necessary. Computed tomography (CT) has emerged as a non-invasive alternative to coronary angiography. Recently, dual-energy CT (DECT) coronary angiography has been performed clinically. DECT scanners use two different X-ray energies in order to determine the energy dependency of tissue attenuation values for each voxel. They generate virtual monochromatic energy images, as well as material basis pair images. The characterization of plaque components by DECT is still an active research topic, since overlap between the CT attenuations measured in plaque components and in contrast material shows that a single mean density may not be an appropriate measure for characterization. This dissertation proposes feature extraction, feature selection, and learning strategies for supervised characterization of coronary atherosclerotic plaques. In my first study, I proposed an approach for calcium quantification in contrast-enhanced examinations of the coronary arteries, potentially eliminating the need for an extra non-contrast X-ray acquisition. The ambiguity in separating calcium from contrast material was resolved by using virtual non-contrast images. The additional attenuation data provided by DECT give valuable information for separating lipid from fibrous plaque, since their attenuations change differently as the energy level changes. My second study proposed these data as input to supervised learners for a more precise classification of lipid and fibrous plaques. My last study aimed at automatic segmentation of the coronary arteries, characterizing plaque components and lumen on contrast-enhanced monochromatic X-ray images. This required extracting features from regions of interest; the study proposed feature extraction strategies and the selection of the most important features. The results show that supervised learning on the proposed features provides promising results for automatic characterization of coronary atherosclerotic plaques by DECT.
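A minimal sketch of the supervised-learning step using scikit-learn: classifying plaque samples as lipid or fibrous from a (low-kVp HU, high-kVp HU) feature pair, the two-energy information the abstract exploits. The data here are synthetic placeholders, not DECT measurements, and the dissertation's feature extraction and selection are not shown.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic (low-kVp HU, high-kVp HU) samples; the two classes are separated
# both in mean attenuation and in how attenuation changes with energy.
lipid = rng.normal([20, 15], 8, (200, 2))
fibrous = rng.normal([90, 60], 8, (200, 2))
X = np.vstack([lipid, fibrous])
y = np.array([0] * 200 + [1] * 200)  # 0 = lipid, 1 = fibrous

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
print(clf.predict([[25, 18], [85, 58]]))  # expect [0, 1]
```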
Contributors: Yamak, Didem (Author) / Akay, Metin (Thesis advisor) / Muthuswamy, Jit (Committee member) / Akay, Yasemin (Committee member) / Pavlicek, William (Committee member) / Vernon, Brent (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Cancer is the second leading cause of death in the United States, and novel methods of treating advanced malignancies are of high importance. Of these deaths, prostate cancer and breast cancer are the second most fatal carcinomas in men and women, respectively, while pancreatic cancer is the fourth most fatal in both men and women. Developing new drugs for the treatment of cancer is both a slow and expensive process. It is estimated that it takes an average of 15 years and an expense of $800 million to bring a single new drug to market. However, it is also estimated that nearly 40% of that cost could be avoided by finding alternative uses for drugs that have already been approved by the Food and Drug Administration (FDA). The research presented in this document describes the testing, identification, and mechanistic evaluation of novel methods for treating many human carcinomas using drugs previously approved by the FDA. A tissue culture plate-based screening of FDA-approved drugs will identify compounds that can be used in combination with the protein TRAIL to induce apoptosis selectively in cancer cells. Identified leads will next be optimized using high-throughput microfluidic devices to determine the most effective treatment conditions. Finally, a rigorous mechanistic analysis will be conducted to understand how the FDA-approved drug mitoxantrone sensitizes cancer cells to TRAIL-mediated apoptosis.
Contributors: Taylor, David (Author) / Rege, Kaushal (Thesis advisor) / Jayaraman, Arul (Committee member) / Nielsen, David (Committee member) / Kodibagkar, Vikram (Committee member) / Dai, Lenore (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
A cerebral aneurysm is an abnormal ballooning of the blood vessel wall in the brain that occurs in approximately 6% of the general population. When a cerebral aneurysm ruptures, the subsequent damage is lethal in nearly 50% of cases. Over the past decade, endovascular treatment has emerged as an effective treatment option for cerebral aneurysms that is far less invasive than conventional surgical options. Nonetheless, the rate of successful treatment is as low as 50% for certain types of aneurysms. Treatment success has been correlated with favorable post-treatment hemodynamics. However, current understanding of the effects of endovascular treatment parameters on post-treatment hemodynamics is limited, due in part to current challenges in in vivo flow measurement techniques. Improved understanding of post-treatment hemodynamics can lead to more effective treatments. However, the effects of treatment on hemodynamics may be patient-specific; thus, accurate tools that can predict hemodynamics on a case-by-case basis are also required for improving outcomes. Accordingly, the main objectives of this work were 1) to develop computational tools for predicting post-treatment hemodynamics and 2) to build a foundation of understanding of the effects of controllable treatment parameters on cerebral aneurysm hemodynamics. Experimental flow measurement techniques, using particle image velocimetry, were first developed for acquiring flow data in cerebral aneurysm models treated with an endovascular device. The experimental data were then used to guide the development of novel computational tools, which consider the physical properties, design specifications, and deployment mechanics of endovascular devices to simulate post-treatment hemodynamics. The effects of different endovascular treatment parameters on cerebral aneurysm hemodynamics were then characterized under controlled conditions. Lastly, application of the computational tools for interventional planning was demonstrated through the evaluation of two patient cases.
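For reference, a minimal sketch of the core of particle image velocimetry: the displacement of an interrogation window between two frames is taken as the peak of their FFT-based cross-correlation. The window size and toy shift are illustrative; real PIV adds windowing, sub-pixel peak fitting, and outlier validation not shown here.

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Integer-pixel displacement of win_b relative to win_a."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map circular-FFT indices to signed shifts (past the midpoint = negative).
    return [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]

rng = np.random.default_rng(1)
frame = rng.random((64, 64))
shifted = np.roll(frame, (3, -2), axis=(0, 1))
print(piv_displacement(frame, shifted))  # expect [3, -2]
```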
Contributors: Babiker, M. Haithem (Author) / Frakes, David H (Thesis advisor) / Adrian, Ronald (Committee member) / Caplan, Michael (Committee member) / Chong, Brian (Committee member) / Vernon, Brent (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Magnetic resonance imaging using spiral trajectories has many advantages: speed, efficiency in data acquisition, and robustness to motion- and flow-related artifacts. The increase in sampling speed, however, demands high performance from the gradient system. Hardware inaccuracies from system delays and eddy currents can cause spatial and temporal distortions in the encoding gradient waveforms, producing sampling discrepancies between the actual and the ideal k-space trajectory. Reconstruction assuming an ideal trajectory can result in shading and blurring artifacts in spiral images. Current methods to estimate such hardware errors require many modifications to the pulse sequence, phantom measurements, or specialized hardware. This work presents a new method to estimate time-varying system delays for spiral-based trajectories. It requires a minor modification of a conventional stack-of-spirals sequence and analyzes data collected on three orthogonal cylinders. The method is fast, robust to off-resonance effects, requires no phantom measurements or specialized hardware, and estimates variable system delays for the three gradient channels over the data-sampling period. Initial results are presented for acquired phantom and in-vivo data, which show a substantial reduction in artifacts and improvement in image quality.
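A minimal sketch of how an estimated gradient delay would be applied per channel: shift the nominal gradient waveform in time, then integrate to obtain the k-space locations actually sampled. The dwell time and delay value are placeholders, and the dissertation estimates time-varying delays whereas this sketch applies a constant one per axis.

```python
import numpy as np

GAMMA_BAR = 42.577e6  # Hz/T, gyromagnetic ratio of 1H divided by 2*pi

def delayed_kspace(grad, dt, delay):
    """k(t) = gamma_bar * integral of g(t - delay); grad in T/m, dt and delay in s."""
    t = np.arange(grad.size) * dt
    g_shifted = np.interp(t - delay, t, grad, left=0.0)  # time-shift the waveform
    return GAMMA_BAR * np.cumsum(g_shifted) * dt          # k in 1/m

dt = 4e-6                                                 # 4 us raster (placeholder)
gx = 0.01 * np.sin(np.linspace(0, 20 * np.pi, 2048))      # toy oscillating spiral axis
kx_nominal = delayed_kspace(gx, dt, 0.0)
kx_actual = delayed_kspace(gx, dt, 1.2e-6)                # 1.2 us delay on this channel
```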
Contributors: Bhavsar, Payal (Author) / Pipe, James G (Thesis advisor) / Frakes, David (Committee member) / Kodibagkar, Vikram (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Coronary computed tomography angiography (CTA) has a high negative predictive value for ruling out coronary artery disease with non-invasive evaluation of the coronary arteries. My work has attempted to provide metrics that could increase the positive predictive value of coronary CTA through the use of dual-energy CTA imaging. After developing an algorithm for obtaining calcium scores from a CTA exam, a dual-energy CTA exam was performed on patients at dose levels equivalent to those of a single-energy CTA with a calcium scoring exam. Calcium Agatston scores obtained from the dual-energy CTA exam were within ±11% of scores obtained with conventional calcium scoring exams. In the presence of highly attenuating coronary calcium plaques, the virtual non-calcium images obtained with dual-energy CTA successfully measured percent coronary stenosis to within 5% of known stenosis values, which is not possible with single-energy CTA images because of the calcium blooming artifact. After fabricating an anthropomorphic beating-heart phantom with coronary plaques, characterization of soft-plaque vulnerability to rupture or erosion was demonstrated with measurements of the distance from soft plaque to the aortic ostium, percent stenosis, and percent lipid volume in soft plaque. A classification model was developed, with training data from the beating-heart phantom and plaques, that uses support vector machines to classify coronary soft-plaque pixels as lipid or fibrous. Lipid-versus-fibrous classification with single-energy CTA images exhibited a 17% error, while dual-energy CTA images in the classification model developed here exhibited only a 4% error. Combining the calcium blooming correction and the percent-lipid-volume methods developed in this work will provide physicians with metrics for increasing the positive predictive value of coronary CTA, as well as expanding the use of coronary CTA to patients with highly attenuating calcium plaques.
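For reference, a minimal sketch of Agatston scoring on one axial slice, the quantity the ±11% comparison is based on: voxels at or above 130 HU are grouped into lesions, each lesion's area is weighted by its peak attenuation (weights 1 through 4), and the weighted areas are summed. Lesion handling is simplified relative to clinical scoring, which also sums over 3 mm slices and applies a minimum lesion area.

```python
import numpy as np
from scipy.ndimage import label

def agatston_slice(hu, pixel_area_mm2):
    """Agatston score for a single axial slice of Hounsfield-unit data."""
    lesions, n = label(hu >= 130)  # connected calcified regions
    score = 0.0
    for i in range(1, n + 1):
        region = lesions == i
        peak = hu[region].max()
        # Standard weights: 130-199 HU -> 1, 200-299 -> 2, 300-399 -> 3, >=400 -> 4.
        weight = 1 + min(int(peak // 100) - 1, 3)
        score += region.sum() * pixel_area_mm2 * weight
    return score
```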
Contributors: Boltz, Thomas (Author) / Frakes, David (Thesis advisor) / Towe, Bruce (Committee member) / Kodibagkar, Vikram (Committee member) / Pavlicek, William (Committee member) / Bouman, Charles (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Sensitivity is a fundamental challenge for in vivo molecular magnetic resonance imaging (MRI). Here, I improve the sensitivity of metal nanoparticle contrast agents by strategically incorporating pure and doped metal oxides in the nanoparticle core, forming a soluble, monodisperse contrast agent with adjustable T2 or T1 relaxivity (r2 or r1). I first developed a simplified technique to incorporate iron oxides in apoferritin to form "magnetoferritin" for nM-level detection with T2 and T2* weighting. I then explored whether the crystal could be chemically modified to form a particle with high r1. I first adsorbed Mn2+ ions to metal binding sites in the apoferritin pores. The strategic placement of metal ions near sites of water exchange and within the crystal oxide enhances r1, suggesting a mechanism for increasing relaxivity in porous nanoparticle agents. However, the Mn2+ addition was only possible when the particle was simultaneously filled with an iron oxide, resulting in a particle with a high r1 but also a high r2, making it undetectable with conventional T1-weighted techniques. To solve this problem and decrease the particle's r2 for more sensitive detection, I chemically doped the nanoparticles with tungsten to form a disordered W-Fe oxide composite in the apoferritin core. This configuration formed a particle with an r1 of 4,870 mM⁻¹s⁻¹ and an r2 of 9,076 mM⁻¹s⁻¹. These relaxivities allowed the detection of concentrations ranging from 20 nM to 400 nM in vivo, both passively injected and targeted to the kidney glomerulus. I further developed an MRI acquisition technique to distinguish particles based on r2/r1, and show that three nanoparticles of similar size can be distinguished in vitro and in vivo with MRI. This work forms the basis for a new, highly flexible inorganic approach to designing nanoparticle contrast agents for molecular MRI.
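To connect the reported relaxivities to detectability, here is a minimal sketch of the standard linear relaxivity relation: the relaxation rate seen in vivo is the tissue baseline plus relaxivity times agent concentration. The baseline T1 and T2 values are generic assumptions, not measurements from this work.

```python
# Assumed tissue baselines: T1 = 1 s, T2 = 80 ms (placeholders).
R1_TISSUE, R2_TISSUE = 1 / 1.0, 1 / 0.08  # rates in 1/s

def rates_with_agent(conc_mM, r1=4870.0, r2=9076.0):
    """R1 and R2 (1/s) at a given agent concentration.

    Relaxivities in mM^-1 s^-1, plugging in the per-particle values
    reported in the abstract."""
    return R1_TISSUE + r1 * conc_mM, R2_TISSUE + r2 * conc_mM

# At 20 nM (2e-5 mM), the particle adds ~0.10 1/s to R1 -- a ~10% change
# against the 1 1/s baseline, which is why nM-level detection is feasible.
print(rates_with_agent(2e-5))
```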
Contributors: Clavijo Jordan, Maria Veronica (Author) / Bennett, Kevin M (Thesis advisor) / Kodibagkar, Vikram (Committee member) / Sherry, A Dean (Committee member) / Wang, Xiao (Committee member) / Yarger, Jeffery (Committee member) / Arizona State University (Publisher)
Created: 2012