Matching Items (69)
Description

Spotlight mode synthetic aperture radar (SAR) imaging involves a tomographic reconstruction from projections, necessitating acquisition of large amounts of data in order to form a moderately sized image. Since typical SAR sensors are hosted on mobile platforms, it is common to have limitations on SAR data acquisition, storage and communication that can lead to data corruption and a resulting degradation of image quality. It is convenient to consider corrupted samples as missing, creating a sparsely sampled aperture. A sparse aperture would also result from compressive sensing, which is a very attractive concept for data-intensive sensors such as SAR. Recent developments in sparse decomposition algorithms can be applied to the problem of SAR image formation from a sparsely sampled aperture. Two modified sparse decomposition algorithms are developed, based on well-known existing algorithms and modified to be practical in application on modest computational resources. The two algorithms are demonstrated on real-world SAR images. Algorithm performance with respect to super-resolution, noise, coherent speckle and target/clutter decomposition is explored. These algorithms yield more accurate image reconstruction from sparsely sampled apertures than classical spectral estimators. At the current state of development, sparse image reconstruction using these two algorithms requires about two orders of magnitude greater processing time than classical SAR image formation.
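As a rough illustration of the kind of sparse decomposition applied here, the following Python sketch runs a generic iterative shrinkage-thresholding (ISTA) loop on simulated data, modeling the sparse aperture as a masked 2D Fourier transform of the scene; it is not one of the two modified algorithms developed in the dissertation, and all parameter values are illustrative.

```python
# Minimal sketch (not the dissertation's algorithms): l1-regularized image formation
# from a sparsely sampled phase-history (Fourier) aperture via ISTA.
import numpy as np

def ista_sar(y, mask, lam=0.01, n_iter=200, step=1.0):
    """Recover an image x from sparse aperture samples y = mask * FFT2(x)."""
    x = np.zeros(mask.shape, dtype=complex)
    for _ in range(n_iter):
        # Gradient of the data-fidelity term ||mask*FFT2(x) - y||^2 / 2
        residual = mask * np.fft.fft2(x, norm="ortho") - y
        grad = np.fft.ifft2(mask * residual, norm="ortho")
        x = x - step * grad
        # Complex soft-thresholding of magnitudes enforces sparsity
        mag = np.abs(x)
        phase = np.where(mag > 0, x / np.maximum(mag, 1e-12), 0)
        x = phase * np.maximum(mag - step * lam, 0)
    return x

# Usage: a point-target scene observed through a ~30% sampled aperture
rng = np.random.default_rng(0)
scene = np.zeros((64, 64)); scene[20, 30] = 1.0; scene[40, 12] = 0.7
mask = (rng.random((64, 64)) < 0.3).astype(float)
y = mask * np.fft.fft2(scene, norm="ortho")
recon = ista_sar(y, mask)
```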
Contributors: Werth, Nicholas (Author) / Karam, Lina (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Spanias, Andreas (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Image resolution limits the extent to which zooming enhances clarity, restricts the size at which digital photographs can be printed, and, in the context of medical images, can prevent a diagnosis. Interpolation is the supplementing of known data with estimated values based on a function or model involving some or all of the known samples. The selection of the contributing data points and the specifics of how they are used to define the interpolated values influence how effectively the interpolation algorithm is able to estimate the underlying, continuous signal. The main contributions of this dissertation are threefold: 1) reframing edge-directed interpolation of a single image as an intensity-based registration problem; 2) providing an analytical framework for intensity-based registration using control grid constraints; and 3) quantitative assessment of the new, single-image enlargement algorithm based on analytical intensity-based registration. In addition to single image resizing, the new methods and analytical approaches were extended to address a wide range of applications including volumetric (multi-slice) image interpolation, video deinterlacing, motion detection, and atmospheric distortion correction. Overall, the new approaches generate results that more accurately reflect the underlying signals than less computationally demanding approaches, and do so with lower processing requirements and fewer restrictions than methods of comparable accuracy.
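For context on the baseline that the edge-directed, registration-based approach improves upon, the sketch below shows plain bilinear enlargement, where each interpolated value is a distance-weighted combination of its four contributing known samples; it is an illustrative example only, not the dissertation's algorithm.

```python
# Illustrative baseline: bilinear interpolation as a weighted mix of known samples.
import numpy as np

def bilinear_upsample(img, factor):
    h, w = img.shape
    ys = np.linspace(0, h - 1, h * factor)
    xs = np.linspace(0, w - 1, w * factor)
    yy, xx = np.meshgrid(ys, xs, indexing="ij")
    y0, x0 = np.floor(yy).astype(int), np.floor(xx).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy, wx = yy - y0, xx - x0
    # Each output pixel mixes its four nearest known samples with distance-based weights
    return ((1 - wy) * (1 - wx) * img[y0, x0] + (1 - wy) * wx * img[y0, x1]
            + wy * (1 - wx) * img[y1, x0] + wy * wx * img[y1, x1])

small = np.arange(16, dtype=float).reshape(4, 4)
big = bilinear_upsample(small, 2)   # 8x8 estimate of the underlying continuous signal
```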
Contributors: Zwart, Christine M. (Author) / Frakes, David H (Thesis advisor) / Karam, Lina (Committee member) / Kodibagkar, Vikram (Committee member) / Spanias, Andreas (Committee member) / Towe, Bruce (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Cancer is the second leading cause of death in the United States, and novel methods of treating advanced malignancies are of high importance. Of these deaths, prostate cancer and breast cancer are the second most fatal carcinomas in men and women, respectively, while pancreatic cancer is the fourth most fatal in both men and women. Developing new drugs for the treatment of cancer is both a slow and expensive process. It is estimated that it takes an average of 15 years and an expense of $800 million to bring a single new drug to the market. However, it is also estimated that nearly 40% of that cost could be avoided by finding alternative uses for drugs that have already been approved by the Food and Drug Administration (FDA). The research presented in this document describes the testing, identification, and mechanistic evaluation of novel methods for treating many human carcinomas using drugs previously approved by the FDA. A tissue culture plate-based screening of FDA-approved drugs will identify compounds that can be used in combination with the protein TRAIL to induce apoptosis selectively in cancer cells. Identified leads will next be optimized using high-throughput microfluidic devices to determine the most effective treatment conditions. Finally, a rigorous mechanistic analysis will be conducted to understand how the FDA-approved drug mitoxantrone sensitizes cancer cells to TRAIL-mediated apoptosis.
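A hedged sketch of the sort of hit-selection arithmetic a plate-based combination screen implies is shown below; the Bliss-independence expectation, the synergy margin, and the data layout are illustrative assumptions rather than the dissertation's actual analysis.

```python
# Illustrative hit selection for a drug + TRAIL combination screen (assumed layout).
import numpy as np

def find_sensitizer_hits(viab_drug, viab_trail, viab_combo, synergy_margin=0.3):
    """viab_* are fractional viabilities (1.0 = untreated control) per screened drug."""
    expected = viab_drug * viab_trail            # Bliss-independence expectation
    excess_kill = expected - viab_combo          # extra killing beyond independence
    return np.where(excess_kill > synergy_margin)[0]

# Example: drug 2 strongly sensitizes cells to TRAIL-mediated apoptosis
viab_drug  = np.array([0.95, 0.90, 0.85, 0.92])
viab_trail = np.array([0.88, 0.88, 0.88, 0.88])
viab_combo = np.array([0.80, 0.78, 0.25, 0.75])
hits = find_sensitizer_hits(viab_drug, viab_trail, viab_combo)   # -> array([2])
```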
Contributors: Taylor, David (Author) / Rege, Kaushal (Thesis advisor) / Jayaraman, Arul (Committee member) / Nielsen, David (Committee member) / Kodibagkar, Vikram (Committee member) / Dai, Lenore (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

This dissertation presents a novel current source converter topology that is primarily intended for single-phase photovoltaic (PV) applications. In comparison with the existing PV inverter technology, the salient features of the proposed topology are: a) the low frequency (double of line frequency) ripple that is common to single-phase inverters is greatly reduced; b) the absence of low frequency ripple enables significantly smaller passive components to achieve the necessary DC-link stiffness; and c) improved maximum power point tracking (MPPT) performance is readily achieved due to the tightened current ripple, even with reduced-size passive components. The proposed topology does not utilize any electrolytic capacitors. Instead, an inductor is used as the DC-link filter, and reliable AC film capacitors are utilized for the filter and auxiliary capacitor. The proposed topology has a life expectancy on par with PV panels. The proposed modulation technique can be used for any current source inverter where unbalanced three-phase operation is desired, such as active filters and power controllers. The proposed topology is ready for the next phase of microgrid and power system controllers in that it accepts reactive power commands. This work presents the proposed topology and its working principle, supported by numerical verifications and hardware results. Conclusions and future work are also presented.
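For readers unfamiliar with MPPT, the sketch below shows a generic perturb-and-observe tracking loop in Python; it only illustrates the tracking objective mentioned above and is not the converter's proposed modulation or control scheme, and the toy PV model is invented for the example.

```python
# Generic perturb-and-observe MPPT sketch (not the proposed control scheme).
def perturb_and_observe(read_pv, set_current_ref, i_ref=2.0, step=0.05, n_steps=100):
    """Nudge the PV-side current reference so the operating point climbs toward the MPP."""
    p_prev, direction = 0.0, +1
    for _ in range(n_steps):
        set_current_ref(i_ref)
        v, i = read_pv()                 # measured PV voltage and current
        p = v * i
        if p < p_prev:                   # power dropped: reverse the perturbation direction
            direction = -direction
        i_ref += direction * step
        p_prev = p
    return i_ref

# Toy PV model for a usage example: voltage falls as current rises, so P = I*(40 - 4*I)
def make_toy_pv():
    state = {"i": 0.0}
    def set_ref(i): state["i"] = i
    def read(): return 40 - 4 * state["i"], state["i"]
    return read, set_ref

read_pv, set_ref = make_toy_pv()
i_mpp = perturb_and_observe(read_pv, set_ref)    # settles near 5 A, the toy curve's MPP
```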
Contributors: Bush, Craig R (Author) / Ayyanar, Raja (Thesis advisor) / Karam, Lina (Committee member) / Heydt, Gerald (Committee member) / Karady, George G. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Magnetic Resonance Imaging using spiral trajectories has many advantages in speed, efficiency in data acquisition, and robustness to motion and flow related artifacts. The increase in sampling speed, however, requires high performance of the gradient system. Hardware inaccuracies from system delays and eddy currents can cause spatial and temporal distortions in the encoding gradient waveforms. This causes sampling discrepancies between the actual and the ideal k-space trajectory. Reconstruction assuming an ideal trajectory can result in shading and blurring artifacts in spiral images. Current methods to estimate such hardware errors require many modifications to the pulse sequence, phantom measurements or specialized hardware. This work presents a new method to estimate time-varying system delays for spiral-based trajectories. It requires a minor modification of a conventional stack-of-spirals sequence and analyzes data collected on three orthogonal cylinders. The method is fast, robust to off-resonance effects, requires no phantom measurements or specialized hardware, and estimates variable system delays for the three gradient channels over the data-sampling period. Initial results are presented for acquired phantom and in-vivo data, which show a substantial reduction in artifacts and an improvement in image quality.
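To illustrate why per-channel timing delays matter for spiral reconstruction, the following sketch applies assumed gradient delays to a nominal spiral waveform and re-integrates it to obtain the k-space trajectory actually traversed; it is not the estimation method of this work, and all waveform parameters are illustrative.

```python
# Sketch: effect of per-axis gradient delays on the traversed spiral k-space trajectory.
import numpy as np

def delayed_kspace(gx, gy, dt, delay_x, delay_y):
    """Shift each gradient waveform by its (possibly non-integer) delay and integrate."""
    t = np.arange(len(gx)) * dt
    gx_d = np.interp(t - delay_x, t, gx, left=0.0)   # waveform arrives late by delay_x
    gy_d = np.interp(t - delay_y, t, gy, left=0.0)
    gamma = 42.58e6                                   # 1H gyromagnetic ratio, Hz/T
    kx = gamma * np.cumsum(gx_d) * dt
    ky = gamma * np.cumsum(gy_d) * dt
    return kx, ky

# Nominal spiral-like gradients with toy numbers, sampled at 4 us
dt = 4e-6
t = np.arange(2048) * dt
gx = 10e-3 * t / t[-1] * np.cos(2 * np.pi * 3e3 * t)   # T/m
gy = 10e-3 * t / t[-1] * np.sin(2 * np.pi * 3e3 * t)
kx, ky = delayed_kspace(gx, gy, dt, delay_x=1.2e-6, delay_y=0.8e-6)
```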
Contributors: Bhavsar, Payal (Author) / Pipe, James G (Thesis advisor) / Frakes, David (Committee member) / Kodibagkar, Vikram (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Coronary computed tomography angiography (CTA) has a high negative predictive value for ruling out coronary artery disease with non-invasive evaluation of the coronary arteries. My work has attempted to provide metrics that could increase the positive predictive value of coronary CTA through the use of dual energy CTA imaging. After developing an algorithm for obtaining calcium scores from a CTA exam, a dual energy CTA exam was performed on patients at dose levels equivalent to levels for single energy CTA with a calcium scoring exam. Calcium Agatston scores obtained from the dual energy CTA exam were within ±11% of scores obtained with conventional calcium scoring exams. In the presence of highly attenuating coronary calcium plaques, the virtual non-calcium images obtained with dual energy CTA were able to successfully measure percent coronary stenosis within 5% of known stenosis values, which is not possible with single energy CTA images due to the presence of the calcium blooming artifact. After fabricating an anthropomorphic beating heart phantom with coronary plaques, characterization of soft plaque vulnerability to rupture or erosion was demonstrated with measurements of the distance from soft plaque to aortic ostium, percent stenosis, and percent lipid volume in soft plaque. A classification model utilizing support vector machines was developed, with training data from the beating heart phantom and plaques, to classify coronary soft plaque pixels as lipid or fibrous. Lipid versus fibrous classification with single energy CTA images exhibited a 17% error, while dual energy CTA images in the classification model developed here exhibited only a 4% error. Combining the calcium blooming correction and the percent lipid volume methods developed in this work will provide physicians with metrics for increasing the positive predictive value of coronary CTA as well as expanding the use of coronary CTA to patients with highly attenuating calcium plaques.
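For context on the scores being compared, the sketch below computes a conventional per-slice Agatston score (130 HU threshold, density-weighted lesion areas); lesion handling is simplified and the inputs are illustrative, so it should be read as a sketch of the standard definition rather than the algorithm developed in this work.

```python
# Simplified per-slice Agatston calcium score (standard definition, illustrative inputs).
import numpy as np
from scipy import ndimage

def agatston_slice_score(hu_slice, pixel_area_mm2):
    mask = hu_slice >= 130                        # calcium threshold in Hounsfield units
    labels, n = ndimage.label(mask)               # connected calcified lesions
    score = 0.0
    for lesion in range(1, n + 1):
        region = labels == lesion
        peak_hu = hu_slice[region].max()
        # Density weight: 130-199 HU -> 1, 200-299 -> 2, 300-399 -> 3, >=400 -> 4
        weight = 1 + min(int(peak_hu // 100) - 1, 3)
        score += region.sum() * pixel_area_mm2 * weight
    return score
```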
Contributors: Boltz, Thomas (Author) / Frakes, David (Thesis advisor) / Towe, Bruce (Committee member) / Kodibagkar, Vikram (Committee member) / Pavlicek, William (Committee member) / Bouman, Charles (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Sensitivity is a fundamental challenge for in vivo molecular magnetic resonance imaging (MRI). Here, I improve the sensitivity of metal nanoparticle contrast agents by strategically incorporating pure and doped metal oxides in the nanoparticle core, forming a soluble, monodisperse contrast agent with adjustable T2 or T1 relaxivity (r2 or r1). I first developed a simplified technique to incorporate iron oxides in apoferritin to form "magnetoferritin" for nM-level detection with T2- and T2*-weighting. I then explored whether the crystal could be chemically modified to form a particle with high r1. I first adsorbed Mn2+ ions to metal binding sites in the apoferritin pores. The strategic placement of metal ions near sites of water exchange and within the crystal oxide enhances r1, suggesting a mechanism for increasing relaxivity in porous nanoparticle agents. However, the Mn2+ addition was only possible when the particle was simultaneously filled with an iron oxide, resulting in a particle with a high r1 but also a high r2, making it undetectable with conventional T1-weighting techniques. To solve this problem and decrease the particle r2 for more sensitive detection, I chemically doped the nanoparticles with tungsten to form a disordered W-Fe oxide composite in the apoferritin core. This configuration formed a particle with an r1 of 4,870 mM^-1 s^-1 and an r2 of 9,076 mM^-1 s^-1. These relaxivities allowed the detection of concentrations ranging from 20 nM to 400 nM in vivo, both passively injected and targeted to the kidney glomerulus. I further developed an MRI acquisition technique to distinguish particles based on r2/r1, and show that three nanoparticles of similar size can be distinguished in vitro and in vivo with MRI. This work forms the basis for a new, highly flexible inorganic approach to designing nanoparticle contrast agents for molecular MRI.
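A back-of-the-envelope sketch of how these relaxivities translate into relaxation-rate changes at nanomolar concentrations is given below; only the r1 and r2 values come from the abstract, while the baseline tissue rates are assumed for illustration.

```python
# Relaxation rates versus particle concentration: R = R_0 + relaxivity * concentration.
import numpy as np

r1, r2 = 4870.0, 9076.0          # particle relaxivities from the abstract, mM^-1 s^-1
R1_0, R2_0 = 1.0, 12.0           # assumed baseline tissue rates, s^-1 (illustrative)

conc_nM = np.array([20, 100, 400])
conc_mM = conc_nM * 1e-6         # nM -> mM
R1 = R1_0 + r1 * conc_mM         # longitudinal rate with particles present
R2 = R2_0 + r2 * conc_mM
# e.g. 400 nM raises R1 from 1.0 to ~2.9 s^-1, a readily measurable T1 change
```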
Contributors: Clavijo Jordan, Maria Veronica (Author) / Bennett, Kevin M (Thesis advisor) / Kodibagkar, Vikram (Committee member) / Sherry, A Dean (Committee member) / Wang, Xiao (Committee member) / Yarger, Jeffery (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

Effective modeling of high dimensional data is crucial in information processing and machine learning. Classical subspace methods have been very effective in such applications. However, over the past few decades, there has been considerable research towards the development of new modeling paradigms that go beyond subspace methods. This dissertation focuses on the study of sparse models and their interplay with modern machine learning techniques such as manifold, ensemble and graph-based methods, along with their applications in image analysis and recovery. By considering graph relations between data samples while learning sparse models, graph-embedded codes can be obtained for use in unsupervised, supervised and semi-supervised problems. Using experiments on standard datasets, it is demonstrated that the codes obtained from the proposed methods outperform several baseline algorithms. In order to facilitate sparse learning with large scale data, the paradigm of ensemble sparse coding is proposed, and different strategies for constructing weak base models are developed. Experiments with image recovery and clustering demonstrate that these ensemble models perform better when compared to conventional sparse coding frameworks. When examples from the data manifold are available, manifold constraints can be incorporated with sparse models, and two approaches are proposed to combine sparse coding with manifold projection. The improved performance of the proposed techniques in comparison to sparse coding approaches is demonstrated using several image recovery experiments. In addition to these approaches, it might be required in some applications to combine multiple sparse models with different regularizations. In particular, combining an unconstrained sparse model with non-negative sparse coding is important in image analysis, and it poses several algorithmic and theoretical challenges. A convex algorithm and an efficient greedy algorithm for recovering combined representations are proposed. Theoretical guarantees on sparsity thresholds for exact recovery using these algorithms are derived, and recovery performance is also demonstrated using simulations on synthetic data. Finally, the problem of non-linear compressive sensing, where the measurement process is carried out in a feature space obtained using non-linear transformations, is considered. An optimized non-linear measurement system is proposed, and improvements in recovery performance are demonstrated in comparison to using random measurements as well as optimized linear measurements.
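As a concrete instance of greedy sparse recovery, the sketch below implements generic orthogonal matching pursuit (OMP) over a random dictionary; it is not the combined-representation algorithm proposed here, and its recovery guarantees are not those derived in the dissertation.

```python
# Generic orthogonal matching pursuit (OMP) over a unit-norm random dictionary.
import numpy as np

def omp(D, y, k):
    """Greedily select k dictionary atoms from D (unit-norm columns) to represent y."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the atom most correlated with the current residual
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        # Re-fit coefficients on the selected support by least squares
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x = np.zeros(D.shape[1]); x[support] = coeffs
    return x

rng = np.random.default_rng(1)
D = rng.standard_normal((64, 256)); D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(256); x_true[[10, 97, 200]] = [1.5, -2.0, 0.8]
x_hat = omp(D, D @ x_true, k=3)   # recovers the 3-sparse code with high probability
```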
Contributors: Natesan Ramamurthy, Karthikeyan (Author) / Spanias, Andreas (Thesis advisor) / Tsakalis, Konstantinos (Committee member) / Karam, Lina (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Present day Internet Protocol (IP) based video transport and dissemination systems are heterogeneous in that they differ in network bandwidth, display resolutions, and processing capabilities. One important objective in such an environment is the flexible adaptation of once-encoded content, and one popular method for achieving this is the scalable video coding (SVC) technique. The SVC extension of the H.264/AVC standard has higher compression efficiency when compared to previous scalable video standards. The network transport of 3D video, which is obtained by superimposing two views of a video scene, poses significant challenges due to the increased video data compared to conventional single-view video. Addressing these challenges requires a thorough understanding of the traffic and multiplexing characteristics of the different representation formats of 3D video. In this study, H.264 quality scalability and multiview representation formats are examined. As H.264/AVC and its SVC and multiview extensions are expected to become widely adopted for the network transport of video, it is important to thoroughly study their network traffic characteristics, including bit rate variability. The focus is primarily on the SVC amendment of the H.264/AVC standard, with particular attention to Coarse-Grain Scalability (CGS) and Medium-Grain Scalability (MGS). We report on a large-scale study of the rate-distortion (RD) and rate variability-distortion (VD) characteristics of CGS and MGS. We also examine the RD and VD characteristics of three main multiview (3D) representation formats. Specifically, we compare multiview video (MV) representation and encoding, frame sequential (FS) representation, and side-by-side (SBS) representation, whereby conventional single-view encoding is employed for the FS and SBS representations. As a last step, we also examine video traffic modeling, which plays a major part in network traffic analysis. It is imperative to network design and simulation and to providing quality of service (QoS) to network applications, besides providing insights into the coding process and structure of video sequences. We propose our models on top of the recent unified traffic model developed by Dai et al. [1] for modeling MPEG-4 and H.264 VBR video traffic, and exploit the hierarchical prediction structure inherent in H.264 for intra-GoP (group of pictures) analysis.
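A small sketch of the rate variability statistic underlying a VD curve, the coefficient of variation of encoded frame sizes at a fixed quality level, is shown below; the frame sizes are invented for illustration and the study's traces and intra-GoP model are not reproduced.

```python
# Average bit rate and rate variability (coefficient of variation) from a frame-size trace.
import numpy as np

def rate_stats(frame_sizes_bytes, fps=30.0):
    sizes = np.asarray(frame_sizes_bytes, dtype=float)
    mean_bitrate = sizes.mean() * 8 * fps             # average bit rate in bit/s
    cov = sizes.std() / sizes.mean()                  # coefficient of variation of frame sizes
    return mean_bitrate, cov

# Illustrative GoP where I frames are much larger than B frames, repeated into a trace
gop = [42000, 6000, 6500, 15000, 6200, 6400, 16000, 6100]
bitrate, cov = rate_stats(gop * 50)
```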
Contributors: Pulipaka, Venkata Sai Akshay (Author) / Reisslein, Martin (Thesis advisor) / Karam, Lina (Thesis advisor) / Li, Baoxin (Committee member) / Seeling, Patrick (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

Ionizing radiation, such as gamma rays and X-rays, is becoming more widely used. These high-energy forms of electromagnetic radiation are present in nuclear energy, astrophysics, and the medical field. As more and more people have the opportunity to be exposed to ionizing radiation, the need for simple and quick methods of radiation detection is increasing. In this work, two systems were explored for their ability to simply detect ionizing radiation. Gold nanoparticles were formed via radiolysis of water in the presence of elastin-like polypeptides (ELPs) and also in the presence of cationic polymers. Gold nanoparticle formation is an indicator of the presence of radiation. The system with ELP was split into two subsystems: those samples including isopropyl alcohol (IPA) and acetone, and those without IPA and acetone. The samples were exposed to certain radiation doses and gold nanoparticles were formed. Gold nanoparticle formation was deemed to have occurred when the sample changed color from light yellow to a red or purple color. Nanoparticle formation was also checked by absorbance measurements. In the cationic polymer system, gold nanoparticles were also formed after exposing the experimental system to certain radiation doses. Unique to the polymer system was the ability of some of the cationic polymers to form gold nanoparticles without the samples being irradiated. Future work on this project includes further characterization of the gold nanoparticles formed by both systems.
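A hedged sketch of the absorbance check described above is given below: formation is flagged when a surface-plasmon peak appears near 520-540 nm above the long-wavelength baseline; the window and threshold are illustrative assumptions, not the values used in this work.

```python
# Flag gold nanoparticle formation from an absorbance spectrum (illustrative thresholds).
import numpy as np

def nanoparticles_formed(wavelength_nm, absorbance, window=(500, 560), min_peak=0.1):
    wl = np.asarray(wavelength_nm); ab = np.asarray(absorbance)
    in_window = (wl >= window[0]) & (wl <= window[1])
    baseline = ab[wl > 650].mean() if np.any(wl > 650) else 0.0   # long-wavelength baseline
    return (ab[in_window].max() - baseline) > min_peak

wl = np.arange(400, 801, 5)
spectrum = 0.05 + 0.4 * np.exp(-((wl - 528) ** 2) / (2 * 25 ** 2))   # simulated AuNP peak
print(nanoparticles_formed(wl, spectrum))   # True
```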
Contributors: Walker, Candace (Author) / Rege, Kaushal (Thesis advisor) / Chang, John (Committee member) / Kodibagkar, Vikram (Committee member) / Potta, Thrimoorthy (Committee member) / Arizona State University (Publisher)
Created: 2012