Matching Items (111)
Description
Treatment of cerebral aneurysms using non-invasive methods has existed for decades. Since the advent of modern endovascular techniques, advancements to embolic materials have largely focused on improving platinum coil technology. However, the recent development of Onyx®, a liquid-delivery precipitating polymer system, has opened the door for a new class of embolic materials: liquid-fill systems. These liquid-fill materials have the potential to provide better treatment outcomes than platinum coils. Initial clinical use of Onyx has proven promising, but not without substantial drawbacks, such as co-delivery of angiotoxic compounds and an extremely technical delivery procedure. This work focuses on the formulation, characterization, and testing of a novel liquid-to-solid gelling polymer system based on poly(propylene glycol) diacrylate (PPODA) and pentaerythritol tetrakis(3-mercaptopropionate) (QT). The PPODA-QT system bypasses difficulties associated with Onyx embolization yet still maintains non-invasive liquid delivery, exhibiting the properties of an ideal embolic material for cerebral aneurysm embolization. To allow for material visibility during clinical delivery, an embolic material must be radio-opaque. The PPODA-QT system was formulated with commercially available contrast agents and the gelling kinetics were studied, as a complete understanding of the gelling process is vital for clinical use. These PPODA-QT formulations underwent in vitro characterization of material properties including cytotoxicity, swelling, and degradation behaviors. Formulation and characterization tests led to an optimized PPODA-QT formulation that was used in subsequent in vivo testing. PPODA-QT formulated with the liquid contrast agent Conray™ was used in the first in vivo studies. These studies employed a swine aneurysm model to assess initial biocompatibility and test different delivery strategies for PPODA-QT. Results showed good biocompatibility and a suitable delivery strategy, providing justification for further in vivo testing. PPODA-QT was then used in a small-scale pilot study to gauge the long-term effectiveness of the material in a clinically relevant aneurysm model. Results from the pilot study showed that PPODA-QT has the capability to provide successful, long-term treatment of model aneurysms as well as facilitate aneurysm healing.
Contributors: Riley, Celeste (Author) / Vernon, Brent L (Thesis advisor) / Preul, Mark C (Committee member) / Frakes, David (Committee member) / Pauken, Christine (Committee member) / Massia, Stephen (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Rabies disease remains enzootic among raccoons, skunks, foxes, and bats in the United States. It is of primary concern for public-health agencies to control the spatial spread of rabies in wildlife and its potential spillover infection of domestic animals and humans. Rabies is invariably fatal in wildlife if untreated, with a non-negligible incubation period. Understanding how this latency affects the spatial spread of rabies in wildlife is the concern of chapters 2 and 3. Chapter 1 covers the background of mathematical models for rabies and lists the main objectives. In chapter 2, a reaction-diffusion susceptible-exposed-infected (SEI) model and a delayed diffusive susceptible-infected (SI) model are constructed to describe the same epidemic process: rabies spread in foxes. For the delayed diffusive model, a non-local infection term with delay results from modeling dispersal during the incubation stage. The minimum traveling wave speeds of the two models are compared and verified using numerical experiments. In chapter 3, starting with two Kermack-McKendrick models in which the infectivity, death rate, and diffusion rate of infected individuals can depend on the age of infection, the asymptotic speed of spread $c^\ast$ for the cumulated force of infection is analyzed. For the special case of a fixed incubation period, the asymptotic speed of spread is governed by the same integral equation for both models. Although explicit solutions for $c^\ast$ are difficult to obtain, assuming that the diffusion coefficient of incubating animals is small, $c^\ast$ can be estimated in terms of model parameter values. Chapter 4 considers the implementation of a realistic landscape in simulations of rabies spread in skunks and bats in northeast Texas. The Finite Element Method (FEM) is adopted because the irregular shapes of the realistic landscape naturally lead to unstructured grids in the spatial domain. This implementation leads to a more accurate description of skunk rabies case distributions.
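The minimum traveling wave speed analysis mentioned above can be illustrated with the standard KPP-type linearization at the leading edge of an invasion front. The sketch below uses hypothetical parameter values for illustration, not the dissertation's fox-rabies parameters, and the closed form applies only to the simplest linearly determined fronts:

```python
import math

def min_wave_speed(diffusion, growth):
    """Minimal traveling-wave speed for a KPP-type invasion front.

    Linearizing at the leading edge with u ~ exp(-lam*(x - c*t)) gives
    c(lam) = diffusion*lam + growth/lam for lam > 0; minimizing over lam
    yields c* = 2*sqrt(diffusion*growth) at lam = sqrt(growth/diffusion).
    """
    return 2.0 * math.sqrt(diffusion * growth)

# Hypothetical parameters (diffusion in km^2/yr, growth in 1/yr).
c_star = min_wave_speed(diffusion=60.0, growth=0.5)
```

Delayed or age-structured models such as those in chapters 2 and 3 generally replace the algebraic formula with an integral (characteristic) equation that must be solved numerically.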
Contributors: Liu, Hao (Author) / Kuang, Yang (Thesis advisor) / Jackiewicz, Zdzislaw (Committee member) / Lanchier, Nicolas (Committee member) / Smith, Hal (Committee member) / Thieme, Horst (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
In this work, we present approximate adders and multipliers to reduce the data-path complexity of specialized hardware for various image processing systems. These approximate circuits have lower area, latency, and power consumption than their accurate counterparts and produce fairly accurate results. We build upon the work on approximate adders and multipliers presented in [23] and [24]. First, we show how the choice of algorithm and parallel adder design can be used to implement the 2D Discrete Cosine Transform (DCT) algorithm with good performance but low area. Our implementation of the 2D DCT has comparable PSNR performance with respect to the algorithm presented in [23], with a ~35-50% reduction in area. Next, we use the approximate 2x2 multiplier presented in [24] to implement parallel approximate multipliers. We demonstrate that if some of the 2x2 multipliers in the design of the parallel multiplier are accurate, the accuracy of the multiplier improves significantly, especially when two large numbers are multiplied. We choose the Gaussian FIR filter and Fast Fourier Transform (FFT) algorithms to illustrate the efficacy of our proposed approximate multiplier. We show that application of the proposed approximate multiplier improves the PSNR performance of a 32x32 FFT implementation by 4.7 dB compared to the implementation using the approximate multiplier described in [24]. We also implement a state-of-the-art image enlargement algorithm, namely Segment Adaptive Gradient Angle (SAGA) [29], in hardware. The algorithm is mapped to pipelined hardware blocks, and we synthesized the design using 90 nm technology. We show that a 64x64 image can be processed in 496.48 µs when clocked at 100 MHz. The average PSNR performance of our implementation using accurate parallel adders and multipliers is 31.33 dB, and that using approximate parallel adders and multipliers is 30.86 dB, when evaluated against the original image. The PSNR performance of both designs is comparable to that of the double-precision floating-point MATLAB implementation of the algorithm.
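The PSNR figures quoted throughout this abstract follow the standard definition; a minimal reference implementation, assuming 8-bit images with peak value 255, might look like:

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio (dB) between two equal-sized images."""
    mse = np.mean((np.asarray(reference, dtype=np.float64)
                   - np.asarray(test, dtype=np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

A 4.7 dB PSNR improvement, as reported for the FFT implementation, corresponds to roughly a 3x reduction in mean squared error.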
Contributors: Vasudevan, Madhu (Author) / Chakrabarti, Chaitali (Thesis advisor) / Frakes, David (Committee member) / Gupta, Sandeep (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Advancements in mobile technologies have significantly enhanced the capabilities of mobile devices to serve as powerful platforms for sensing, processing, and visualization. Surges in sensing technology and the abundance of data have enabled the use of these portable devices for real-time data analysis and decision-making in digital signal processing (DSP) applications. Most of the current efforts in DSP education focus on building tools to facilitate understanding of the mathematical principles. However, there is a disconnect between real-world data processing problems and the material presented in a DSP course. Sophisticated mobile interfaces and apps can potentially play a crucial role in providing students with hands-on experience with modern DSP applications. In this work, a new paradigm of DSP learning is explored by building an interactive, easy-to-use health monitoring application for use in DSP courses. This is motivated by the increasing commercial interest in employing mobile phones for real-time health monitoring tasks. The idea is to exploit the computational abilities of the Android platform to build m-health modules with sensor interfaces. In particular, appropriate sensing modalities have been identified, and a suite of software functionalities has been developed. Within the existing framework of the AJDSP app, a graphical programming environment, interfaces to on-board and external sensor hardware have also been developed to acquire and process physiological data. The sensor signals that can be monitored include the electrocardiogram (ECG), photoplethysmogram (PPG), accelerometer signal, and galvanic skin response (GSR). The proposed m-health modules can be used to estimate parameters such as heart rate, oxygen saturation, step count, and heart rate variability. A set of laboratory exercises has been designed to demonstrate the use of these modules in DSP courses. The app was evaluated through several workshops involving graduate and undergraduate students in signal processing majors at Arizona State University. The usefulness of the software modules in enhancing student understanding of signals, sensors, and DSP systems was analyzed. Student opinions about the app and the proposed m-health modules evidenced the merits of integrating tools for mobile sensing and processing in a DSP curriculum and of familiarizing students with challenges in modern data-driven applications.
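As one illustration of the kind of processing such an m-health module performs, heart rate can be estimated from the spacing of peaks in a PPG- or ECG-like waveform. This is a simplified sketch under assumed conditions (clean signal, simple amplitude threshold); it is not the app's actual algorithm, and real R-peak detectors such as Pan-Tompkins are considerably more robust:

```python
import numpy as np

def heart_rate_bpm(signal, fs):
    """Estimate heart rate (beats/min) from peak-to-peak intervals.

    Detects local maxima above a crude amplitude threshold, then
    converts the mean inter-peak interval (samples) to beats per minute.
    """
    x = np.asarray(signal, dtype=np.float64)
    threshold = x.mean() + 0.5 * x.std()
    # Strict local maxima above the threshold (interior samples only).
    peaks = np.where((x[1:-1] > x[:-2]) &
                     (x[1:-1] > x[2:]) &
                     (x[1:-1] > threshold))[0] + 1
    if len(peaks) < 2:
        return 0.0
    mean_interval_s = np.mean(np.diff(peaks)) / fs
    return 60.0 / mean_interval_s
```

On a clean 1.2 Hz periodic waveform this returns approximately 72 bpm.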
Contributors: Rajan, Deepta (Author) / Spanias, Andreas (Thesis advisor) / Frakes, David (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Ultrasound imaging is one of the major medical imaging modalities. It is inexpensive, non-invasive, and has low power consumption. Doppler processing is an important part of many ultrasound imaging systems; it provides blood velocity information and is built on top of B-mode systems. We investigate the performance of two velocity estimation schemes used in Doppler processing systems, namely directional velocity estimation (DVE) and conventional velocity estimation (CVE). We find that DVE provides better estimation performance and is the only functioning method when the beam-to-flow angle is large. Unfortunately, DVE is computationally expensive and also requires division and square-root operations that are hard to implement. We propose two approximation techniques to replace these computations. Simulation results on cyst images show that the proposed approximations do not affect estimation performance. We also study backend processing, which includes envelope detection, log compression, and scan conversion. Three different envelope detection methods are compared. Among them, the FIR-based Hilbert transform is considered the best choice when phase information is not needed, while quadrature demodulation is a better choice if phase information is necessary. Bilinear and Gaussian interpolation are considered for scan conversion. Through simulations of a cyst image, we show that bilinear interpolation provides contrast-to-noise ratio (CNR) performance comparable to Gaussian interpolation with lower computational complexity. Thus, bilinear interpolation is chosen for our system.
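The envelope detection and log compression steps described above can be sketched as follows. This sketch uses an FFT-based analytic signal rather than the FIR Hilbert-transform filter the thesis evaluates, and the 60 dB dynamic range is an assumed display setting, not a value from the work:

```python
import numpy as np

def analytic_envelope(rf_line):
    """Envelope of one RF scan line via the analytic signal."""
    n = len(rf_line)
    spectrum = np.fft.fft(rf_line)
    h = np.zeros(n)  # weights that zero negative frequencies
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(spectrum * h))

def log_compress(envelope, dynamic_range_db=60.0):
    """Normalize and log-compress an envelope into display range [0, 1]."""
    env = envelope / envelope.max()
    db = 20.0 * np.log10(np.maximum(env, 1e-12))
    return np.clip((db + dynamic_range_db) / dynamic_range_db, 0.0, 1.0)
```

Scan conversion (e.g. the bilinear interpolation chosen in the thesis) would then resample the compressed data onto a Cartesian display grid.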
Contributors: Wei, Siyuan (Author) / Chakrabarti, Chaitali (Thesis advisor) / Frakes, David (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Controlled release formulations for local, in vivo drug delivery are of growing interest to device manufacturers, research scientists, and clinicians; however, most research characterizing controlled release formulations occurs in vitro because the spatial and temporal distribution of drug delivery is difficult to measure in vivo. In this work, in vivo magnetic resonance imaging (MRI) of local drug delivery is performed to visualize and quantify the time-resolved distribution of MRI contrast agents. I find it is possible to visualize contrast agent distributions in near real time from local delivery vehicles using MRI. Three-dimensional T1 maps are processed to produce in vivo concentration maps of contrast agent for individual animal models. The method for obtaining concentration maps is analyzed to estimate errors introduced at various steps in the process. The method is used to evaluate different controlled release vehicles, vehicle placement, and type of surgical wound in rabbits as a model for antimicrobial delivery to orthopaedic infection sites. I am able to see differences between all these factors; however, all images show that the contrast agent remains fairly local to the wound site and does not distribute to tissues far from the implant in therapeutic concentrations. I also produce a mathematical model that investigates important mechanisms in the transport of antimicrobials in a wound environment. It is determined from both the images and the mathematical model that antimicrobial distribution in an orthopaedic wound depends on both diffusive and convective mechanisms. Furthermore, I began development of MRI-visible therapeutic agents to examine active drug distributions. I hypothesize that this work can be developed into a non-invasive, patient-specific, clinical tool to evaluate the success of interventional procedures using local drug delivery vehicles.
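Converting measured T1 maps to contrast-agent concentration maps, as described above, typically relies on the fast-exchange relaxivity relation 1/T1 = 1/T1_0 + r1·C. The sketch below assumes a generic gadolinium-chelate relaxivity; the study's actual agent, field strength, and r1 value may differ:

```python
import numpy as np

def contrast_concentration(T1_map, T1_baseline, r1=4.5):
    """Contrast-agent concentration (mM) from pre/post T1 values (s).

    Fast-exchange relaxivity model:
        1/T1 = 1/T1_0 + r1 * C   =>   C = (1/T1 - 1/T1_0) / r1
    r1 is the agent's longitudinal relaxivity in 1/(mM*s); 4.5 is a
    typical gadolinium-chelate figure at 1.5 T, used here for illustration.
    """
    T1 = np.asarray(T1_map, dtype=np.float64)
    T1_0 = np.asarray(T1_baseline, dtype=np.float64)
    return (1.0 / T1 - 1.0 / T1_0) / r1
```

Applied voxel-wise to registered pre- and post-delivery T1 maps, this yields the in vivo concentration maps the abstract refers to.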
Contributors: Giers, Morgan (Author) / Caplan, Michael R (Thesis advisor) / Massia, Stephen P (Committee member) / Frakes, David (Committee member) / McLaren, Alex C. (Committee member) / Vernon, Brent L (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Magnetic resonance imaging using spiral trajectories has many advantages: speed, efficiency in data acquisition, and robustness to motion- and flow-related artifacts. The increase in sampling speed, however, requires high performance from the gradient system. Hardware inaccuracies from system delays and eddy currents can cause spatial and temporal distortions in the encoding gradient waveforms. This causes sampling discrepancies between the actual and the ideal k-space trajectory. Reconstruction assuming an ideal trajectory can result in shading and blurring artifacts in spiral images. Current methods to estimate such hardware errors require many modifications to the pulse sequence, phantom measurements, or specialized hardware. This work presents a new method to estimate time-varying system delays for spiral-based trajectories. It requires a minor modification of a conventional stack-of-spirals sequence and analyzes data collected on three orthogonal cylinders. The method is fast, robust to off-resonance effects, requires no phantom measurements or specialized hardware, and estimates variable system delays for the three gradient channels over the data-sampling period. Initial results are presented for acquired phantom and in vivo data, which show a substantial reduction in artifacts and improvement in image quality.
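Once a per-axis system delay has been estimated, the simplest way to use it is to shift the sampled trajectory for that gradient channel in time. The sketch below is an illustrative assumption (a linear-interpolation time shift of one trajectory axis), not the estimation or correction method of the thesis:

```python
import numpy as np

def apply_gradient_delay(trajectory_axis, dt, delay):
    """Shift one k-space trajectory axis by a (possibly fractional) delay.

    trajectory_axis: sampled k-space positions for one gradient channel
    dt:              sample spacing in seconds
    delay:           estimated system delay in seconds (positive = late)
    Uses linear interpolation; endpoints are held at their edge values.
    """
    traj = np.asarray(trajectory_axis, dtype=np.float64)
    t = np.arange(len(traj)) * dt
    return np.interp(t - delay, t, traj)
```

Gridding reconstruction would then use the corrected (kx, ky, kz) samples in place of the nominal trajectory.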
Contributors: Bhavsar, Payal (Author) / Pipe, James G (Thesis advisor) / Frakes, David (Committee member) / Kodibagkar, Vikram (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Coronary computed tomography angiography (CTA) has a high negative predictive value for ruling out coronary artery disease through non-invasive evaluation of the coronary arteries. My work has attempted to provide metrics that could increase the positive predictive value of coronary CTA through the use of dual energy CTA imaging. After developing an algorithm for obtaining calcium scores from a CTA exam, a dual energy CTA exam was performed on patients at dose levels equivalent to those of a single energy CTA with a calcium scoring exam. Agatston calcium scores obtained from the dual energy CTA exam were within ±11% of scores obtained with conventional calcium scoring exams. In the presence of highly attenuating coronary calcium plaques, the virtual non-calcium images obtained with dual energy CTA successfully measured percent coronary stenosis within 5% of known stenosis values, which is not possible with single energy CTA images due to the calcium blooming artifact. After fabricating an anthropomorphic beating heart phantom with coronary plaques, characterization of soft plaque vulnerability to rupture or erosion was demonstrated with measurements of the distance from soft plaque to the aortic ostium, percent stenosis, and percent lipid volume in soft plaque. A classification model utilizing support vector machines was developed, with training data from the beating heart phantom and plaques, to classify coronary soft plaque pixels as lipid or fibrous. Lipid-versus-fibrous classification with single energy CTA images exhibited a 17% error, while dual energy CTA images in the classification model developed here exhibited only a 4% error. Combining the calcium blooming correction and the percent lipid volume methods developed in this work will provide physicians with metrics for increasing the positive predictive value of coronary CTA, as well as expanding the use of coronary CTA to patients with highly attenuating calcium plaques.
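The Agatston scoring convention referenced above weights calcified area by peak attenuation. A simplified per-slice sketch follows; real scoring operates on connected lesions above a minimum area (which this omits), and the abstract's CTA-based algorithm is more involved:

```python
import numpy as np

def agatston_slice_score(hu_slice, pixel_area_mm2, threshold=130):
    """Simplified Agatston score for one axial CT slice.

    Pixels >= 130 HU count as calcium; the calcified area (mm^2) is
    weighted by peak attenuation: 1 for 130-199 HU, 2 for 200-299,
    3 for 300-399, 4 for >= 400. Clinically the weighting is applied
    per connected lesion, not per slice as in this sketch.
    """
    hu = np.asarray(hu_slice)
    calcium = hu >= threshold
    if not calcium.any():
        return 0.0
    peak = hu[calcium].max()
    weight = min(4, 1 + int((peak - 100) // 100))
    area_mm2 = calcium.sum() * pixel_area_mm2
    return area_mm2 * weight
```

The total Agatston score is the sum of such lesion scores over all slices of the calcium scan.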
Contributors: Boltz, Thomas (Author) / Frakes, David (Thesis advisor) / Towe, Bruce (Committee member) / Kodibagkar, Vikram (Committee member) / Pavlicek, William (Committee member) / Bouman, Charles (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Image understanding has been playing an increasingly crucial role in vision applications. Sparse models form an important component in image understanding, since the statistics of natural images reveal the presence of sparse structure. Sparse methods lead to parsimonious models, in addition to being efficient for large-scale learning. In sparse modeling, data is represented as a sparse linear combination of atoms from a "dictionary" matrix. This dissertation focuses on understanding different aspects of sparse learning, thereby enhancing the use of sparse methods by incorporating tools from machine learning. With the growing need to adapt models for large-scale data, it is important to design dictionaries that can model the entire data space and not just the samples considered. By exploiting the relation of dictionary learning to 1-D subspace clustering, a multilevel dictionary learning algorithm is developed and shown to outperform conventional sparse models in compressed recovery and image denoising. Theoretical aspects of learning, such as algorithmic stability and generalization, are considered, and ensemble learning is incorporated for effective large-scale learning. In addition to building strategies for efficiently implementing 1-D subspace clustering, a discriminative clustering approach is designed to estimate the unknown mixing process in blind source separation. By exploiting the non-linear relation between image descriptors and allowing the use of multiple features, sparse methods can be made more effective in recognition problems. The idea of multiple kernel sparse representations is developed, and algorithms for learning dictionaries in the feature space are presented. Using object recognition experiments on standard datasets, it is shown that the proposed approaches outperform other sparse coding-based recognition frameworks. Furthermore, a segmentation technique based on multiple kernel sparse representations is developed and successfully applied to automated brain tumor identification. Using sparse codes to define the relation between data samples can lead to a more robust graph embedding for unsupervised clustering. By performing discriminative embedding using sparse coding-based graphs, an algorithm for measuring the glomerular number in kidney MRI is developed. Finally, approaches to build dictionaries for local sparse coding of image descriptors are presented and applied to object recognition and image retrieval.
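Computing a sparse code over a dictionary, the core operation described above, is often done greedily. Below is a minimal orthogonal matching pursuit sketch, one of several standard solvers; the dissertation's multilevel and multiple-kernel variants build on, but go well beyond, this basic step:

```python
import numpy as np

def omp(D, y, sparsity):
    """Orthogonal matching pursuit: greedy sparse code of y over D.

    D: (n, K) dictionary with (ideally unit-norm) columns (atoms).
    Returns a K-vector x with at most `sparsity` nonzeros approximately
    minimizing ||y - D @ x||_2.
    """
    y = np.asarray(y, dtype=np.float64)
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(sparsity):
        # Atom most correlated with the current residual.
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Least-squares fit on the selected atoms, then update residual.
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
        x[:] = 0.0
        x[support] = coeffs
    return x
```

With an orthonormal dictionary, OMP recovers an exactly k-sparse signal in k iterations.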
Contributors: Jayaraman Thiagarajan, Jayaraman (Author) / Spanias, Andreas (Thesis advisor) / Frakes, David (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Over the past fifty years, the development of sensors for biological applications has increased dramatically. This rapid growth can be attributed in part to the reduction in feature size that the electronics industry has pioneered over the same period. The decrease in feature size has led to the production of microscale sensors used in applications ranging from whole-body monitoring down to molecular sensing. Unfortunately, sensors are often developed without regard to how they will be integrated into biological systems, and the complexities of integration are underappreciated. Integration involves more than simply making electrical connections. Interfacing microscale sensors with biological environments requires numerous considerations with respect to the creation of compatible packaging, the management of biological reagents, and the act of combining technologies with different dimensions and material properties. Recent advances in microfluidics, especially the proliferation of soft lithography manufacturing methods, have established the groundwork for creating systems that may solve many of the problems inherent to sensor-fluidic interaction. The adaptation of microelectronics manufacturing methods, such as Complementary Metal-Oxide-Semiconductor (CMOS) and Microelectromechanical Systems (MEMS) processes, allows the creation of a complete biological sensing system with integrated sensors and readout circuits, yet combining these technologies remains an obstacle to forming complete sensor systems. This dissertation presents new approaches for the design, fabrication, and integration of microscale sensors and microelectronics with microfluidics. The work addresses specific challenges, such as incorporating commercial manufacturing processes into biological systems and developing microscale sensors in these processes. This work is exemplified through a feedback-controlled microfluidic pH system that demonstrates the integration capabilities of microscale sensors for autonomous microenvironment control.
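The feedback-controlled pH system mentioned above can be illustrated with the simplest possible control step. The function below is a hypothetical bang-bang (on/off with deadband) sketch, standing in for whatever controller and actuation scheme the dissertation actually implements:

```python
def ph_control_step(ph_reading, setpoint=7.0, deadband=0.1):
    """One bang-bang control decision for a microfluidic pH loop.

    Returns which reagent valve to pulse ('acid', 'base', or None)
    based on the latest pH sensor reading. `setpoint` and `deadband`
    are illustrative values, not figures from the dissertation.
    """
    if ph_reading > setpoint + deadband:
        return "acid"   # solution too basic: dose acid
    if ph_reading < setpoint - deadband:
        return "base"   # solution too acidic: dose base
    return None         # within the deadband: hold
```

In a running system this decision would be made on every sensor sample, with the deadband preventing the valves from chattering around the setpoint.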
Contributors: Welch, David (Author) / Blain Christen, Jennifer (Thesis advisor) / Muthuswamy, Jitendran (Committee member) / Frakes, David (Committee member) / LaBelle, Jeffrey (Committee member) / Goryll, Michael (Committee member) / Arizona State University (Publisher)
Created: 2012