Matching Items (65)
Description
Microfluidics is the study of fluid flow at very small scales (micro -- one millionth of a meter) and is prevalent in many areas of science and engineering. Typical applications include lab-on-a-chip devices, microfluidic fuel cells, and DNA separation technologies. Many of these microfluidic devices rely on micron-resolution velocimetry measurements to improve microchannel design and characterize existing devices. Methods such as micro particle image velocimetry (microPIV) and micro particle tracking velocimetry (microPTV) are mature and established methods for characterization of steady 2D flow fields. Increasingly complex microdevices require techniques that measure unsteady and/or three-dimensional velocity fields. This dissertation presents a method for three-dimensional velocimetry of unsteady microflows based on spinning disk confocal microscopy and depth scanning of a microvolume. High-speed 2D unsteady velocity fields are resolved by acquiring images of particle motion using a high-speed CMOS camera and confocal microscope. The confocal microscope spatially filters out-of-focus light using a rotating disk of pinholes placed in the imaging path, improving the ability of the system to resolve unsteady microPIV measurements by increasing the image and correlation signal-to-noise ratios. For 3D3C measurements, a piezo-actuated objective positioner quickly scans the depth of the microvolume and collects 2D image slices, which are stacked into 3D images. Super-resolution microPIV interrogates these 3D images using microPIV as a predictor field for tracking individual particles with microPTV. The 3D3C diagnostic is demonstrated by measuring a pressure-driven flow in a three-dimensional expanding microchannel. The experimental velocimetry data acquired at 30 Hz with instantaneous spatial resolution of 4.5 by 4.5 by 4.5 microns agrees well with a computational model of the flow field.
The technique allows for isosurface visualization of time resolved 3D3C particle motion and high spatial resolution velocity measurements without requiring a calibration step or reconstruction algorithms. Several applications are investigated, including 3D quantitative fluorescence imaging of isotachophoresis plugs advecting through a microchannel and the dynamics of reaction induced colloidal crystal deposition.
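The core microPIV step the abstract builds on -- estimating particle displacement between two frames by cross-correlating interrogation windows -- can be sketched as follows. This is a generic FFT-based correlation, not the dissertation's confocal pipeline; the window size and the synthetic shift are illustrative only.

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Estimate the integer-pixel shift between two interrogation windows
    via FFT-based circular cross-correlation (a standard PIV building block)."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    # Correlation theorem: cross-correlation = ifft(conj(fft(a)) * fft(b)).
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped peak indices to signed displacements.
    return tuple(int(p) if p <= s // 2 else int(p - s)
                 for p, s in zip(peak, corr.shape))

# Synthetic check: a random particle pattern shifted by (3, 5) pixels.
rng = np.random.default_rng(0)
frame = rng.random((64, 64))
shifted = np.roll(frame, (3, 5), axis=(0, 1))
print(piv_displacement(frame, shifted))  # (3, 5)
```

In practice, PIV codes refine this integer estimate with sub-pixel peak fitting and repeat it over a grid of windows to build a velocity field.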
Contributors: Klein, Steven Adam (Author) / Posner, Jonathan D (Thesis advisor) / Adrian, Ronald (Committee member) / Chen, Kangping (Committee member) / Devasenathipathy, Shankar (Committee member) / Frakes, David (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Treatment of cerebral aneurysms using non-invasive methods has existed for decades. Since the advent of modern endovascular techniques, advancements to embolic materials have largely focused on improving platinum coil technology. However, the recent development of Onyx®, a liquid-delivery precipitating polymer system, has opened the door for a new class of embolic materials--liquid-fill systems. These liquid-fill materials have the potential to provide better treatment outcomes than platinum coils. Initial clinical use of Onyx has proven promising, but not without substantial drawbacks, such as co-delivery of angiotoxic compounds and an extremely technical delivery procedure. This work focuses on formulation, characterization and testing of a novel liquid-to-solid gelling polymer system, based on poly(propylene glycol) diacrylate (PPODA) and pentaerythritol tetrakis(3-mercaptopropionate) (QT). The PPODA-QT system bypasses difficulties associated with Onyx embolization, yet still maintains non-invasive liquid delivery--exhibiting the properties of an ideal embolic material for cerebral aneurysm embolization. To allow for material visibility during clinical delivery, an embolic material must be radio-opaque. The PPODA-QT system was formulated with commercially available contrast agents and the gelling kinetics were studied, as a complete understanding of the gelling process is vital for clinical use. These PPODA-QT formulations underwent in vitro characterization of material properties including cytotoxicity, swelling, and degradation behaviors. Formulation and characterization tests led to an optimized PPODA-QT formulation that was used in subsequent in vivo testing. PPODA-QT formulated with the liquid contrast agent Conray™ was used in the first in vivo studies. These studies employed a swine aneurysm model to assess initial biocompatibility and test different delivery strategies of PPODA-QT.
Results showed good biocompatibility and a suitable delivery strategy, providing justification for further in vivo testing. PPODA-QT was then used in a small scale pilot study to gauge long-term effectiveness of the material in a clinically-relevant aneurysm model. Results from the pilot study showed that PPODA-QT has the capability to provide successful, long-term treatment of model aneurysms as well as facilitate aneurysm healing.
Contributors: Riley, Celeste (Author) / Vernon, Brent L (Thesis advisor) / Preul, Mark C (Committee member) / Frakes, David (Committee member) / Pauken, Christine (Committee member) / Massia, Stephen (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Digital sound synthesis allows the creation of a great variety of sounds. Focusing on interesting or ecologically valid sounds for music, simulation, aesthetics, or other purposes limits the otherwise vast digital audio palette. Tools for creating such sounds vary from arbitrary methods of altering recordings to precise simulations of vibrating objects. In this work, methods of sound synthesis by re-sonification are considered. Re-sonification, herein, refers to the general process of analyzing, possibly transforming, and resynthesizing or reusing recorded sounds in meaningful ways, to convey information. Applied to soundscapes, re-sonification is presented as a means of conveying activity within an environment. Applied to the sounds of objects, this work examines modeling the perception of objects as well as their physical properties and the ability to simulate interactive events with such objects. To create soundscapes to re-sonify geographic environments, a method of automated soundscape design is presented. Using recorded sounds that are classified based on acoustic, social, semantic, and geographic information, this method produces stochastically generated soundscapes to re-sonify selected geographic areas. Drawing on prior knowledge, local sounds and those deemed similar comprise a locale's soundscape. In the context of re-sonifying events, this work examines processes for modeling and estimating the excitations of sounding objects. These include plucking, striking, rubbing, and any interaction that imparts energy into a system, affecting the resultant sound. A method of estimating a linear system's input, constrained to a signal-subspace, is presented and applied toward improving the estimation of percussive excitations for re-sonification. To work toward robust recording-based modeling and re-sonification of objects, new implementations of banded waveguide (BWG) models are proposed for object modeling and sound synthesis. 
Previous implementations of BWGs use arbitrary model parameters and may produce a range of simulations that do not match digital waveguide or modal models of the same design. Subject to linear excitations, some models proposed here behave identically to other equivalently designed physical models. Under nonlinear interactions, such as bowing, many of the proposed implementations exhibit improvements in the attack characteristics of synthesized sounds.
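As a point of reference for the physical models discussed above, the modal view of a struck object -- a sum of exponentially decaying sinusoids -- can be sketched in a few lines. This is the simple modal counterpart that banded waveguides are compared against, not an implementation of BWGs themselves, and the frequencies, amplitudes, and decay rates below are illustrative values, not taken from the dissertation.

```python
import numpy as np

def modal_tone(freqs, amps, decays, dur=1.0, sr=44100):
    """Synthesize a struck-object tone as a sum of exponentially decaying
    sinusoids (one per mode), normalized to unit peak amplitude."""
    t = np.arange(int(dur * sr)) / sr
    out = sum(a * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
              for f, a, d in zip(freqs, amps, decays))
    return out / np.max(np.abs(out))

# Three hypothetical modes of a small struck bar.
tone = modal_tone(freqs=[220.0, 440.0, 660.0],
                  amps=[1.0, 0.5, 0.25],
                  decays=[3.0, 5.0, 7.0])
```

Linear excitations such as plucks or strikes simply set the modes' initial amplitudes; it is under nonlinear interactions like bowing that the waveguide-style implementations discussed in the abstract become important.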
Contributors: Fink, Alex M (Author) / Spanias, Andreas S (Thesis advisor) / Cook, Perry R. (Committee member) / Turaga, Pavan (Committee member) / Tsakalis, Konstantinos (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Controlled release formulations for local, in vivo drug delivery are of growing interest to device manufacturers, research scientists, and clinicians; however, most research characterizing controlled release formulations occurs in vitro because the spatial and temporal distribution of drug delivery is difficult to measure in vivo. In this work, in vivo magnetic resonance imaging (MRI) of local drug delivery is performed to visualize and quantify the time-resolved distribution of MRI contrast agents. I find it is possible to visualize contrast agent distributions in near real time from local delivery vehicles using MRI. Three-dimensional T1 maps are processed to produce in vivo concentration maps of contrast agent for individual animal models. The method for obtaining concentration maps is analyzed to estimate errors introduced at various steps in the process. The method is used to evaluate different controlled release vehicles, vehicle placement, and type of surgical wound in rabbits as a model for antimicrobial delivery to orthopaedic infection sites. I am able to see differences among all these factors; however, all images show that contrast agent remains fairly local to the wound site and does not distribute to tissues far from the implant in therapeutic concentrations. I also produce a mathematical model that investigates important mechanisms in the transport of antimicrobials in a wound environment. It is determined from both the images and the mathematical model that antimicrobial distribution in orthopaedic wounds is dependent on both diffusive and convective mechanisms. Furthermore, I begin development of MRI-visible therapeutic agents to examine active drug distributions. I hypothesize that this work can be developed into a non-invasive, patient-specific, clinical tool to evaluate the success of interventional procedures using local drug delivery vehicles.
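The T1-to-concentration step described above is commonly based on the linear relaxivity model 1/T1 = 1/T1_pre + r1·C. A minimal sketch under that assumption follows; the relaxivity value, the clipping of noise-induced negative values, and the toy T1 maps are all hypothetical, not taken from the dissertation.

```python
import numpy as np

# Assumed longitudinal relaxivity of the contrast agent, in L/(mmol*s).
R1_CONTRAST = 4.0

def concentration_map(t1_post, t1_pre, r1=R1_CONTRAST):
    """Convert pre- and post-contrast T1 maps (seconds) to a contrast agent
    concentration map (mmol/L) via 1/T1_post = 1/T1_pre + r1 * C.
    Small negative values from noise are clipped to zero."""
    conc = (1.0 / t1_post - 1.0 / t1_pre) / r1
    return np.clip(conc, 0.0, None)

t1_pre = np.full((2, 2), 1.0)                  # 1.0 s baseline T1 everywhere
t1_post = np.array([[0.5, 1.0],
                    [0.2, 1.0]])               # shortened T1 where agent is present
print(concentration_map(t1_post, t1_pre))
```

Voxels whose T1 is unchanged map to zero concentration, while strongly shortened T1 maps to high concentration, which is what lets the images above be read quantitatively.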
Contributors: Giers, Morgan (Author) / Caplan, Michael R (Thesis advisor) / Massia, Stephen P (Committee member) / Frakes, David (Committee member) / McLaren, Alex C. (Committee member) / Vernon, Brent L (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Coronary computed tomography angiography (CTA) has a high negative predictive value for ruling out coronary artery disease with non-invasive evaluation of the coronary arteries. My work has attempted to provide metrics that could increase the positive predictive value of coronary CTA through the use of dual energy CTA imaging. After developing an algorithm for obtaining calcium scores from a CTA exam, a dual energy CTA exam was performed on patients at dose levels equivalent to levels for single energy CTA with a calcium scoring exam. Calcium Agatston scores obtained from the dual energy CTA exam were within ±11% of scores obtained with conventional calcium scoring exams. In the presence of highly attenuating coronary calcium plaques, the virtual non-calcium images obtained with dual energy CTA were able to successfully measure percent coronary stenosis within 5% of known stenosis values, which is not possible with single energy CTA images due to the presence of the calcium blooming artifact. After fabricating an anthropomorphic beating heart phantom with coronary plaques, characterization of soft plaque vulnerability to rupture or erosion was demonstrated with measurements of the distance from soft plaque to the aortic ostium, percent stenosis, and percent lipid volume in soft plaque. A classification model was developed, with training data from the beating heart phantom and plaques, which utilized support vector machines to classify coronary soft plaque pixels as lipid or fibrous. Lipid versus fibrous classification with single energy CTA images exhibited a 17% error, while dual energy CTA images in the classification model developed here exhibited only a 4% error. Combining the calcium blooming correction and the percent lipid volume methods developed in this work will provide physicians with metrics for increasing the positive predictive value of coronary CTA as well as expanding the use of coronary CTA to patients with highly attenuating calcium plaques.
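The Agatston scoring referred to above follows a standard published recipe: each calcified lesion with peak attenuation of at least 130 HU contributes its area times a density weight of 1-4. A minimal sketch of that recipe is below; the lesion data and pixel size are made up, and the dissertation's CTA-based algorithm is necessarily more involved than this.

```python
def agatston_weight(peak_hu):
    """Standard Agatston density weight from a lesion's peak attenuation (HU)."""
    if peak_hu >= 400:
        return 4
    if peak_hu >= 300:
        return 3
    if peak_hu >= 200:
        return 2
    if peak_hu >= 130:
        return 1
    return 0  # below the calcium threshold: contributes nothing

def agatston_score(lesions, pixel_area_mm2):
    """Sum over lesions of (area in mm^2) x (density weight).
    Each lesion is given as (pixel_count, peak_HU)."""
    return sum(n * pixel_area_mm2 * agatston_weight(hu) for n, hu in lesions)

# Two hypothetical lesions on a slice with 0.5 mm^2 pixels.
lesions = [(10, 450), (8, 150)]          # (pixels, peak HU)
print(agatston_score(lesions, 0.5))      # 10*0.5*4 + 8*0.5*1 = 24.0
```

The ±11% agreement reported above is between scores of this form computed from the dual energy CTA exam and from a conventional calcium scoring exam.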
Contributors: Boltz, Thomas (Author) / Frakes, David (Thesis advisor) / Towe, Bruce (Committee member) / Kodibagkar, Vikram (Committee member) / Pavlicek, William (Committee member) / Bouman, Charles (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Effective modeling of high dimensional data is crucial in information processing and machine learning. Classical subspace methods have been very effective in such applications. However, over the past few decades, there has been considerable research towards the development of new modeling paradigms that go beyond subspace methods. This dissertation focuses on the study of sparse models and their interplay with modern machine learning techniques such as manifold, ensemble and graph-based methods, along with their applications in image analysis and recovery. By considering graph relations between data samples while learning sparse models, graph-embedded codes can be obtained for use in unsupervised, supervised and semi-supervised problems. Using experiments on standard datasets, it is demonstrated that the codes obtained from the proposed methods outperform several baseline algorithms. In order to facilitate sparse learning with large scale data, the paradigm of ensemble sparse coding is proposed, and different strategies for constructing weak base models are developed. Experiments with image recovery and clustering demonstrate that these ensemble models perform better when compared to conventional sparse coding frameworks. When examples from the data manifold are available, manifold constraints can be incorporated with sparse models and two approaches are proposed to combine sparse coding with manifold projection. The improved performance of the proposed techniques in comparison to sparse coding approaches is demonstrated using several image recovery experiments. In addition to these approaches, it might be required in some applications to combine multiple sparse models with different regularizations. In particular, combining an unconstrained sparse model with non-negative sparse coding is important in image analysis, and it poses several algorithmic and theoretical challenges. 
A convex and an efficient greedy algorithm for recovering combined representations are proposed. Theoretical guarantees on sparsity thresholds for exact recovery using these algorithms are derived and recovery performance is also demonstrated using simulations on synthetic data. Finally, the problem of non-linear compressive sensing, where the measurement process is carried out in feature space obtained using non-linear transformations, is considered. An optimized non-linear measurement system is proposed, and improvements in recovery performance are demonstrated in comparison to using random measurements as well as optimized linear measurements.
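The flavor of greedy sparse recovery mentioned above can be illustrated with plain Orthogonal Matching Pursuit. This is a textbook sketch, not the combined-representation algorithm the dissertation proposes, and the random dictionary and 2-sparse code below are synthetic.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily build a k-sparse code x with
    y ~ D @ x by repeatedly picking the atom most correlated with the
    residual, then re-fitting coefficients on the chosen support."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((100, 40))
D /= np.linalg.norm(D, axis=0)      # unit-norm atoms
x_true = np.zeros(40)
x_true[[5, 23]] = [3.0, -2.5]       # 2-sparse ground truth
x_hat = omp(D, D @ x_true, k=2)
```

The sparsity thresholds mentioned in the abstract are conditions on the dictionary (e.g. its coherence) under which greedy selection of this kind is guaranteed to recover the true support.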
Contributors: Natesan Ramamurthy, Karthikeyan (Author) / Spanias, Andreas (Thesis advisor) / Tsakalis, Konstantinos (Committee member) / Karam, Lina (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Image understanding has been playing an increasingly crucial role in vision applications. Sparse models form an important component in image understanding, since the statistics of natural images reveal the presence of sparse structure. Sparse methods lead to parsimonious models, in addition to being efficient for large scale learning. In sparse modeling, data is represented as a sparse linear combination of atoms from a "dictionary" matrix. This dissertation focuses on understanding different aspects of sparse learning, thereby enhancing the use of sparse methods by incorporating tools from machine learning. With the growing need to adapt models for large scale data, it is important to design dictionaries that can model the entire data space and not just the samples considered. By exploiting the relation of dictionary learning to 1-D subspace clustering, a multilevel dictionary learning algorithm is developed, and it is shown to outperform conventional sparse models in compressed recovery and image denoising. Theoretical aspects of learning such as algorithmic stability and generalization are considered, and ensemble learning is incorporated for effective large scale learning. In addition to building strategies for efficiently implementing 1-D subspace clustering, a discriminative clustering approach is designed to estimate the unknown mixing process in blind source separation. By exploiting the non-linear relation between image descriptors and allowing the use of multiple features, sparse methods can be made more effective in recognition problems. The idea of multiple kernel sparse representations is developed, and algorithms for learning dictionaries in the feature space are presented. Using object recognition experiments on standard datasets, it is shown that the proposed approaches outperform other sparse coding-based recognition frameworks.
Furthermore, a segmentation technique based on multiple kernel sparse representations is developed, and successfully applied for automated brain tumor identification. Using sparse codes to define the relation between data samples can lead to a more robust graph embedding for unsupervised clustering. By performing discriminative embedding using sparse coding-based graphs, an algorithm for measuring the glomerular number in kidney MRI images is developed. Finally, approaches to build dictionaries for local sparse coding of image descriptors are presented, and applied to object recognition and image retrieval.
Contributors: Jayaraman Thiagarajan, Jayaraman (Author) / Spanias, Andreas (Thesis advisor) / Frakes, David (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Over the past fifty years, the development of sensors for biological applications has increased dramatically. This rapid growth can be attributed in part to the reduction in feature size, which the electronics industry has pioneered over the same period. The decrease in feature size has led to the production of microscale sensors that are used for sensing applications, ranging from whole-body monitoring down to molecular sensing. Unfortunately, sensors are often developed without regard to how they will be integrated into biological systems. The complexities of integration are underappreciated. Integration involves more than simply making electrical connections. Interfacing microscale sensors with biological environments requires numerous considerations with respect to the creation of compatible packaging, the management of biological reagents, and the act of combining technologies with different dimensions and material properties. Recent advances in microfluidics, especially the proliferation of soft lithography manufacturing methods, have established the groundwork for creating systems that may solve many of the problems inherent to sensor-fluidic interaction. The adaptation of microelectronics manufacturing methods, such as Complementary Metal-Oxide-Semiconductor (CMOS) and Microelectromechanical Systems (MEMS) processes, allows the creation of a complete biological sensing system with integrated sensors and readout circuits. Combining these technologies is an obstacle to forming complete sensor systems. This dissertation presents new approaches for the design, fabrication, and integration of microscale sensors and microelectronics with microfluidics. The work addresses specific challenges, such as combining commercial manufacturing processes into biological systems and developing microscale sensors in these processes. 
This work is exemplified through a feedback-controlled microfluidic pH system to demonstrate the integration capabilities of microscale sensors for autonomous microenvironment control.
Contributors: Welch, David (Author) / Blain Christen, Jennifer (Thesis advisor) / Muthuswamy, Jitendran (Committee member) / Frakes, David (Committee member) / LaBelle, Jeffrey (Committee member) / Goryll, Michael (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Locomotion of microorganisms is commonly observed in nature and some aspects of their motion can be replicated by synthetic motors. Synthetic motors rely on a variety of propulsion mechanisms including auto-diffusiophoresis, auto-electrophoresis, and bubble generation. Regardless of the source of the locomotion, the motion of any motor can be characterized by the translational and rotational velocity and effective diffusivity. In a uniform environment the long-time motion of a motor can be fully characterized by the effective diffusivity. In this work, it is shown that when motors possess both translational and rotational velocity the motor transitions from a short-time diffusivity to a long-time diffusivity at a time of π/w. The short-time diffusivities are two to three orders of magnitude larger than the diffusivity of a Brownian sphere of the same size, increase linearly with concentration, and scale as v^2/(2w). The measured long-time diffusivities are five times lower than the short-time diffusivities, scale as v^2/(2Dr[1 + (w/Dr)^2]), and exhibit a maximum as a function of concentration. The variation of a colloid's velocity and effective diffusivity with its local environment (e.g. fuel concentration) suggests that the motors can accumulate in a bounded system, analogous to biological chemokinesis. Chemokinesis of organisms is the non-uniform equilibrium concentration that arises from a bounded random walk of swimming organisms in a chemical concentration gradient. In non-swimming organisms we term this response diffusiokinesis. We show that particles that migrate only by Brownian thermal motion are capable of achieving a non-uniform pseudo-equilibrium distribution in a diffusivity gradient. The concentration is a result of a bounded random-walk process where at any given time a larger percentage of particles can be found in the regions of low diffusivity than in regions of high diffusivity.
Individual particles are not trapped in any given region but at equilibrium the net flux between regions is zero. For Brownian particles the gradient in diffusivity is achieved by creating a viscosity gradient in a microfluidic device. The distribution of the particles is described by the Fokker-Planck equation for variable diffusivity. The strength of the probe concentration gradient is proportional to the strength of the diffusivity gradient and inversely proportional to the mean probe diffusivity in the channel in accordance with the no flux condition at steady state. This suggests that Brownian colloids, natural or synthetic, will concentrate in a bounded system in response to a gradient in diffusivity and that the magnitude of the response is proportional to the magnitude of the gradient in diffusivity divided by the mean diffusivity in the channel.
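The short- and long-time scalings quoted above can be evaluated directly. In the sketch below, v is the translational speed, w the rotation rate, and Dr the rotational diffusivity; the numerical values are hypothetical, chosen only to illustrate the crossover, and the expressions are the scalings as stated in the abstract.

```python
import math

def short_time_diffusivity(v, w):
    """Short-time effective diffusivity scaling, D_s ~ v^2 / (2w)."""
    return v**2 / (2 * w)

def long_time_diffusivity(v, w, Dr):
    """Long-time effective diffusivity, D_l ~ v^2 / (2 Dr (1 + (w/Dr)^2))."""
    return v**2 / (2 * Dr * (1 + (w / Dr) ** 2))

def crossover_time(w):
    """Time of the transition from short- to long-time behavior, t = pi/w."""
    return math.pi / w

# Hypothetical motor: v = 10 um/s, w = 2 rad/s, Dr = 0.5 1/s.
Ds = short_time_diffusivity(10.0, 2.0)       # 25.0 um^2/s
Dl = long_time_diffusivity(10.0, 2.0, 0.5)   # 100/17, about 5.9 um^2/s
```

With these illustrative numbers the long-time diffusivity comes out roughly four times lower than the short-time value, the same qualitative drop (about fivefold) reported in the measurements above.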
Contributors: Marine, Nathan Arasmus (Author) / Posner, Jonathan D (Thesis advisor) / Adrian, Ronald J (Committee member) / Frakes, David (Committee member) / Phelan, Patrick E (Committee member) / Santos, Veronica J (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Magnetic Resonance Imaging (MRI) is limited in speed and resolution by the inherently low Signal to Noise Ratio (SNR) of the underlying signal. Advances in sampling efficiency are required to support future improvements in scan time and resolution. SNR efficiency is improved by sampling data for a larger proportion of total imaging time. This is challenging as these acquisitions are typically subject to artifacts such as blurring and distortions. The current work proposes a set of tools to help with the creation of different types of SNR efficient scans. An SNR efficient pulse sequence providing diffusion imaging data with full brain coverage and minimal distortion is first introduced. The proposed method acquires single-shot, low resolution image slabs which are then combined to reconstruct the full volume. An iterative deblurring algorithm allowing the lengthening of spiral SPoiled GRadient echo (SPGR) acquisition windows in the presence of rapidly varying off-resonance fields is then presented. Finally, an efficient and practical way of collecting 3D reformatted data is proposed. This method constitutes a good tradeoff between 2D and 3D neuroimaging in terms of scan time and data presentation. These schemes increased the SNR efficiency of currently existing methods and constitute key enablers for the development of SNR efficient MRI.
Contributors: Aboussouan, Eric (Author) / Frakes, David (Thesis advisor) / Pipe, James (Thesis advisor) / Debbins, Joseph (Committee member) / Towe, Bruce (Committee member) / Arizona State University (Publisher)
Created: 2011