Matching Items (22)

Description
Microfluidics is the study of fluid flow at very small scales (on the order of microns, or millionths of a meter) and is prevalent in many areas of science and engineering. Typical applications include lab-on-a-chip devices, microfluidic fuel cells, and DNA separation technologies. Many of these microfluidic devices rely on micron-resolution velocimetry measurements to improve microchannel design and characterize existing devices. Methods such as micro particle image velocimetry (microPIV) and micro particle tracking velocimetry (microPTV) are mature, established methods for characterization of steady 2D flow fields. Increasingly complex microdevices require techniques that measure unsteady and/or three-dimensional velocity fields. This dissertation presents a method for three-dimensional velocimetry of unsteady microflows based on spinning disk confocal microscopy and depth scanning of a microvolume. High-speed 2D unsteady velocity fields are resolved by acquiring images of particle motion using a high-speed CMOS camera and confocal microscope. The confocal microscope spatially filters out-of-focus light using a rotating disk of pinholes placed in the imaging path, improving the ability of the system to resolve unsteady microPIV measurements by improving the image and correlation signal-to-noise ratios. For 3D3C (three-dimensional, three-component) measurements, a piezo-actuated objective positioner quickly scans the depth of the microvolume and collects 2D image slices, which are stacked into 3D images. Super-resolution microPIV interrogates these 3D images using microPIV as a predictor field for tracking individual particles with microPTV. The 3D3C diagnostic is demonstrated by measuring a pressure-driven flow in a three-dimensional expanding microchannel. The experimental velocimetry data, acquired at 30 Hz with an instantaneous spatial resolution of 4.5 by 4.5 by 4.5 microns, agree well with a computational model of the flow field.
The technique allows for isosurface visualization of time-resolved 3D3C particle motion and high-spatial-resolution velocity measurements without requiring a calibration step or reconstruction algorithms. Several applications are investigated, including 3D quantitative fluorescence imaging of isotachophoresis plugs advecting through a microchannel and the dynamics of reaction-induced colloidal crystal deposition.
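The interrogation step that microPIV performs, cross-correlating paired image windows to locate the particle-displacement peak, can be sketched in a few lines (a minimal NumPy illustration on a synthetic particle image; the window size and the FFT-based circular-correlation shortcut are choices made here, not details from the dissertation):

```python
import numpy as np

def piv_displacement(window_a, window_b):
    """Estimate the particle displacement between two interrogation
    windows via FFT-based (circular) cross-correlation, the core
    microPIV interrogation step."""
    a = window_a - window_a.mean()
    b = window_b - window_b.mean()
    # Cross-correlation theorem: the peak location gives the shift of b vs a.
    corr = np.fft.ifft2(np.fft.fft2(b) * np.conj(np.fft.fft2(a))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the window to negative displacements.
    dy = peak[0] if peak[0] <= a.shape[0] // 2 else peak[0] - a.shape[0]
    dx = peak[1] if peak[1] <= a.shape[1] // 2 else peak[1] - a.shape[1]
    return int(dx), int(dy)

# Synthetic pair: one bright "particle" displaced by (dx, dy) = (3, 2).
frame1 = np.zeros((32, 32))
frame1[10, 12] = 1.0
frame2 = np.roll(np.roll(frame1, 2, axis=0), 3, axis=1)
print(piv_displacement(frame1, frame2))  # → (3, 2)
```

Sub-pixel peak fitting and outlier validation, standard in practical PIV, are omitted for brevity.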
Contributors: Klein, Steven Adam (Author) / Posner, Jonathan D (Thesis advisor) / Adrian, Ronald (Committee member) / Chen, Kangping (Committee member) / Devasenathipathy, Shankar (Committee member) / Frakes, David (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
In this work, we present approximate adders and multipliers to reduce data-path complexity of specialized hardware for various image processing systems. These approximate circuits have lower area, latency, and power consumption than their accurate counterparts and produce fairly accurate results. We build upon the work on approximate adders and multipliers presented in [23] and [24]. First, we show how the choice of algorithm and parallel adder design can be used to implement the 2D Discrete Cosine Transform (DCT) with good performance but low area. Our implementation of the 2D DCT has PSNR performance comparable to the algorithm presented in [23], with a ~35-50% reduction in area. Next, we use the approximate 2x2 multiplier presented in [24] to implement parallel approximate multipliers. We demonstrate that if some of the 2x2 multipliers in the design of the parallel multiplier are accurate, the accuracy of the multiplier improves significantly, especially when two large numbers are multiplied. We choose the Gaussian FIR filter and Fast Fourier Transform (FFT) algorithms to illustrate the efficacy of our proposed approximate multiplier. We show that applying the proposed approximate multiplier improves the PSNR performance of a 32x32 FFT implementation by 4.7 dB compared to an implementation using the approximate multiplier described in [24]. We also implement a state-of-the-art image enlargement algorithm, namely Segment Adaptive Gradient Angle (SAGA) [29], in hardware. The algorithm is mapped to pipelined hardware blocks, and we synthesized the design in a 90 nm technology. We show that a 64x64 image can be processed in 496.48 µs when clocked at 100 MHz. The average PSNR performance of our implementation, evaluated against the original image, is 31.33 dB using accurate parallel adders and multipliers and 30.86 dB using approximate parallel adders and multipliers.
The PSNR performance of both designs is comparable to that of a double-precision floating-point MATLAB implementation of the algorithm.
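The flavor of the 2x2 building block and its composition into larger multipliers can be sketched behaviorally (a hedged model in the style of the approximate 2x2 multiplier cited as [24], where 3x3 is approximated as 7 so a partial-product row can be dropped; the actual gate-level design may differ):

```python
def approx_mul_2x2(a, b):
    """Approximate 2-bit x 2-bit multiplier: 3*3 is approximated as 7,
    saving hardware; all other input pairs are multiplied exactly.
    Behavioral model only, in the style of the design cited as [24]."""
    return 7 if (a == 3 and b == 3) else a * b

def approx_mul_4x4(a, b):
    """4x4 multiplier assembled from four 2x2 partial products.
    Making the most-significant 2x2 block accurate (as the text
    suggests) would reduce the error for large operands."""
    ah, al = a >> 2, a & 0b11
    bh, bl = b >> 2, b & 0b11
    return ((approx_mul_2x2(ah, bh) << 4)
            + (approx_mul_2x2(ah, bl) << 2)
            + (approx_mul_2x2(al, bh) << 2)
            + approx_mul_2x2(al, bl))

print(approx_mul_2x2(3, 3))    # → 7 (exact: 9)
print(approx_mul_4x4(15, 15))  # → 175 (exact: 225); error grows for large operands
```

The worst-case error at 15x15 illustrates why mixing in accurate 2x2 blocks for the high-order partial products improves accuracy for large operands.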
Contributors: Vasudevan, Madhu (Author) / Chakrabarti, Chaitali (Thesis advisor) / Frakes, David (Committee member) / Gupta, Sandeep (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Advancements in mobile technologies have significantly enhanced the capabilities of mobile devices to serve as powerful platforms for sensing, processing, and visualization. Surges in sensing technology and the abundance of data have enabled the use of these portable devices for real-time data analysis and decision-making in digital signal processing (DSP) applications. Most current efforts in DSP education focus on building tools that facilitate understanding of the mathematical principles. However, there is a disconnect between real-world data processing problems and the material presented in a DSP course. Sophisticated mobile interfaces and apps can potentially play a crucial role in providing hands-on experience with modern DSP applications to students. In this work, a new paradigm of DSP learning is explored by building an interactive, easy-to-use health monitoring application for use in DSP courses, motivated by the increasing commercial interest in employing mobile phones for real-time health monitoring tasks. The idea is to exploit the computational abilities of the Android platform to build m-Health modules with sensor interfaces. In particular, appropriate sensing modalities have been identified, and a suite of software functionalities has been developed. Within the existing framework of the AJDSP app, a graphical programming environment, interfaces to on-board and external sensor hardware have also been developed to acquire and process physiological data. The sensor signals that can be monitored include the electrocardiogram (ECG), photoplethysmogram (PPG), accelerometer signal, and galvanic skin response (GSR). The proposed m-Health modules can be used to estimate parameters such as heart rate, oxygen saturation, step count, and heart rate variability. A set of laboratory exercises has been designed to demonstrate the use of these modules in DSP courses.
The app was evaluated through several workshops involving graduate and undergraduate signal processing students at Arizona State University, and the usefulness of the software modules in enhancing student understanding of signals, sensors, and DSP systems was analyzed. Student opinions about the app and the proposed m-Health modules evidenced the merits of integrating tools for mobile sensing and processing in a DSP curriculum and of familiarizing students with the challenges of modern data-driven applications.
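As one example of the signal processing such modules expose, heart rate can be estimated from a PPG waveform by detecting pulse peaks and averaging the beat intervals (a simplified sketch on a synthetic signal; the function name and thresholding scheme are illustrative, not the AJDSP implementation):

```python
import numpy as np

def heart_rate_bpm(ppg, fs):
    """Estimate heart rate by finding local maxima above the signal
    mean and converting the mean peak-to-peak interval to beats/min.
    A teaching sketch; robust PPG peak detectors do more filtering."""
    above = ppg > ppg.mean()
    peaks = [i for i in range(1, len(ppg) - 1)
             if above[i] and ppg[i] >= ppg[i - 1] and ppg[i] > ppg[i + 1]]
    if len(peaks) < 2:
        return 0.0
    mean_interval = np.mean(np.diff(peaks)) / fs   # seconds per beat
    return 60.0 / mean_interval

# Synthetic 75 bpm pulse waveform sampled at 100 Hz for 10 s.
fs = 100
t = np.arange(0, 10, 1 / fs)
ppg = np.sin(2 * np.pi * 1.25 * t)   # 1.25 Hz fundamental = 75 bpm
print(round(float(heart_rate_bpm(ppg, fs))))  # → 75
```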
Contributors: Rajan, Deepta (Author) / Spanias, Andreas (Thesis advisor) / Frakes, David (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Ultrasound imaging is one of the major medical imaging modalities: it is inexpensive, non-invasive, and consumes little power. Doppler processing, which provides blood velocity information, is an important part of many ultrasound imaging systems and is built on top of B-mode systems. We investigate the performance of two velocity estimation schemes used in Doppler processing systems, namely directional velocity estimation (DVE) and conventional velocity estimation (CVE). We find that DVE provides better estimation performance and is the only functioning method when the beam-to-flow angle is large. Unfortunately, DVE is computationally expensive and requires division and square root operations that are hard to implement in hardware. We propose two approximation techniques to replace these computations. Simulation results on cyst images show that the proposed approximations do not affect estimation performance. We also study backend processing, which includes envelope detection, log compression, and scan conversion. Three envelope detection methods are compared; among them, the FIR-based Hilbert transform is the best choice when phase information is not needed, while quadrature demodulation is a better choice when phase information is necessary. Bilinear and Gaussian interpolation are considered for scan conversion. Through simulations of a cyst image, we show that bilinear interpolation provides contrast-to-noise ratio (CNR) performance comparable to Gaussian interpolation at lower computational complexity, so bilinear interpolation is chosen for our system.
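The envelope detection step can be illustrated with the FFT-form analytic signal, whose magnitude is the envelope (a sketch of the Hilbert-transform approach; the thesis's FIR-based Hilbert transformer approximates the same operation with a finite filter, and the pulse parameters below are invented):

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal: keep DC and Nyquist bins, double the
    positive frequencies, zero the negative ones. The magnitude of the
    analytic signal is the envelope of the RF line."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(X * h)

# Simulated RF line: a 5 MHz pulse under a Gaussian envelope at 40 MHz.
fs = 40e6
t = np.arange(0, 4e-6, 1 / fs)
true_env = np.exp(-((t - 2e-6) ** 2) / (2 * (0.3e-6) ** 2))
rf = true_env * np.cos(2 * np.pi * 5e6 * t)
env = np.abs(analytic_signal(rf))
err = np.max(np.abs(env - true_env))
print(f"max envelope error: {err:.2e}")
```

Because the pulse is narrowband and well separated from DC and Nyquist, the recovered envelope matches the true one closely.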
Contributors: Wei, Siyuan (Author) / Chakrabarti, Chaitali (Thesis advisor) / Frakes, David (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Image understanding plays an increasingly crucial role in vision applications. Sparse models form an important component of image understanding, since the statistics of natural images reveal the presence of sparse structure. Sparse methods lead to parsimonious models and are efficient for large-scale learning. In sparse modeling, data is represented as a sparse linear combination of atoms from a "dictionary" matrix. This dissertation focuses on understanding different aspects of sparse learning, thereby enhancing the use of sparse methods by incorporating tools from machine learning. With the growing need to adapt models to large-scale data, it is important to design dictionaries that can model the entire data space and not just the samples considered. By exploiting the relation of dictionary learning to 1-D subspace clustering, a multilevel dictionary learning algorithm is developed and shown to outperform conventional sparse models in compressed recovery and image denoising. Theoretical aspects of learning, such as algorithmic stability and generalization, are considered, and ensemble learning is incorporated for effective large-scale learning. In addition to strategies for efficiently implementing 1-D subspace clustering, a discriminative clustering approach is designed to estimate the unknown mixing process in blind source separation. By exploiting the non-linear relations between image descriptors and allowing the use of multiple features, sparse methods can be made more effective in recognition problems. The idea of multiple kernel sparse representations is developed, and algorithms for learning dictionaries in the feature space are presented. Object recognition experiments on standard datasets show that the proposed approaches outperform other sparse coding-based recognition frameworks.
Furthermore, a segmentation technique based on multiple kernel sparse representations is developed and successfully applied to automated brain tumor identification. Using sparse codes to define the relations between data samples can lead to more robust graph embeddings for unsupervised clustering. By performing discriminative embedding using sparse coding-based graphs, an algorithm for measuring the glomerular number in kidney MR images is developed. Finally, approaches to building dictionaries for local sparse coding of image descriptors are presented and applied to object recognition and image retrieval.
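A representative sparse-coding step that dictionary learning methods build on is Orthogonal Matching Pursuit, which greedily selects atoms and refits the coefficients by least squares (a generic illustration on a synthetic dictionary, not the dissertation's actual solver):

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal Matching Pursuit: greedily pick k dictionary atoms,
    refitting coefficients by least squares at every step, and return
    the sparse code z with x ≈ D @ z."""
    residual, support = x.copy(), []
    for _ in range(k):
        # Atom most correlated with the current residual.
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coeffs
    z = np.zeros(D.shape[1])
    z[support] = coeffs
    return z

# Synthetic dictionary: 32 identity atoms plus 16 random unit-norm atoms.
rng = np.random.default_rng(1)
extra = rng.standard_normal((32, 16))
D = np.hstack([np.eye(32), extra / np.linalg.norm(extra, axis=0)])
x = 3.0 * D[:, 5] - 2.5 * D[:, 20]       # a truly 2-sparse signal
z = omp(D, x, k=2)
print(np.flatnonzero(z).tolist())         # → [5, 20]
```

With incoherent atoms and a genuinely sparse signal, OMP recovers the exact support and coefficients.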
Contributors: Jayaraman Thiagarajan, Jayaraman (Author) / Spanias, Andreas (Thesis advisor) / Frakes, David (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Over the past fifty years, the development of sensors for biological applications has increased dramatically. This rapid growth can be attributed in part to the reduction in feature size that the electronics industry has pioneered over the same period. The decrease in feature size has led to microscale sensors used for applications ranging from whole-body monitoring down to molecular sensing. Unfortunately, sensors are often developed without regard to how they will be integrated into biological systems, and the complexities of integration are underappreciated. Integration involves more than simply making electrical connections: interfacing microscale sensors with biological environments requires numerous considerations with respect to the creation of compatible packaging, the management of biological reagents, and the act of combining technologies with different dimensions and material properties. Recent advances in microfluidics, especially the proliferation of soft lithography manufacturing methods, have established the groundwork for creating systems that may solve many of the problems inherent to sensor-fluidic interaction. The adaptation of microelectronics manufacturing methods, such as Complementary Metal-Oxide-Semiconductor (CMOS) and Microelectromechanical Systems (MEMS) processes, allows the creation of a complete biological sensing system with integrated sensors and readout circuits; combining these technologies, however, remains an obstacle to forming complete sensor systems. This dissertation presents new approaches for the design, fabrication, and integration of microscale sensors and microelectronics with microfluidics. The work addresses specific challenges, such as incorporating commercial manufacturing processes into biological systems and developing microscale sensors in these processes.
The work culminates in a feedback-controlled microfluidic pH system that demonstrates the integration of microscale sensors for autonomous microenvironment control.
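The feedback idea can be sketched as a single controller step that maps a pH reading to a dose command (a hypothetical proportional controller with a deadband; the function name, gain, and tolerance are illustrative, not the system's actual control law):

```python
def ph_controller_step(ph_measured, ph_target=7.0, kp=0.5, deadband=0.05):
    """One step of a simple proportional pH controller: returns a signed
    dose command (positive = add base, negative = add acid), with a
    deadband so the actuator rests when pH is close enough to target."""
    error = ph_target - ph_measured
    if abs(error) < deadband:      # within tolerance: no actuation
        return 0.0
    return kp * error

print(ph_controller_step(6.0))   # → 0.5  (acidic: command base addition)
print(ph_controller_step(7.02))  # → 0.0  (inside the deadband)
print(ph_controller_step(8.0))   # → -0.5 (basic: command acid addition)
```

In a closed loop, this step would run on each new sensor reading, driving the microfluidic actuators that adjust the microenvironment.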
Contributors: Welch, David (Author) / Blain Christen, Jennifer (Thesis advisor) / Muthuswamy, Jitendran (Committee member) / Frakes, David (Committee member) / LaBelle, Jeffrey (Committee member) / Goryll, Michael (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Locomotion of microorganisms is commonly observed in nature, and some aspects of their motion can be replicated by synthetic motors. Synthetic motors rely on a variety of propulsion mechanisms, including auto-diffusiophoresis, auto-electrophoresis, and bubble generation. Regardless of the source of the locomotion, the motion of any motor can be characterized by its translational and rotational velocities and its effective diffusivity. In a uniform environment the long-time motion of a motor can be fully characterized by the effective diffusivity. In this work it is shown that when motors possess both translational velocity v and rotational velocity ω, the motor transitions from a short-time diffusivity to a long-time diffusivity at a time of π/ω. The short-time diffusivities are two to three orders of magnitude larger than the diffusivity of a Brownian sphere of the same size, increase linearly with concentration, and scale as v^2/(2ω). The measured long-time diffusivities are five times lower than the short-time diffusivities, scale as v^2/(2D_r[1 + (ω/D_r)^2]), where D_r is the rotational diffusivity, and exhibit a maximum as a function of concentration. The sensitivity of a colloid's velocity and effective diffusivity to its local environment (e.g., fuel concentration) suggests that the motors can accumulate in a bounded system, analogous to biological chemokinesis. Chemokinesis of organisms is the non-uniform equilibrium concentration that arises from a bounded random walk of swimming organisms in a chemical concentration gradient. For non-swimming particles we term this response diffusiokinesis. We show that particles that migrate only by Brownian thermal motion are capable of achieving a non-uniform pseudo-equilibrium distribution in a diffusivity gradient. The concentration profile results from a bounded random-walk process in which, at any given time, a larger percentage of particles can be found in regions of low diffusivity than in regions of high diffusivity.
Individual particles are not trapped in any given region, but at equilibrium the net flux between regions is zero. For Brownian particles, the gradient in diffusivity is achieved by creating a viscosity gradient in a microfluidic device. The distribution of the particles is described by the Fokker-Planck equation for variable diffusivity. The strength of the probe concentration gradient is proportional to the strength of the diffusivity gradient and inversely proportional to the mean probe diffusivity in the channel, in accordance with the no-flux condition at steady state. This suggests that Brownian colloids, natural or synthetic, will concentrate in a bounded system in response to a gradient in diffusivity, and that the magnitude of the response is proportional to the magnitude of the gradient in diffusivity divided by the mean diffusivity in the channel.
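The reported scalings can be made concrete by evaluating the short-time diffusivity v^2/(2ω), the long-time diffusivity v^2/(2D_r[1 + (ω/D_r)^2]), and the crossover time π/ω for illustrative parameter values (the numbers below are invented, chosen so the short-to-long ratio is about the reported five-fold drop):

```python
import numpy as np

# Illustrative parameters (not measured values from the dissertation):
v = 10e-6    # translational speed, m/s
w = 1.0      # rotational speed omega, rad/s
Dr = 0.2     # rotational diffusivity D_r, 1/s

D_short = v**2 / (2 * w)                        # short-time scaling
D_long = v**2 / (2 * Dr * (1 + (w / Dr) ** 2))  # long-time scaling
t_cross = np.pi / w                             # crossover time pi/omega

print(D_short, D_long, t_cross)
print(D_short / D_long)  # ≈ 5.2, close to the reported five-fold drop
```

Note that the ratio D_short/D_long = D_r/ω + ω/D_r depends only on ω/D_r, so the five-fold drop pins down the relative rotation and rotational diffusion rates.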
Contributors: Marine, Nathan Arasmus (Author) / Posner, Jonathan D (Thesis advisor) / Adrian, Ronald J (Committee member) / Frakes, David (Committee member) / Phelan, Patrick E (Committee member) / Santos, Veronica J (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Human operators have difficulty driving cranes quickly, accurately, and safely because of the slow response of heavy crane structures, non-intuitive control interfaces, and payload oscillations. Recently, a novel hand-motion crane control system has been proposed to improve performance by coupling an intuitive control interface with an element that reduces the complex oscillatory behavior of the payload. Hand-motion control allows operators to drive a crane by simply moving a hand-held radio-frequency tag through the desired path. Real-time location sensors track the movements of the tag, and the tag position is used in a feedback control loop to drive the crane. An input shaper is added to eliminate dangerous payload oscillations. However, tag position measurements are corrupted by noise, so it is important to understand the noise properties in order to design filters that mitigate the effects of noise and improve tracking accuracy. This work discusses filtering techniques that address the issue of noise in the operating environment. Five different filters are applied to experimentally acquired tag trajectories to reduce noise, and the filtered trajectories are then used to drive crane simulations. Filter performance is evaluated with respect to the energy usage of the crane trolley, the settling time of the crane payload oscillations, and the safety corridor of the crane trajectory. The effects of filter window lengths on these parameters are also investigated. An adaptive technique, the Kalman filter, adjusts to the noise characteristics of the workspace to minimize the tag tracking error and performs better than the other filtering techniques examined.
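The role of the adaptive filter can be illustrated with a minimal scalar Kalman filter applied to a synthetic noisy tag trajectory (the noise variances and random-walk model below are assumptions for the sketch, not the thesis's identified parameters):

```python
import numpy as np

def kalman_1d(measurements, q=1e-3, r=0.25):
    """Scalar Kalman filter for one coordinate of a slowly moving tag.
    q: process-noise variance (random-walk position model),
    r: measurement-noise variance; both illustrative values."""
    x, p = measurements[0], 1.0    # state estimate and its variance
    out = []
    for z in measurements:
        p = p + q                  # predict: variance grows each step
        k = p / (p + r)            # Kalman gain balances model vs data
        x = x + k * (z - x)        # update with measurement z
        p = (1 - k) * p
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(2)
truth = np.linspace(0.0, 1.0, 200)         # tag moving along one axis
noisy = truth + rng.normal(0, 0.5, 200)    # RF localization noise
smooth = kalman_1d(noisy)
print(np.std(noisy - truth), np.std(smooth - truth))  # filtering reduces error
```

The gain k settles to a small value, so the filter heavily smooths the noisy position stream at the cost of some lag, the same tradeoff the window-length study above examines for the fixed filters.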
Contributors: Ragunathan, Sudarshan (Author) / Frakes, David (Thesis advisor) / Singhose, William (Committee member) / Tillery, Stephen Helms (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Magnetic Resonance Imaging (MRI) is limited in speed and resolution by the inherently low signal-to-noise ratio (SNR) of the underlying signal, so advances in sampling efficiency are required to support future improvements in scan time and resolution. SNR efficiency is improved by sampling data for a larger proportion of the total imaging time. This is challenging because such acquisitions are typically subject to artifacts such as blurring and distortion. The current work proposes a set of tools to help with the creation of different types of SNR-efficient scans. An SNR-efficient pulse sequence providing diffusion imaging data with full brain coverage and minimal distortion is first introduced. The proposed method acquires single-shot, low-resolution image slabs that are then combined to reconstruct the full volume. An iterative deblurring algorithm allowing the lengthening of spiral SPoiled GRadient echo (SPGR) acquisition windows in the presence of rapidly varying off-resonance fields is then presented. Finally, an efficient and practical way of collecting 3D reformatted data is proposed. This method constitutes a good tradeoff between 2D and 3D neuroimaging in terms of scan time and data presentation. These schemes increase the SNR efficiency of existing methods and constitute key enablers for the development of SNR-efficient MRI.
Contributors: Aboussouan, Eric (Author) / Frakes, David (Thesis advisor) / Pipe, James (Thesis advisor) / Debbins, Joseph (Committee member) / Towe, Bruce (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Current treatment methods for cerebral aneurysms provide life-saving measures for patients suffering from these blood vessel wall protrusions; however, they have drawbacks, either in the invasiveness of the procedure or in how efficiently the aneurysm is occluded. With the advancement of medical devices, liquid-to-solid gelling materials that can be delivered endovascularly have gained interest. The development of these systems stems from the need to circumvent surgical methods and the requirement for improved occlusion of aneurysms to prevent recanalization and potential complications. The work presented herein reports on a liquid-to-solid gelling material that undergoes gelation via dual mechanisms. Using a temperature-responsive polymer, poly(N-isopropylacrylamide) (poly(NIPAAm)), the gelling system transitions from a solution at low temperatures to a gel at body temperature (physical gelation). Additionally, by conjugating reactive functional groups onto the polymers, covalent cross-links can be formed via chemical reaction between the two moieties (chemical gelation). The advantages of this gelling system include its water-based properties and the ability of both physical and chemical gelation to occur under physiological conditions. Because the polymer gelling system was developed in a ground-up approach via synthesis, its properties can be modified as needed for particular applications, in this case the embolization of cerebral aneurysms. The studies in this doctoral work cover the synthesis, characterization, and testing of these polymer gelling systems for occlusion of aneurysms. Conducted experiments include thermal, mechanical, structural, and chemical characterization; analyses of swelling, degradation, kinetics, and cytotoxicity; in vitro glass-model studies; and an in vivo swine study.
Data on thermoresponsive poly(NIPAAm) indicated that its phase transition results from association of the polymer chains as temperature increases. Poly(NIPAAm) was functionalized with thiols and vinyls to enable additional chemical cross-linking. By combining both modes of gelation, physical and chemical, a gel with reduced creep flow and increased strength was developed. Being water-based, the gels demonstrated excellent biocompatibility and were easily delivered via catheters and injected within aneurysms without undergoing degradation. The dual-gelling polymer systems demonstrated potential as embolic agents for cerebral aneurysm embolization.
Contributors: Bearat, Hanin H (Author) / Vernon, Brent L (Thesis advisor) / Frakes, David (Committee member) / Massia, Stephen (Committee member) / Pauken, Christine (Committee member) / Preul, Mark (Committee member) / Solis, Francisco (Committee member) / Arizona State University (Publisher)
Created: 2012