Description
Human breath is a concoction of thousands of compounds, carrying in it a breath-print of the body's physiological processes. Though breath provides a non-invasive and easy-to-handle biological fluid, its analysis for clinical diagnosis is not very common, partly because cost-effective and convenient tools for such analysis are unavailable. The scientific literature is full of novel sensor ideas, but working devices are few because development is challenging: trace-level detection, hundreds of interfering compounds, excessive humidity, differing sampling regulations, and personal variability must all be addressed. To meet these challenges while delivering a low-cost solution, optical sensors based on specific colorimetric chemical reactions on mesoporous membranes have been developed. Sensor hardware utilizing a cost-effective and ubiquitously available light source (LED) and detector (webcam/photodiodes) has been developed and optimized for sensitive detection. A sample-conditioning mouthpiece suitable for portable sensors was developed and integrated. The sensors are capable of communicating with mobile phones, realizing the idea of m-health for easy personal health monitoring in free-living conditions. Nitric oxide and acetone were chosen as the analytes of interest. Nitric oxide levels in breath correlate with lung inflammation, making them useful for asthma management. Acetone levels increase during ketosis resulting from fat metabolism in the body. Monitoring breath acetone thus provides useful information to people with type 1 diabetes, epileptic children on ketogenic diets, and people following fitness plans for weight loss.
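As a rough illustration of the colorimetric readout described above, a common approach is to map the sensing membrane's color-intensity change (as seen by a webcam or photodiode) to analyte concentration through a fitted calibration curve. The sketch below uses entirely synthetic numbers; the data, the linear model, and the function names are illustrative assumptions, not the dissertation's actual calibration.

```python
import numpy as np

# Hypothetical calibration data: mean color-channel intensity change of the
# sensing membrane (a.u.) measured at known breath-acetone concentrations (ppm).
concentration_ppm = np.array([0.0, 0.5, 1.0, 2.0, 5.0, 10.0])
intensity_change = np.array([0.00, 0.11, 0.21, 0.40, 1.02, 2.05])

# Fit a linear calibration curve: intensity_change ~ slope * ppm + intercept.
slope, intercept = np.polyfit(concentration_ppm, intensity_change, 1)

def estimate_ppm(delta_intensity: float) -> float:
    """Invert the calibration curve to estimate concentration from a reading."""
    return (delta_intensity - intercept) / slope

print(round(estimate_ppm(0.62), 2))  # about 3 ppm for this synthetic data
```

In a real device the curve may be non-linear and must account for humidity and interferents, which is part of what makes the sensing chemistry above non-trivial.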
Contributors: Prabhakar, Amlendu (Author) / Tao, Nongjian (Thesis advisor) / Forzani, Erica (Committee member) / Lindsay, Stuart (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Image resolution limits the extent to which zooming enhances clarity, restricts the size at which digital photographs can be printed, and, in the context of medical images, can prevent a diagnosis. Interpolation is the supplementing of known data with estimated values based on a function or model involving some or all of the known samples. The selection of the contributing data points, and the specifics of how they are used to define the interpolated values, influence how effectively the interpolation algorithm estimates the underlying, continuous signal. The main contributions of this dissertation are threefold: 1) reframing edge-directed interpolation of a single image as an intensity-based registration problem; 2) providing an analytical framework for intensity-based registration using control grid constraints; and 3) quantitative assessment of the new, single-image enlargement algorithm based on analytical intensity-based registration. In addition to single-image resizing, the new methods and analytical approaches were extended to address a wide range of applications including volumetric (multi-slice) image interpolation, video deinterlacing, motion detection, and atmospheric distortion correction. Overall, the new approaches generate results that more accurately reflect the underlying signals than less computationally demanding approaches, with lower processing requirements and fewer restrictions than methods of comparable accuracy.
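To make the trade-off concrete, the sketch below contrasts two classical, non-adaptive interpolators (nearest-neighbor and bilinear) on a tiny image with a diagonal edge, using `scipy.ndimage.zoom`. This is generic background illustration, not the edge-directed, registration-based method the dissertation proposes.

```python
import numpy as np
from scipy.ndimage import zoom

# A tiny synthetic "image" with a sharp diagonal edge.
img = np.array([[0, 0, 0, 1],
                [0, 0, 1, 1],
                [0, 1, 1, 1],
                [1, 1, 1, 1]], dtype=float)

# Classical, non-adaptive interpolation: order=0 (nearest) and order=1 (bilinear).
up_nearest = zoom(img, 2, order=0)
up_bilinear = zoom(img, 2, order=1)

print(up_nearest.shape, up_bilinear.shape)  # (8, 8) (8, 8)
# Nearest keeps hard 0/1 steps; bilinear introduces intermediate values along
# the edge — the blurring that edge-directed methods aim to avoid.
```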
Contributors: Zwart, Christine M. (Author) / Frakes, David H. (Thesis advisor) / Karam, Lina (Committee member) / Kodibagkar, Vikram (Committee member) / Spanias, Andreas (Committee member) / Towe, Bruce (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Effective modeling of high-dimensional data is crucial in information processing and machine learning. Classical subspace methods have been very effective in such applications. However, over the past few decades, there has been considerable research toward the development of new modeling paradigms that go beyond subspace methods. This dissertation focuses on the study of sparse models and their interplay with modern machine learning techniques such as manifold, ensemble, and graph-based methods, along with their applications in image analysis and recovery. By considering graph relations between data samples while learning sparse models, graph-embedded codes can be obtained for use in unsupervised, supervised, and semi-supervised problems. Using experiments on standard datasets, it is demonstrated that the codes obtained from the proposed methods outperform several baseline algorithms. To facilitate sparse learning with large-scale data, the paradigm of ensemble sparse coding is proposed, and different strategies for constructing weak base models are developed. Experiments with image recovery and clustering demonstrate that these ensemble models perform better than conventional sparse coding frameworks. When examples from the data manifold are available, manifold constraints can be incorporated into sparse models, and two approaches are proposed to combine sparse coding with manifold projection. The improved performance of the proposed techniques in comparison to sparse coding approaches is demonstrated using several image recovery experiments. In addition, some applications require combining multiple sparse models with different regularizations. In particular, combining an unconstrained sparse model with non-negative sparse coding is important in image analysis, and it poses several algorithmic and theoretical challenges. A convex formulation and an efficient greedy algorithm for recovering combined representations are proposed. Theoretical guarantees on sparsity thresholds for exact recovery using these algorithms are derived, and recovery performance is demonstrated using simulations on synthetic data. Finally, the problem of non-linear compressive sensing, where the measurement process is carried out in a feature space obtained using non-linear transformations, is considered. An optimized non-linear measurement system is proposed, and improvements in recovery performance are demonstrated in comparison to random measurements as well as optimized linear measurements.
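For readers unfamiliar with greedy sparse recovery, the sketch below implements textbook Orthogonal Matching Pursuit (OMP) on synthetic data. It is a generic baseline of the kind the dissertation's combined-representation algorithms build on, not the proposed method itself.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x with y ~ A @ x."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # Pick the dictionary column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # Re-fit coefficients on the chosen support by least squares.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 128))
A /= np.linalg.norm(A, axis=0)          # unit-norm dictionary atoms
x_true = np.zeros(128)
x_true[[5, 40, 77]] = [1.5, -2.0, 0.8]  # a 3-sparse signal
y = A @ x_true

x_hat = omp(A, y, k=3)
print(np.allclose(x_hat, x_true, atol=1e-6))  # True
```

Exact recovery here follows from the low coherence of a random Gaussian dictionary; the sparsity-threshold guarantees mentioned above make conditions of this kind precise for the combined-model setting.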
Contributors: Natesan Ramamurthy, Karthikeyan (Author) / Spanias, Andreas (Thesis advisor) / Tsakalis, Konstantinos (Committee member) / Karam, Lina (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
High-temperature CO2 perm-selective membranes offer potential for use in various CO2 separation processes. Recently, efforts have been reported on the fabrication of dense ceramic-carbonate dual-phase membranes. These membranes permeate CO2 selectively and exhibit high permeation flux at high temperature. Research on the transport mechanism demonstrates that gas transport through a ceramic-carbonate dual-phase membrane is rate-limited by ion transport in the ceramic support, and reducing membrane thickness proves effective in improving permeation flux. This dissertation reports a strategy for preparing thin ceramic-carbonate dual-phase membranes to increase CO2 permeance, and presents the characteristics and gas permeation properties of the membranes. A thin ceramic-carbonate dual-phase membrane was constructed on an asymmetric porous support consisting of a thin, small-pore ionic-conducting ceramic top layer and a large-pore base support. The base support must be carbonate non-wettable to ensure formation of a supported dense, thin membrane. A macroporous yttria-stabilized zirconia (YSZ) layer was prepared on a large-pore Bi1.5Y0.3Sm0.2O3-δ (BYS) base support using a suspension coating method. A thin YSZ-carbonate dual-phase membrane (d-YSZ/BYS) was prepared by directly infiltrating Li/Na/K carbonate mixtures into the top YSZ layer. The 10 μm thick membrane offered a CO2 flux 5-10 times higher than the thick dual-phase membranes. Ce0.8Sm0.2O1.9 (SDC) exhibited the highest CO2 flux and long-term stability and was chosen as the ceramic support for membrane performance improvement. Porous SDC layers were co-pressed on base supports using SDC and BYS powder mixtures, which provided better sintering compatibility and carbonate non-wettability. A thin SDC-carbonate dual-phase membrane (d-SDC/SDC60BYS40) 150 μm thick was synthesized on SDC60BYS40. The CO2 permeation flux of d-SDC/SDC60BYS40 increased with temperature and partial pressure gradient, and was higher than that of other SDC-based dual-phase membranes. Reducing membrane thickness thus proves effective in increasing the CO2 permeation flux of the dual-phase membrane.
Contributors: Lu, Bo (Author) / Lin, Yuesheng (Thesis advisor) / Crozier, Peter (Committee member) / Herrmann, Marcus (Committee member) / Forzani, Erica (Committee member) / Lind, Mary Laura (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
A new photocatalytic material was synthesized to investigate its performance for the photoreduction of carbon dioxide (CO2) in the presence of water vapor (H2O) to valuable products such as carbon monoxide (CO) and methane (CH4). The performance was studied using a gas chromatograph (GC) with a flame ionization detector (FID) and a thermal conductivity detector (TCD). The new photocatalytic material was an ionic-liquid-functionalized reduced graphite oxide (IL-RGO, a highly conductive surface)-TiO2 (photocatalyst) nanocomposite. Brunauer-Emmett-Teller (BET), X-ray photoelectron spectroscopy (XPS), Raman spectroscopy, and UV-vis absorption spectroscopy techniques were employed to characterize the new catalyst. In the series of experiments performed, the nanocomposite material was confined in a UV-quartz batch reactor, exposed to CO2 and H2O, and illuminated by UV light. The primary product formed was CO, with a maximum production ranging from 0.18-1.02 µmol(gcatalyst-hour)-1 for TiO2 and 0.41-1.41 µmol(gcatalyst-hour)-1 for IL-RGO-TiO2. A trace amount of CH4 was also formed, with its maximum ranging from 0.009-0.01 µmol(gcatalyst-hour)-1 for TiO2 and 0.01-0.04 µmol(gcatalyst-hour)-1 for IL-RGO-TiO2. A series of background experiments showed that: (a) the ionic-liquid-functionalized reduced graphite oxide-TiO2 produced more products than commercial TiO2; (b) the addition of methanol as a hole scavenger boosted the production of CO but not CH4; (c) reduction times of IL-RGO higher or lower than the usual 24 hours gave essentially the same production of CO and CH4; (d) the positive effect of the ionic liquid was demonstrated by the doubling of CO production for IL-RGO-TiO2 compared to RGO-TiO2; and (e) changing the amount of IL-RGO in the IL-RGO-TiO2 made a small difference in CO production but not in CH4 production.
This work ultimately demonstrated the great potential of a UV-responsive ionic-liquid-functionalized reduced graphite oxide-TiO2 nanocomposite for the reduction of CO2 in the presence of H2O to produce fuels.
Contributors: Castañeda Flores, Alejandro (Author) / Andino, Jean M. (Thesis advisor) / Forzani, Erica (Committee member) / Torres, Cesar (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Three-dimensional (3-D) ultrasound is safe, inexpensive, and has been shown to drastically improve system ease-of-use, diagnostic efficiency, and patient throughput. However, its high computational complexity and resulting high power consumption have precluded its use in hand-held applications.

In this dissertation, algorithm-architecture co-design techniques that aim to make hand-held 3-D ultrasound a reality are presented. First, image enhancement methods to improve signal-to-noise ratio (SNR) are proposed. These include virtual source firing techniques and a low overhead digital front-end architecture using orthogonal chirps and orthogonal Golay codes.

Second, algorithm-architecture co-design techniques to reduce the power consumption of 3-D SAU imaging systems are presented. These include (i) a subaperture multiplexing strategy and the corresponding apodization method to alleviate the signal bandwidth bottleneck, and (ii) a highly efficient iterative delay calculation method that eliminates complex operations such as multiplications, divisions, and square roots in delay calculation during beamforming. These techniques were used to define Sonic Millip3De, a 3-D die-stacked architecture for digital beamforming in SAU systems. Sonic Millip3De produces high-resolution 3-D images at 2 frames per second with a system power consumption of 15 W in 45 nm technology.

Third, a new beamforming method based on separable delay decomposition is proposed to reduce the computational complexity of the beamforming unit in an SAU system. The method is based on minimizing the root-mean-square error (RMSE) due to delay decomposition. It reduces the beamforming complexity of an SAU system by 19x while providing image fidelity comparable to non-separable beamforming. The resulting modified Sonic Millip3De architecture supports a frame rate of 32 volumes per second while maintaining a power consumption of 15 W in 45 nm technology.
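As background for the delay-calculation discussion, the sketch below shows the direct (non-iterative) computation of two-way delay-and-sum delays for a linear array, which requires a square root per element per focal point; the iterative schemes described above exist precisely to avoid this cost. The geometry, array parameters, and function names here are illustrative assumptions.

```python
import numpy as np

SPEED_OF_SOUND = 1540.0  # m/s, a typical soft-tissue value

def das_delays(element_x, focus):
    """Two-way delay-and-sum delays (s) for a linear array on the x-axis,
    with a single transmit event fired from the array origin.

    The direct computation below needs one square root per element per
    focal point — the per-voxel cost that iterative delay updates remove.
    """
    fx, fz = focus
    tx_dist = np.hypot(fx, fz)              # transmit path: origin -> focal point
    rx_dist = np.hypot(element_x - fx, fz)  # receive path: focal point -> element
    return (tx_dist + rx_dist) / SPEED_OF_SOUND

elements = np.linspace(-0.01, 0.01, 64)          # 64 elements over a 2 cm aperture
delays = das_delays(elements, focus=(0.0, 0.03)) # focal point 3 cm deep, on-axis
print(delays.shape)  # (64,)
```

For an on-axis focus the delays are smallest at the aperture center and grow toward the edges, which is what the beamformer's apodization and delay logic must track per sample.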

Next, a 3-D plane-wave imaging system that utilizes both separable beamforming and coherent compounding is presented. The resulting system has computational complexity comparable to that of a non-separable, non-compounding baseline system while significantly improving contrast-to-noise ratio and SNR. The modified Sonic Millip3De architecture is capable of generating high-resolution images at 1000 volumes per second with 9-fire-angle compounding.
Contributors: Yang, Ming (Author) / Chakrabarti, Chaitali (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Karam, Lina (Committee member) / Frakes, David (Committee member) / Ogras, Umit Y. (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Control engineering offers a systematic and efficient approach to optimizing the effectiveness of individually tailored treatment and prevention policies, also known as adaptive or "just-in-time" behavioral interventions. These types of interventions represent promising strategies for addressing many significant public health concerns. This dissertation explores the development of decision algorithms for adaptive sequential behavioral interventions using dynamical systems modeling, control engineering principles, and formal optimization methods. A novel gestational weight gain (GWG) intervention involving multiple intervention components and featuring a pre-defined, clinically relevant set of sequence rules serves as an excellent example of a sequential behavioral intervention; it is examined in detail in this research.

A comprehensive dynamical systems model for the GWG behavioral interventions is developed, demonstrating how to integrate a mechanistic energy balance model with dynamical formulations of behavioral models such as the Theory of Planned Behavior and self-regulation. Self-regulation is further improved with different advanced controller formulations. These model-based controller approaches give the user significant flexibility in describing a participant's self-regulatory behavior through the tuning of adjustable controller parameters. The dynamic simulation model demonstrates proof of concept for how self-regulation and adaptive interventions influence GWG, shows how intra-individual and inter-individual variability play a critical role in determining intervention outcomes, and supports the evaluation of decision rules.
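To give a flavor of what a mechanistic energy-balance core looks like, the sketch below Euler-steps a deliberately oversimplified model in which daily weight change is proportional to the surplus of intake over expenditure, using the commonly cited ~7700 kcal per kg energy density of body-mass change. All parameter values are illustrative assumptions, and this is far simpler than the dissertation's model.

```python
# A deliberately simplified energy-balance sketch (not the dissertation's model):
# daily weight change is proportional to the intake-expenditure surplus.
KCAL_PER_KG = 7700.0  # commonly cited energy density of body-mass change

def simulate_weight(w0_kg, intake_kcal_per_day, days, activity_kcal_per_kg=30.0):
    """Euler-step a first-order energy-balance model, one step per day."""
    w = w0_kg
    trajectory = [w]
    for _ in range(days):
        expenditure = activity_kcal_per_kg * w  # crude expenditure ~ body weight
        w += (intake_kcal_per_day - expenditure) / KCAL_PER_KG
        trajectory.append(w)
    return trajectory

# ~40 weeks at a modest caloric surplus from an assumed 65 kg starting weight.
traj = simulate_weight(w0_kg=65.0, intake_kcal_per_day=2300.0, days=280)
print(round(traj[-1] - traj[0], 1))  # net gain in kg over the simulated pregnancy
```

Even this toy model shows the closed-loop structure the intervention exploits: weight approaches the equilibrium set by intake, so adjusting intake (the manipulated input) steers the GWG trajectory.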

Furthermore, a novel intervention decision paradigm using a hybrid model predictive control framework is developed to generate sequential decision policies in the closed loop. Clinical considerations are systematically taken into account through a user-specified dosage sequence table corresponding to the sequence rules, constraints enforcing the adjustment of one input at a time, and a switching-time strategy accounting for the difference in frequency between intervention decision points and sampling intervals. Simulation studies illustrate the potential usefulness of the intervention framework.

The final part of the dissertation presents a model scheduling strategy relying on gain scheduling to address nonlinearities in the model, and introduces a cascade filter design for a dual-rate control system to address scenarios with variable sampling rates. These extensions are important for addressing real-life scenarios in the GWG intervention.
Contributors: Dong, Yuwen (Author) / Rivera, Daniel E. (Thesis advisor) / Dai, Lenore (Committee member) / Forzani, Erica (Committee member) / Rege, Kaushal (Committee member) / Si, Jennie (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
We report the synthesis of a novel boronic acid-containing metal-organic framework (MOF), prepared via solvothermal reaction of cobalt nitrate with 3,5-dicarboxyphenylboronic acid (3,5-DCPBC). Powder X-ray diffraction and BET surface area analysis were used to verify the successful synthesis of this microporous material.

We also attempted to use zinc nitrate and copper nitrate as metal sources to synthesize boronic acid-containing MOFs; however, these attempts were unsuccessful. A possible reason is that the copper and zinc ions catalyzed the decomposition of 3,5-dicarboxyphenylboronic acid, forming isophthalic acid. The end product was confirmed to be isophthalic acid crystals by single-crystal X-ray diffraction. The effects of solvents, reaction temperature, and added bases were investigated. The addition of triethylamine was shown to tremendously improve sample crystallinity by facilitating ligand deprotonation.
Contributors: Yu, Jiuhao (Author) / Mu, Bin (Thesis advisor) / Forzani, Erica (Committee member) / Nielsen, David (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Fisheye cameras are special cameras that have a much larger field of view than conventional cameras. The large field of view comes at the price of non-linear distortions introduced near the boundaries of the images captured by such cameras. Despite this drawback, they are being used increasingly in computer vision, robotics, reconnaissance, astrophotography, surveillance, and automotive applications.

The images captured by such cameras can be corrected for distortion if the cameras are calibrated and the distortion function is determined. Calibration also allows fisheye cameras to be used in tasks involving metric scene measurement, metric scene reconstruction, and other simultaneous localization and mapping (SLAM) algorithms.

This thesis presents a calibration toolbox (FisheyeCDC Toolbox) that implements a collection of some of the most widely used techniques for calibrating fisheye cameras in one package. This enables an inexperienced user to calibrate his/her own camera without a theoretical understanding of computer vision and camera calibration. The thesis also explores applications of calibration such as distortion correction and 3D reconstruction.
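As a minimal illustration of what a calibrated distortion function enables, the sketch below undistorts a single point under the ideal equidistant fisheye model (r = f·θ), mapping it to the radius a pinhole (perspective) camera would produce (r = f·tan θ). The focal length and coordinates are assumed values; real toolboxes such as FisheyeCDC fit richer, calibrated models.

```python
import math

def fisheye_to_perspective(xd, yd, f):
    """Map a point from an ideal equidistant fisheye image (r = f*theta)
    to its undistorted perspective position (r = f*tan(theta)).

    Coordinates are relative to the principal point; f is the focal
    length in pixels (an assumed, already-calibrated value here).
    """
    r_d = math.hypot(xd, yd)
    if r_d == 0.0:
        return (0.0, 0.0)
    theta = r_d / f            # incidence angle implied by the fisheye radius
    r_u = f * math.tan(theta)  # radius the pinhole model would produce
    scale = r_u / r_d
    return (xd * scale, yd * scale)

# A point near the image border moves radially outward when undistorted:
x, y = fisheye_to_perspective(400.0, 300.0, f=600.0)
print(round(x, 1), round(y, 1))
```

Note that the correction is purely radial: the point's direction from the principal point is preserved, only its distance changes, which is why straight lines bowed by the fisheye become straight again.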
Contributors: Kashyap Takmul Purushothama Raju, Vinay (Author) / Karam, Lina (Thesis advisor) / Turaga, Pavan (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
In this thesis we consider the problem of facial expression recognition (FER) from video sequences. Our method is based on subspace representations and Grassmann manifold-based learning. We use Local Binary Patterns (LBP) at the frame level to represent the facial features. Next, we develop a model to represent the video sequence in a lower-dimensional expression subspace and also as a linear dynamical system using an autoregressive moving average (ARMA) model. As these subspaces lie on a Grassmann manifold, we use Grassmann manifold-based learning techniques, such as kernel Fisher discriminant analysis (kernel-FDA) with Grassmann kernels, for classification. We consider six expressions, namely Angry (AN), Disgust (Di), Fear (Fe), Happy (Ha), Sadness (Sa), and Surprise (Su). We perform experiments on the extended Cohn-Kanade (CK+) facial expression database to evaluate expression recognition performance. Our method demonstrates good expression recognition performance, outperforming other state-of-the-art FER algorithms. We achieve an average recognition accuracy of 97.41% using a method based on the expression subspace, kernel-FDA, and a Support Vector Machine (SVM) classifier. Using a simpler classifier, 1-Nearest Neighbor (1-NN), along with kernel-FDA, we achieve a recognition accuracy of 97.09%. We find that to process a group of 19 frames in a video sequence, LBP feature extraction requires the majority of the computation time (97%), which is about 1.662 seconds on an Intel Core i3 dual-core platform. However, when only 3 frames (onset, middle, and peak) of a video sequence are used, the computation time is reduced by about 83.75% to 260 milliseconds, at the expense of a drop in recognition accuracy to 92.88%.
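For reference, the sketch below computes a basic 3x3 Local Binary Pattern of the kind used for the frame-level features above. The neighbor ordering is one common convention, chosen here as an assumption; practical FER pipelines typically histogram these codes over image blocks.

```python
import numpy as np

def lbp_8(img):
    """Basic 3x3 Local Binary Pattern: each interior pixel becomes an 8-bit
    code recording which neighbors are >= the center (fixed clockwise order)."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    # Neighbor offsets, clockwise from top-left; bit i weights neighbor i.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    center = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbor >= center).astype(np.uint8) << bit
    return codes

frame = np.array([[5, 2, 7],
                  [1, 4, 6],
                  [3, 8, 0]])
print(lbp_8(frame))  # [[45]] — bits set for top-left, top-right, right, bottom
```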
Contributors: Yellamraju, Anirudh (Author) / Chakrabarti, Chaitali (Thesis advisor) / Turaga, Pavan (Thesis advisor) / Karam, Lina (Committee member) / Arizona State University (Publisher)
Created: 2014