Matching Items (34)
Description

Compressed sensing (CS) is a novel approach to collecting and analyzing data of all types. By exploiting prior knowledge of the compressibility of many naturally-occurring signals, specially designed sensors can dramatically undersample the data of interest and still achieve high performance. However, the generated data are pseudorandomly mixed and must be processed before use. In this work, a model of a single-pixel compressive video camera is used to explore the problems of performing inference based on these undersampled measurements. Three broad types of inference from CS measurements are considered: recovery of video frames, target tracking, and object classification/detection. Potential applications include automated surveillance, autonomous navigation, and medical imaging and diagnosis.
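
As context for the measurement model described above, here is a minimal sketch, with hypothetical sizes and a generic pseudorandom ±1 sensing matrix, of how an undersampled CS measurement is formed and how a sparse signal can be recovered from it by iterative soft thresholding; the thesis's actual single-pixel camera model and solvers are not shown in this abstract.

```
# Sketch: compressive measurement of a sparse signal and l1-based recovery.
# Sizes and the +/-1 sensing matrix are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8                 # signal length, measurements, sparsity

x_true = np.zeros(n)                 # a k-sparse test signal
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)   # pseudorandom mixing
y = Phi @ x_true                     # undersampled measurements (m << n)

# Recover with ISTA (iterative soft thresholding) for the l1-regularized problem.
lam = 0.01
step = 1.0 / np.linalg.norm(Phi, 2) ** 2
x = np.zeros(n)
for _ in range(500):
    g = Phi.T @ (Phi @ x - y)        # gradient of the data-fidelity term
    z = x - step * g
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```
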



Recovery of CS video frames is far more complex than recovery of still images, which are known to be (approximately) sparse in a linear basis such as the discrete cosine transform. By combining sparsity of individual frames with an optical flow-based model of inter-frame dependence, the perceptual quality and peak signal-to-noise ratio (PSNR) of reconstructed frames are improved. The efficacy of this approach is demonstrated for the cases of a priori known image motion and unknown but constant image-wide motion.
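
A toy illustration of the inter-frame idea, assuming a constant image-wide shift (one of the two cases mentioned above) and synthetic frame data; it also shows the PSNR metric used to report reconstruction quality. The thesis's optical-flow model is more general than a global np.roll.

```
# Sketch: motion-compensated prediction under an assumed constant global shift,
# plus the PSNR metric. Frame data and the shift are hypothetical.
import numpy as np

def psnr(ref, test, peak=255.0):
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
frame_prev = rng.integers(0, 256, size=(64, 64)).astype(float)
dy, dx = 2, -1                                   # assumed constant global motion
frame_curr = np.roll(frame_prev, shift=(dy, dx), axis=(0, 1)) \
             + rng.normal(0, 2, size=(64, 64))   # new frame = shifted + noise

prediction = np.roll(frame_prev, shift=(dy, dx), axis=(0, 1))
residual = frame_curr - prediction               # sparse when the motion model holds
print("PSNR of motion-compensated prediction:", psnr(frame_curr, prediction))
```
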



Although video sequences can be reconstructed from CS measurements, the process is computationally costly. In autonomous systems, this reconstruction step is unnecessary if higher-level conclusions can be drawn directly from the CS data. A tracking algorithm is described and evaluated that can hold target vehicles at very high levels of compression, where reconstruction of video frames fails. The algorithm performs tracking by detection using a particle filter whose likelihood is given by a maximum average correlation height (MACH) target template model.
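
A rough sketch of tracking by detection with a particle filter, where a plain normalized correlation score stands in for the MACH filter response; particle counts, noise levels, and the likelihood scaling below are illustrative assumptions.

```
# Sketch: one particle-filter step with a correlation-based likelihood.
import numpy as np

def correlation_score(frame, template, cy, cx):
    h, w = template.shape
    top, left = int(cy - h // 2), int(cx - w // 2)
    patch = frame[top:top + h, left:left + w]
    if patch.shape != template.shape:            # particle fell outside the frame
        return 0.0
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-9
    return float(a.ravel() @ b.ravel() / denom)

def particle_filter_step(particles, weights, frame, template, rng, sigma=2.0):
    particles = particles + rng.normal(0, sigma, particles.shape)    # motion model
    scores = np.array([correlation_score(frame, template, cy, cx)
                       for cy, cx in particles])
    weights = weights * np.exp(5.0 * scores)                         # likelihood
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights) # resample
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Hypothetical usage: rng = np.random.default_rng(0);
# particles = rng.uniform(8, 56, size=(200, 2)); weights = np.full(200, 1 / 200);
# then call particle_filter_step once per video frame.
```
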



Motivated by possible improvements over the MACH filter-based likelihood estimation of the tracking algorithm, the application of deep learning models to detection and classification of compressively sensed images is explored. In tests, a Deep Boltzmann Machine trained on CS measurements outperforms a naive reconstruct-first approach.



Taken together, progress in these three areas of CS inference has the potential to lower system cost and improve performance, opening up new applications of CS video cameras.
Contributors: Braun, Henry Carlton (Author) / Turaga, Pavan K (Thesis advisor) / Spanias, Andreas S (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Berisha, Visar (Committee member) / Arizona State University (Publisher)
Created: 2016
Description

Monitoring vital physiological signals, such as heart rate, blood pressure, and breathing pattern, is a basic requirement in the diagnosis and management of various diseases. Traditionally, these signals are measured only in hospital and clinical settings. An important recent trend is the development of portable devices for tracking these physiological signals non-invasively by using optical methods. These portable devices, when combined with cell phones, tablets, or other mobile devices, provide a new opportunity for everyone to monitor one's vital signs outside the clinic.

This thesis work develops camera-based systems and algorithms to monitor several physiological waveforms and parameters, without having to bring the sensors into contact with a subject. Based on skin color change, the photoplethysmogram (PPG) waveform is recorded, from which heart rate and pulse transit time are obtained. Using a dual-wavelength illumination and triggered camera control system, the blood oxygen saturation level is captured. By monitoring shoulder movement with a differential image processing method, respiratory information is acquired, including breathing rate and breathing volume. The ballistocardiogram (BCG) is obtained based on facial feature detection and motion tracking. Blood pressure is further calculated from the simultaneously recorded PPG and BCG, based on the time difference between these two waveforms.
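
A simplified sketch of two of the measurements described above, heart rate from a PPG trace and pulse transit time from the PPG-BCG time difference, using simulated signals in place of the camera-derived waveforms.

```
# Sketch: heart rate from a PPG trace and transit time as the PPG-BCG lag.
# The signals here are synthetic stand-ins for camera-derived waveforms.
import numpy as np

rng = np.random.default_rng(0)
fs = 30.0                                   # assumed camera frame rate, Hz
t = np.arange(0, 30, 1 / fs)
hr_hz = 1.2                                 # 72 beats per minute
ppg = np.sin(2 * np.pi * hr_hz * t) + 0.1 * rng.standard_normal(t.size)
bcg = np.sin(2 * np.pi * hr_hz * (t + 0.15)) + 0.1 * rng.standard_normal(t.size)

# Heart rate: dominant PPG frequency in a plausible band (0.7-3 Hz).
freqs = np.fft.rfftfreq(t.size, 1 / fs)
spectrum = np.abs(np.fft.rfft(ppg - ppg.mean()))
band = (freqs > 0.7) & (freqs < 3.0)
print("heart rate (bpm):", 60 * freqs[band][np.argmax(spectrum[band])])

# Transit time: lag that maximizes the BCG-PPG cross-correlation.
xcorr = np.correlate(ppg - ppg.mean(), bcg - bcg.mean(), mode="full")
lag = np.argmax(xcorr) - (t.size - 1)
print("transit-time estimate (s):", lag / fs)
```
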

The developed methods have been validated by comparisons against reference devices and through pilot studies. All of the aforementioned measurements are conducted without any physical contact between sensors and subjects. The work presented herein provides alternative solutions to track one's health and wellness under normal living conditions.
Contributors: Shao, Dangdang (Author) / Tao, Nongjian (Thesis advisor) / Li, Baoxin (Committee member) / Hekler, Eric (Committee member) / Karam, Lina (Committee member) / Arizona State University (Publisher)
Created: 2016
Description

Imaging technologies such as Magnetic Resonance Imaging (MRI) and Synthetic Aperture Radar (SAR) collect Fourier data and then process the data to form images. Because images are piecewise smooth, the Fourier partial sum (i.e. direct inversion of the Fourier data) yields a poor approximation, with spurious oscillations forming at the interior edges of the image and reduced accuracy overall. This is the well-known Gibbs phenomenon, and many attempts have been made to rectify its effects. Previous algorithms exploited the sparsity of edges in the underlying image as a constraint with which to optimize for a solution with reduced spurious oscillations. While these sparsity-enforcing algorithms are fairly effective, they are sensitive to several issues, including undersampling and noise. Because of the piecewise nature of the underlying image, we theorize that projecting the solution onto the wavelet basis would increase the overall accuracy. Thus in this investigation we develop an algorithm that continues to exploit the sparsity of edges in the underlying image while also seeking to represent the solution using the wavelet rather than the Fourier basis. Our method successfully decreases the effect of the Gibbs phenomenon and provides a good approximation of the underlying image. The primary advantages of our method are its robustness to undersampling and to perturbations in the optimization parameters.
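
The Gibbs phenomenon described above can be demonstrated in a few lines; the sketch below truncates the Fourier series of a step function and measures the overshoot near the jump (the thesis's edge-sparsity and wavelet-based reconstruction is not reproduced here).

```
# Sketch: Gibbs oscillations from a truncated Fourier (partial sum) reconstruction
# of a step function. Grid size and mode count are illustrative.
import numpy as np

n = 1024
x = np.linspace(-np.pi, np.pi, n, endpoint=False)
f = np.where(x < 0, -1.0, 1.0)                   # piecewise-constant test signal

coeffs = np.fft.fft(f)
modes = 32                                       # keep only the lowest modes
mask = np.zeros(n, dtype=bool)
mask[:modes + 1] = True
mask[-modes:] = True
partial_sum = np.real(np.fft.ifft(np.where(mask, coeffs, 0)))

overshoot = partial_sum.max() - f.max()
print(f"overshoot near the jump: {overshoot:.3f}")   # about 9% of the jump of 2
```
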
Contributors: Fan, Jingjing (Co-author) / Mead, Ryan (Co-author) / Gelb, Anne (Thesis director) / Platte, Rodrigo (Committee member) / Archibald, Richard (Committee member) / School of Music (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2015-12
Description

This creative project is an extension of the work being done as part of Senior Design in developing the See-Through Car Pillar, a system designed to render the forward car pillars in a car invisible to the driver so they can have an unobstructed view, utilizing displays, sensors, and a computer. The first half of the paper provides the motivation, design, and progress of the project, while the latter half provides a literature survey on current automobile trends, the viability of the See-Through Car Pillar as a product in the market through case studies, and alternative designs and technologies that also might address the problem statement.

Contributors: Roy, Delwyn J (Author) / Thornton, Trevor (Thesis director) / Kozicki, Michael (Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

The Fourier representation of a signal or image is equivalent to its native representation in the sense that the signal or image can be reconstructed exactly from its Fourier transform. The Fourier transform is generally complex-valued, and each value of the Fourier spectrum thus possesses both magnitude and phase. Degradation of signals and images when Fourier phase information is lost or corrupted has been studied extensively in the signal processing research literature, as has reconstruction of signals and images using only Fourier magnitude information. This thesis focuses on the case of images, where it examines the visual effect of quantifiable levels of Fourier phase loss and, in particular, studies the merits of introducing varying degrees of phase information in a classical iterative algorithm for reconstructing an image from its Fourier magnitude.
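
A minimal sketch of a classical error-reduction (Gerchberg-Saxton-style) iteration of the kind referenced above, in which the Fourier magnitude is enforced at every step and the true phase is additionally injected on a chosen fraction of frequencies; the exact algorithm and image-domain constraints studied in the thesis may differ.

```
# Sketch: magnitude-only reconstruction with partial phase information.
import numpy as np

def reconstruct(magnitude, known_phase, known_mask, iters=200, rng=None):
    """magnitude: |F(img)|; known_phase: angle(F(img)); known_mask: bool array
    marking frequencies whose phase is treated as known."""
    rng = rng or np.random.default_rng(0)
    phase = rng.uniform(-np.pi, np.pi, magnitude.shape)      # random initial phase
    for _ in range(iters):
        phase = np.where(known_mask, known_phase, phase)     # inject known phase
        img = np.real(np.fft.ifft2(magnitude * np.exp(1j * phase)))
        img = np.clip(img, 0, None)                          # image-domain prior
        phase = np.angle(np.fft.fft2(img))                   # keep the new phase
    return img

# Hypothetical usage with an image `x` whose values lie in [0, 1]:
# F = np.fft.fft2(x)
# mask = np.random.default_rng(1).random(F.shape) < 0.25     # 25% of phases known
# x_hat = reconstruct(np.abs(F), np.angle(F), mask)
```
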

Contributors: Shi, Yiting (Author) / Cochran, Douglas (Thesis director) / Jones, Scott (Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

Now that home security systems are readily available at a low cost, these systems are commonly being installed to watch over homes and loved ones. These systems are fairly easy to install and can provide 4K Ultra HD resolution. The user can configure the sensitivity and areas to monitor and receive object detection notifications. Unfortunately, once the customer starts to use the system, they often find that the notifications are overwhelming and soon turn them off. After hearing the same experience from multiple friends and family, I thought it would be a good topic for my thesis. I examined a top-selling security system sold at a bulk retail store and implemented improved detection techniques that advance object detection and reduce false notifications. The additional algorithms support processing of both near real-time streams and saved video files, which existing security systems do not include.
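
The abstract does not detail the added algorithms; as one generic illustration of a false-alarm reduction step that works on either a stream or a saved file, the sketch below (using OpenCV background subtraction, an assumption rather than the thesis's method) only raises a notification when motion persists across several consecutive frames.

```
# Sketch: suppress one-frame false alarms by requiring sustained motion.
import cv2

def notify_on_persistent_motion(source, min_area=1500, persist_frames=5):
    cap = cv2.VideoCapture(source)               # file path, stream URL, or camera id
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    streak = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        moving = cv2.countNonZero(mask) > min_area
        streak = streak + 1 if moving else 0     # consecutive motion frames
        if streak == persist_frames:             # fire once per sustained event
            print("notification: sustained motion detected")
    cap.release()
```
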
Contributors: Bustillos, Adriana (Author) / Meuth, Ryan (Thesis director) / Nakamura, Mutsumi (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2022-05
Description

Recent satellite and remote sensing innovations have led to an eruption in the amount and variety of geospatial ice data available to the public, permitting in-depth study of high-definition ice imagery and digital elevation models (DEMs) for the goal of safe maritime navigation and climate monitoring. Few researchers have investigated texture in optical imagery as a predictive measure of Arctic sea ice thickness due to its cloud pollution, uniformity, and lack of distinct features, which make it incompatible with standard feature descriptors. Thus, this paper implements three suitable ice texture metrics on 1640 Arctic sea ice image patches, namely (1) variance pooling, (2) gray-level co-occurrence matrices (GLCMs), and (3) textons, to assess the feasibility of a texture-based ice thickness regression model. Results indicate that of all texture metrics studied, only one GLCM statistic, namely homogeneity, bore any correlation (0.15) to ice freeboard.
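
A sketch of the GLCM homogeneity computation and its correlation with freeboard, assuming 8-bit grayscale patches and scikit-image >= 0.19; the patch and freeboard arrays below are random placeholders, not the 1640-patch dataset.

```
# Sketch: GLCM homogeneity per patch, correlated against ice freeboard.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def patch_homogeneity(patch):
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return graycoprops(glcm, "homogeneity").mean()

rng = np.random.default_rng(0)
patches = rng.integers(0, 256, size=(50, 64, 64), dtype=np.uint8)  # stand-ins
freeboard = rng.normal(0.3, 0.1, size=50)                          # stand-ins

h = np.array([patch_homogeneity(p) for p in patches])
print("Pearson correlation with freeboard:", np.corrcoef(h, freeboard)[0, 1])
```
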
Contributors: Warner, Hailey (Author) / Cochran, Douglas (Thesis director) / Jayasuria, Suren (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Electrical Engineering Program (Contributor)
Created: 2024-05
Description

In this work, we present approximate adders and multipliers to reduce the data-path complexity of specialized hardware for various image processing systems. These approximate circuits have lower area, latency, and power consumption compared to their accurate counterparts and produce fairly accurate results. We build upon the work on approximate adders and multipliers presented in [23] and [24]. First, we show how the choice of algorithm and parallel adder design can be used to implement a 2D Discrete Cosine Transform (DCT) algorithm with good performance but low area. Our implementation of the 2D DCT has comparable PSNR performance with respect to the algorithm presented in [23], with a ~35-50% reduction in area. Next, we use the approximate 2x2 multiplier presented in [24] to implement parallel approximate multipliers. We demonstrate that if some of the 2x2 multipliers in the design of the parallel multiplier are accurate, the accuracy of the multiplier improves significantly, especially when two large numbers are multiplied. We choose the Gaussian FIR Filter and Fast Fourier Transform (FFT) algorithms to illustrate the efficacy of our proposed approximate multiplier. We show that application of the proposed approximate multiplier improves the PSNR performance of a 32x32 FFT implementation by 4.7 dB compared to the implementation using the approximate multiplier described in [24]. We also implement a state-of-the-art image enlargement algorithm, namely Segment Adaptive Gradient Angle (SAGA) [29], in hardware. The algorithm is mapped to pipelined hardware blocks and the design is synthesized using 90 nm technology. We show that a 64x64 image can be processed in 496.48 µs when clocked at 100 MHz. The average PSNR performance of our implementation using accurate parallel adders and multipliers is 31.33 dB and that using approximate parallel adders and multipliers is 30.86 dB, when evaluated against the original image. The PSNR performance of both designs is comparable to the performance of the double-precision floating-point MATLAB implementation of the algorithm.
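
To illustrate the composition idea, the sketch below builds a 4x4-bit multiplier from 2x2-bit blocks and shows how keeping the most significant block accurate reduces error; the 2x2 approximation used here (reporting 3x3 as 7) is a commonly cited design and only stands in for the unit from [24].

```
# Sketch: behavioral model of a 4x4 multiplier built from 2x2 blocks, with the
# option of keeping the most significant 2x2 block accurate.
def mul2x2_approx(a, b):
    return 7 if (a == 3 and b == 3) else a * b    # only the 3*3 case is inexact

def mul4x4(a, b, accurate_high=True):
    ah, al, bh, bl = a >> 2, a & 3, b >> 2, b & 3
    high = (ah * bh if accurate_high else mul2x2_approx(ah, bh)) << 4
    mid = (mul2x2_approx(ah, bl) << 2) + (mul2x2_approx(al, bh) << 2)
    low = mul2x2_approx(al, bl)
    return high + mid + low

for accurate_high in (False, True):
    errors = [abs(mul4x4(a, b, accurate_high) - a * b)
              for a in range(16) for b in range(16)]
    print("accurate high block:", accurate_high,
          "mean absolute error:", sum(errors) / len(errors))
```
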
Contributors: Vasudevan, Madhu (Author) / Chakrabarti, Chaitali (Thesis advisor) / Frakes, David (Committee member) / Gupta, Sandeep (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Colorectal cancer is the second-leading cause of cancer-related deaths in the United States, with approximately 50,000 estimated deaths in 2015. The advanced stages of colorectal cancer have a poor five-year survival rate of 10%, whereas diagnosis in the early stages of development has shown a much more favorable five-year survival rate of 90%. Early diagnosis of colorectal cancer is achievable if colorectal polyps, a possible precursor to cancer, are detected and removed before developing into malignancy.

The preferred method for polyp detection and removal is optical colonoscopy. A colonoscopic procedure consists of two phases: (1) the insertion phase, during which a flexible endoscope (a flexible tube with a tiny video camera at the tip) is advanced via the anus and then gradually to the end of the colon, called the cecum, and (2) the withdrawal phase, during which the endoscope is gradually withdrawn while colonoscopists examine the colon wall to find and remove polyps. Colonoscopy is an effective procedure and has led to a significant decline in the incidence and mortality of colon cancer. However, despite many screening and therapeutic advantages, 1 out of every 4 polyps and 1 out of 13 colon cancers are missed during colonoscopy.

There are many factors that contribute to missed polyps and cancers including poor colon preparation, inadequate navigational skills, and fatigue. Poor colon preparation results in a substantial portion of colon covered with fecal content, hindering a careful examination of the colon. Inadequate navigational skills can prevent a colonoscopist from examining hard-to-reach regions of the colon that may contain a polyp. Fatigue can manifest itself in the performance of a colonoscopist by decreasing diligence and vigilance during procedures. Lack of vigilance may prevent a colonoscopist from detecting the polyps that briefly appear in the colonoscopy videos. Lack of diligence may result in hasty examination of the colon that is likely to miss polyps and lesions.

To reduce polyp and cancer miss rates, this research presents a quality assurance system with three components. The first component is an automatic polyp detection system that highlights the regions with suspected polyps in colonoscopy videos. The goal is to encourage more vigilance during procedures. The suggested polyp detection system consists of several novel modules: (1) a new patch descriptor that characterizes image appearance around boundaries more accurately and more efficiently than widely used patch descriptors such as HoG, LBP, and Daisy; (2) a 2-stage classification framework that is able to enhance low-level image features prior to classification. Unlike the traditional way of image classification, where a single patch undergoes the processing pipeline, our system fuses the information extracted from a pair of patches for more accurate edge classification; (3) a new vote accumulation scheme that robustly localizes objects with curvy boundaries in fragmented edge maps. Our voting scheme produces a probabilistic output for each polyp candidate but, unlike the existing methods (e.g., Hough transform), does not require any predefined parametric model of the object of interest; and (4) a unique three-way image representation coupled with convolutional neural networks (CNNs) for classifying the polyp candidates. Our image representation efficiently captures a variety of features such as color, texture, shape, and temporal information and significantly improves the performance of the subsequent CNNs for candidate classification. This contrasts with the existing methods that mainly rely on a subset of the above image features for polyp detection. Furthermore, this research is the first to investigate the use of CNNs for polyp detection in colonoscopy videos.
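
A hedged sketch of the candidate-classification stage: three derived views of a candidate patch are stacked as channels and fed to a small CNN. The channel choices and network below are illustrative assumptions; the thesis's actual three-way representation and architecture are not specified in this abstract.

```
# Sketch: a small CNN over a candidate patch represented as three stacked views
# (e.g., intensity, gradient magnitude, frame difference). PyTorch assumed.
import torch
import torch.nn as nn

class CandidateClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 2)        # polyp vs. non-polyp

    def forward(self, patches):                   # patches: (N, 3, H, W)
        return self.classifier(self.features(patches).flatten(1))

# Hypothetical usage: stack the three views of each 64x64 candidate patch.
views = torch.rand(8, 3, 64, 64)                  # placeholder candidate batch
logits = CandidateClassifier()(views)
```
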

The second component of our quality assurance system is an automatic image quality assessment for colonoscopy. The goal is to encourage more diligence during procedures by warning against hasty and low-quality colon examination. We detect a low-quality colon examination by identifying a number of consecutive non-informative frames in videos. We base our methodology for detecting non-informative frames on two key observations: (1) non-informative frames most often show an unrecognizable scene with few details and blurry edges, and thus their information can be locally compressed in a few Discrete Cosine Transform (DCT) coefficients; however, informative images include much more detail, and their information content cannot be summarized by a small subset of DCT coefficients; (2) information content is spread all over the image in the case of informative frames, whereas in non-informative frames, depending on image artifacts and degradation factors, details may appear in only a few regions. We use the former observation in designing our global features and the latter in designing our local image features. We demonstrated that the suggested new features are superior to the existing features based on wavelet and Fourier transforms.
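
A small sketch of a global feature in the spirit of observation (1): the fraction of a frame's energy captured by its largest DCT coefficients, with the threshold and sizes chosen for illustration only.

```
# Sketch: DCT energy concentration as a global informativeness feature.
import numpy as np
from scipy.fft import dctn

def dct_energy_concentration(gray_frame, k=100):
    coeffs = dctn(gray_frame.astype(float), norm="ortho")
    energy = np.sort(np.abs(coeffs).ravel() ** 2)[::-1]
    return energy[:k].sum() / energy.sum()       # near 1 for blurry, flat frames

rng = np.random.default_rng(0)
blurry = np.full((240, 320), 0.5) + 0.01 * rng.standard_normal((240, 320))
detailed = rng.random((240, 320))
print(dct_energy_concentration(blurry), dct_energy_concentration(detailed))
# A frame whose concentration exceeds a chosen threshold (e.g., 0.99) would be
# flagged as a candidate non-informative frame.
```
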

The third component of our quality assurance system is a 3D visualization system. The goal is to provide colonoscopists with feedback about the regions of the colon that have remained unexamined during colonoscopy, thereby helping them improve their navigational skills. The suggested system is based on a new 3D reconstruction algorithm that combines depth and position information for 3D reconstruction. We propose to use a depth camera and a tracking sensor to obtain depth and position information. Our system contrasts with the existing works where the depth and position information are unreliably estimated from the colonoscopy frames. We conducted a use case experiment, demonstrating that the suggested 3D visualization system can determine the unseen regions of the navigated environment. However, due to technology limitations, we were not able to evaluate our 3D visualization system using a phantom model of the colon.
Contributors: Tajbakhsh, Nima (Author) / Liang, Jianming (Thesis advisor) / Greenes, Robert (Committee member) / Scotch, Matthew (Committee member) / Arizona State University (Publisher)
Created: 2015
Description

Understanding the complexity of the temporal and spatial characteristics of gene expression over brain development is one of the crucial research topics in neuroscience. An accurate description of the locations and expression status of the relevant genes requires extensive experimental resources. The Allen Developing Mouse Brain Atlas provides a large number of in situ hybridization (ISH) images of gene expression over seven different mouse brain developmental stages. Studying mouse brain models helps us understand gene expression in human brains. This atlas contains thousands of genes, which are currently annotated manually by biologists. Due to the high labor cost of manual annotation, investigating an efficient approach to perform automated gene expression annotation on mouse brain images becomes necessary. In this thesis, a novel efficient approach based on a machine learning framework is proposed. Features are extracted from raw brain images, and both binary classification and multi-class classification models are built with supervised learning methods. To generate features, one of the most widely adopted methods in current research efforts is to apply the bag-of-words (BoW) algorithm. However, neither the efficiency nor the accuracy of BoW is outstanding when dealing with large-scale data. Thus, an augmented sparse coding method, called Stochastic Coordinate Coding, is adopted to generate high-level features in this thesis. In addition, a new multi-label classification model is proposed in this thesis. A label hierarchy is built based on the given brain ontology structure. Experiments have been conducted on the atlas, and the results show that this approach is efficient and classifies the images with relatively high accuracy.
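
A sketch of the feature-extraction-plus-classification pipeline, with scikit-learn dictionary learning standing in for Stochastic Coordinate Coding and placeholder patch data and labels.

```
# Sketch: sparse codes pooled over patches, then a linear classifier.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
patches = rng.random((2000, 64))           # flattened 8x8 patches from ISH images
labels_per_image = rng.integers(0, 2, 40)  # expressed / not expressed (binary case)

dico = MiniBatchDictionaryLearning(n_components=128, alpha=1.0, random_state=0)
codes = dico.fit(patches).transform(patches)          # sparse codes per patch

# Max-pool the codes of the 50 patches belonging to each of the 40 images.
features = codes.reshape(40, 50, 128).max(axis=1)
clf = LogisticRegression(max_iter=1000).fit(features, labels_per_image)
print("training accuracy:", clf.score(features, labels_per_image))
```
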
Contributors: Zhao, Xinlin (Author) / Ye, Jieping (Thesis advisor) / Wang, Yalin (Thesis advisor) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2016