Matching Items (176)
Description
Video denoising is an important task in many multimedia and computer vision applications. Recent developments in matrix completion theory and the emergence of new numerical methods that can efficiently solve the matrix completion problem have paved the way for new techniques for some classical image processing tasks. Recent literature shows that many computer vision and image processing problems can be solved using matrix completion theory. This thesis explores the application of matrix completion to video denoising. A state-of-the-art video denoising algorithm, in which the denoising task is modeled as a matrix completion problem, is chosen for detailed study. The contribution of this thesis lies both in providing extensive analysis to bridge gaps in the existing literature on the matrix completion framework for video denoising and in proposing novel techniques to improve the performance of the chosen denoising algorithm. The chosen algorithm is implemented for thorough analysis, and experiments and discussions are presented to enable better understanding of the problem. Instability exhibited by the algorithm at certain parameter values in the particular case of low levels of pure Gaussian noise is identified, and the artifacts introduced in such cases are analyzed. A novel way of grouping structurally relevant patches is proposed to improve the algorithm; experiments show that this technique is useful, especially in videos containing high amounts of motion. Based on the observation that matrix completion is not suitable for denoising patches containing relatively little image detail, a framework is designed to separate patches corresponding to low-structure regions from a noisy image. Experiments are conducted in which such patches are not subjected to matrix completion but are instead denoised differently. The resulting improvement in performance suggests that denoising low-structure patches does not require a method as complex as matrix completion; in fact, subjecting such patches to matrix completion is counterproductive. These results also indicate the inherent limitation of matrix completion in dealing with cases where noise dominates the structural properties of an image. A novel method for assigning priorities to the ranked patches in matrix completion is also presented, and results show that it yields improved performance in general. It is observed that, after priorities are introduced, the artifacts that appear at low levels of pure Gaussian noise take a different form and occur over a wider range of parameter values. Results and discussion suggesting future directions for this problem are also presented.
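To make the patch-based matrix completion idea concrete, here is a minimal sketch of completing a low-rank matrix of grouped patches via singular value thresholding (SVT). This is an illustrative stand-in, not the algorithm studied in the thesis; all names and parameter values are assumptions.

```python
import numpy as np

def svt_complete(M, mask, tau=10.0, delta=1.2, n_iters=200):
    """Low-rank completion of the observed entries of M (mask == 1)
    via singular value thresholding (Cai, Candes, Shen, 2010)."""
    Y = np.zeros_like(M)
    X = np.zeros_like(M)
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt  # shrink singular values
        Y += delta * mask * (M - X)                     # step on the observed set
    return X

# Toy usage: a rank-2 "patch matrix" with 30% of entries marked unreliable.
rng = np.random.default_rng(0)
M_true = rng.standard_normal((64, 2)) @ rng.standard_normal((2, 32))
mask = (rng.random(M_true.shape) < 0.7).astype(float)
M_noisy = M_true + 0.1 * rng.standard_normal(M_true.shape)
M_hat = svt_complete(mask * M_noisy, mask)
print("relative error:", np.linalg.norm(M_hat - M_true) / np.linalg.norm(M_true))
```

In a denoising pipeline of this kind, the columns of M would be vectorized patches grouped by similarity, with the mask marking entries judged reliable under the noise model.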
Contributors: Maguluri, Hima Bindu (Author) / Li, Baoxin (Thesis advisor) / Turaga, Pavan (Committee member) / Claveau, Claude (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Photovoltaics (PV) is an important and rapidly growing area of research. With the advent of the power system monitoring and communication technology collectively known as the "smart grid," an opportunity exists to apply signal processing techniques to the monitoring and control of PV arrays. In this paper, a monitoring system that provides real-time measurements of each PV module's voltage and current is considered. A fault detection algorithm, formulated as a clustering problem and addressed using the robust minimum covariance determinant (MCD) estimator, is described, and its performance on simulated instances of arc and ground faults is evaluated. The algorithm is found to perform well on many types of faults commonly occurring in PV arrays. Among the several detection algorithms considered, only the MCD shows high performance on both types of faults.
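As an illustration of MCD-based detection, the hedged sketch below flags modules whose (voltage, current) operating points are robust-distance outliers. The fault values and threshold are invented for the example; this is not the paper's exact formulation.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(1)
# Simulated per-module operating points: (voltage, current) for 60 modules.
X = np.column_stack([rng.normal(35.0, 0.5, 60),   # module voltage (V)
                     rng.normal(8.0, 0.2, 60)])   # module current (A)
X[3] = [5.0, 9.5]    # hypothetical ground fault: collapsed voltage
X[17] = [20.0, 2.0]  # hypothetical arc fault: degraded operating point

mcd = MinCovDet(random_state=0).fit(X)  # robust location/scatter estimate
d2 = mcd.mahalanobis(X)                 # squared robust Mahalanobis distances
cutoff = chi2.ppf(0.999, df=2)          # 2 features -> chi-square(2) threshold
print("flagged modules:", np.flatnonzero(d2 > cutoff))
```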
Contributors: Braun, Henry (Author) / Tepedelenlioğlu, Cihan (Thesis advisor) / Spanias, Andreas (Thesis advisor) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Energy-efficient design and management of data centers has seen considerable interest in recent years owing to its potential to reduce overall energy consumption and thereby the associated costs. It is therefore of utmost importance that new methods for improved physical design of data centers, resource management schemes for efficient workload distribution, and sustainable operation for improving energy efficiency be developed and tested before implementation on an actual data center. The BlueTool project provides such a state-of-the-art platform, both software and hardware, to design and analyze the energy efficiency of data centers. The software platform, GDCSim, uses a cyber-physical approach to study the physical behavior of the data center in response to management decisions by taking into account the heat recirculation patterns in the data center room. Such an approach yields the best possible energy savings owing to the characterization of cyber-physical interactions and the ability of the resource manager to make decisions based on the physical behavior of the data center. GDCSim mainly uses two Computational Fluid Dynamics (CFD) based cyber-physical models, the Heat Recirculation Matrix (HRM) and the Transient Heat Distribution Model (THDM), for thermal predictions under different management schemes. They are generated using a model generator called BlueSim. To ensure the accuracy of thermal predictions made with GDCSim, the models (HRM and THDM) and the model generator (BlueSim) need to be validated experimentally. For this purpose, the hardware platform of the BlueTool project, the BlueCenter, a mini data center, can be used. As part of this thesis, the HRM and THDM were generated using BlueSim and experimentally validated using the BlueCenter. An average error of 4.08% was observed for BlueSim, 5.84% for the HRM, and 4.24% for the THDM. Further, a high initial error was observed for transient thermal prediction, which is due to the inability of BlueSim to account for the heat retained by server components.
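For intuition, HRM-style models are often abstracted in the thermal-aware data center literature as a linear map from per-chassis power to inlet temperature rise, T_inlet = T_supply + D·P. The sketch below uses made-up matrix values purely for illustration; it is not the BlueSim-generated model.

```python
import numpy as np

# Toy heat recirculation matrix D (deg C per W): D[i, j] is the inlet
# temperature rise at chassis i caused by power dissipated at chassis j.
D = np.array([[0.0020, 0.0010, 0.0002],
              [0.0010, 0.0020, 0.0010],
              [0.0002, 0.0010, 0.0020]])
T_supply = 18.0                          # CRAC supply air temperature (deg C)
P = np.array([3000.0, 1500.0, 2500.0])   # per-chassis power draw (W)

T_inlet = T_supply + D @ P               # linear HRM-style thermal prediction
print(T_inlet)  # e.g., compare against an inlet redline such as 27 deg C
```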
Contributors: Gilbert, Rose Robin (Author) / Gupta, Sandeep K.S (Thesis advisor) / Artemiadis, Panagiotis (Committee member) / Phelan, Patrick (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
In this thesis, we consider the problem of fast and efficient indexing techniques for time sequences that evolve on manifold-valued spaces. Manifolds are a convenient way to work with complex features that often do not live in Euclidean spaces. However, computing standard notions such as geodesic distance and mean can become very involved due to the underlying non-linearity of the space. As a result, a complex task such as manifold sequence matching would require a very large number of computations, making it hard to use in practice. We believe that one can devise smart approximation algorithms for several classes of such problems that take into account the geometry of the manifold and maintain the favorable properties of the exact approach. This problem has several applications in human activity discovery and recognition, where many features and representations are naturally studied in a non-Euclidean setting. We propose a novel solution to the problem of indexing manifold-valued sequences through an intrinsic approach that maps sequences to a symbolic representation. This is shown to enable the deployment of fast and accurate algorithms for activity recognition, motif discovery, and anomaly detection. Toward this end, we present generalizations of the key concepts of piecewise aggregation and symbolic approximation to the case of non-Euclidean manifolds. Experiments show that one can replace expensive geodesic computations with much faster symbolic computations with little loss of accuracy in activity recognition and discovery applications. The proposed methods are ideally suited for real-time systems and resource-constrained scenarios.
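The sketch below illustrates one plausible reading of intrinsic piecewise aggregation and symbolization for sequences on the unit sphere: segment summaries are approximate Fréchet means, and symbols come from nearest codewords under geodesic distance. It is a simplified illustration under those assumptions, not the thesis's algorithm.

```python
import numpy as np

def sphere_mean(points):
    """Approximate Frechet mean on the unit sphere: the normalized Euclidean
    mean (a reasonable proxy when the points lie in a small geodesic ball)."""
    m = points.mean(axis=0)
    return m / np.linalg.norm(m)

def geodesic_dist(x, y):
    return np.arccos(np.clip(x @ y, -1.0, 1.0))

def manifold_paa_sax(seq, n_segments, codebook):
    """Piecewise aggregation + symbolization for a sphere-valued sequence."""
    symbols = []
    for seg in np.array_split(seq, n_segments):
        mu = sphere_mean(seg)  # intrinsic summary of the segment
        symbols.append(min(range(len(codebook)),
                           key=lambda k: geodesic_dist(mu, codebook[k])))
    return symbols

# Toy usage: a smooth trajectory on S^2 mapped to a string over 4 symbols.
t = np.linspace(0.0, 2.0 * np.pi, 120)
seq = np.column_stack([np.cos(t), np.sin(t), 0.5 * np.sin(2.0 * t)])
seq /= np.linalg.norm(seq, axis=1, keepdims=True)
rng = np.random.default_rng(2)
codebook = rng.standard_normal((4, 3))
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)
print(manifold_paa_sax(seq, n_segments=10, codebook=codebook))
```

Once sequences are strings, cheap symbolic comparisons stand in for repeated geodesic computations during matching.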
Contributors: Anirudh, Rushil (Author) / Turaga, Pavan (Thesis advisor) / Spanias, Andreas (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Air pollution is one of the biggest challenges people face today and is closely related to human health. Regulatory agencies set standards to limit air pollution; however, many pollutants may still cause adverse health effects at concentrations below the regulated levels. Moreover, the exact mechanisms by which air pollutants affect health are not well understood, making it difficult for health centers to advise people on preventing pollutant-related diseases. It is therefore of vital importance for both regulatory agencies and health centers to have a better understanding of air pollution. Based on these needs, it is crucial to develop mobile health sensors for personal exposure assessment. Here, two sensing principles are illustrated: the tuning fork platform and the colorimetric platform. Mobile devices based on these principles have been built, and the detection of ozone, NOx, carbon monoxide, and formaldehyde is demonstrated. An integrated device for detecting nitrogen dioxide and carbon monoxide is introduced; a fan is used for sample delivery instead of a pump and valves to reduce size, cost, and power consumption. Finally, future work is discussed.
Contributors: Wang, Rui (Author) / Tao, Nongjian (Thesis advisor) / Forzani, Erica (Committee member) / Zhang, Yanchao (Committee member) / Karam, Lina (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The multidimensional (MD) discrete Fourier transform (DFT) is a key kernel in many signal processing applications, such as radar imaging and medical imaging. Traditionally, a two-dimensional (2-D) DFT is computed using Row-Column (RC) decomposition, where one-dimensional (1-D) DFTs are computed along the rows followed by 1-D DFTs along the columns. However, architectures based on RC decomposition are not efficient for large inputs, which have to be stored in external memory based on Synchronous Dynamic RAM (SDRAM). In this dissertation, an efficient architecture to implement the 2-D DFT for large input sizes is first proposed. This architecture achieves very high throughput by exploiting the parallelism inherent in a novel 2-D decomposition and by utilizing the row-wise burst access pattern of the external SDRAM. In addition, an automatic IP generator is provided for mapping this architecture onto a reconfigurable platform of Xilinx Virtex-5 devices. For a 2048x2048 input, the proposed architecture is 1.96 times faster than an RC decomposition based implementation under the same memory constraints, and it also outperforms other existing implementations. While the proposed 2-D DFT IP achieves high performance, its output is bit-reversed. For systems where the output is required in natural order, use of this DFT IP would incur timing overhead. To solve this problem, a new bandwidth-efficient MD DFT IP that is transpose-free and produces outputs in natural order is proposed. It is based on a novel decomposition algorithm that takes into account the output order, FPGA resources, and the characteristics of off-chip memory access. An IP generator is designed and integrated into an in-house FPGA development platform, AlgoFLEX, for easy verification and fast integration. The corresponding 2-D and 3-D DFT architectures are ported onto the BEE3 board and their performance measured and analyzed. The results show that the architecture maintains the maximum memory bandwidth throughout the whole procedure while avoiding the matrix transpose operations used in most other MD DFT implementations. The proposed architecture has also been ported onto the Xilinx ML605 board; when clocked at 100 MHz, 2048x2048 images with complex single-precision data can be processed in less than 27 ms. Finally, transpose-free imaging flows for the range-Doppler algorithm (RDA) and chirp-scaling algorithm (CSA) in SAR imaging are proposed. The corresponding implementations take advantage of the memory access patterns designed for the MD DFT IP and have superior timing performance. The RDA and CSA flows are mapped onto a unified architecture implemented on an FPGA platform. When clocked at 100 MHz, the RDA and CSA computations for a 4096x4096 data size complete in 323 ms and 162 ms, respectively. This implementation outperforms existing SAR image accelerators based on FPGAs and GPUs.
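For reference, the RC decomposition that the proposed architecture improves upon can be expressed in a few lines: 1-D DFTs along the rows (contiguous, burst-friendly in SDRAM) followed by 1-D DFTs along the columns (strided access, the memory bottleneck the dissertation targets). This is the textbook baseline, not the dissertation's decomposition.

```python
import numpy as np

def dft2_row_column(x):
    """Row-Column decomposition of the 2-D DFT: 1-D DFTs along the rows,
    then 1-D DFTs along the columns of the row-transformed data."""
    X = np.fft.fft(x, axis=1)     # row pass: contiguous, burst-friendly access
    return np.fft.fft(X, axis=0)  # column pass: strided, the SDRAM bottleneck

rng = np.random.default_rng(3)
x = rng.standard_normal((256, 256)) + 1j * rng.standard_normal((256, 256))
assert np.allclose(dft2_row_column(x), np.fft.fft2(x))
```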
Contributors: Yu, Chi-Li (Author) / Chakrabarti, Chaitali (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Karam, Lina (Committee member) / Cao, Yu (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The aim of this study was to investigate the microstructural sensitivity of the statistical distribution and diffusion kurtosis imaging (DKI) models of non-monoexponential signal attenuation in the brain using diffusion-weighted MRI (DWI). We first developed a simulation of 2-D water diffusion inside simulated tissue consisting of semi-permeable cells of variable size. We simulated a DWI acquisition using a pulsed gradient spin echo (PGSE) pulse sequence and fitted the models to the simulated DWI signals using b-values up to 2500 s/mm². For comparison, we calculated the apparent diffusion coefficient (ADC) of the monoexponential model (b-value = 1000 s/mm²). In separate experiments, we varied the cell size (5, 10, 15 μm), cell volume fraction (0.50, 0.65, 0.80), and membrane permeability (0.001, 0.01, 0.1 mm/s) to study how the fitted parameters tracked simulated microstructural changes. The ADC was sensitive to all the simulated microstructural changes except the decrease in membrane permeability. The σ_stat of the statistical distribution model increased exclusively with a decrease in cell volume fraction. The K_app of the DKI model increased exclusively with decreased cell size and decreased with increasing membrane permeability. These results suggest that the non-monoexponential models have different, specific microstructural sensitivities, and a combination of the models may give insight into the microstructural underpinnings of tissue pathology. Faster PROPELLER DWI acquisitions, such as Turboprop and X-prop, remain subject to phase errors inherent to a gradient echo readout, which ultimately limits the applied turbo factor and thus scan time reductions. This study introduces a new phase correction to Turboprop, called Turboprop+. The technique employs calibration blades, which generate 2-D phase error maps and are rotated in accordance with the data blades, to correct phase errors arising from off-resonance and system imperfections. The results demonstrate that, with a small increase in scan time for collecting calibration blades, Turboprop+ had superior immunity to off-resonance related artifacts compared with standard Turboprop and the recently proposed X-prop at a high turbo factor (turbo factor = 7). Thus, a low specific absorption rate (SAR) and short scan time can be achieved in Turboprop+ using a high turbo factor, while off-resonance related artifacts are minimized.
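For concreteness, the standard DKI signal model, ln S = ln S0 - b·D_app + (1/6)·b²·D_app²·K_app, can be fitted to a measured attenuation curve as sketched below; the synthetic tissue parameter values are invented for the example, not taken from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def dki_signal(b, S0, D, K):
    """Diffusion kurtosis model: ln S = ln S0 - b*D + (1/6) * b^2 * D^2 * K."""
    return S0 * np.exp(-b * D + (b ** 2) * (D ** 2) * K / 6.0)

# Synthetic acquisition with b-values up to 2500 s/mm^2, as in the study;
# the parameter values below are illustrative assumptions.
b = np.linspace(0.0, 2500.0, 12)                          # s/mm^2
S = dki_signal(b, 1.0, 0.8e-3, 1.1)                       # D in mm^2/s, K unitless
S += np.random.default_rng(5).normal(0.0, 0.005, b.size)  # measurement noise

(S0, D_app, K_app), _ = curve_fit(dki_signal, b, S, p0=[1.0, 1e-3, 1.0])
print(f"D_app = {D_app:.2e} mm^2/s, K_app = {K_app:.2f}")
```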
Contributors: Lee, Chu-Yu (Author) / Debbins, Josef P (Thesis advisor) / Bennett, Kevin M (Thesis advisor) / Karam, Lina (Committee member) / Pipe, James G (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Diabetic retinopathy (DR) is a common cause of blindness arising from the prolonged presence of diabetes, and the risk of developing DR or having the disease progress is increasing over time. Despite advances in diabetes care over the years, DR remains a vision-threatening complication and one of the leading causes of blindness among American adults. Recent studies have shown that diagnosis based on digital retinal imaging has potential benefits over traditional face-to-face evaluation, yet there is a dearth of computer-based systems that can match the level of performance achieved by ophthalmologists. This thesis takes a fresh perspective in developing a computer-based system aimed at improving the diagnosis of DR images. These images are categorized into three classes according to their severity level. The proposed approach explores effective methods to classify new images and to retrieve clinically relevant images from a database with prior diagnosis information associated with them. Retrieval provides a novel way to utilize the vast knowledge in the archives of previously diagnosed DR images and thereby improve a clinician's performance, while classification can safely reduce the burden on DR screening programs and possibly achieve higher detection accuracy than human experts. To solve the three-class retrieval and classification problem, the approach uses a multi-class multiple-instance medical image retrieval framework that makes use of spectrally tuned color correlogram and steerable Gaussian filter response features. The results show better retrieval and classification performance than prior-art methods and are also observed to be of clinical and visual relevance.
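As a rough illustration of one of the named features, the sketch below computes a plain color autocorrelogram: for each quantized color c and distance d, the probability that a pixel at offset d from a c-colored pixel is also colored c. The thesis's spectrally tuned variant is not specified in this abstract, so this shows only the baseline idea.

```python
import numpy as np

def color_autocorrelogram(img, n_colors=8, distances=(1, 3, 5)):
    """Baseline color autocorrelogram over horizontal/vertical offsets."""
    q = (img.astype(np.float64) / 256.0 * n_colors).astype(int)
    q = q[..., 0] * n_colors ** 2 + q[..., 1] * n_colors + q[..., 2]
    n_bins = n_colors ** 3
    H, W = q.shape
    feat = np.zeros((len(distances), n_bins))
    for i, d in enumerate(distances):
        hits = np.zeros(n_bins)
        total = np.zeros(n_bins)
        for dy, dx in [(0, d), (0, -d), (d, 0), (-d, 0)]:
            a = q[max(dy, 0):H + min(dy, 0), max(dx, 0):W + min(dx, 0)]
            b = q[max(-dy, 0):H + min(-dy, 0), max(-dx, 0):W + min(-dx, 0)]
            hits += np.bincount(a[a == b], minlength=n_bins)   # same-color pairs
            total += np.bincount(a.ravel(), minlength=n_bins)  # all pairs by color
        feat[i] = hits / np.maximum(total, 1)
    return feat.ravel()

img = np.random.default_rng(6).integers(0, 256, (64, 64, 3))
print(color_autocorrelogram(img).shape)  # one feature vector per image
```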
Contributors: Chandakkar, Parag Shridhar (Author) / Li, Baoxin (Thesis advisor) / Turaga, Pavan (Committee member) / Frakes, David (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Recent advances in camera architectures and associated mathematical representations now enable compressive acquisition of images and videos at low data rates. Most computer vision applications today rely on conventional cameras, which collect a large amount of redundant data, together with power-hungry embedded systems that compress the collected data for further processing. Compressive cameras, by contrast, offer the advantage of acquiring data directly in the compressed domain and hence readily promise applicability in computer vision, particularly in environments hampered by limited communication bandwidth. However, despite significant progress in the theory and methods of compressive sensing, little headway has been made in developing systems for such applications that exploit its merits. In this setting, we consider the problem of activity recognition, an important inference problem in many security and surveillance applications. Since all successful activity recognition systems involve detection of a human followed by recognition, a potential fully functioning system built around a compressive camera would involve tracking the human, which requires the reconstruction of at least the initial few frames for detection. Once the human is tracked, the recognition part of the system requires only features extracted from the tracked sequences, which can be either the reconstructed images or the compressed measurements of such sequences. In resource-constrained environments, it is desirable that these features be extracted from the compressive measurements without reconstruction. Motivated by this, in this thesis we propose a framework for understanding activities as a non-linear dynamical system and propose a robust, generalizable feature that can be extracted directly from the compressed measurements without reconstructing the original video frames. The proposed feature, termed recurrence texture, is motivated by recurrence analysis of non-linear dynamical systems. We show that it is possible to obtain discriminative features directly from the compressed stream, and we demonstrate their utility for recognizing activities at very low data rates.
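The recurrence texture feature itself is not defined in this abstract, but it builds on recurrence analysis; the sketch below shows the underlying recurrence matrix computed from a feature sequence (which could be compressed measurements), with the recurrence rate as one simple derived statistic. The threshold heuristic is an assumption, not the thesis's choice.

```python
import numpy as np

def recurrence_matrix(X, eps=None):
    """Binary recurrence plot of a feature sequence X (one row per time
    step): R[i, j] = 1 iff ||X[i] - X[j]|| < eps."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    if eps is None:
        eps = 0.2 * D.max()  # common heuristic: a fraction of the max distance
    return (D < eps).astype(np.uint8)

# Toy usage: a noisy periodic trace yields diagonal recurrence bands.
t = np.linspace(0.0, 8.0 * np.pi, 200)
X = np.column_stack([np.sin(t), np.cos(t)])
X += 0.05 * np.random.default_rng(7).standard_normal(X.shape)
R = recurrence_matrix(X)
print("recurrence rate:", R.mean())  # one simple texture-style statistic
```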
Contributors: Kulkarni, Kuldeep Sharad (Author) / Turaga, Pavan (Thesis advisor) / Spanias, Andreas (Committee member) / Frakes, David (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The generation of walking motion is one of the most vital functions of the human body because it allows us to move through our environment. Unfortunately, numerous individuals suffer from gait impairment as a result of debilitating conditions like stroke, resulting in a serious loss of mobility. Our understanding of human gait is limited by the amount of research conducted on human walking mechanisms and their characteristics. To better understand these characteristics and the systems involved in the generation of human gait, it is necessary to increase the depth and range of research on walking motion. In particular, one area of gait research that could yield valuable conclusions for rehabilitation has received little investigation: the effect of surface stiffness on human gait. A number of studies have explored this question using experimental devices that change surface stiffness; however, these systems lack functionality that would be useful in an experimental setting. To address this gap and investigate the effect of surface stiffness further, a system called the Variable Stiffness Treadmill (VST) has been developed. This treadmill is a unique investigative tool that allows active control of surface stiffness. What is novel about the system is its ability to change the stiffness of the surface quickly, accurately, during the gait cycle, and across a large range of stiffness values. Such functionality has never been implemented in an experimental system and constitutes a tremendous opportunity for valuable gait research on the influence of surface stiffness. In this work, the design, development, and implementation of the Variable Stiffness Treadmill system are presented and discussed along with preliminary experimentation. Characterization results demonstrate highly accurate stiffness control and excellent response characteristics for specific configurations. Initial indications from human trials regarding quantifiable effects of surface stiffness variation using the Variable Stiffness Treadmill system are encouraging.
Contributors: Barkan, Andrew Robert (Author) / Artemiadis, Panagiotis (Thesis director) / Santello, Marco (Committee member) / Barrett, The Honors College (Contributor) / Mechanical and Aerospace Engineering Program (Contributor)
Created: 2015-05