Matching Items (98)

Item 158716
Description
The availability of data for monitoring and controlling the electrical grid has increased exponentially over the years in both resolution and quantity, leaving a large data footprint. This dissertation is motivated by the need for equivalent representations of grid data in lower-dimensional feature spaces so that machine learning algorithms can be employed for a variety of purposes. To achieve that without sacrificing the interpretability of the results, the dissertation leverages the physics behind power systems, the well-known laws that underlie this man-made infrastructure, and the nature of the underlying stochastic phenomena that define the system operating conditions as the backbone for modeling data from the grid.

The first part of the dissertation introduces a new graph signal processing (GSP) framework for the power grid, Grid-GSP, and applies it to the voltage phasor measurements that characterize the overall system state of the power grid. Concepts from GSP are used in conjunction with known power system models in order to highlight the low-dimensional structure in the data and to present generative models for voltage phasor measurements. Applications such as identification of graphical communities, network inference, interpolation of missing data, detection of false data injection attacks, and data compression are explored, wherein the Grid-GSP-based generative models are used.
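The low-pass structure that such GSP models exploit can be illustrated with a minimal sketch: measurements that live mostly in the low graph-frequency subspace of a grid Laplacian can be interpolated at unobserved buses from a handful of observed ones. The toy 4-bus network, its edge weights, and the chosen bandwidth below are illustrative assumptions, not values or algorithms from the dissertation.

```python
import numpy as np

# Hypothetical 4-bus grid: symmetric edge weights standing in for line susceptances.
W = np.array([[0.0, 1.0, 0.5, 0.0],
              [1.0, 0.0, 0.0, 0.8],
              [0.5, 0.0, 0.0, 1.2],
              [0.0, 0.8, 1.2, 0.0]])
L = np.diag(W.sum(axis=1)) - W          # graph Laplacian
lam, U = np.linalg.eigh(L)              # graph Fourier basis (Laplacian eigenvectors)

# Synthetic "voltage" snapshot dominated by the two lowest graph frequencies.
rng = np.random.default_rng(0)
v = U[:, :2] @ rng.standard_normal(2) + 0.01 * rng.standard_normal(4)

# Missing-data interpolation: observe three buses, fit the low-pass GFT coefficients,
# and reconstruct the unobserved bus from the low-dimensional model.
obs = np.array([0, 1, 3])
coeffs, *_ = np.linalg.lstsq(U[obs, :2], v[obs], rcond=None)
v_hat = U[:, :2] @ coeffs
print("true:", np.round(v, 3), " reconstructed:", np.round(v_hat, 3))
```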

The second part of the dissertation develops a model for a joint statistical description of solar photovoltaic (PV) power and the outdoor temperature, which can lead to better management of power generation resources so that electricity demand, such as air conditioning, and supply from solar power remain matched in the face of stochasticity. The low-rank structure inherent in solar PV power data is used for forecasting and for detecting partial-shading faults in solar panels.
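As a rough illustration of the low-rank idea (not the dissertation's actual model), a matrix of daily PV generation profiles is typically well approximated by a few singular vectors, and a partially observed day can be completed by projecting onto that basis. The day count, sampling rate, and clear-sky shape below are made up for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 60 days x 96 quarter-hour samples of PV output with day-to-day
# scaling and small perturbations (a stand-in for clouds and measurement noise).
t = np.linspace(0, 1, 96)
clear_sky = np.clip(np.sin(np.pi * t), 0, None)
days = np.outer(0.6 + 0.4 * rng.random(60), clear_sky)
days += 0.05 * rng.standard_normal(days.shape)

# Low-rank structure: a couple of singular vectors capture most of the energy.
U, s, Vt = np.linalg.svd(days, full_matrices=False)
print("fraction of energy in rank-2 model:", (s[:2] ** 2).sum() / (s ** 2).sum())

# Forecasting-style use: fit the morning half of a new day to the rank-2 daily basis
# and read the afternoon off the reconstruction.
new_day = 0.8 * clear_sky + 0.05 * rng.standard_normal(96)
B = Vt[:2].T                                  # rank-2 basis of daily profiles
w, *_ = np.linalg.lstsq(B[:48], new_day[:48], rcond=None)
afternoon_forecast = (B @ w)[48:]
```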
Contributors: Ramakrishna, Raksha (Author) / Scaglione, Anna (Thesis advisor) / Cochran, Douglas (Committee member) / Spanias, Andreas (Committee member) / Vittal, Vijay (Committee member) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created: 2020
Item 158717
Description
Semantic image segmentation has been a key topic in applications involving image processing and computer vision. Owing to the success of, and continuous research in, the field of deep learning, plenty of deep learning-based segmentation architectures have been designed for various tasks. In this thesis, deep-learning architectures have been developed for a specific application in materials science, namely the segmentation process for the non-destructive study of the microstructure of Aluminum Alloy AA 7075. This process requires the use of various imaging tools and methodologies to obtain the ground-truth information. The image dataset, obtained using transmission X-ray microscopy (TXM), consists of raw 2D images of the specimen captured from the projections at every beam scan. The segmented 2D ground-truth images are obtained by applying reconstruction and filtering algorithms before using a scientific visualization tool for segmentation. These images represent the corrosive behavior caused by the precipitate and inclusion particles in the Aluminum AA 7075 alloy. The study of the tools that work best for X-ray microscopy-based imaging is still in its early stages.

In this thesis, the underlying concepts behind Convolutional Neural Networks (CNNs) and state-of-the-art semantic segmentation architectures have been discussed in detail. The data generation and pre-processing applied to the AA 7075 data have also been described, along with the experimentation methodologies performed on the baseline and four other state-of-the-art segmentation architectures that predict the segmented boundaries from the raw 2D images. A performance analysis based on various factors was also conducted to determine the best techniques and tools for applying semantic image segmentation to X-ray microscopy-based imaging.
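For concreteness, a PyTorch-style sketch of the per-pixel classification these architectures perform is given below. The tiny encoder-decoder, patch size, and three-class labeling are assumptions made only for illustration; they bear no relation to the thesis's baseline or the four architectures it evaluates.

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Toy encoder-decoder mapping a single-channel 2D micrograph to per-pixel logits."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, num_classes, 3, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinySegNet()
x = torch.randn(4, 1, 64, 64)                # batch of raw 2D TXM-like patches
target = torch.randint(0, 3, (4, 64, 64))    # per-pixel ground-truth labels
loss = nn.CrossEntropyLoss()(model(x), target)
loss.backward()                              # gradient step as in a standard training loop
```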
Contributors: Barboza, Daniel (Author) / Turaga, Pavan (Thesis advisor) / Chawla, Nikhilesh (Committee member) / Jayasuriya, Suren (Committee member) / Arizona State University (Publisher)
Created: 2020
Item 158215
Description
4H-SiC has been widely used in many applications, all of which benefit from its extremely high critical electric field and good electron mobility. For example, 4H-SiC possesses a critical field ten times higher than that of Si, which allows high-voltage blocking layers composed of 4H-SiC to be approximately a tenth the thickness of a comparable Si device. This, in turn, reduces the device on-resistance and power losses while maintaining the same high blocking capability.
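A back-of-the-envelope check of the thickness claim, assuming a triangular field profile so the drift width scales roughly as W ~ 2·V_B/E_c; the critical-field values are approximate textbook numbers, not results from this dissertation.

```python
# Rough arithmetic behind the "one tenth the thickness" statement.
V_B = 1200.0                          # target blocking voltage (V)
E_c = {"Si": 3e5, "4H-SiC": 3e6}      # approximate critical fields (V/cm)

for material, ec in E_c.items():
    W_um = 2 * V_B / ec * 1e4         # drift-layer thickness in micrometers
    print(f"{material}: ~{W_um:.0f} um drift layer to block {V_B:.0f} V")
```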

Unfortunately, commercial TCAD tools such as Sentaurus and Silvaco Atlas are based on the effective-mass approximation, whereas most 4H-SiC devices are not operated under low electric fields, so the parabolic-band approximation no longer holds. Hence, to obtain more accurate and reliable simulation results, full-band analysis is needed. The first step in the development of a full-band device simulator is the calculation of the band structure; in this work, the empirical pseudopotential method (EPM) is adopted. The next task in the sequence is the calculation of the scattering rates. Acoustic, non-polar optical phonon, polar optical phonon, and Coulomb scattering are considered. Coulomb scattering is treated in real space using the particle-particle-particle-mesh (P3M) approach. The third task is coupling the bulk full-band solver with a 3D Poisson equation solver to produce a full-band device simulator.
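The free-flight-and-scatter selection step of an ensemble Monte Carlo transport kernel can be sketched as follows; the rate values and the self-scattering ceiling are placeholders for illustration, not the full-band rates computed in this work.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative scattering rates (1/s) at a fixed electron energy; placeholder values.
rates = {"acoustic": 1e12, "npop_absorption": 4e11, "npop_emission": 6e11, "pop": 8e11}
gamma0 = 3e12                                     # total rate including self-scattering

def free_flight_and_scatter():
    """One step of the standard ensemble Monte Carlo selection loop."""
    t_flight = -np.log(rng.random()) / gamma0     # free-flight duration
    r = rng.random() * gamma0
    cumulative = 0.0
    for name, rate in rates.items():
        cumulative += rate
        if r < cumulative:
            return t_flight, name                 # real scattering event
    return t_flight, "self"                       # self-scattering: state unchanged

print([free_flight_and_scatter() for _ in range(3)])
```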

As a proof of concept of the methodology adopted here, a 3D resistor is simulated first. From the resistor simulations, the dependence of the low-field electron mobility on Coulomb scattering in 4H-SiC devices is extracted. The simulated mobility results agree very well with available experimental data. Next, a 3D VDMOS is simulated. The nature of the physical processes occurring under both steady-state and transient conditions is revealed for the two generations of 3D VDMOS devices considered in the study.

Due to its comprehensive nature, the developed tool serves as a basis for future investigation of 4H-SiC power devices.
Contributors: Cheng, Chi-Yin (Author) / Vasileska, Dragica (Thesis advisor) / Goodnick, Stephen M (Thesis advisor) / Ponce, Fernando (Committee member) / Zhao, Yuji (Committee member) / Arizona State University (Publisher)
Created: 2020
Item 157697
Description
The depth richness of a scene translates into a spatially variable defocus blur in the acquired image. Blurring can mislead computational image understanding; therefore, blur detection can be used for selective image enhancement of blurred regions and the application of image understanding algorithms to sharp regions. This work focuses on blur detection and its application to image enhancement.

This work proposes a spatially varying defocus blur detection method based on the quotient of spectral bands; additionally, to avoid computationally intensive algorithms for segmenting foreground and background regions, a global threshold defined using weakly textured regions of the input image is proposed. Quantitative results expressed in the precision-recall space, as well as qualitative results, outperform current state-of-the-art algorithms while keeping the computational requirements at competitive levels.
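A toy version of the spectral-band-quotient idea: for each patch, compare high-frequency to low-frequency energy in the Fourier domain, so in-focus patches score higher than defocused ones. The patch size, cutoff, synthetic image, and scoring rule are assumptions made for the sketch, not the bands or threshold used in the thesis.

```python
import numpy as np

def band_quotient_map(img, patch=16, cutoff=4):
    """Per-patch ratio of high- to low-frequency spectral energy (larger = sharper)."""
    h, w = img.shape
    qmap = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            f = np.abs(np.fft.fftshift(np.fft.fft2(img[i:i + patch, j:j + patch])))
            c = patch // 2
            low = f[c - cutoff:c + cutoff, c - cutoff:c + cutoff].sum()
            high = f.sum() - low
            qmap[i // patch, j // patch] = high / (low + 1e-8)
    return qmap

rng = np.random.default_rng(3)
sharp = rng.standard_normal((64, 64))
# Crude "defocus": average 4x4 blocks to suppress high frequencies.
blurred = sharp.reshape(16, 4, 16, 4).mean(axis=(1, 3)).repeat(4, 0).repeat(4, 1)
img = np.hstack([sharp, blurred])                 # left half sharp, right half blurred
q = band_quotient_map(img)
print(q[:, :4].mean(), q[:, 4:].mean())           # sharp patches score higher
```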

Imperfections in the curvature of lenses can lead to image radial distortion (IRD). Computer vision applications can be drastically affected by IRD. This work proposes a novel robust radial distortion correction algorithm based on alternate optimization using two cost functions tailored for the estimation of the center of distortion and radial distortion coefficients. Qualitative and quantitative results show the competitiveness of the proposed algorithm.
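One half of such an alternate optimization can be sketched directly: with the center of distortion held fixed, the polynomial radial model is linear in its coefficients, so (k1, k2) can be estimated by least squares; the other half, re-estimating the center with the coefficients fixed, would use its own cost function as described above. The image size, center, and distortion coefficients below are synthetic assumptions, not the thesis's cost functions.

```python
import numpy as np

rng = np.random.default_rng(7)

def distort(pts, center, k1, k2):
    """Forward polynomial radial model: p_d = c + (p_u - c) * (1 + k1*r^2 + k2*r^4)."""
    d = pts - center
    r2 = (d ** 2).sum(axis=1, keepdims=True)
    return center + d * (1 + k1 * r2 + k2 * r2 ** 2)

# Synthetic correspondences under assumed distortion parameters.
center_true = np.array([320.0, 240.0])
k1_true, k2_true = 1.2e-7, 3.0e-13
p_u = rng.uniform([0.0, 0.0], [640.0, 480.0], size=(200, 2))
p_d = distort(p_u, center_true, k1_true, k2_true)

# With the center fixed, the residual p_d - p_u is linear in (k1, k2): build the
# design matrix (one row per coordinate) and solve by least squares.
d = p_u - center_true
r2 = (d ** 2).sum(axis=1, keepdims=True)
A = np.stack([(d * r2).ravel(), (d * r2 ** 2).ravel()], axis=1)
b = (p_d - p_u).ravel()
k_est, *_ = np.linalg.lstsq(A, b, rcond=None)
print(k_est)   # recovers (k1, k2) up to numerical precision in this noiseless setup
```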

Blur is one of the causes of visual discomfort in stereopsis. Sharpening with traditional algorithms can produce an interdifference that causes eyestrain and visual fatigue for the viewer. A sharpness enhancement method for stereo images that incorporates binocular vision cues and depth information is presented. Perceptual evaluations and quantitative results based on the metric of interdifference deviation are reported; the results of the proposed algorithm are competitive with state-of-the-art stereo algorithms.

Digital images and videos are produced every day in astonishing amounts. Consequently, the market-driven demand for higher-quality content is constantly increasing, which leads to the need for image quality assessment (IQA) methods. A training-free, no-reference image sharpness assessment method based on the singular value decomposition of perceptually weighted normalized gradients of relevant pixels in the input image is proposed. Results on six subject-rated publicly available databases show competitive performance when compared with state-of-the-art algorithms.
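The flavor of an SVD-based, no-reference sharpness index can be sketched as follows; the pixel-selection rule, score definition, and synthetic test images are simplifications assumed for the illustration and do not reproduce the thesis's perceptual weighting.

```python
import numpy as np
from scipy import ndimage

def toy_sharpness(img):
    """Toy proxy: SVD of gradient vectors at the most active ("relevant") pixels;
    the dominant singular value shrinks when the image is blurred."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    keep = mag > np.percentile(mag, 90)           # keep the 10% strongest gradients
    G = np.stack([gx[keep], gy[keep]], axis=1)
    s = np.linalg.svd(G, compute_uv=False)
    return s[0] / np.sqrt(len(G))                 # RMS strength along dominant direction

sharp = np.kron(np.eye(8), np.ones((8, 8)))       # blocky image with crisp edges
blurred = ndimage.gaussian_filter(sharp, sigma=2) # defocus-like smoothing
print(toy_sharpness(sharp), toy_sharpness(blurred))   # sharper image scores higher
```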
Contributors: Andrade Rodas, Juan Manuel (Author) / Spanias, Andreas (Thesis advisor) / Turaga, Pavan (Thesis advisor) / Abousleman, Glen (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2019
Item 157840
Description
Over the last decade, deep neural networks, also known as deep learning, combined with large databases and specialized hardware for computation, have made major strides in important areas such as computer vision, computational imaging, and natural language processing. However, such frameworks currently suffer from some drawbacks. For example, it is generally not clear how the architectures should be designed for different applications or how the neural networks behave under different input perturbations, and it is not easy to make the internal representations and parameters more interpretable. In this dissertation, I propose building constraints into the feature maps, parameters, and design of algorithms involving neural networks for applications in low-level vision problems such as compressive imaging and multi-spectral image fusion, and in high-level inference problems including activity and face recognition. Depending on the application, such constraints can be used to design architectures that are invariant or robust to certain nuisance factors, more efficient, and, in some cases, more interpretable. Through extensive experiments on real-world datasets, I demonstrate these advantages of the proposed methods over conventional frameworks.
Contributors: Lohit, Suhas Anand (Author) / Turaga, Pavan (Thesis advisor) / Spanias, Andreas (Committee member) / Li, Baoxin (Committee member) / Jayasuriya, Suren (Committee member) / Arizona State University (Publisher)
Created: 2019
Item 158864
Description
Infants born before 37 weeks of pregnancy are considered preterm. Preterm infants typically have to be strictly monitored, since they are highly susceptible to health problems such as hypoxemia (low blood oxygen level), apnea, respiratory issues, cardiac problems, and neurological problems, as well as an increased chance of long-term health issues such as cerebral palsy, asthma, and sudden infant death syndrome. One of the leading health complications in preterm infants is bradycardia, defined as a slower-than-expected heart rate, generally below 60 beats per minute. Bradycardia is often accompanied by low oxygen levels and can cause additional long-term health problems in the premature infant.

The implementation of a non-parametric method to predict the onset of bradycardia is presented. This method assumes no prior knowledge of the data and uses kernel density estimation to predict the future onset of bradycardia events. The data are preprocessed and then analyzed to detect the peaks in the ECG signals, following which different kernels are implemented to estimate the shared underlying distribution of the data. The performance of the algorithm is evaluated using various metrics, and the computational challenges and methods to overcome them are also discussed. It is observed that the performance of the algorithm with regard to the kernels used is consistent with the theoretical performance of the kernel as presented in a previous work. The theoretical approach has also been automated in this work, and the various implementation challenges have been addressed.
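As a minimal sketch of the kernel-density step, using synthetic inter-beat intervals and only a Gaussian kernel (the thesis compares several kernels on real ECG-derived data), the estimated R-R distribution can be used to score how likely a bradycardia-length interval is:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(5)

# Hypothetical R-R intervals (seconds) from peak detection on a preterm-infant ECG:
# mostly fast normal beats with a handful of longer intervals.
rr = np.concatenate([rng.normal(0.40, 0.03, 500),   # ~150 bpm baseline
                     rng.normal(0.75, 0.10, 25)])    # occasional slow beats

# Gaussian-kernel estimate of the shared underlying R-R distribution.
kde = gaussian_kde(rr, bw_method="scott")

# Crude risk proxy: estimated probability that an interval exceeds 1.0 s, i.e. an
# instantaneous heart rate below 60 beats per minute.
p_brady = kde.integrate_box_1d(1.0, np.inf)
print(f"estimated P(R-R interval > 1.0 s) = {p_brady:.4f}")
```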
Contributors: Mitra, Sinjini (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Moraffah, Bahman (Thesis advisor) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2020
Item 161561
Description
A distributed wireless sensor network (WSN) is a network of a large number of low-cost, multi-functional sensors with power, bandwidth, and memory constraints, operating in remote environments with sensing and communication capabilities. WSNs are a source of large amounts of data, and due to the inherent communication and resource constraints, developing distributed algorithms to perform statistical parameter estimation and data analysis is necessary. In this work, consensus-based distributed algorithms are developed for distributed estimation and processing over WSNs. First, a distributed spectral clustering algorithm that groups the sensors based on their location attributes is developed. Next, a distributed max-consensus algorithm robust to additive noise in the network is designed. Furthermore, distributed spectral radius estimation algorithms for analog as well as digital communication models are developed. The proposed algorithms work for any connected graph topology. Theoretical bounds are derived, and simulation results supporting the theory are also presented.
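The noiseless baseline of max consensus is easy to sketch: each node repeatedly replaces its value with the maximum over its closed neighborhood, and on a connected graph every node reaches the global maximum within a number of iterations bounded by the graph diameter. The 8-node topology and readings below are made up; the noise-robust variant developed in this work requires additional machinery.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical connected WSN: a ring of 8 nodes with two chords (symmetric adjacency list).
neighbors = {0: [1, 7, 3], 1: [0, 2], 2: [1, 3, 6], 3: [2, 4, 0],
             4: [3, 5], 5: [4, 6], 6: [5, 7, 2], 7: [6, 0]}
x = 10 * rng.random(8)                       # local sensor readings

# Max-consensus iterations: keep the largest value seen in the closed neighborhood.
for _ in range(len(x)):                      # >= graph diameter iterations suffice
    x = np.array([max(x[i], *(x[j] for j in neighbors[i])) for i in range(len(x))])

print(np.allclose(x, x.max()))               # True: every node holds the network maximum
```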
Contributors: Muniraju, Gowtham (Author) / Tepedelenlioğlu, Cihan (Thesis advisor) / Spanias, Andreas (Thesis advisor) / Berisha, Visar (Committee member) / Jayasuriya, Suren (Committee member) / Arizona State University (Publisher)
Created: 2021
Item 156919
Description
Motion estimation is a core task in computer vision, and many applications utilize optical flow methods as fundamental tools to analyze motion in images and videos. Optical flow is the apparent motion of objects in image sequences that results from relative motion between the objects and the imaging perspective. Today, optical flow fields are utilized to solve problems in various areas such as object detection and tracking, interpolation, and visual odometry. In this dissertation, three problems from different areas of computer vision, and the solutions that make use of modified optical flow methods, are explained.

The contributions of this dissertation are approaches and frameworks that introduce i) a new optical flow-based interpolation method to achieve minimally divergent velocimetry data, ii) a framework that improves the accuracy of change detection algorithms in synthetic aperture radar (SAR) images, and iii) a set of new methods to integrate Proton Magnetic Resonance Spectroscopic Imaging (1H-MRSI) data into three-dimensional (3D) neuronavigation systems for tumor biopsies.

In the first application, an optical flow-based approach for the interpolation of minimally divergent velocimetry data is proposed. The velocimetry data of incompressible fluids contain signals that describe the flow velocity. The approach uses this additional flow velocity information to guide the interpolation process towards reduced divergence in the interpolated data.
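The divergence criterion itself is simple to write down: for a 2D incompressible field, du/dx + dv/dy should be near zero, and this quantity can be used to score an interpolated frame. The analytic test field below is an assumption for the sketch, not velocimetry data from the dissertation.

```python
import numpy as np

# Hypothetical gridded velocimetry field (u, v) on a regular mesh; for incompressible
# flow the divergence du/dx + dv/dy should be (near) zero.
y, x = np.mgrid[0:1:64j, 0:1:64j]
u = np.sin(np.pi * x) * np.cos(np.pi * y)       # an analytically divergence-free field
v = -np.cos(np.pi * x) * np.sin(np.pi * y)

def divergence(u, v, dx, dy):
    """Discrete divergence used as a quality criterion for interpolated frames."""
    du_dx = np.gradient(u, dx, axis=1)
    dv_dy = np.gradient(v, dy, axis=0)
    return du_dx + dv_dy

div = divergence(u, v, dx=1 / 63, dy=1 / 63)
print("mean |div| of the test field:", np.abs(div).mean())   # small, as expected
```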

In the second application, a framework that mainly consists of optical flow methods, together with other image processing and computer vision techniques, is proposed to improve object extraction from synthetic aperture radar images. The framework distinguishes between actual motion and motion detected due to misregistration in SAR image sets, which can lead to more accurate and meaningful change detection and improved object extraction from SAR datasets.

In the third application, a set of new methods that aim to improve upon the current state of the art in neuronavigation through the use of detailed three-dimensional (3D) 1H-MRSI data is proposed. The result is a progressive form of online MRSI-guided neuronavigation that is demonstrated through phantom validation and clinical application.
Contributors: Kanberoglu, Berkay (Author) / Frakes, David (Thesis advisor) / Turaga, Pavan (Thesis advisor) / Spanias, Andreas (Committee member) / Berisha, Visar (Committee member) / Arizona State University (Publisher)
Created: 2018