Matching Items (14)
Description
Thousands of high-resolution images are generated each day. Detecting and analyzing variations in these images are key steps in image understanding. This work focuses on spatial and multi-temporal visual change detection and its applications to multi-temporal synthetic aperture radar (SAR) images.

The Canny edge detector is one of the most widely used edge detection algorithms due to its superior performance in terms of SNR and edge localization and its single response to a single edge. In this work, we propose a mechanism to implement the Canny algorithm at the block level without any loss in edge detection performance as compared to the original frame-level Canny algorithm. The resulting block-based algorithm has significantly reduced memory requirements and latency, and it can be easily integrated with other block-based image processing systems. In addition, quantitative evaluations and subjective tests show that the edge detection performance of the proposed algorithm is better than that of the original frame-based algorithm, especially when noise is present in the images.
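The abstract does not detail the block-level mechanism, so the following is only a minimal sketch of the general idea, assuming OpenCV's cv2.Canny as the per-block edge operator; the block size, overlap, and median-based threshold rule are illustrative assumptions rather than the thesis's actual design.

```python
import cv2
import numpy as np

def block_canny(image, block=64, overlap=8):
    """Apply Canny per block with locally adapted thresholds (sketch).

    Blocks overlap so that edges crossing block boundaries are not missed,
    and the hysteresis thresholds are derived from each block's own
    statistics instead of frame-level statistics.
    """
    h, w = image.shape                     # expects a grayscale uint8 image
    edges = np.zeros((h, w), dtype=np.uint8)
    step = block - overlap
    for y in range(0, h, step):
        for x in range(0, w, step):
            tile = image[y:y + block, x:x + block]
            if tile.shape[0] < 3 or tile.shape[1] < 3:
                continue                   # too small for the Canny kernel
            # Hypothetical local threshold rule based on the block median.
            med = float(np.median(tile))
            lo = int(max(0, 0.66 * med))
            hi = int(min(255, 1.33 * med))
            edges[y:y + block, x:x + block] |= cv2.Canny(tile, lo, hi)
    return edges
```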

In the context of multi-temporal SAR images for earth monitoring applications, one critical issue is the detection of changes occurring after a natural or anthropic disaster. In this work, we propose a novel similarity measure for automatic change detection using a pair of SAR images acquired at different times and apply it in both the spatial and wavelet domains. This measure is based on the evolution of the local statistics of the image between the two dates. The local statistics are modeled as a Gaussian mixture model (GMM), which is more flexible and better suited to approximating the local distribution of a SAR image with distinct land-cover typologies. Tests on real datasets show that the proposed detectors outperform existing methods both in the quality of the similarity maps, assessed using receiver operating characteristic (ROC) curves, and in the total error rates of the final change detection maps. Furthermore, we propose a new similarity measure for automatic change detection based on a divisive normalization transform (DNT) in order to reduce the computational complexity. Tests show that the proposed DNT-based change detector exhibits competitive detection performance while achieving lower computational complexity than previously suggested methods.
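As a rough illustration of the local-statistics idea (not the thesis's exact similarity measure), one could fit a GMM to each local window of the two dates and score the window by a symmetrized Monte Carlo estimate of the Kullback-Leibler divergence between the fitted densities; the window size, number of mixture components, and sample count below are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_change_map(img1, img2, win=17, n_components=2, n_samples=256):
    """Dissimilarity map between two co-registered SAR images (sketch).

    Each non-overlapping window is modeled by one GMM per date; the score
    is a symmetrized Monte Carlo estimate of the KL divergence between
    the two fitted local densities (high score = likely change).
    """
    h, w = img1.shape
    change = np.zeros((h, w))
    for y in range(0, h - win + 1, win):
        for x in range(0, w - win + 1, win):
            p1 = img1[y:y + win, x:x + win].reshape(-1, 1).astype(float)
            p2 = img2[y:y + win, x:x + win].reshape(-1, 1).astype(float)
            g1 = GaussianMixture(n_components, random_state=0).fit(p1)
            g2 = GaussianMixture(n_components, random_state=0).fit(p2)
            s1, _ = g1.sample(n_samples)   # draws from each local model
            s2, _ = g2.sample(n_samples)
            kl12 = np.mean(g1.score_samples(s1) - g2.score_samples(s1))
            kl21 = np.mean(g2.score_samples(s2) - g1.score_samples(s2))
            change[y:y + win, x:x + win] = kl12 + kl21
    return change
```
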
Contributors: Xu, Qian (Author) / Karam, Lina J (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Bliss, Daniel (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Visual attention (VA) is the study of the mechanisms that allow the human visual system (HVS) to selectively process relevant visual information. This work focuses on the subjective and objective evaluation of computational VA models, both for the distortion-free case and in the presence of image distortions.

Existing VA models are traditionally evaluated using VA metrics that quantify the match between predicted saliency and fixation data obtained from eye-tracking experiments on human observers. Although a considerable number of objective VA metrics exist, no study has validated that these metrics are adequate for the evaluation of VA models. This work constructs a VA Quality (VAQ) Database by subjectively assessing the prediction performance of VA models on distortion-free images. Additionally, shortcomings in existing metrics are discussed through illustrative examples, and a new metric is proposed that uses local weights based on fixation density and overcomes these flaws. The proposed VA metric outperforms all other popular existing metrics in terms of correlation with subjective ratings.
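The abstract does not give the metric's formula; the sketch below only illustrates the general notion of fixation-density-based local weighting, where prediction errors in densely fixated regions are penalized more heavily. The Gaussian bandwidth and the weighted-error form are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def _normalize(m):
    m = m.astype(float)
    span = m.max() - m.min()
    return (m - m.min()) / (span + 1e-12)

def weighted_va_score(saliency, fixations, sigma=25.0):
    """Locally weighted agreement between predicted saliency and fixations.

    A fixation density map is built by Gaussian-smoothing the binary
    fixation map; the absolute prediction error is then averaged with
    weights proportional to that density, so errors in heavily fixated
    regions cost more than errors in rarely attended ones.
    """
    density = gaussian_filter(fixations.astype(float), sigma)
    weights = density / (density.sum() + 1e-12)   # weights sum to 1
    err = np.abs(_normalize(saliency) - _normalize(density))
    return 1.0 - float(np.sum(weights * err))     # 1 = perfect match
```
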
In practice, image quality is affected by a host of factors at several stages of the image processing pipeline, such as acquisition, compression, and transmission. However, no existing study has addressed the subjective and objective evaluation of visual saliency models in the presence of distortion. In this work, a Distortion-based Visual Attention Quality (DVAQ) subjective database is constructed to evaluate the quality of VA maps for images in the presence of distortions. To create this database, saliency maps obtained from images subjected to various types of distortions (including blur, noise, and compression) at varying levels of severity are rated by human observers in terms of their visual resemblance to the corresponding ground-truth fixation density maps. The performance of traditionally used as well as recently proposed VA metrics is evaluated by correlating their scores with the human subjective ratings. In addition, an objective evaluation of 20 state-of-the-art VA models is performed using the top-performing VA metrics, together with a study of how the VA models' prediction performance changes with different types and levels of distortion.
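The evaluation methodology, correlating metric scores with subjective ratings, can be sketched with standard rank and linear correlation coefficients; the score arrays below are hypothetical placeholders.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Hypothetical data: one VA-metric score and one mean opinion score (MOS)
# per (image, VA-model) pair in the subjective database.
metric_scores = np.array([0.61, 0.45, 0.78, 0.52, 0.83, 0.39])
subjective_mos = np.array([3.4, 2.8, 4.1, 3.0, 4.5, 2.2])

srocc, _ = spearmanr(metric_scores, subjective_mos)  # monotonic agreement
plcc, _ = pearsonr(metric_scores, subjective_mos)    # linear agreement
print(f"SROCC = {srocc:.3f}, PLCC = {plcc:.3f}")
```
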
Contributors: Gide, Milind Subhash (Author) / Karam, Lina J (Thesis advisor) / Abousleman, Glen (Committee member) / Li, Baoxin (Committee member) / Reisslein, Martin (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
In recent years, the widespread use of deep neural networks (DNNs) has facilitated great improvements in performance for computer vision tasks like image classification and object recognition. In most realistic computer vision applications, an input image undergoes some form of image distortion, such as blur or additive noise, during acquisition or transmission. Deep networks trained on pristine images perform poorly when tested on such distorted images. DNN predictions have also been shown to be vulnerable to carefully crafted adversarial perturbations. Specifically, so-called universal adversarial perturbations are image-agnostic perturbations that can be added to any image and can fool a target network into making erroneous predictions. This work proposes selective DNN feature regeneration to improve the robustness of existing DNNs to image distortions and universal adversarial perturbations.

In the context of common naturally occurring image distortions, a metric is proposed to identify the most distortion-susceptible DNN convolutional filters and rank them in order of the classification accuracy gain achieved by correcting them. The proposed approach, called DeepCorrect, applies small stacks of convolutional layers with residual connections at the output of these ranked filters and trains them to correct the most distortion-affected filter activations, whilst leaving the rest of the pre-trained filter outputs in the network unchanged. Performance results show that applying DeepCorrect models for common vision tasks significantly improves the robustness of DNNs against distorted images and outperforms other alternative approaches.
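A conceptual PyTorch sketch of such a correction unit is shown below; it is not the authors' released implementation, and the stack depth, width, and corrected-channel indices are illustrative. Only the correction stack would be trained, with the pre-trained network kept frozen.

```python
import torch
import torch.nn as nn

class CorrectionUnit(nn.Module):
    """Residual stack that corrects a ranked subset of filter activations,
    leaving all other channels of the pre-trained layer untouched (sketch).
    """
    def __init__(self, corrected_idx, width=32):
        super().__init__()
        self.idx = list(corrected_idx)   # most distortion-susceptible filters
        k = len(self.idx)
        self.stack = nn.Sequential(
            nn.Conv2d(k, width, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(width, k, kernel_size=3, padding=1),
        )

    def forward(self, acts):             # acts: (B, C, H, W) activations
        out = acts.clone()
        sub = acts[:, self.idx]
        out[:, self.idx] = sub + self.stack(sub)   # residual correction
        return out
```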

In the context of universal adversarial perturbations, departing from existing defense strategies that work mostly in the image domain, a novel and effective defense that operates only in the DNN feature domain is presented. This approach identifies the pre-trained convolutional features that are most vulnerable to adversarial perturbations and deploys trainable feature regeneration units that transform these DNN filter activations into resilient features robust to universal perturbations. Regenerating only the top 50% most adversarially susceptible activations in at most 6 DNN layers, while leaving all remaining DNN activations unchanged, can outperform existing defense strategies across different network architectures and various universal attacks.
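How the most vulnerable activations are identified is described only qualitatively in the abstract; below is a hedged sketch of one plausible ranking criterion, the per-filter activation change induced by the universal perturbation. The thesis's actual susceptibility measure may differ.

```python
import torch

@torch.no_grad()
def rank_susceptible_filters(features, images, perturbation, top_frac=0.5):
    """Return indices of the filters whose activations change most under a
    universal adversarial perturbation (illustrative criterion only).

    `features` is the truncated network up to the layer of interest.
    """
    clean = features(images)                          # (B, C, H, W)
    adv = features(images + perturbation)             # same perturbation for all
    score = (adv - clean).pow(2).mean(dim=(0, 2, 3))  # per-filter distance
    k = max(1, int(top_frac * score.numel()))
    return torch.topk(score, k).indices               # top 50% by default
```
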
Contributors: Borkar, Tejas Shyam (Author) / Karam, Lina J (Thesis advisor) / Turaga, Pavan (Committee member) / Jayasuriya, Suren (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Transportation plays a significant role in every human's life. Numerous factors, such as cost of living, available amenities, and work style, play a vital role in determining the amount of time spent traveling. Such factors, among others, led in part to an increased need for private transportation and, consequently, to an increase in the purchase of private cars. Road safety has also been impacted by factors such as Driving Under the Influence (DUI) and driver distraction due to the increased use of mobile devices while driving. These factors have led to a growing need for Advanced Driver Assistance Systems (ADAS) that help the driver stay aware of the environment and improve road safety.

EcoCAR3 is one of the Advanced Vehicle Technology Competitions, sponsored by the United States Department of Energy (DoE) and managed by Argonne National Laboratory in partnership with the North American automotive industry. Students are challenged beyond the traditional classroom environment in these competitions, where they redesign a donated production vehicle to improve energy efficiency and to meet emission standards while maintaining the features that are attractive to the customer, including but not limited to performance, consumer acceptability, safety, and cost.

This thesis presents a driver assistance system interface that was implemented as part of EcoCAR3, including the adopted sensors, hardware and software components, system implementation, validation, and testing. The implemented driver assistance system uses a combination of range measurement sensors to determine the distance, relative location, and relative velocity of obstacles and surrounding objects, together with a computer vision algorithm for obstacle detection and classification. The sensor system and vision system were tested individually and then combined within the overall system. In addition, a visual and audio feedback system was designed and implemented to provide timely feedback to the driver in an effort to enhance situational awareness and improve safety.
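As a simple illustration of how fused range and relative-velocity estimates can drive the feedback tiers, a time-to-collision rule is sketched below; the thresholds and tier names are hypothetical, not the values used in the competition vehicle.

```python
def warning_level(distance_m, closing_speed_mps):
    """Map a fused range/relative-velocity estimate to an alert tier (sketch).

    Illustrative thresholds only; a deployed ADAS would calibrate these
    against vehicle dynamics and the competition's safety requirements.
    """
    if closing_speed_mps <= 0.0:
        return "none"                       # obstacle not closing in
    ttc = distance_m / closing_speed_mps    # time to collision, in seconds
    if ttc < 1.5:
        return "audio+visual"               # imminent: strongest feedback
    if ttc < 3.0:
        return "visual"                     # early visual warning
    return "none"
```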

Since the driver assistance system was designed and developed as part of a DoE-sponsored competition, it needed to satisfy the competition's requirements and rules. This work attempted to optimize the system in terms of performance, robustness, and cost while satisfying these constraints.
Contributors: Balaji, Venkatesh (Author) / Karam, Lina J (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Yu, Hongbin (Committee member) / Arizona State University (Publisher)
Created: 2019