In everyday life, we capture, store, receive, view, utilize, and share images. In image-based applications, images pass through different processing stages (e.g., acquisition, compression, and transmission) and are subjected to different types of distortions that degrade their visual quality. Image Quality Assessment (IQA) uses computational models to automatically evaluate and estimate image quality in accordance with subjective evaluations. Moreover, with the fast development of computer vision techniques, it is important in practice to extract and understand the information contained in blurred images or regions.
The work in this dissertation focuses on reduced-reference visual quality assessment of
images and textures, as well as perceptual-based spatially-varying blur detection.
A training-free, low-cost Reduced-Reference IQA (RRIQA) method is proposed that requires only a very small number of reduced-reference (RR) features. Extensive experiments on different benchmark databases demonstrate that the proposed RRIQA method delivers highly competitive performance compared with state-of-the-art RRIQA models for both natural and texture images.
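The reduced-reference idea can be illustrated with a minimal sketch: a small feature vector is extracted from the reference image and transmitted alongside it, and quality is later predicted by comparing it against the same features computed from the received (possibly distorted) image. The multiscale gradient statistics and exponential pooling below are hypothetical illustrations, not the dissertation's actual features or model.

```python
import numpy as np

def rr_features(img, n_scales=3):
    """Hypothetical reduced-reference features: per-scale mean and
    standard deviation of gradient magnitudes (2 numbers per scale)."""
    feats = []
    for s in range(n_scales):
        step = 2 ** s
        sub = img[::step, ::step].astype(float)   # crude downsampling
        gy, gx = np.gradient(sub)
        mag = np.hypot(gx, gy)
        feats.extend([mag.mean(), mag.std()])
    return np.array(feats)

def rr_quality_score(ref_feats, dist_img):
    """Predict quality from the tiny RR feature vector alone;
    the full reference image is never needed at the receiver."""
    d = np.abs(ref_feats - rr_features(dist_img))
    return float(np.exp(-d.mean()))   # 1.0 = statistics match exactly
```

Only the six numbers returned by `rr_features` travel with the image, which is what makes the approach low-cost compared with full-reference metrics.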
In the context of texture, the effect of texture granularity on the quality of synthesized
textures is studied. Moreover, two RR objective visual quality assessment methods that
quantify the perceived quality of synthesized textures are proposed. Performance evaluations
on two synthesized texture databases demonstrate that the proposed RR metrics outperform full-reference (FR), no-reference (NR), and RR state-of-the-art quality metrics in predicting the perceived visual quality of the synthesized textures.
Last but not least, an effective approach is proposed to address the spatially-varying blur detection problem from a single image without requiring any knowledge about the blur type, level, or camera settings. Evaluations of the proposed approach on diverse sets of blurry images with different blur types, levels, and content demonstrate that the proposed algorithm performs favorably against state-of-the-art methods both qualitatively and quantitatively.
Existing Visual Attention (VA) models are traditionally evaluated using VA metrics that quantify the match between predicted saliency and fixation data obtained from eye-tracking experiments on human observers. Although there is a considerable number of objective VA metrics, no study has validated that these metrics are adequate for evaluating VA models. This work constructs a VA Quality (VAQ) Database by subjectively assessing the prediction performance of VA models on distortion-free images. Additionally, shortcomings in existing metrics are discussed through illustrative examples, and a new metric that uses local weights based on fixation density and overcomes these flaws is proposed. The proposed VA metric outperforms all other popular existing metrics in terms of correlation with subjective ratings.
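The idea of locally weighting a VA metric by fixation density can be sketched as a weighted correlation between the predicted saliency map and the ground-truth fixation density map, so that prediction errors near heavily fixated regions count more. This is a simplified illustration of the weighting principle, not the proposed metric itself.

```python
import numpy as np

def weighted_saliency_score(sal, fix_density, eps=1e-8):
    """Sketch of a fixation-density-weighted VA metric: the weighted
    Pearson correlation between a predicted saliency map `sal` and the
    ground-truth fixation density map, with per-pixel weights drawn
    from the fixation density itself."""
    w = (fix_density / (fix_density.sum() + eps)).ravel()
    s = sal.ravel().astype(float)
    f = fix_density.ravel().astype(float)
    ms, mf = (w * s).sum(), (w * f).sum()          # weighted means
    cov = (w * (s - ms) * (f - mf)).sum()
    var_s = (w * (s - ms) ** 2).sum()
    var_f = (w * (f - mf) ** 2).sum()
    return float(cov / np.sqrt(var_s * var_f + eps))
```

An unweighted correlation treats every pixel equally, so large empty background regions can dominate the score; the fixation-density weights concentrate the comparison where observers actually looked.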
In practice, image quality is affected by a host of factors at several stages of the image processing pipeline, such as acquisition, compression, and transmission. However, no existing study has addressed the subjective and objective evaluation of visual saliency models in the presence of distortion. In this work, a Distortion-based Visual Attention Quality (DVAQ) subjective database is constructed to evaluate the quality of VA maps for images in the presence of distortions. For creating this database, saliency maps obtained from images subjected to various types of distortions, including blur, noise, and compression, at varying levels of distortion severity are rated by human observers in terms of their visual resemblance to corresponding ground-truth fixation density maps. The performance of traditionally used as well as recently proposed VA metrics is evaluated by correlating their scores with the human subjective ratings. In addition, an objective evaluation of 20 state-of-the-art VA models is performed using the top-performing VA metrics, together with a study of how the VA models' prediction performance changes with different types and levels of distortions.
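Correlating metric scores with subjective ratings is conventionally done with rank correlation. A minimal Spearman correlation sketch (ignoring tied ranks, which a production implementation would average) is:

```python
import numpy as np

def spearman_corr(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    This minimal version assumes no ties among the values."""
    def ranks(v):
        order = np.argsort(v)
        r = np.empty(len(v))
        r[order] = np.arange(len(v))
        return r
    rx, ry = ranks(np.asarray(x, float)), ranks(np.asarray(y, float))
    rx = (rx - rx.mean()) / rx.std()   # standardize the ranks
    ry = (ry - ry.mean()) / ry.std()
    return float((rx * ry).mean())
```

A metric whose scores rise monotonically with the human ratings scores 1.0 regardless of any nonlinearity in the relationship, which is why rank correlation is preferred over raw Pearson correlation for this kind of validation.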
Cornhole, traditionally seen as tailgate entertainment, has rapidly risen in popularity since the launch of the American Cornhole League (ACL) in 2016. However, it lacks robust quality control over large tournaments, since many of the matches are scored and refereed by the players themselves. In the past, entire competition brackets have had to be scrapped and replayed because scores were not handled correctly. The sport needs a supplementary scoring solution that can provide quality control and accuracy in large tournaments where there are not enough referees present to score every game. Drawing from the ACL regulations as well as personal experience and testimony from ACL Pro players, a list of requirements was generated for a potential automatic scoring system. Then, a market analysis of existing scoring solutions was conducted, which found that no solution on the market can automatically score a cornhole game. Using the problem requirements and previous attempts to solve the scoring problem, a list of concepts was generated, and the concepts were evaluated against each other to determine which scoring system design should be developed. After determining that the chosen concept was the best way to approach the problem, the problem requirements and cornhole rules were further refined into a set of physical assumptions and constraints about the game itself. These informed the choice, structure, and implementation of the algorithms that score the bags. The prototype was then tested, and areas of improvement were identified. Lastly, based on the results of the tests and what was learned from the engineering process, a roadmap was set out for the future development of the automatic scoring system into a full, market-ready product.
The robustness of a neural network is defined as the stability of the network output under small input perturbations. It has been shown that neural networks are very sensitive to input perturbations: the predictions of convolutional neural networks can be entirely different for input images that are visually indistinguishable to the human eye. Exploiting this property, attackers can reverse-engineer inputs to trick machine learning systems in targeted ways. These adversarial attacks have proven surprisingly effective, which has raised serious concerns over safety-critical applications such as autonomous driving. Meanwhile, many established defense mechanisms have been shown to be vulnerable to more advanced attacks proposed later, and how to improve the robustness of neural networks remains an open question.
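How a small, targeted perturbation can flip a prediction is illustrated below with the Fast Gradient Sign Method (FGSM), a standard attack from the adversarial-examples literature (not a method proposed in this dissertation), applied to a toy logistic classifier rather than a deep network. The weights and inputs are made up for the demonstration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """FGSM on a logistic classifier p = sigmoid(w @ x + b): take one
    step of size eps (in the L-infinity sense) along the sign of the
    cross-entropy loss gradient with respect to the input x."""
    p = sigmoid(x @ w + b)
    grad_x = (p - y) * w   # d(cross-entropy)/dx for a logistic model
    return x + eps * np.sign(grad_x)

# A point correctly classified as class 1 (hypothetical toy model) ...
w, b = np.array([1.0, -1.0]), 0.0
x = np.array([0.3, 0.1])
# ... is misclassified after a perturbation of at most 0.2 per coordinate.
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=0.2)
```

The perturbation changes no coordinate by more than `eps`, yet it is constructed precisely to increase the loss, which is why such attacks succeed far more often than random noise of the same magnitude.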
The generalizability of neural networks refers to their ability to perform well on unseen data rather than only the data they were trained on. Neural networks often fail to generalize reliably when the test data are drawn from a different distribution than the training data, which makes autonomous driving systems risky in new environments. Generalizability can also be limited whenever training data are scarce, and acquiring large datasets, whether experimentally or numerically, can be expensive for engineering applications such as material and chemical design.
In this dissertation, we are thus motivated to improve the robustness and generalizability of neural networks. First, unlike traditional bottom-up classifiers, we use a pre-trained generative model to perform top-down reasoning and infer the label information. The proposed generative classifier is shown to be promising in handling input distribution shifts. Second, we focus on improving network robustness and propose an extension to adversarial training that accounts for transformation invariance. The proposed method improves robustness over state-of-the-art methods by 2.5% on MNIST and 3.7% on CIFAR-10. Third, we focus on designing networks that generalize well at predicting physical responses. Our prior physics knowledge is used to guide the design of the network architecture, which enables efficient learning and inference. The proposed network generalizes well even when trained with a single image pair.