Matching Items (39)

Description
Over the last decade, deep neural networks, also known as deep learning, combined with large databases and specialized hardware for computation, have made major strides in important areas such as computer vision, computational imaging and natural language processing. However, such frameworks currently suffer from some drawbacks. For example, it is generally not clear how architectures should be designed for different applications, how the networks behave under different input perturbations, or how to make their internal representations and parameters more interpretable. In this dissertation, I propose building constraints into feature maps, parameters, and the design of algorithms involving neural networks for applications in low-level vision problems such as compressive imaging and multi-spectral image fusion, and high-level inference problems including activity and face recognition. Depending on the application, such constraints can be used to design architectures which are invariant/robust to certain nuisance factors, more efficient and, in some cases, more interpretable. Through extensive experiments on real-world datasets, I demonstrate these advantages of the proposed methods over conventional frameworks.
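To make the flavor of such constraints concrete, below is a minimal sketch, assuming a linear compressive measurement layer whose rows are re-projected onto an orthonormal set after each optimizer step; the class and projection choice are illustrative, not the dissertation's exact formulation.

```python
import torch
import torch.nn as nn

class ConstrainedMeasurement(nn.Module):
    """Linear compressive measurement layer with row-orthonormal weights."""

    def __init__(self, n_pixels: int, n_measurements: int):
        super().__init__()
        self.phi = nn.Parameter(torch.randn(n_measurements, n_pixels))

    @torch.no_grad()
    def project(self):
        # Snap phi to the nearest row-orthonormal matrix (in Frobenius norm)
        # via its SVD: phi <- U V^T.
        u, _, vT = torch.linalg.svd(self.phi, full_matrices=False)
        self.phi.copy_(u @ vT)

    def forward(self, x):           # x: (batch, n_pixels)
        return x @ self.phi.T       # y: (batch, n_measurements)

# In training, call layer.project() after each optimizer.step() so the
# learned measurement operator stays well-conditioned.
```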
Contributors: Lohit, Suhas Anand (Author) / Turaga, Pavan (Thesis advisor) / Spanias, Andreas (Committee member) / Li, Baoxin (Committee member) / Jayasuriya, Suren (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Disentangling latent spaces is an important research direction in the interpretability of unsupervised machine learning. Several recent works using deep learning are very effective at producing disentangled representations. However, in the unsupervised setting, there is no way to pre-specify which part of the latent space captures specific factors of variation. While this is generally a hard problem because of the non-existence of analytical expressions to capture these variations, there are certain factors, like geometric transforms, that can be expressed analytically. Furthermore, in existing frameworks, the disentangled values are also not interpretable. The focus of this work is to disentangle these geometric factors of variation (which turn out to be nuisance factors for many applications) from the semantic content of the signal in an interpretable manner, which in turn makes the features more discriminative. Experiments are designed to show the modularity of the approach with other disentangling strategies as well as on multiple one-dimensional (1D) and two-dimensional (2D) datasets, clearly indicating the efficacy of the proposed approach.
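As a rough sketch of the idea, assuming 28x28 inputs and rotation as the geometric factor (the toy architecture below is an illustration, not the thesis's model), an autoencoder can route its latent code through an explicit, interpretable angle plus a separate content vector:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeometryDisentangler(nn.Module):
    """Toy autoencoder whose latent splits into an explicit rotation angle
    (interpretable geometric factor) and a content code (semantic factor)."""

    def __init__(self, content_dim: int = 16):
        super().__init__()
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU())
        self.to_angle = nn.Linear(128, 1)               # geometric factor
        self.to_content = nn.Linear(128, content_dim)   # semantic content
        self.dec = nn.Linear(content_dim, 28 * 28)

    def forward(self, x):                               # x: (B, 1, 28, 28)
        h = self.enc(x)
        theta = self.to_angle(h).squeeze(-1)            # predicted angle (B,)
        canon = self.dec(self.to_content(h)).view(-1, 1, 28, 28)
        # Warp the canonical reconstruction by the predicted angle so the
        # output matches the input pose (sign conventions aside).
        cos, sin, zero = torch.cos(theta), torch.sin(theta), torch.zeros_like(theta)
        rot = torch.stack([torch.stack([cos, -sin, zero], -1),
                           torch.stack([sin, cos, zero], -1)], -2)  # (B, 2, 3)
        grid = F.affine_grid(rot, list(canon.shape), align_corners=False)
        return F.grid_sample(canon, grid, align_corners=False), theta
```

A plain reconstruction loss on the output then pushes geometry into theta and everything else into the content code.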
Contributors: Koneripalli Seetharam, Kaushik (Author) / Turaga, Pavan (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Jayasuriya, Suren (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Over the past decade, machine learning research has made great strides and significant impact in several fields. Its success is greatly attributed to the development of effective machine learning algorithms like deep neural networks (a.k.a. deep learning), availability of large-scale databases and access to specialized hardware like Graphics Processing Units. When designing and training machine learning systems, researchers often assume access to large quantities of data that capture different possible variations. Variation in the data is needed to incorporate desired invariance and robustness properties in the machine learning system, especially in the case of deep learning algorithms. However, it is very difficult to gather such data in a real-world setting. For example, in certain medical/healthcare applications, it is very challenging to have access to data from all possible scenarios or with the necessary amount of variation as required to train the system. Additionally, the over-parameterized and unconstrained nature of deep neural networks can cause them to be poorly trained and in many cases over-confident, which, in turn, can hamper their reliability and generalizability. This dissertation is a compendium of my research efforts to address the above challenges. I propose building invariant feature representations by wedding concepts from topological data analysis and Riemannian geometry that automatically incorporate the desired invariance properties for different computer vision applications. I discuss how deep learning can be used to address some of the common challenges faced when working with topological data analysis methods. I describe alternative learning strategies based on unsupervised learning and transfer learning to address issues like dataset shifts and limited training data. Finally, I discuss my preliminary work on applying simple orthogonal constraints on deep learning feature representations to help develop more reliable and better calibrated models.
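As one concrete illustration of this last idea, here is a simple batch-level soft-orthogonality penalty on normalized features; this particular form is an assumption for illustration, not necessarily the dissertation's exact constraint.

```python
import torch
import torch.nn.functional as F

def orthogonality_penalty(features: torch.Tensor) -> torch.Tensor:
    """Soft orthogonality loss on a batch of feature vectors (batch, dim).

    Pushes the Gram matrix of L2-normalized features toward the identity,
    i.e., features of different samples toward mutual orthogonality.
    """
    f = F.normalize(features, dim=1)
    gram = f @ f.T                                      # (batch, batch)
    eye = torch.eye(gram.shape[0], device=gram.device)
    return ((gram - eye) ** 2).sum() / gram.shape[0]

# Typical use: loss = cross_entropy + lam * orthogonality_penalty(penultimate)
```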
Contributors: Som, Anirudh (Author) / Turaga, Pavan (Thesis advisor) / Krishnamurthi, Narayanan (Committee member) / Spanias, Andreas (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Open Design is a crowd-driven global ecosystem which tries to challenge and alter contemporary modes of capitalistic hardware production. It strives to build on the collective skills, expertise and efforts of people regardless of their educational, social or political backgrounds to develop and disseminate physical products, machines and systems. In contrast to capitalistic hardware production, Open Design practitioners publicly share design files, blueprints and know-how through various channels including internet platforms and in-person workshops. These designs are typically replicated, modified, improved and reshared by individuals and groups who are broadly referred to as ‘makers’.

This dissertation aims to expand the current scope of Open Design within human-computer interaction (HCI) research through a long-term exploration of Open Design’s socio-technical processes. I examine Open Design from three perspectives: the functional—materials, tools, and platforms that enable crowd-driven open hardware production, the critical—materially-oriented engagements within open design as a site for sociotechnical discourse, and the speculative—crowd-driven critical envisioning of future hardware.

More specifically, this dissertation first explores the growing global scene of Open Design through a long-term ethnographic study of the open science hardware (OScH) movement, a genre of Open Design. This long-term study of OScH provides a focal point for HCI to deeply understand Open Design's growing global landscape. Second, it examines the application of Critical Making within Open Design through an OScH workshop with designers, engineers, artists and makers from local communities. This work foregrounds the role of HCI researchers as facilitators of collaborative critical engagements within Open Design. Third, this dissertation introduces the concept of crowd-driven Design Fiction through the development of a publicly accessible online Design Fiction platform named Dream Drones. Through a six-month-long development process and a study with drone-related practitioners, it offers several pragmatic insights into the challenges and opportunities for crowd-driven Design Fiction. Through these explorations, I highlight the broader implications and novel research pathways for HCI to shape and be shaped by the global Open Design movement.
Contributors: Fernando, Kattak Kuttige Rex Piyum (Author) / Kuznetsov, Anastasia (Thesis advisor) / Turaga, Pavan (Committee member) / Middel, Ariane (Committee member) / Takamura, John (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Cameras have become commonplace, with wide-ranging applications in phone photography, computer vision, and medical imaging. With a growing need to reduce size and cost while maintaining image quality, it is becoming more apparent that we must look past the traditional style of camera. Several non-traditional cameras have shown themselves to be promising options for size-constrained applications, and while they may offer several advantages, they are also usually limited by image quality degradation due to their optics or a need to reconstruct a captured image. In this thesis, we take a look at three of these non-traditional cameras: a pinhole camera, a diffusion-mask lensless camera, and an under-display camera (UDC).

For each of these cases, I present a feasible image restoration pipeline to correct for their particular limitations. For the pinhole camera, I present an early pipeline to allow for practical pinhole photography by reducing the noise levels caused by low-light imaging, enhancing exposure levels, and sharpening the blur caused by the pinhole. For lensless cameras, we explore a neural network architecture that performs joint image reconstruction and point spread function (PSF) estimation to robustly recover images captured with multiple PSFs from different cameras. Using adversarial learning, this approach achieves improved reconstruction results that do not require explicit knowledge of the PSF at test time and shows an added improvement in the reconstruction model's ability to generalize to variations in the camera's PSF. This allows lensless cameras to be utilized in a wider range of applications that require multiple cameras, without the need to explicitly train a separate model for each new camera. For UDCs, we utilize a multi-stage approach to correct for low light transmission, blur, and haze. This pipeline uses a PyNET deep neural network architecture to perform a majority of the restoration, while additionally using a traditional optimization approach whose output is fused in a learned manner in the second stage to improve high-frequency features. I show results from this novel fusion approach that are on par with the state of the art.
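For context on what the learned reconstruction replaces, the classical baseline when the PSF is known is Wiener deconvolution; the sketch below is an illustration of that baseline, not the thesis's pipeline.

```python
import numpy as np

def wiener_deconvolve(measurement: np.ndarray, psf: np.ndarray,
                      snr: float = 100.0) -> np.ndarray:
    """Recover an image from a blurred/lensless measurement with a known PSF.

    measurement, psf: 2-D arrays of the same shape; snr: assumed
    signal-to-noise ratio controlling regularization strength.
    """
    H = np.fft.fft2(np.fft.ifftshift(psf))          # center PSF at the origin
    Y = np.fft.fft2(measurement)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)   # Wiener filter
    return np.real(np.fft.ifft2(W * Y))
```

The appeal of the learned, adversarial approach described above is precisely that it avoids needing this explicit PSF at test time.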
Contributors: Rego, Joshua D (Author) / Jayasuriya, Suren (Thesis advisor) / Blain Christen, Jennifer (Thesis advisor) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
The field of Computer Vision has seen great accomplishments in the last decade due to the advancements in Deep Learning. With the advent of Convolutional Neural Networks, the task of image classification has achieved unimaginable success when perceived through the traditional Computer Vision lens. That being said, the state-of-the-art results in the image classification task were produced under a closed-set assumption, i.e., the class labels encountered in the testing phase are assumed to be known at training time. In any real-world scenario, the model encounters unknown instances in the data. The task of identifying these unknown instances is called Open-Set Classification. This dissertation addresses the detection of unknown classes alongside the classification of the known classes. The problem is approached using a neural network architecture called Deep Hierarchical Reconstruction Nets (DHRNets), leveraging the reconstruction part of DHRNets to identify the known class labels from the data. Experiments were also conducted with Convolutional Neural Networks (CNNs) on the basis of softmax probability, autoencoders on the basis of reconstruction loss, and Mahalanobis distance on CNN features to approach this problem.
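The decision rule shared by these reconstruction-based detectors is easy to state; a minimal sketch follows, where `classify`, `reconstruct`, and `threshold` are placeholders, and the threshold is assumed to be tuned on held-out known-class data.

```python
import numpy as np

def open_set_predict(x, classify, reconstruct, threshold):
    """Return a known-class label, or -1 for 'unknown'.

    classify(x)    -> predicted label among the known classes
    reconstruct(x) -> the model's reconstruction of x
    Samples the model reconstructs poorly are flagged as unknown.
    """
    err = np.mean((np.asarray(x) - np.asarray(reconstruct(x))) ** 2)
    return classify(x) if err <= threshold else -1
```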
Contributors: Ainala, Kalyan (Author) / Turaga, Pavan (Thesis advisor) / Moraffah, Bahman (Committee member) / Demakethepalli Venkateswara, Hemanth Kumar (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Statistical Shape Modeling is widely used to study the morphometrics of deformable objects in computer vision and biomedical studies. There are mainly two viewpoints from which to understand shapes. On one hand, the outer surface of the shape can be taken as a two-dimensional embedding in space. On the other hand, the outer surface along with its enclosed internal volume can be taken as a three-dimensional embedding of interest. Most studies focus on the surface-based perspective by leveraging the intrinsic features on the tangent plane. But a two-dimensional model may fail to fully represent the realistic properties of shapes with both intrinsic and extrinsic properties. In this thesis, several Stochastic Partial Differential Equations (SPDEs) are thoroughly investigated, and several methods originating from these SPDEs are developed to address both two-dimensional and three-dimensional shape analysis. The unique physical meanings of these SPDEs inspired the features, shape descriptors, metrics, and kernels found in this series of works. Initially, the data generation of high-dimensional shapes, here tetrahedral meshes, is introduced. The cerebral cortex is taken as the study target, and an automatic pipeline for generating the gray-matter tetrahedral mesh is introduced. Then, a discretized Laplace-Beltrami operator (LBO) and a Hamiltonian operator (HO) in the tetrahedral domain are derived with the Finite Element Method (FEM). Two high-dimensional shape descriptors are defined based on the solutions of the heat equation and Schrödinger’s equation. Considering the fact that high-dimensional shape models usually contain massive redundancies, and the demand for effective landmarks in many applications, Gaussian process landmarking on tetrahedral meshes is further studied. A SIWKS-based metric space is used to define a geometry-aware Gaussian process. The study of the periodic potential diffusion process further inspired the idea of a new kernel called the geometry-aware convolutional kernel. A series of Bayesian learning methods are then introduced to tackle the problems of shape retrieval and classification. Experiments on each component are demonstrated. From popular SPDEs such as the heat equation and Schrödinger’s equation to the general potential diffusion equation and the specific periodic potential diffusion equation, this work clearly shows that classical SPDEs play an important role in discovering new features, metrics, shape descriptors and kernels. I hope this thesis can serve as an example of using interdisciplinary knowledge to solve problems.
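To give a flavor of a heat-equation-based descriptor, the classic surface analogue is the heat kernel signature, computed directly from LBO eigenpairs; the sketch below implements that standard formula, not the thesis's tetrahedral variant.

```python
import numpy as np

def heat_kernel_signature(eigvals: np.ndarray, eigvecs: np.ndarray,
                          times: np.ndarray) -> np.ndarray:
    """HKS(v, t) = sum_i exp(-lambda_i * t) * phi_i(v)^2.

    eigvals: (k,) nonnegative LBO eigenvalues
    eigvecs: (n_vertices, k) eigenfunctions sampled at mesh vertices
    times:   (t,) diffusion times
    Returns an (n_vertices, t) descriptor array.
    """
    decay = np.exp(-np.outer(eigvals, times))   # (k, t)
    return (eigvecs ** 2) @ decay               # (n_vertices, t)
```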
Contributors: Fan, Yonghui (Author) / Wang, Yalin (Thesis advisor) / Lepore, Natasha (Committee member) / Turaga, Pavan (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Floating trash objects are very commonly seen on water bodies such as lakes, canals and rivers. With the increase of plastic goods and human activities near water bodies, these trash objects can pile up and cause great harm to the surrounding environment. Using human workers to clear out this trash is a hazardous and time-consuming task. Employing autonomous robots for these tasks is a better approach, since they are faster and more efficient than humans. However, for a robot to clean up trash objects, a good detection algorithm is required. Real-time object detection on water surfaces is a challenging problem due to the nature of the environment and the volatility of the water surface. In addition, running an object detection algorithm on the on-board processor of a robot limits the CPU resources the algorithm can utilize. In this thesis, a computationally low-cost object detection approach for robust detection of trash objects, running on the on-board processor of a multirotor, is presented. To account for specular reflections on the water surface, we use a polarization filter and integrate a specularity removal algorithm into our approach as well. The challenges faced during testing and the means taken to eliminate them are also discussed. The algorithm was compared with two other object detectors using four different metrics. Testing was carried out using videos of five different objects collected under different illumination conditions over a lake using a multirotor. The results indicate that our algorithm is well suited for real-time deployment, since it had the highest processing speed of 21 FPS, the lowest CPU consumption of 37.5%, and considerably high precision and recall values in detecting the objects.
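As a software-side complement to the polarization filter, a crude single-image suppression of specular glints can be sketched as below; this rough stand-in uses the per-pixel channel minimum from the dichromatic reflection model and is not the thesis's actual algorithm.

```python
import numpy as np

def suppress_specular(img: np.ndarray) -> np.ndarray:
    """Damp specular highlights in an RGB float image with values in [0, 1].

    Under the dichromatic reflection model, the per-pixel channel minimum is
    approximately free of the specular component, so subtracting the
    above-typical part of it reduces glints on the water surface.
    """
    spec = img.min(axis=2, keepdims=True)            # per-pixel glare estimate
    spec = np.clip(spec - np.median(spec), 0.0, 1.0)
    return np.clip(img - spec, 0.0, 1.0)
```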
Contributors: Syed, Danish Faraaz (Author) / Zhang, Wenlong (Thesis advisor) / Yang, Yezhou (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Realistic lighting is important to improve immersion and make mixed reality applications seem more plausible. To properly blend AR objects into the real scene, it is important to study the lighting of the environment. The existing illumination frameworks proposed by Google’s ARCore (Google’s Augmented Reality Software Development Kit) and Apple’s ARKit (Apple’s Augmented Reality Software Development Kit) are computationally expensive and have very slow refresh rates, which make them unsuitable for dynamic environments and low-end mobile devices. Recently, there have been other illumination estimation frameworks, such as GLEAM and Xihe, which aim at providing better illumination with faster refresh rates. GLEAM is an illumination estimation framework that understands the real scene by collecting pixel data from a reflective spherical light probe. GLEAM uses this data to form environment cubemaps which are later mapped onto a reflection probe to generate illumination for AR objects. From a single viewpoint, only one half of the light probe can be observed at a time, which does not give complete information about the environment. This leads to the idea of multi-viewpoint estimation for better performance. This thesis work analyzes the multi-viewpoint capabilities of AR illumination frameworks that use physical light probes to understand the environment. The current work adds networking to GLEAM using the TCP and UDP protocols. This thesis work also documents how processor load is shared across networked devices and how that benefits the performance of GLEAM on mobile devices. Some enhancements using multi-threading have also been made to the existing GLEAM model to improve its performance.
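To illustrate the TCP/UDP trade-off at play, a hypothetical sketch follows (not GLEAM's actual protocol; the peer address and function names are placeholders): one viewpoint pushes compressed probe frames to a peer over UDP for low latency, accepting occasional drops where TCP would instead guarantee delivery at higher latency.

```python
import socket

PEER = ("192.168.1.42", 9000)   # hypothetical address of the merging device

def send_probe_frame(jpeg_bytes: bytes, sock: socket.socket) -> None:
    # UDP datagrams are size-limited (~64 KB), so each probe frame must be
    # compressed (e.g., JPEG) or chunked before sending; lost frames are
    # simply superseded by the next capture.
    sock.sendto(jpeg_bytes, PEER)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# send_probe_frame(encoded_frame, sock)   # called once per captured frame
```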
Contributors: Gurram, Sahithi (Author) / LiKamWa, Robert (Thesis advisor) / Jayasuriya, Suren (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2022