Feature extraction processes can be categorized into three groups. The first group contains features that are hand-crafted for a specific task. Hand-engineering features requires domain expertise and manual labor, but the resulting extraction process is interpretable and explainable. The next group contains latent-feature extraction processes. While the original features lie in a high-dimensional space, the factors relevant to a task often lie on a lower-dimensional manifold. Latent-feature extraction employs hidden variables to expose underlying data properties that cannot be measured directly from the input; it typically imposes a specific structure, such as sparsity or low rank, on the derived representation through sophisticated optimization techniques. The last category is that of deep features, obtained by passing raw input data with minimal pre-processing through a deep network whose parameters are computed by iteratively minimizing a task-based loss.
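The three categories above can be illustrated with a minimal sketch. The specific features below (an intensity histogram, a PCA projection, and a randomly initialized two-layer network) are invented for illustration, not the dissertation's actual methods:

```python
import numpy as np

rng = np.random.default_rng(0)
images = rng.random((16, 8, 8))  # toy batch of 16 grayscale 8x8 "images"

# 1) Hand-crafted feature: a fixed, interpretable intensity histogram.
def histogram_feature(img, bins=8):
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

# 2) Latent feature: project onto top principal components,
#    imposing a low-rank structure on the representation.
def pca_features(batch, k=4):
    flat = batch.reshape(len(batch), -1)
    centered = flat - flat.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T

# 3) Deep feature: activations of a (here untrained) two-layer network;
#    in practice the weights come from minimizing a task-based loss.
def deep_features(batch, hidden=16, out=4):
    flat = batch.reshape(len(batch), -1)
    w1 = rng.normal(size=(flat.shape[1], hidden))
    w2 = rng.normal(size=(hidden, out))
    return np.maximum(flat @ w1, 0.0) @ w2  # ReLU, then linear layer

print(histogram_feature(images[0]).shape)  # (8,)
print(pca_features(images).shape)          # (16, 4)
print(deep_features(images).shape)         # (16, 4)
```

The contrast is in where the representation comes from: the histogram is fixed by hand, the PCA basis is optimized from the data under a structural constraint, and the network weights would be fit end-to-end against a loss.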
In this dissertation, I present four pieces of work in which I create and learn suitable data representations. The first task employs hand-crafted features to perform clinically relevant retrieval of diabetic retinopathy images. The second task uses latent features to perform content-adaptive image enhancement. The third task ranks pairs of images by aesthetic quality, and the goal of the last task is to capture localized image artifacts in small datasets with patch-level labels. For these last two tasks, I propose novel deep architectures and show significant improvement over previous state-of-the-art approaches. A suitable combination of feature representations, augmented with an appropriate learning approach, can increase performance on most visual computing tasks.
The built environment is responsible for a significant portion of global waste generation.
Construction and demolition (C&D) waste requires significant landfill areas and costs
billions of dollars. New business models that reduce this waste may prove to be financially
beneficial and generally more sustainable. One such model is referred to as the “Circular
Economy” (CE), which promotes the efficient use of materials to minimize waste
generation and raw material consumption. CE is achieved by maximizing the life of
materials and components and by reclaiming the typically wasted value at the end of their
life. This thesis identifies the potential opportunities for using CE in the built environment.
It first calculates the magnitude of C&D waste and its main streams, highlights the top
C&D materials based on weight and value using data from various regions, identifies the
top C&D materials’ current recycling and reuse rates, and finally estimates a potential
financial benefit of $3.7 billion from redirecting C&D waste using the CE concept in the
United States.
Leveraging Machine Learning and Wireless Sensing for Robot Localization - Location Variance Analysis
Modern communication networks depend heavily on an estimate of the communication channel, which represents the distortions a transmitted signal undergoes on its way to a receiver. A channel can become quite complicated due to signal reflections, delays, and other undesirable effects and, as a result, varies significantly from one location to another. This localization system exploits that distinctness by feeding channel information into a machine learning algorithm trained to associate channels with their respective locations. A device in need of localization then only needs to compute a channel estimate and pass it to the algorithm to obtain its location.
As an additional step, this report investigates the effect of location noise. After the localization system described above shows promising results, the team demonstrates that it is robust to noise in its location labels. This implies the system could operate in a continued-learning environment, in which user agents report their estimated (noisy) locations over the wireless network, so that the model can be deployed without extensive data collection prior to release.
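The channel-fingerprinting idea with noisy labels can be sketched in a few lines. Everything here is an assumption for illustration: a toy reflector-based channel model, a k-nearest-neighbors lookup in place of the team's actual learning algorithm, and invented noise scales:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 4 fixed "reflectors" in a 10x10 area; the channel
# estimate at a location is a vector of distance-dependent phasors plus
# measurement noise (a toy stand-in for a real channel estimate).
reflectors = rng.uniform(0, 10, size=(4, 2))

def channel(p, noise=0.05):
    d = np.linalg.norm(reflectors - p, axis=1)
    h = np.exp(1j * 2 * np.pi * d / 20.0)  # smooth phase vs. distance
    h += noise * (rng.normal(size=4) + 1j * rng.normal(size=4))
    return np.concatenate([h.real, h.imag])  # real-valued feature vector

# Training set: channels at known locations, but with NOISY location
# labels, as user agents would report in a continued-learning setting.
train_locs = rng.uniform(0, 10, size=(200, 2))
noisy_labels = train_locs + rng.normal(scale=0.1, size=train_locs.shape)
train_channels = np.stack([channel(p) for p in train_locs])

def localize(h, k=5):
    # Average the noisy labels of the k training channels closest to h.
    dists = np.linalg.norm(train_channels - h, axis=1)
    nearest = np.argsort(dists)[:k]
    return noisy_labels[nearest].mean(axis=0)

query = np.array([5.0, 5.0])
estimate = localize(channel(query))
print(np.linalg.norm(estimate - query))  # localization error
```

Averaging over several noisy neighbors also hints at why label noise is tolerable: independent reporting errors tend to cancel, so the estimate degrades gracefully rather than catastrophically.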