Filtering by
- All Subjects: Machine Learning
- Creators: Turaga, Pavan
Approximately 1% of the world's population are stroke survivors, making stroke the most common neurological disorder. The resulting demand for rehabilitation facilities is a significant healthcare problem worldwide. Because visual monitoring by physical therapists is laborious and expensive, my research develops novel strategies to supplement hospital-based therapy in a home setting. In this direction, I propose a general framework for tuning component-level kinematic features using therapists' overall impressions of movement quality, in the context of a Home-based Adaptive Mixed Reality Rehabilitation (HAMRR) system.
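The tuning idea can be sketched, under strong simplifying assumptions, as fitting weights over component-level kinematic features so that their weighted combination matches the therapist's overall quality rating. The feature names, trial data, and ratings below are hypothetical illustrations, not data or methods from the dissertation:

```python
def fit_weights(features, ratings, lr=0.05, epochs=5000):
    """Fit weights (and a bias) so a weighted combination of component
    features approximates the therapist's overall rating, using plain
    stochastic gradient descent on squared error."""
    rows = [f + [1.0] for f in features]      # append a bias term
    w = [0.0] * len(rows[0])
    for _ in range(epochs):
        for f, r in zip(rows, ratings):
            err = sum(wi * fi for wi, fi in zip(w, f)) - r
            w = [wi - lr * err * fi for wi, fi in zip(w, f)]
    return w

def predict(w, f):
    return sum(wi * fi for wi, fi in zip(w, f + [1.0]))

# hypothetical component-level kinematic features per reaching trial:
# [trajectory error, jerkiness] -- higher means worse movement
features = [
    [0.9, 0.1],
    [0.1, 0.9],
    [0.8, 0.7],
    [0.2, 0.3],
]
ratings = [0.43, 0.67, 0.31, 0.79]  # therapist's overall quality scores

w = fit_weights(features, ratings)
```

The learned weights expose how much each kinematic component contributes to the overall impression, which is the spirit of tuning component-level features from holistic ratings.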
Rapid technological advancements in computing and sensing have resulted in large amounts of data that require powerful tools to analyze. In the recent past, topological data analysis methods have been investigated in various communities, and the work by Carlsson establishes that persistent homology is a powerful topological data analysis approach for effectively analyzing large datasets. I have explored suitable topological data analysis methods and propose a framework that uses them for human activity analysis, with applications such as action recognition.
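The core object in persistent homology is the persistence diagram. A minimal illustration, assuming a Vietoris-Rips filtration and restricting to 0-dimensional homology (connected components), can be written with a union-find over edges in increasing length order; real pipelines use libraries such as GUDHI or Ripser and also track higher-dimensional features:

```python
import math
from itertools import combinations

def persistence_0d(points, dist):
    """0-dimensional persistence diagram under a Vietoris-Rips
    filtration: every point is a component born at scale 0, and a
    component dies when an edge first merges it into another
    (Kruskal-style union-find over edges sorted by length)."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    edges = sorted((dist(points[i], points[j]), i, j)
                   for i, j in combinations(range(n), 2))
    diagram = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            diagram.append((0.0, d))   # one component dies at scale d
    diagram.append((0.0, math.inf))    # one component persists forever
    return diagram

# two well-separated clusters on the real line
pts = [0.0, 0.1, 0.2, 5.0, 5.1]
diag = persistence_0d(pts, lambda a, b: abs(a - b))
```

The single long-lived finite bar (death near 4.8) reflects the two-cluster structure, while the short bars are small-scale detail; it is this robustness of long bars that makes persistence useful for noisy activity data.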
Feature extraction processes can be categorized into three groups. The first group contains processes that are hand-crafted for a specific task. Hand-engineering features requires domain expertise and manual labor; however, the resulting feature extraction process is interpretable and explainable. The next group contains latent-feature extraction processes. While the original features lie in a high-dimensional space, the factors relevant to a task often lie on a lower-dimensional manifold. Latent-feature extraction employs hidden variables to expose underlying data properties that cannot be measured directly from the input, and imposes a specific structure, such as sparsity or low rank, on the derived representation through sophisticated optimization techniques. The last category is that of deep features, obtained by passing raw input data with minimal pre-processing through a deep network whose parameters are computed by iteratively minimizing a task-based loss.
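The contrast between the first two categories can be shown with a toy example: a hand-crafted feature is an explicit rule written by an expert, whereas a latent feature is discovered from the data itself, here the leading principal direction found by power iteration. This is an illustrative sketch, not a method from the dissertation:

```python
def handcrafted_feature(x):
    """Explicit, interpretable rule chosen by a domain expert:
    the range (max minus min) of the signal."""
    return max(x) - min(x)

def latent_feature_direction(data, iters=200):
    """Leading principal direction of centered data, found by power
    iteration on the covariance matrix -- a simple latent feature
    discovered from the data rather than designed by hand."""
    n, d = len(data), len(data[0])
    means = [sum(row[k] for row in data) / n for k in range(d)]
    centered = [[row[k] - means[k] for k in range(d)] for row in data]
    cov = [[sum(r[a] * r[b] for r in centered) / n for b in range(d)]
           for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# 2-D samples that lie near the line y = 2x: the latent direction
# recovers this structure without it being specified in advance
data = [[1.0, 2.0], [2.0, 4.1], [3.0, 5.9], [4.0, 8.2]]
v = latent_feature_direction(data)
```

The hand-crafted rule is transparent but fixed; the latent direction adapts to whatever low-dimensional structure the data exhibit, mirroring the interpretability-versus-adaptivity trade-off described above.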
In this dissertation, I present four pieces of work where I create and learn suitable data representations. The first task employs hand-crafted features to perform clinically relevant retrieval of diabetic retinopathy images. The second task uses latent features to perform content-adaptive image enhancement. The third task ranks a pair of images by their aesthetic quality. The goal of the last task is to capture localized image artifacts in small datasets with patch-level labels. For the latter two tasks, I propose novel deep architectures and show significant improvement over previous state-of-the-art approaches. A suitable combination of feature representations, augmented with an appropriate learning approach, can increase performance for most visual computing tasks.
In the context of common naturally occurring image distortions, a metric is proposed to identify the most susceptible DNN convolutional filters and rank them in order of the highest gain in classification accuracy upon correction. The proposed approach, called DeepCorrect, applies small stacks of convolutional layers with residual connections at the outputs of these ranked filters and trains them to correct the most distortion-affected filter activations, while leaving the rest of the pre-trained filter outputs in the network unchanged. Performance results show that applying DeepCorrect models to common vision tasks significantly improves the robustness of DNNs against distorted images and outperforms alternative approaches.
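The ranking step can be caricatured outside a real network: score each filter by how much its activations shift between clean and distorted inputs, then sort. The synthetic 1-D "filters" and noise below are stand-ins for illustration; the actual metric in the work ranks filters by the accuracy gain obtained upon correcting them, which this sketch does not implement:

```python
import random

def conv1d(signal, kernel):
    """Valid-mode 1-D convolution (no padding)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def susceptibility(filters, clean_inputs, distort):
    """Mean absolute change in each filter's activations when the
    inputs are distorted; larger means more susceptible."""
    scores = []
    for f in filters:
        total, count = 0.0, 0
        for x in clean_inputs:
            a_clean = conv1d(x, f)
            a_dist = conv1d(distort(x), f)
            total += sum(abs(c - d) for c, d in zip(a_clean, a_dist))
            count += len(a_clean)
        scores.append(total / count)
    return scores

random.seed(0)
filters = [
    [1.0, -1.0],   # high-frequency (edge) filter: sensitive to noise
    [0.5, 0.5],    # averaging filter: partially cancels noise
]
inputs = [[random.random() for _ in range(32)] for _ in range(8)]
noise = lambda x: [v + random.gauss(0.0, 0.3) for v in x]

scores = susceptibility(filters, inputs, noise)
ranking = sorted(range(len(filters)), key=lambda i: -scores[i])
```

As expected, the difference-style filter amplifies independent noise while the averaging filter attenuates it, so the ranking places the edge filter first; correction capacity would then be spent on such filters.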
In the context of universal adversarial perturbations, departing from existing defense strategies that work mostly in the image domain, a novel and effective defense that operates only in the DNN feature domain is presented. This approach identifies pre-trained convolutional features that are most vulnerable to adversarial perturbations and deploys trainable feature regeneration units that transform these DNN filter activations into resilient features robust to universal perturbations. Regenerating only the top 50% of adversarially susceptible activations in at most 6 DNN layers, while leaving all remaining DNN activations unchanged, can outperform existing defense strategies across different network architectures and various universal attacks.
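A feature regeneration unit can be caricatured as a small trainable residual transform applied only to susceptible activations. In this toy stand-in (not the convolutional units of the actual defense), a scalar unit regen(a) = a + w*a + b is fit by gradient descent to map activations corrupted by a fixed, input-independent offset, mimicking a universal perturbation in feature space, back toward their clean values:

```python
def train_regeneration_unit(perturbed, clean, lr=0.05, epochs=500):
    """Fit a tiny residual unit  regen(a) = a + w*a + b  so that
    regenerated activations match their clean counterparts,
    by stochastic gradient descent on squared error."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for a, c in zip(perturbed, clean):
            err = (a + w * a + b) - c
            w -= lr * err * a
            b -= lr * err
    return w, b

# hypothetical clean activations and a fixed (universal) offset
clean = [0.2, 1.0, -0.5, 0.8, 1.5, -0.1]
delta = 0.4
perturbed = [a + delta for a in clean]

w, b = train_regeneration_unit(perturbed, clean)
regen = [a + w * a + b for a in perturbed]
```

Because the residual form leaves the identity path intact, an unneeded unit can learn to do almost nothing; here it learns to subtract the constant offset, restoring the clean activations while the rest of the (hypothetical) network would remain untouched.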
Classification in machine learning is crucial to solving many present-day problems, so it is key to understand one's problem and develop an efficient model for it. One technique that aids model selection, and thus problem solving, is estimation of the Bayes error rate. This paper develops and analyzes two methods for estimating the Bayes error rate of a given dataset. The first method takes a "global" approach, looking at the data as a whole; the second is more "local," partitioning the data at the outset and then building up to a Bayes error estimate of the whole. One method is found to estimate the true Bayes error rate accurately when the data are high-dimensional, while the other is accurate at large sample sizes. The second conclusion, in particular, has significant ramifications for "big data" problems, where this method's accurate estimate of the Bayes error rate helps characterize the underlying distribution.
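The quantity being estimated can be made concrete on a synthetic problem where the class densities are known: the Bayes error is the expected probability of the less likely class at each point. The two-Gaussian setup below is illustrative only; the thesis's two methods estimate the rate from data without access to the densities:

```python
import math
import random

def gauss_pdf(x, mu, sigma):
    """Density of a 1-D normal distribution at x."""
    return (math.exp(-0.5 * ((x - mu) / sigma) ** 2)
            / (sigma * math.sqrt(2 * math.pi)))

def bayes_error_monte_carlo(n=100_000, seed=1):
    """Monte Carlo estimate of the Bayes error for two equally likely
    1-D classes N(0,1) and N(2,1): draw from the mixture and average
    the posterior probability of the *less* likely class."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        mu = 0.0 if rng.random() < 0.5 else 2.0   # pick a class
        x = rng.gauss(mu, 1.0)                    # draw a sample
        p0 = gauss_pdf(x, 0.0, 1.0)
        p1 = gauss_pdf(x, 2.0, 1.0)
        total += min(p0, p1) / (p0 + p1)          # min posterior
    return total / n

est = bayes_error_monte_carlo()
```

For this symmetric configuration the true rate has a closed form, Phi(-1) (about 0.159), which the estimate approaches; any real estimator's output can be judged against such known-density baselines.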