Matching Items (39)
Description

Effective modeling of high dimensional data is crucial in information processing and machine learning. Classical subspace methods have been very effective in such applications. However, over the past few decades, there has been considerable research towards the development of new modeling paradigms that go beyond subspace methods. This dissertation focuses on the study of sparse models and their interplay with modern machine learning techniques such as manifold, ensemble and graph-based methods, along with their applications in image analysis and recovery. By considering graph relations between data samples while learning sparse models, graph-embedded codes can be obtained for use in unsupervised, supervised and semi-supervised problems. Using experiments on standard datasets, it is demonstrated that the codes obtained from the proposed methods outperform several baseline algorithms. In order to facilitate sparse learning with large scale data, the paradigm of ensemble sparse coding is proposed, and different strategies for constructing weak base models are developed. Experiments with image recovery and clustering demonstrate that these ensemble models perform better when compared to conventional sparse coding frameworks. When examples from the data manifold are available, manifold constraints can be incorporated with sparse models and two approaches are proposed to combine sparse coding with manifold projection. The improved performance of the proposed techniques in comparison to sparse coding approaches is demonstrated using several image recovery experiments. In addition to these approaches, it might be required in some applications to combine multiple sparse models with different regularizations. In particular, combining an unconstrained sparse model with non-negative sparse coding is important in image analysis, and it poses several algorithmic and theoretical challenges. A convex and an efficient greedy algorithm for recovering combined representations are proposed. Theoretical guarantees on sparsity thresholds for exact recovery using these algorithms are derived and recovery performance is also demonstrated using simulations on synthetic data. Finally, the problem of non-linear compressive sensing, where the measurement process is carried out in feature space obtained using non-linear transformations, is considered. An optimized non-linear measurement system is proposed, and improvements in recovery performance are demonstrated in comparison to using random measurements as well as optimized linear measurements.
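To make the core sparse model concrete: throughout, a signal y is approximated as a sparse combination of dictionary atoms, y ≈ Dx with few nonzero entries in x. The sketch below is a minimal illustration using scikit-learn's OrthogonalMatchingPursuit on synthetic data; the dimensions and the solver choice are our assumptions, not the dissertation's algorithms.

```python
# Minimal sparse-recovery sketch: recover a k-sparse code x from y = D x.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n_features, n_atoms, sparsity = 64, 128, 5

D = rng.normal(size=(n_features, n_atoms))
D /= np.linalg.norm(D, axis=0)                 # unit-norm dictionary atoms

x_true = np.zeros(n_atoms)
support = rng.choice(n_atoms, size=sparsity, replace=False)
x_true[support] = rng.normal(size=sparsity)
y = D @ x_true                                 # synthetic observation

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=sparsity, fit_intercept=False)
omp.fit(D, y)
x_hat = omp.coef_                              # recovered sparse code
print("support recovered:",
      set(np.flatnonzero(x_hat)) == set(support))
```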
Contributors: Natesan Ramamurthy, Karthikeyan (Author) / Spanias, Andreas (Thesis advisor) / Tsakalis, Konstantinos (Committee member) / Karam, Lina (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Image understanding has been playing an increasingly crucial role in vision applications. Sparse models form an important component in image understanding, since the statistics of natural images reveal the presence of sparse structure. Sparse methods lead to parsimonious models, in addition to being efficient for large scale learning. In sparse modeling, data is represented as a sparse linear combination of atoms from a "dictionary" matrix. This dissertation focuses on understanding different aspects of sparse learning, thereby enhancing the use of sparse methods by incorporating tools from machine learning. With the growing need to adapt models for large scale data, it is important to design dictionaries that can model the entire data space and not just the samples considered. By exploiting the relation of dictionary learning to 1-D subspace clustering, a multilevel dictionary learning algorithm is developed, and it is shown to outperform conventional sparse models in compressed recovery and image denoising. Theoretical aspects of learning such as algorithmic stability and generalization are considered, and ensemble learning is incorporated for effective large scale learning. In addition to building strategies for efficiently implementing 1-D subspace clustering, a discriminative clustering approach is designed to estimate the unknown mixing process in blind source separation. By exploiting the non-linear relations between image descriptors and allowing the use of multiple features, sparse methods can be made more effective in recognition problems. The idea of multiple kernel sparse representations is developed, and algorithms for learning dictionaries in the feature space are presented. Using object recognition experiments on standard datasets, it is shown that the proposed approaches outperform other sparse coding-based recognition frameworks. Furthermore, a segmentation technique based on multiple kernel sparse representations is developed and successfully applied to automated brain tumor identification. Using sparse codes to define the relation between data samples can lead to a more robust graph embedding for unsupervised clustering. By performing discriminative embedding using sparse coding-based graphs, an algorithm for measuring the glomerular number in kidney MRI images is developed. Finally, approaches to build dictionaries for local sparse coding of image descriptors are presented and applied to object recognition and image retrieval.
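For readers unfamiliar with the dictionary-learning setting, the following is a minimal sketch assuming scikit-learn's MiniBatchDictionaryLearning on stand-in patches; it illustrates the generic problem only, not the multilevel algorithm or kernel extensions proposed in the dissertation.

```python
# Learn a dictionary from image patches, then sparse-code the patches.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
patches = rng.normal(size=(1000, 64))      # stand-in for 8x8 image patches

dico = MiniBatchDictionaryLearning(n_components=128,
                                   transform_algorithm="omp",
                                   transform_n_nonzero_coefs=5,
                                   random_state=0)
codes = dico.fit(patches).transform(patches)   # sparse codes, shape (1000, 128)
D = dico.components_                           # learned dictionary atoms
print("avg nonzeros per code:", np.mean(np.count_nonzero(codes, axis=1)))
```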
Contributors: Jayaraman Thiagarajan, Jayaraman (Author) / Spanias, Andreas (Thesis advisor) / Frakes, David (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Human activity recognition is the task of identifying a person’s movement from sensors in a wearable device, such as a smartphone, smartwatch, or a medical-grade device. Machine learning, the study of algorithms that learn and improve on their own given large amounts of useful data, is well suited to this task: classification models can accurately classify activities from the time-series data produced by accelerometers and gyroscopes. A significant way to improve the accuracy of these machine learning models is preprocessing the data, essentially augmenting it to make the identification of each activity, or class, easier for the model.

On this topic, this paper explains the design of SigNorm, a new web application which lets users conveniently transform time-series data and view the effects of those transformations in a code-free, browser-based user interface. The second and final section explains my approach to a human activity recognition problem, which involves comparing a preprocessed dataset to an un-augmented one and measuring the differences in accuracy using a one-dimensional convolutional neural network to make classifications.
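The classification step described here can be sketched as follows, assuming a small Keras 1-D CNN; the window length, channel count, and number of classes are illustrative placeholders rather than the thesis's actual configuration.

```python
# A 1-D CNN classifying windows of accelerometer/gyroscope time series.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_timesteps, n_channels, n_classes = 128, 6, 6   # assumed window setup

model = keras.Sequential([
    layers.Input(shape=(n_timesteps, n_channels)),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.Dropout(0.5),
    layers.GlobalMaxPooling1D(),
    layers.Dense(100, activation="relu"),
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# X: (n_windows, 128, 6) sensor windows; y: integer activity labels.
X = np.random.randn(32, n_timesteps, n_channels).astype("float32")
y = np.random.randint(0, n_classes, size=32)
model.fit(X, y, epochs=1, verbose=0)
```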

Contributors: Li, Vincent (Author) / Turaga, Pavan (Thesis director) / Buman, Matthew (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

At present, the vast majority of human subjects with neurological disease are still diagnosed through in-person assessments and qualitative analysis of patient data. In this paper, we propose to use Topological Data Analysis (TDA) together with machine learning tools to automate the process of Parkinson’s disease classification and severity assessment. An automated, stable, and accurate method to evaluate Parkinson’s would be significant in streamlining diagnoses of patients and providing families more time for corrective measures. We propose a methodology which incorporates TDA into analyzing Parkinson’s disease postural shift data through the representation of persistence images. Topological features are invariant to small perturbations in data and have been shown to perform well in discrimination tasks. The contributions of the paper are twofold. We propose a method to 1) distinguish healthy patients from those afflicted by disease and 2) assess the severity of disease. We explore the use of the proposed method in an application involving a Parkinson’s disease dataset comprising healthy-elderly, healthy-young, and Parkinson’s disease patients.
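A rough sketch of the persistence-image pipeline described above, assuming the ripser and persim libraries and a stand-in point cloud; the thesis's actual feature extraction from postural shift data may differ.

```python
# Point cloud -> persistence diagram -> persistence image feature vector.
import numpy as np
from ripser import ripser
from persim import PersistenceImager

X = np.random.randn(200, 2)          # stand-in for postural shift data

dgms = ripser(X)["dgms"]             # persistence diagrams (H0, H1)
h1 = dgms[1]                         # 1-dimensional (loop) features

pimgr = PersistenceImager(pixel_size=0.1)
pimgr.fit(h1)                        # set image bounds from the diagram
img = pimgr.transform(h1)            # 2-D persistence image

feature_vector = img.flatten()       # input to an SVM, random forest, etc.
```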
Contributors: Rahman, Farhan Nadir (Co-author) / Nawar, Afra (Co-author) / Turaga, Pavan (Thesis director) / Krishnamurthi, Narayanan (Committee member) / Electrical Engineering Program (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description

This thesis encompasses a comprehensive research effort dedicated to overcoming the critical bottlenecks that hinder the current generation of neural networks, thereby significantly advancing their reliability and performance. Deep neural networks, with their millions of parameters, suffer from over-parameterization and a lack of constraints, leading to limited generalization capabilities. In other words, the complex architecture and millions of parameters present challenges in finding the right balance between capturing useful patterns and avoiding noise in the data. To address these issues, this thesis explores novel solutions based on knowledge distillation, enabling the learning of robust representations. Leveraging the capabilities of large-scale networks, effective learning strategies are developed. Moreover, the limitations of dependency on external networks in the distillation process, which often require large-scale models, are effectively overcome by proposing a self-distillation strategy. The proposed approach empowers the model to generate high-level knowledge within a single network, pushing the boundaries of knowledge distillation. The effectiveness of the proposed method is not only demonstrated across diverse applications, including image classification, object detection, and semantic segmentation, but also explored in practical considerations such as handling data scarcity and assessing the transferability of the model to other learning tasks. Another major obstacle hindering the development of reliable and robust models lies in their black-box nature, impeding clear insights into the contributions toward the final predictions and yielding uninterpretable feature representations. To address this challenge, this thesis introduces techniques that incorporate simple yet powerful deep constraints rooted in Riemannian geometry. These constraints confer geometric qualities upon the latent representation, thereby fostering a more interpretable and insightful representation. In addition to its primary focus on general tasks like image classification and activity recognition, this strategy offers significant benefits in real-world applications where data scarcity is prevalent. Moreover, its robustness under feature removal showcases its potential for edge applications. By successfully tackling these challenges, this research contributes to advancing the field of machine learning and provides a foundation for building more reliable and robust systems across various application domains.
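For context, a common form of the knowledge-distillation objective can be sketched as below in PyTorch; this is the generic soft-label loss, not the specific self-distillation strategy proposed in the thesis, and the temperature and weighting values are illustrative.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      T=4.0, alpha=0.9):
    """Blend hard-label cross-entropy with softened teacher/student KL."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                          # standard temperature scaling
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# In self-distillation, teacher_logits come from the same network
# (e.g., an earlier snapshot or an auxiliary branch), so no external
# teacher model is required.
student = torch.randn(8, 10)
teacher = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student, teacher, labels)
```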
Contributors: Choi, Hongjun (Author) / Turaga, Pavan (Thesis advisor) / Jayasuriya, Suren (Committee member) / Li, Wenwen (Committee member) / Fazli, Pooyan (Committee member) / Arizona State University (Publisher)
Created: 2023
Description

This study measures the effect of temperature on a neural network's ability to detect and classify solar panel faults. It is well known that temperature negatively affects the power output of solar panels; this has consequences for their output data and for our ability to distinguish between fault conditions via machine learning.
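To illustrate the kind of effect being measured, the fully synthetic sketch below (invented features, labels, and shift; not the study's data or network) trains a small classifier on PV-like features and shows how a temperature-driven shift in the inputs can degrade its accuracy.

```python
# Synthetic illustration: input shift (a stand-in for temperature) vs. accuracy.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical features: [voltage, current, irradiance]; labels: fault class.
X = rng.normal(size=(1000, 3))
y = rng.integers(0, 3, size=1000)
X[y == 1] += 0.8                       # make the fault classes separable
X[y == 2] -= 0.8

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                    random_state=0).fit(X_tr, y_tr)

base_acc = clf.score(X_te, y_te)
# Emulate a hot-day shift: higher temperature lowers the voltage-like feature.
X_hot = X_te.copy()
X_hot[:, 0] -= 0.5
hot_acc = clf.score(X_hot, y_te)
print(f"nominal accuracy {base_acc:.2f}, shifted accuracy {hot_acc:.2f}")
```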

Contributors: Verch, Skyler (Author) / Spanias, Andreas (Thesis director) / Tepedelenlioğlu, Cihan (Committee member) / Barrett, The Honors College (Contributor) / Electrical Engineering Program (Contributor)
Created: 2022-12
Description

Although a relatively new technology, machine learning has rapidly demonstrated its many uses. One potential application of machine learning is the diagnosis of ailments in medical imaging. Ideally, through classification methods, a computer program would be able to identify different medical conditions when provided with an X-ray or other such scan. This would be very beneficial for overworked doctors and could act as an aid in giving accurate diagnoses. For this thesis project, five different machine-learning algorithms were tested on two datasets containing 5,856 lung X-ray scans labeled as either “Pneumonia” or “Normal”. The goal was to determine which algorithm achieved the highest accuracy, as well as how preprocessing the data affected the accuracy of the models. The following supervised-learning methods were tested: support vector machines, logistic regression, decision trees, random forest, and a convolutional neural network. Each model was adjusted independently in order to achieve maximum performance before accuracy metrics were generated to compare the models against one another. Additionally, the effect of resizing images on model performance was investigated. Overall, the convolutional neural network proved to be the superior model for pneumonia detection, with 91% accuracy; after resizing images to 28x28, its accuracy decreased to 85%. The random forest model performed second best. The 28x28 PneumoniaMNIST dataset achieved higher accuracy with traditional machine learning models than the HD Chest X-Ray dataset, and resizing the Chest X-ray images to 28x28 or larger had minimal effect on traditional model performance.
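A minimal sketch of the traditional-model comparison on flattened 28x28 images, using scikit-learn with stand-in data; the actual experiments used the labeled chest X-ray datasets described above.

```python
# Compare four classical classifiers on flattened 28x28 grayscale images.
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in data: the real experiments load 5,856 labeled scans.
X = np.random.rand(500, 28 * 28)         # flattened grayscale images
y = np.random.randint(0, 2, size=500)    # 0 = Normal, 1 = Pneumonia

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "SVM": SVC(),
    "LogReg": LogisticRegression(max_iter=1000),
    "DecisionTree": DecisionTreeClassifier(),
    "RandomForest": RandomForestClassifier(n_estimators=200),
}
for name, model in models.items():
    acc = model.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: {acc:.3f}")
```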

Contributors: Vollkommer, Margie (Author) / Spanias, Andreas (Thesis director) / Sivaraman Narayanaswamy, Vivek (Committee member) / Barrett, The Honors College (Contributor) / Harrington Bioengineering Program (Contributor)
Created: 2023-05
Description

We present in this paper a method to compare the scene classification accuracy of C-band synthetic aperture radar (SAR) and optical images utilizing both classical and quantum computing algorithms. This REU study uses data from the Sentinel satellites. The dataset contains (i) synthetic aperture radar images collected from the Sentinel-1 satellite and (ii) optical images, for the same area as the SAR images, collected from the Sentinel-2 satellite. We utilize classical neural networks to classify four classes of images. We then use quantum convolutional neural networks and deep learning techniques so that the system trains, learns, and identifies at a higher classification accuracy. A hybrid quantum-classical model trained on the Sentinel-1/2 dataset is proposed, and its performance is compared against the classical model in terms of classification accuracy.
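The hybrid quantum-classical layer concept can be sketched with PennyLane and PyTorch; the qubit count, circuit template, and four-class head below are illustrative assumptions, not the model trained in the study.

```python
# A classical front end feeding a small quantum circuit, then a classical head.
import pennylane as qml
import torch

n_qubits, n_layers = 4, 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(n_qubits))         # encode features
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))  # trainable layers
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

qlayer = qml.qnn.TorchLayer(circuit, {"weights": (n_layers, n_qubits)})

# Classical layers reduce image features to n_qubits inputs; a classical
# head maps the quantum outputs to the four scene classes.
model = torch.nn.Sequential(
    torch.nn.Linear(16, n_qubits),
    qlayer,
    torch.nn.Linear(n_qubits, 4),
)
out = model(torch.randn(8, 16))   # logits for a batch of 8 samples
```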

Contributors: Miller, Leslie (Author) / Spanias, Andreas (Thesis director) / Uehara, Glen (Committee member) / Barrett, The Honors College (Contributor) / Electrical Engineering Program (Contributor)
Created: 2023-05
Description

Quantum computing is an emerging and promising alternative to classical computing due to its ability to rapidly perform complex computations in a parallel manner. In this thesis, we aim to design an audio classification algorithm using a hybrid quantum-classical neural network. The thesis concentrates on healthcare applications and focuses specifically on COVID-19 cough sound classification. All machine learning algorithms developed or implemented in this study were trained using features from log Mel spectrograms of healthy and COVID-19 coughing audio. Results are first presented from a study in which an ensemble of a VGG13, a CRNN, a GCNN, and a GCRNN is utilized to classify audio using classical computing. Then, improved results attained using an optimized VGG13 neural network are presented. Finally, our quantum-classical hybrid neural network is designed and assessed in terms of accuracy and the number of quantum layers and qubits. Comparisons are made to classical recurrent and convolutional neural networks.
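Extracting the log Mel spectrogram features described above might look like the following, assuming the librosa library and a hypothetical audio file name.

```python
# Audio file -> log Mel spectrogram (an image-like input for the classifiers).
import numpy as np
import librosa

y, sr = librosa.load("cough.wav", sr=16000)   # hypothetical audio file

mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64,
                                     n_fft=1024, hop_length=256)
log_mel = librosa.power_to_db(mel, ref=np.max)  # shape (64, n_frames)

# log_mel is then fed, image-like, into a CNN (e.g., VGG13) or into the
# hybrid quantum-classical classifier.
```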

Contributors: Esposito, Michael (Author) / Spanias, Andreas (Thesis director) / Uehara, Glen (Committee member) / Barrett, The Honors College (Contributor) / School of Life Sciences (Contributor)
Created: 2022-05
Description

In wireless communication systems, the process of data transmission includes the estimation of channels. Implementing machine learning in this process can reduce the amount of time it takes to estimate channels, thus increasing the system’s transmission throughput. This maximizes the performance of applications relating to device-to-device communications and 5G systems. However, applying machine learning algorithms to multi-base-station systems is not well understood in the literature; this gap is the focus of this thesis.
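As a toy illustration of learned channel estimation (synthetic single-tap Rayleigh channel and an invented pilot setup; not the thesis's multi-base-station system), a regressor can be trained to map noisy pilot observations to channel coefficients:

```python
# Learned channel estimation: regress channel coefficients from noisy pilots.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_samples, n_pilots = 5000, 8

# Rayleigh-fading channel coefficient per sample (real/imag parts).
h = rng.normal(size=(n_samples, 2)) / np.sqrt(2)
pilots = rng.choice([-1.0, 1.0], size=(n_samples, n_pilots))
noise = 0.1 * rng.normal(size=(n_samples, n_pilots, 2))

# Received pilot symbols: y = h * x + n, per real/imag component.
Y = pilots[..., None] * h[:, None, :] + noise
X = Y.reshape(n_samples, -1)            # flatten features for the model

est = MLPRegressor(hidden_layer_sizes=(64,), max_iter=300,
                   random_state=0).fit(X[:4000], h[:4000])
mse = np.mean((est.predict(X[4000:]) - h[4000:]) ** 2)
print(f"test MSE of learned channel estimate: {mse:.4f}")
```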

Contributors: Cosio, Karla (Author) / Ewaisha, Ahmed (Thesis director) / Spanias, Andreas (Committee member) / Barrett, The Honors College (Contributor) / Electrical Engineering Program (Contributor)
Created: 2022-05