Matching Items (10)
Description
Effective modeling of high dimensional data is crucial in information processing and machine learning. Classical subspace methods have been very effective in such applications. However, over the past few decades, there has been considerable research towards the development of new modeling paradigms that go beyond subspace methods. This dissertation focuses on the study of sparse models and their interplay with modern machine learning techniques such as manifold, ensemble and graph-based methods, along with their applications in image analysis and recovery. By considering graph relations between data samples while learning sparse models, graph-embedded codes can be obtained for use in unsupervised, supervised and semi-supervised problems. Using experiments on standard datasets, it is demonstrated that the codes obtained from the proposed methods outperform several baseline algorithms. In order to facilitate sparse learning with large-scale data, the paradigm of ensemble sparse coding is proposed, and different strategies for constructing weak base models are developed. Experiments with image recovery and clustering demonstrate that these ensemble models perform better than conventional sparse coding frameworks. When examples from the data manifold are available, manifold constraints can be incorporated with sparse models, and two approaches are proposed to combine sparse coding with manifold projection. The improved performance of the proposed techniques in comparison to sparse coding approaches is demonstrated using several image recovery experiments. In addition to these approaches, some applications require combining multiple sparse models with different regularizations. In particular, combining an unconstrained sparse model with non-negative sparse coding is important in image analysis, and it poses several algorithmic and theoretical challenges. A convex algorithm and an efficient greedy algorithm for recovering combined representations are proposed. Theoretical guarantees on sparsity thresholds for exact recovery using these algorithms are derived, and recovery performance is demonstrated using simulations on synthetic data. Finally, the problem of non-linear compressive sensing, where the measurement process is carried out in a feature space obtained using non-linear transformations, is considered. An optimized non-linear measurement system is proposed, and improvements in recovery performance are demonstrated in comparison to both random measurements and optimized linear measurements.
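To make the sparse-modeling premise concrete, here is a minimal sketch (not the dissertation's own convex or greedy algorithms for combined representations) of representing a signal as a sparse combination of dictionary atoms and recovering the code greedily with orthogonal matching pursuit; the dictionary size, sparsity level, and scikit-learn solver are illustrative assumptions.

```python
# Minimal sparse-coding sketch: a signal is approximated as a sparse linear
# combination of dictionary atoms, and the code is recovered greedily via OMP.
# Dictionary size and sparsity level are illustrative, not the dissertation's setup.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n_features, n_atoms, sparsity = 64, 256, 5

D = rng.standard_normal((n_features, n_atoms))
D /= np.linalg.norm(D, axis=0)               # unit-norm atoms

# Synthesize a signal that is exactly sparse in D, then recover its code.
true_code = np.zeros(n_atoms)
support = rng.choice(n_atoms, sparsity, replace=False)
true_code[support] = rng.standard_normal(sparsity)
x = D @ true_code

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=sparsity, fit_intercept=False)
omp.fit(D, x)
print("recovered support:", np.flatnonzero(omp.coef_))
```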
ContributorsNatesan Ramamurthy, Karthikeyan (Author) / Spanias, Andreas (Thesis advisor) / Tsakalis, Konstantinos (Committee member) / Karam, Lina (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created2013
Description
Image understanding has been playing an increasingly crucial role in vision applications. Sparse models form an important component in image understanding, since the statistics of natural images reveal the presence of sparse structure. Sparse methods lead to parsimonious models, in addition to being efficient for large-scale learning. In sparse modeling, data is represented as a sparse linear combination of atoms from a "dictionary" matrix. This dissertation focuses on understanding different aspects of sparse learning, thereby enhancing the use of sparse methods by incorporating tools from machine learning. With the growing need to adapt models for large-scale data, it is important to design dictionaries that can model the entire data space and not just the samples considered. By exploiting the relation of dictionary learning to 1-D subspace clustering, a multilevel dictionary learning algorithm is developed and shown to outperform conventional sparse models in compressed recovery and image denoising. Theoretical aspects of learning such as algorithmic stability and generalization are considered, and ensemble learning is incorporated for effective large-scale learning. In addition to building strategies for efficiently implementing 1-D subspace clustering, a discriminative clustering approach is designed to estimate the unknown mixing process in blind source separation. By exploiting the non-linear relations between image descriptors and allowing the use of multiple features, sparse methods can be made more effective in recognition problems. The idea of multiple kernel sparse representations is developed, and algorithms for learning dictionaries in the feature space are presented. Using object recognition experiments on standard datasets, it is shown that the proposed approaches outperform other sparse coding-based recognition frameworks. Furthermore, a segmentation technique based on multiple kernel sparse representations is developed and successfully applied to automated brain tumor identification. Using sparse codes to define the relation between data samples can lead to a more robust graph embedding for unsupervised clustering. By performing discriminative embedding using sparse coding-based graphs, an algorithm for measuring the glomerular number in kidney MRI images is developed. Finally, approaches to build dictionaries for local sparse coding of image descriptors are presented and applied to object recognition and image retrieval.
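As a hedged illustration of the basic dictionary-learning setup described above (not the multilevel or kernel variants proposed in the dissertation), the sketch below learns atoms from stand-in image patches with scikit-learn and sparse-codes the patches against them; the patch size, number of atoms, and solver settings are assumptions.

```python
# Learn a dictionary from stand-in 8x8 "patches" and sparse-code them against it.
# All sizes and solver choices are illustrative assumptions.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
patches = rng.random((500, 8 * 8))            # stand-in for vectorized 8x8 image patches
patches -= patches.mean(axis=1, keepdims=True)

dico = MiniBatchDictionaryLearning(
    n_components=64,                          # number of dictionary atoms
    transform_algorithm="omp",
    transform_n_nonzero_coefs=4,              # sparsity of each code
    random_state=0,
)
codes = dico.fit(patches).transform(patches)  # shape (500, 64), mostly zeros
reconstruction = codes @ dico.components_
print("mean reconstruction error:", np.mean((patches - reconstruction) ** 2))
```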
ContributorsJayaraman Thiagarajan, Jayaraman (Author) / Spanias, Andreas (Thesis advisor) / Frakes, David (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created2013
Description
We solve the problem of activity verification in the context of sustainability. Activity verification is the process of proving the assertions a user makes about a certain activity they performed. Our motivation lies in incentivizing users to engage in sustainable activities like taking public transport or recycling. Such incentivization schemes require the system to verify the claim made by the user. The system verifies these claims by analyzing the supporting evidence captured by the user while performing the activity. The proliferation of portable smartphones in the past few years has provided us with a ubiquitous and relatively cheap platform, with multiple sensors such as an accelerometer, gyroscope, and microphone, to capture this evidence data in situ. In this research, we investigate supervised and semi-supervised learning techniques for activity verification. Both of these techniques make use of a data set constructed from the evidence submitted by the user. Supervised learning uses annotated evidence data to build a function that predicts the class labels of unlabeled data points. The evidence data captured can be either unimodal or multimodal in nature. We use accelerometer data as evidence for transportation mode verification and image data as evidence for recycling verification. After training the system, we achieve a maximum accuracy of 94% when classifying the transport mode and 81% when detecting recycling activity. In the case of recycling verification, we could improve the classification accuracy by asking the user for more evidence. We present techniques to ask the user for the next best piece of evidence, i.e., the one that maximizes the probability of correct classification. Using these techniques for detecting recycling activity, the accuracy increases to 93%. The major disadvantage of supervised models is that they require extensive annotated training data, which is expensive to collect. Due to the limited training data, we turn to graph-based inductive semi-supervised learning methods to propagate labels among the unlabeled samples. In the semi-supervised approach, we represent each instance in the data set as a node in a graph. The nodes form a complete graph, with each edge carrying a weight that represents the similarity between the corresponding points. We propagate the labels in this graph based on the proximity of the data points to the labeled nodes. We estimate the performance of these algorithms by measuring how close the probability distribution of the data after label propagation is to the probability distribution of the ground truth data. Since labeling has a cost associated with it, in this thesis we propose two algorithms that help in selecting the minimum number of labeled points needed to propagate the labels accurately. Our proposed algorithm achieves up to a 73% increase in performance when compared to the baseline algorithm.
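The graph-based propagation idea described above can be sketched with generic scikit-learn tooling; the data, similarity kernel, and number of labeled seeds below are illustrative assumptions, and this is not the thesis's own propagation or labeled-point-selection algorithms.

```python
# Graph-based semi-supervised label propagation: unlabeled points (marked -1)
# receive labels from nearby labeled points through an RBF similarity graph.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.semi_supervised import LabelSpreading

X, y_true = make_moons(n_samples=200, noise=0.08, random_state=0)

y_partial = np.full(len(X), -1)               # -1 marks unlabeled instances
labeled_idx = np.random.default_rng(0).choice(len(X), size=10, replace=False)
y_partial[labeled_idx] = y_true[labeled_idx]

model = LabelSpreading(kernel="rbf", gamma=20)
model.fit(X, y_partial)
accuracy = (model.transduction_ == y_true).mean()
print(f"labels recovered from 10 seeds: {accuracy:.2%}")
```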
ContributorsDesai, Vaishnav (Author) / Sundaram, Hari (Thesis advisor) / Li, Baoxin (Thesis advisor) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created2013
Description
Motion capture using cost-effective sensing technology is challenging, and the huge success of the Microsoft Kinect has attracted researchers to uncover the potential of using this technology in computer vision applications. In this thesis, an upper-body motion analysis in a home-based system for stroke rehabilitation, using the Kinect, a novel RGB-D camera, is presented. We address this problem by first conducting a systematic analysis of the usability of the Kinect for motion analysis in stroke rehabilitation. A hybrid upper-body tracking approach is then proposed, which combines off-the-shelf skeleton tracking with a novel depth-fused mean shift tracking method. We propose several kinematic features that can be reliably extracted from the proposed inexpensive and portable motion capture system, along with classifiers that correlate torso movement with clinical measures of unimpaired and impaired movement. Experimental results show that the proposed sensing and analysis work reliably for measuring torso movement quality and are promising for end-point tracking. The system is currently being deployed for large-scale evaluations.
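For readers unfamiliar with mean shift tracking, the sketch below shows a generic color-histogram mean-shift tracker in OpenCV; the depth-fused variant and the Kinect skeleton stream used in the thesis are not reproduced here, and the video path and initial window are placeholder assumptions.

```python
# Generic colour-histogram mean-shift tracking with OpenCV (not the thesis's
# depth-fused variant). Video path and initial window are placeholders.
import cv2

cap = cv2.VideoCapture("example_session.avi")   # hypothetical recording
ok, frame = cap.read()
x, y, w, h = 200, 150, 80, 80                   # assumed initial region of interest

hsv_roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

window = (x, y, w, h)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    _, window = cv2.meanShift(back_proj, window, term_crit)   # shift window to the mode
    x, y, w, h = window
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(30) == 27:                   # Esc to stop
        break
cap.release()
```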
ContributorsDu, Tingfang (Author) / Turaga, Pavan (Thesis advisor) / Spanias, Andreas (Committee member) / Rikakis, Thanassis (Committee member) / Arizona State University (Publisher)
Created2012
Description
A meta-synthesis of 10 research studies exploring the perspectives of adolescents receiving psychiatric treatment in relation to treatment adherence was conducted. Current literature indicates several factors contributing to partial or non-adherence to pharmacologic or non-pharmacologic treatment, as well as a need for further research. Adolescents are a population particularly vulnerable to mental health conditions. Symptoms of mental health conditions are often present during childhood and adolescence, though they are not addressed until adulthood. Early intervention and prevention of the worsening of symptoms increase the likelihood of positive health outcomes. It is imperative that nursing staff understand the experience of this population in order to provide patient-centered care. The literature was searched thoroughly using the terms 'qualitative', 'adolescents', 'adherence', and 'psychiatric' in the following databases: PubMed, PsycINFO, and CINAHL. Noblit and Hare's (1988) comparative method of synthesizing qualitative studies guided the inquiry. Collectively, the 10 studies yielded a sample size of 415 participants. Overarching themes were generated to reflect the patient experience of adolescents receiving mental health care services. The themes identified were autonomy, ostrification, therapeutic intervention, and identity. The theme of autonomy related to the adolescents' desire to control their care and treatment plan. With regard to ostrification, several adolescents reported feeling isolated during treatment. Therapeutic intervention related to the variety of factors that influenced an adolescent's commitment to pharmacologic and non-pharmacologic treatment. Identity referred to adolescents' struggle with self-concept after being diagnosed with a mental health condition. Variation was present across the studies that met the inclusion criteria, and these variations are expressed within the findings.
ContributorsShepard, Reganne (Author) / Fries, Kathleen (Thesis director) / Walker, Beth (Committee member) / Edson College of Nursing and Health Innovation (Contributor) / Barrett, The Honors College (Contributor)
Created2020-05
Description
Imprimatur is a collection of poems written by Sophia Guerriero and edited through workshops with Phoenix Poet Laureate Rosemarie Dombrowski and Melissa Tramuta. The book includes pieces that reflect on identity, the self, institutions like religion and relationships, and overall social commentary rooted in the concept of perspective.
ContributorsCrevelt, Sophia (Author) / Dombrowski, Rosemarie (Thesis director) / Tramuta, Melissa (Committee member) / Barrett, The Honors College (Contributor) / Walter Cronkite School of Journalism and Mass Comm (Contributor)
Created2023-12
Description

In this thesis, I explored the interconnected ways in which human experience can shape and be shaped by environments of the future: interactive environments and spaces embedded with sensors and enlivened by advanced algorithms for sensor data processing. I developed an abstract, representational experience of the vast and continual journey through life, which shaped how we can use sensory immersion. The experimental work was housed in the iStage, an advanced black-box space in the School of Arts, Media and Engineering, equipped with video cameras, motion capture systems, spatial audio systems, and controllable lighting and projector systems. The malleable and interactive space of the iStage was transformed into a reflective tool for gaining insight into a shared, but very individual, emotional odyssey. Additionally, I surveyed participants after they engaged in the experience to better understand their perceptions and interpretations of it. With the participants' responses and collective reflection on the project, I can begin to think about future iterations and how they might find applications in health and wellness.

ContributorsHaagen, Jordan (Author) / Turaga, Pavan (Thesis director) / Drummond Otten, Caitlin (Committee member) / Barrett, The Honors College (Contributor) / Arts, Media and Engineering Sch T (Contributor) / School of Human Evolution & Social Change (Contributor)
Created2022-05
Description
Many learning models have been proposed for various tasks in visual computing. Popular examples include hidden Markov models and support vector machines. Recently, sparse-representation-based learning methods have attracted a lot of attention in the computer vision field, largely because of their impressive performance in many applications. In the literature, many such sparse learning methods focus on the design or application of learning techniques for a certain feature space, without much explicit consideration of the possible interaction between the underlying semantics of the visual data and the employed learning technique. The rich semantic information in most visual data, if properly incorporated into algorithm design, should help achieve improved performance while delivering intuitive interpretations of the algorithmic outcomes. My study addresses the problem of how to explicitly consider the semantic information of visual data in sparse learning algorithms. In this work, we identify four problems that are of great importance and broad interest to the community. Specifically, a novel approach is proposed to incorporate label information to learn a dictionary that is not only reconstructive but also discriminative; considering the formation process of face images, a novel image decomposition approach for an ensemble of correlated images is proposed, where a subspace is built from the decomposition and applied to face recognition; based on the observation that foreground (or salient) objects are sparse in the input domain while the background is sparse in the frequency domain, a novel and efficient spatio-temporal saliency detection algorithm is proposed to identify the salient regions in video; and a novel hidden Markov model learning approach is proposed that utilizes a sparse set of pairwise comparisons among the data, which are easier to obtain and, in many scenarios (e.g., evaluating motion skills in surgical simulations), more meaningful and consistent than traditional labels. In these four problems, different types of semantic information are modeled and incorporated in designing sparse learning algorithms for the corresponding visual computing tasks. Several real-world applications are selected to demonstrate the effectiveness of the proposed methods, including face recognition, spatio-temporal saliency detection, abnormality detection, spatio-temporal interest point detection, motion analysis and emotion recognition. These applications involve data of different modalities, ranging from audio signals and images to video. Experiments on large-scale real-world data, with comparisons to state-of-the-art methods, confirm that the proposed approaches deliver salient advantages, showing that adding such semantic information dramatically improves the performance of general sparse learning methods.
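The spatial/frequency sparsity observation behind the saliency detector can be illustrated on synthetic data; the toy example below shows only the underlying intuition, not the proposed spatio-temporal algorithm.

```python
# Toy illustration: a smooth background is approximately sparse in the frequency
# domain, while a small foreground object is sparse in the spatial domain. Keeping
# only the largest Fourier coefficients reconstructs the background, and the
# residual highlights the salient region.
import numpy as np

h, w = 128, 128
yy, xx = np.mgrid[0:h, 0:w]
background = np.sin(2 * np.pi * xx / 64) + 0.5 * np.cos(2 * np.pi * yy / 32)
image = background.copy()
image[60:70, 60:70] += 3.0                    # small "salient" foreground patch

spectrum = np.fft.fft2(image)
k = 50                                        # keep only the k largest coefficients
threshold = np.sort(np.abs(spectrum).ravel())[-k]
background_est = np.fft.ifft2(np.where(np.abs(spectrum) >= threshold, spectrum, 0)).real

saliency = np.abs(image - background_est)     # residual = spatially sparse foreground
print("peak of saliency map at:", np.unravel_index(saliency.argmax(), saliency.shape))
```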
ContributorsZhang, Qiang (Author) / Li, Baoxin (Thesis advisor) / Turaga, Pavan (Committee member) / Wang, Yalin (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created2014
Description
In applications such as UAV surveillance and parking-lot monitoring, it is typical to first collect an enormous number of pixels using conventional imagers, then employ expensive methods to compress the data by throwing away redundancy, and finally transmit the compressed data to a ground station. The past decade has seen the emergence of novel imagers called spatial-multiplexing cameras, which offer compression at the sensing level itself by providing arbitrary linear measurements of the scene instead of pixel-based sampling. In this dissertation, I discuss various approaches for effective information extraction from spatial-multiplexing measurements and present the trade-offs between reliability of performance and the computational/storage load of the system. In the first part, I present a reconstruction-free approach to high-level inference in computer vision, wherein I consider the specific case of activity analysis and show that, using correlation filters, one can perform effective action recognition and localization directly from a class of spatial-multiplexing cameras called compressive cameras, even at measurement rates as low as 1%. In the second part, I outline a deep-learning-based, non-iterative and real-time algorithm to reconstruct images from compressively sensed (CS) measurements, which can outperform traditional iterative CS reconstruction algorithms in terms of reconstruction quality and time complexity, especially at low measurement rates. To overcome the limitations of compressive cameras, which operate with random measurements and are not particularly tuned to any task, in the third part of the dissertation I propose a method to design spatial-multiplexing measurements that are tuned to facilitate the easy extraction of features useful in computer vision tasks like object tracking. The work presented in the dissertation provides sufficient evidence that high-level inference in computer vision is feasible at extremely low measurement rates, and hence allows us to think about the possibility of revamping current-day computer systems.
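A hedged sketch of the baseline compressive-measurement model (random linear projections followed by l1 recovery) is given below; the deep-learning reconstruction and the task-tuned measurement design from the dissertation are not shown, and the signal sizes are illustrative.

```python
# Baseline compressive sensing: record a few random linear projections y = Phi @ x
# instead of all pixels, then recover the sparse signal by l1 minimization.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8                             # signal length, measurements, sparsity

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random spatial-multiplexing operator
y = Phi @ x                                      # 64 measurements instead of 256 pixels

recovery = Lasso(alpha=1e-3, fit_intercept=False, max_iter=10000).fit(Phi, y)
print("relative error:", np.linalg.norm(recovery.coef_ - x) / np.linalg.norm(x))
```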
ContributorsKulkarni, Kuldeep Sharad (Author) / Turaga, Pavan (Thesis advisor) / Li, Baoxin (Committee member) / Chakrabarti, Chaitali (Committee member) / Sankaranarayanan, Aswin (Committee member) / LiKamWa, Robert (Committee member) / Arizona State University (Publisher)
Created2017
Description
Tracking moving objects with code isn't a new concept. There are many computer vision libraries with functions that can track changes in position very accurately. This allows computers to provide data about situations that can't be observed by hand in a reasonable amount of time. For example, tracking hundreds of moving cars over a day would take a lot of time if done by hand, but with code one can get that data much more quickly. This thesis aims to provide a clear, simple, and effective application to track moving objects in a given video, trace their paths, and color-code these paths to see which ones are the most congested, yielding an efficient and deployable algorithm for tracking moving objects. This research was done in collaboration with Moog Inc., an aerospace and defense company, to develop an algorithm that analyzes a video of a parking lot and determines the empty parking spaces and the common traffic paths that cars take while in the lot. Moog Inc. provides an Optimized Development Environment (ODE) on which the application was developed. Since the hardware is power-efficient and has a small form factor, applications that run on it are easily deployable and portable, which makes it useful for any environment. The process of tracking cars in a video is fairly straightforward as well: it consists of filtering the video, drawing rectangles around each region (car), tracing their paths (movements), and applying a heatmap to those paths. Since the approach isn't too computationally intensive, it works well on the ODE, and because the ODE is small and portable, the algorithm can be deployed fairly easily. The heatmap generation was effective in showing the densities of the paths that cars traveled, and various colormaps can be used to provide a clearer picture of the paths. There were attempts to optimize the algorithm by processing every other frame, but ultimately the tradeoff between efficiency and accuracy was deemed unfavorable. This approach still had some limitations: initially, the algorithm would draw paths between areas that weren't traversed by cars, and while this was fixed in the final result, there are still some slight inaccuracies within the roads. There are also ethical concerns with the use of this software, as Moog Inc. does a lot of work in defense and the software could be used in wartime scenarios. However, it can also be applied to various other scenarios, such as tracking wildlife in an area to study their habits, or tracking particles to measure their density in a given environment. Since the algorithm runs in a low-powered environment, it can be deployed and tested in many different scenarios without being costly.
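A simplified sketch of this pipeline, using standard OpenCV building blocks, is given below; it is not Moog Inc.'s deployed code, and the video filename, blob-size threshold, and colormap are placeholder assumptions.

```python
# Simplified sketch of the described pipeline: filter each frame with background
# subtraction, box the moving regions, accumulate their positions, and render the
# accumulator as a colour-mapped heatmap of the most-travelled paths.
import cv2
import numpy as np

cap = cv2.VideoCapture("parking_lot.mp4")        # hypothetical input video
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=32)
heat = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if heat is None:
        heat = np.zeros(frame.shape[:2], dtype=np.float32)

    mask = subtractor.apply(frame)               # "filtering" step
    mask = cv2.medianBlur(mask, 5)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 400:             # ignore small blobs (noise, pedestrians)
            continue
        x, y, w, h = cv2.boundingRect(c)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        heat[y:y + h, x:x + w] += 1.0            # accumulate traversed regions

cap.release()

# Normalise the accumulator and apply a colormap so dense paths stand out.
heat_norm = cv2.normalize(heat, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
heatmap = cv2.applyColorMap(heat_norm, cv2.COLORMAP_JET)
cv2.imwrite("traffic_heatmap.png", heatmap)
```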
ContributorsChandra, Rohan (Author) / Chavez Echeagaray, Maria Elena (Thesis director) / Rieckmann, Tyron (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created2024-05