Human activity recognition is the task of identifying a person's movement from sensors in a wearable device, such as a smartphone, smartwatch, or medical-grade device. Machine learning, the study of algorithms that improve automatically given sufficient data, is well suited to this task: classification models can accurately identify activities from the time-series data produced by accelerometers and gyroscopes. One significant way to improve the accuracy of these models is to preprocess the data, augmenting it so that each activity, or class, becomes easier for the model to distinguish.

On this topic, this paper first describes the design of SigNorm, a new web application that lets users conveniently transform time-series data and view the effects of those transformations in a code-free, browser-based user interface. The second and final section presents my approach to a human activity recognition problem: comparing a preprocessed dataset against an unaugmented one and measuring the difference in classification accuracy using a one-dimensional convolutional neural network.
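The preprocessing step described above typically means segmenting the raw sensor stream into fixed-length windows and normalizing them before they reach the classifier. The following is a minimal sketch of that idea, not the paper's actual pipeline; the function names and the window parameters are illustrative assumptions.

```python
import numpy as np

def sliding_windows(signal, size, step):
    """Split a (time, channels) signal into overlapping fixed-length windows."""
    return np.stack([signal[i:i + size]
                     for i in range(0, len(signal) - size + 1, step)])

def standardize(windows):
    """Zero-mean, unit-variance scaling per channel, a common
    preprocessing transform for accelerometer/gyroscope data."""
    mean = windows.mean(axis=(0, 1), keepdims=True)
    std = windows.std(axis=(0, 1), keepdims=True)
    return (windows - mean) / (std + 1e-8)

# Toy 3-axis accelerometer trace: 100 samples, 3 channels.
signal = np.random.default_rng(0).normal(size=(100, 3))
windows = standardize(sliding_windows(signal, size=20, step=10))
print(windows.shape)  # (9, 20, 3) -- ready for a 1D CNN (windows, time, channels)
```

A 1D convolutional network would then consume each `(time, channels)` window as one training example, with the activity label as the target class.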
transforms that can be expressed analytically. Furthermore, in existing frameworks, the disentangled values are also not interpretable. The focus of this work is to disentangle these geometric factors of variations (which turn out to be nuisance factors for many applications) from the semantic content of the signal in an interpretable manner which in turn makes the features more discriminative. Experiments are designed to show the modularity of the approach with other disentangling strategies as well as on multiple one-dimensional (1D) and two-dimensional (2D) datasets, clearly indicating the efficacy of the proposed approach.
the habitual patterns of dancers from different backgrounds and vernaculars. Contextually,
the term habitual patterns is defined as the postures or poses that tend to re-appear,
often unintentionally, as the dancer performs improvisational dance. The focus lies in exposing
the movement vocabulary of a dancer to reveal his/her unique fingerprint.
The proposed approach for uncovering these movement patterns is to use a clustering
technique, namely k-means. In addition to a static method of analysis, this paper uses
an online method of clustering using a streaming variant of k-means that integrates into
the flow of components that can be used in a real-time interactive dance performance. The
computational system is trained by the dancer to discover identifying patterns and therefore
it enables a feedback loop resulting in a rich exchange between dancer and machine. This
can help break a dancer's tendency to create similar postures, explore a larger kinespheric
space, and invent movement beyond his/her current capabilities.
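A streaming variant of k-means, as used above for real-time interaction, updates its centroids one observation at a time instead of iterating over a fixed dataset. The sketch below shows one standard sequential update (MacQueen's rule, where each point nudges its nearest centroid with a 1/count learning rate); it is a generic illustration under that assumption, not the system's actual implementation, and the pose features are stand-ins.

```python
import numpy as np

def online_kmeans(stream, k):
    """Streaming k-means: initialize centroids from the first k points,
    then move the nearest centroid toward each arriving point by 1/count,
    so each centroid tracks the running mean of its assigned points."""
    stream = iter(stream)
    centroids = np.array([next(stream) for _ in range(k)], dtype=float)
    counts = np.ones(k)
    for x in stream:
        j = int(np.argmin(np.linalg.norm(centroids - x, axis=1)))
        counts[j] += 1
        centroids[j] += (x - centroids[j]) / counts[j]
    return centroids

# Toy 2D "pose feature" stream drawn from two well-separated clusters,
# interleaved to mimic points arriving online during a performance.
rng = np.random.default_rng(1)
a = rng.normal(0.0, 0.3, size=(100, 2))   # recurring posture A
b = rng.normal(5.0, 0.3, size=(100, 2))   # recurring posture B
stream = np.empty((200, 2))
stream[0::2], stream[1::2] = a, b
centroids = online_kmeans(stream, k=2)
print(np.round(centroids, 2))
```

Because each update is constant-time, the clusters (the dancer's habitual postures) are available continuously and can drive real-time feedback, unlike batch k-means, which must re-scan the whole recording.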
This paper describes a project that distinguishes itself in that it uses a custom database
that is curated for the purpose of highlighting the similarities and differences between various
movement forms. It puts particular emphasis on the process of choosing source movement
qualitatively, before the technological capture process begins.
Despite years of research, semantic attribute learning still poses unsolved problems. First, real-world applications usually involve hundreds of attributes, which makes it costly to acquire a sufficient amount of labeled data for model learning. Second, existing attribute learning work for visual objects focuses primarily on images, leaving semantic analysis of videos largely unexplored.
In this dissertation I conduct innovative research and propose novel approaches to tackle the aforementioned problems. In particular, I propose robust and accurate learning frameworks for both attribute ranking and prediction by exploring the correlation among multiple attributes and utilizing various types of label information. Furthermore, I propose a video-based skill coaching framework that extends attribute learning to the video domain for robust motion skill analysis. Experiments on a variety of applications and datasets, together with comparisons against multiple state-of-the-art baselines, confirm that my proposed approaches achieve significant performance improvements on the general attribute learning problem.