This collection includes both ASU theses and dissertations, submitted by graduate students, and theses submitted by undergraduate students of Barrett, The Honors College.


Description
As the application of interactive media systems expands to address broader problems in health, education, and creative practice, these systems fall within a higher-dimensional design space that is inherently more complex to design for. In response to this need, an emerging area of interactive system design, referred to as experiential media systems, applies hybrid knowledge synthesized across multiple disciplines to address challenges relevant to daily experience. Interactive neurorehabilitation (INR) aims to enhance functional movement therapy by integrating detailed motion capture with interactive feedback in a manner that facilitates engagement and sensorimotor learning for those who have suffered neurologic injury. While INR shows great promise to advance the current state of therapies, a cohesive media design methodology for INR is missing due to the present lack of substantial evidence within the field. Using an experiential media-based approach to draw knowledge from external disciplines, this dissertation proposes a compositional framework for authoring visual media for INR systems across contexts and applications within upper-extremity stroke rehabilitation. The compositional framework is applied across systems for supervised training, unsupervised training, and assisted reflection, which reflect the collective work of the Adaptive Mixed Reality Rehabilitation (AMRR) Team at Arizona State University, of which the author is a member. Formal structures and a methodology for applying them are described in detail for the visual media environments designed by the author. Data collected from studies conducted by the AMRR team to evaluate these systems in both supervised and unsupervised training contexts are also discussed, in terms of the extent to which they support the application of the compositional framework and which aspects require further investigation. The broader implications of the proposed compositional framework and methodology are the dissemination of interdisciplinary information to accelerate the informed development of INR applications, and a demonstration of the potential benefit of generalizing integrative approaches that merge arts- and science-based knowledge to other complex problems related to embodied learning.
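As a purely hypothetical illustration of the kind of mapping such a framework would govern, the sketch below converts a motion-capture reach trajectory into a single visual feedback intensity. Every function, threshold, and data value here is an illustrative assumption, not the AMRR systems' actual media mapping.

```python
# Hypothetical sketch: mapping motion-capture data to a visual feedback
# parameter, in the spirit of the compositional framework described above.
# The mapping, tolerance, and simulated reach are illustrative assumptions.
import numpy as np

def trajectory_deviation(hand_xyz: np.ndarray, reference_xyz: np.ndarray) -> float:
    """Mean Euclidean deviation of a reach from a reference trajectory."""
    return float(np.mean(np.linalg.norm(hand_xyz - reference_xyz, axis=1)))

def feedback_intensity(deviation: float, tolerance: float = 0.02) -> float:
    """Map deviation (meters) to a 0..1 visual feedback intensity.
    Deviations within the tolerance band produce no corrective feedback."""
    return float(np.clip((deviation - tolerance) / (5 * tolerance), 0.0, 1.0))

# Example: a simulated 100-sample reach compared against its reference path.
rng = np.random.default_rng(0)
reference = np.linspace([0.0, 0.0, 0.0], [0.4, 0.1, 0.3], 100)  # idealized reach
actual = reference + rng.normal(scale=0.01, size=reference.shape)
print(feedback_intensity(trajectory_deviation(actual, reference)))
```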
Contributors: Lehrer, Nicole (Author) / Rikakis, Thanassis (Committee member) / Olson, Loren (Committee member) / Wolf, Steven L. (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
High-level inference tasks in video applications such as recognition, video retrieval, and zero-shot classification have become an active research area in recent years. One fundamental requirement for such applications is to extract high-quality features that maintain high-level information in the videos.

Many video feature extraction algorithms have been proposed, such as STIP, HOG3D, and Dense Trajectories. These algorithms are often referred to as "handcrafted" features, as they were deliberately designed based on reasonable considerations. However, they may fail when dealing with high-level tasks or complex-scene videos. Due to the success of using deep convolutional neural networks (CNNs) to extract global representations of static images, researchers have been applying similar techniques to video content. Typical techniques first extract spatial features by processing raw frames using deep convolutional architectures designed for static image classification. Then simple averaging, concatenation, or classifier-based fusion/pooling methods are applied to the extracted features. I argue that features extracted in such ways do not capture enough representative information, since videos, unlike images, should be characterized as temporal sequences of semantically coherent visual content and thus need to be represented in a manner that considers both semantic and spatio-temporal information.
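To make the criticized baseline concrete, the sketch below averages per-frame CNN features into a single video vector; because averaging is order-invariant, shuffling the frames yields the identical representation, which is exactly the loss of temporal structure the abstract points out. The ResNet-18 backbone and clip size are illustrative assumptions, not choices from the thesis.

```python
# Minimal sketch of the frame-level average-pooling baseline: per-frame CNN
# features are averaged, so temporal ordering among frames is discarded.
import torch
import torchvision.models as models

backbone = models.resnet18(weights=None)
backbone.fc = torch.nn.Identity()          # keep the 512-d pooled features
backbone.eval()

frames = torch.randn(16, 3, 224, 224)      # a 16-frame clip (dummy data)
with torch.no_grad():
    per_frame = backbone(frames)           # (16, 512) frame-level features
video_vec = per_frame.mean(dim=0)          # order-invariant average pooling

# Shuffling the frames produces the same vector: temporal structure is lost.
shuffled = per_frame[torch.randperm(16)].mean(dim=0)
assert torch.allclose(video_vec, shuffled, atol=1e-5)
print(video_vec.shape)                     # torch.Size([512])
```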

In this thesis, I propose a novel architecture that learns a semantic spatio-temporal embedding for videos to support high-level video analysis. The proposed method encodes spatial and temporal information separately by employing a deep architecture consisting of two channels of convolutional neural networks (capturing appearance and local motion), followed by corresponding Fully Connected Gated Recurrent Unit (FC-GRU) encoders that capture the longer-term temporal structure of the CNN features. The resulting spatio-temporal representation (a vector) is used to learn a mapping, via a Fully Connected Multilayer Perceptron (FC-MLP), to the word2vec semantic embedding space, leading to a semantic interpretation of the video vector that supports high-level analysis. I evaluate the usefulness and effectiveness of this new video representation by conducting experiments on action recognition, zero-shot video classification, and semantic (word-to-video) video retrieval, using the UCF101 action recognition dataset.
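A schematic PyTorch sketch of the described pipeline follows: two per-frame CNN feature streams (appearance and motion) feed GRU encoders, and the concatenated final states are mapped by an MLP into a word2vec-style space. All layer sizes (512-d CNN features, 256-d GRU states, 300-d embedding) are illustrative assumptions, not the thesis's exact values.

```python
# Illustrative sketch of the two-channel CNN + FC-GRU + FC-MLP architecture
# described above; dimensions are assumptions for demonstration only.
import torch
import torch.nn as nn

class SemanticVideoEmbedder(nn.Module):
    def __init__(self, feat_dim=512, hidden=256, embed_dim=300):
        super().__init__()
        self.appearance_gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.motion_gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.mlp = nn.Sequential(                 # FC-MLP into word2vec space
            nn.Linear(2 * hidden, 512), nn.ReLU(),
            nn.Linear(512, embed_dim))

    def forward(self, appearance_feats, motion_feats):
        # each input: (batch, time, feat_dim) per-frame CNN features
        _, h_app = self.appearance_gru(appearance_feats)   # (1, batch, hidden)
        _, h_mot = self.motion_gru(motion_feats)
        fused = torch.cat([h_app[-1], h_mot[-1]], dim=1)   # (batch, 2*hidden)
        return self.mlp(fused)                             # (batch, embed_dim)

model = SemanticVideoEmbedder()
app = torch.randn(2, 30, 512)   # 2 clips, 30 frames of dummy CNN features
mot = torch.randn(2, 30, 512)
print(model(app, mot).shape)    # torch.Size([2, 300])
```

With video vectors living in the word-embedding space, zero-shot classification reduces to nearest-neighbor search against the word2vec vectors of unseen class names, which is what makes the semantic interpretation useful for high-level analysis.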
Contributors: Hu, Sheng-Hung (Author) / Li, Baoxin (Thesis advisor) / Turaga, Pavan (Committee member) / Liang, Jianming (Committee member) / Tong, Hanghang (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
In settings such as UAVs and parking lots, it is typical to first collect an enormous number of pixels using conventional imagers, then employ expensive compression methods that throw away redundant data before transmitting the compressed data to a ground station. The past decade has seen the emergence of novel imagers called spatial-multiplexing cameras, which offer compression at the sensing level itself by providing arbitrary linear measurements of the scene instead of pixel-based sampling. In this dissertation, I discuss various approaches for effective information extraction from spatial-multiplexing measurements and present the trade-offs between reliability of performance and the computational/storage load of the system. In the first part, I present a reconstruction-free approach to high-level inference in computer vision: considering the specific case of activity analysis, I show that, using correlation filters, one can perform effective action recognition and localization directly from a class of spatial-multiplexing cameras called compressive cameras, even at measurement rates as low as 1%. In the second part, I outline a deep-learning-based, non-iterative, real-time algorithm to reconstruct images from compressively sensed (CS) measurements, which can outperform traditional iterative CS reconstruction algorithms in both reconstruction quality and time complexity, especially at low measurement rates. Compressive cameras operate with random measurements that are not tuned to any particular task; to overcome this limitation, in the third part of the dissertation I propose a method to design spatial-multiplexing measurements that are tuned to facilitate the easy extraction of features useful in computer vision tasks such as object tracking. The work presented in this dissertation provides sufficient evidence that high-level inference in computer vision is feasible at extremely low measurement rates, and hence allows us to contemplate revamping current-day computer systems.
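A minimal sketch of the sensing model underlying all three parts: a spatial-multiplexing camera records M random linear combinations of the N scene pixels, y = Φx, where M/N is the measurement rate. The random Gaussian Φ and the 1% rate mirror the setting discussed above; the scene itself is dummy data.

```python
# Minimal sketch of spatial-multiplexing (compressive) sensing, y = Phi @ x,
# at a 1% measurement rate; the image and operator are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N = 64 * 64                       # scene with 64x64 = 4096 pixels
rate = 0.01                       # 1% measurement rate
M = int(rate * N)                 # 40 measurements instead of 4096 pixels

x = rng.random(N)                              # vectorized scene (dummy image)
Phi = rng.normal(size=(M, N)) / np.sqrt(M)     # random measurement operator
y = Phi @ x                       # what a compressive camera would record

print(f"{M} measurements for {N} pixels "
      f"({100 * M / N:.1f}% of conventional sampling)")
```

In the reconstruction-free setting of the first part, inference (e.g., correlation-filter action recognition) operates directly on measurements like y, skipping image recovery entirely; the second and third parts instead reconstruct from y or redesign Φ for a target task.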
Contributors: Kulkarni, Kuldeep Sharad (Author) / Turaga, Pavan (Thesis advisor) / Li, Baoxin (Committee member) / Chakrabarti, Chaitali (Committee member) / Sankaranarayanan, Aswin (Committee member) / LiKamWa, Robert (Committee member) / Arizona State University (Publisher)
Created: 2017