Matching Items (3)

Description
As the application of interactive media systems expands to address broader problems in health, education, and creative practice, these systems fall within a higher-dimensional space that is inherently more complex to design for. In response to this need, an emerging area of interactive system design, referred to as experiential media systems, applies hybrid knowledge synthesized across multiple disciplines to address challenges relevant to daily experience. Interactive neurorehabilitation (INR) aims to enhance functional movement therapy by integrating detailed motion capture with interactive feedback in a manner that facilitates engagement and sensorimotor learning for those who have suffered neurologic injury. While INR shows great promise to advance the current state of therapies, a cohesive media design methodology for INR is missing due to the present lack of substantial evidence within the field. Using an experiential media-based approach to draw knowledge from external disciplines, this dissertation proposes a compositional framework for authoring visual media for INR systems across contexts and applications within upper extremity stroke rehabilitation. The compositional framework is applied across systems for supervised training, unsupervised training, and assisted reflection, which reflect the collective work of the Adaptive Mixed Reality Rehabilitation (AMRR) Team at Arizona State University, of which the author is a member. Formal structures and a methodology for applying them are described in detail for the visual media environments designed by the author. Data collected from studies conducted by the AMRR team to evaluate these systems in both supervised and unsupervised training contexts are also discussed in terms of the extent to which they support the application of the compositional framework and which aspects require further investigation. The potential broader implications of the proposed compositional framework and methodology are the dissemination of interdisciplinary information to accelerate the informed development of INR applications and the demonstration of the potential benefit of generalizing integrative approaches, which merge arts- and science-based knowledge, to other complex problems related to embodied learning.
Contributors: Lehrer, Nicole (Author) / Rikakis, Thanassis (Committee member) / Olson, Loren (Committee member) / Wolf, Steven L. (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
As digital technology promises immediacy and interactivity in communication, sight and sound in motion graphics have expanded the range of design possibilities in advertising, social networking, and telecommunication beyond the visual realm. The experience of seeing has been greatly enriched by sound as visual solutions become dynamic and multi-dimensional. The ability to record and transfer sight and sound with new media has granted the designer more control in manipulating a viewer's experience of time and space. This control allows time-based form to become the foundation for many interactive, multisensory, and interdisciplinary applications. Is conventional design theory for print media adequate to effectively approach time-based form? If not, what is the core element required to balance the static and dynamic aspects of time in new media? Should time-related theories and methodologies from other disciplines be adopted into our design principles? If so, how would this knowledge be integrated? How can this experience in time be effectively transferred to paper? Unless the role of the time dimension in sight is operationally deconstructed and retained with sound, it is very challenging to control the design in this fugitive form. Time activation refers to how time and the perception of time can be manipulated for design and communication purposes. Sound, as a shortcut to the active time design element, not only encapsulates the structure of its "invisible" time-based form, but also makes changes in time conspicuously measurable and comparable. Two experiments, using a slideshow and a video, examine the influence of sound on imagery as well as how the dynamics of time are represented across design media. A cyclical time-based model is established to reconnect the conventional design principles learned in print media with time-based media. This knowledge helps expand static images into motion and encapsulate motion in stasis. The findings provide creative methods for approaching visualization, interactivity, and design education.
Contributors: Cheung, Hoi Yan Patrick (Author) / Giard, Jacques (Thesis advisor) / Sanft, Alfred C. (Committee member) / Kelliher, Aisling (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Visual odometry is one of the key aspects of robotic localization and mapping. Visual odometry encompasses many geometry-based approaches that convert visual data (images) into estimates of the robot's pose in space. Classical geometric methods have shown promising results; they are carefully crafted and built explicitly for these tasks. However, such methods require extensive fine-tuning and prior knowledge to set up for different scenarios, as well as significant post-processing and optimization to minimize the error between the estimated pose and the ground truth. In this body of work, a deep learning model was formed by combining SuperPoint and SuperGlue. The resulting model does not require any prior fine-tuning and has been trained to operate in both outdoor and indoor settings. The proposed deep learning model is applied to the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) dataset alongside classical geometric visual odometry models. The deep learning model was not trained on the KITTI dataset; it encounters the dataset for the first time during experimentation. Using the monocular grayscale images from the KITTI visual odometry files, the experiment tests the viability of the models on different sequences. The experiment was performed on eight different sequences, obtaining the Absolute Trajectory Error and the computation time for each sequence. From the obtained results, inferences are drawn about the classical and deep learning approaches.
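The dissertation reports results as Absolute Trajectory Error (ATE), a standard visual odometry metric. As a purely illustrative sketch (not code from the dissertation), ATE is commonly computed by rigidly aligning the estimated camera positions to the ground-truth trajectory and taking the root-mean-square of the remaining translational differences. The Python below assumes hypothetical (N, 3) arrays of camera positions and uses a Kabsch-style least-squares alignment without scale; the function names and synthetic trajectories are assumptions made for demonstration only.

import numpy as np

def rigid_align(est, gt):
    # Least-squares rotation R and translation t mapping est onto gt (Kabsch alignment).
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    H = (est - mu_e).T @ (gt - mu_g)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_g - R @ mu_e
    return R, t

def absolute_trajectory_error(est, gt):
    # RMSE of translational error after rigid alignment; est and gt are (N, 3) positions.
    R, t = rigid_align(est, gt)
    aligned = est @ R.T + t
    return np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1)))

# Hypothetical usage on synthetic trajectories (not KITTI data):
rng = np.random.default_rng(0)
gt = np.cumsum(rng.normal(0.0, 0.1, (100, 3)), axis=0)    # ground-truth path
est = gt + rng.normal(0.0, 0.05, gt.shape)                 # noisy estimated path
print(f"ATE (RMSE): {absolute_trajectory_error(est, gt):.4f} m")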
Contributors: Vaidyanathan, Venkatesh (Author) / Venkateswara, Hemanth (Thesis advisor) / McDaniel, Troy (Thesis advisor) / Michael, Katina (Committee member) / Arizona State University (Publisher)
Created: 2022