Matching Items (7)
Description
As the application of interactive media systems expands to address broader problems in health, education, and creative practice, these systems occupy a higher-dimensional space that is inherently more complex to design for. In response to this need, an emerging area of interactive system design, referred to as experiential media systems, applies hybrid knowledge synthesized across multiple disciplines to address challenges relevant to daily experience. Interactive neurorehabilitation (INR) aims to enhance functional movement therapy by integrating detailed motion capture with interactive feedback in a manner that facilitates engagement and sensorimotor learning for those who have suffered neurologic injury. While INR shows great promise to advance the current state of therapies, a cohesive media design methodology for INR is missing due to the present lack of substantial evidence within the field. Using an experiential media-based approach to draw knowledge from external disciplines, this dissertation proposes a compositional framework for authoring visual media for INR systems across contexts and applications within upper extremity stroke rehabilitation. The compositional framework is applied across systems for supervised training, unsupervised training, and assisted reflection, which reflect the collective work of the Adaptive Mixed Reality Rehabilitation (AMRR) Team at Arizona State University, of which the author is a member. Formal structures and a methodology for applying them are described in detail for the visual media environments designed by the author. Data collected from studies conducted by the AMRR team to evaluate these systems in both supervised and unsupervised training contexts are also discussed in terms of the extent to which they support the application of the compositional framework and which aspects require further investigation.
The potential broader implications of the proposed compositional framework and methodology are the dissemination of interdisciplinary information to accelerate the informed development of INR applications and the demonstration of the potential benefit of generalizing integrative approaches, merging arts- and science-based knowledge, for other complex problems related to embodied learning.
Contributors: Lehrer, Nicole (Author) / Rikakis, Thanassis (Committee member) / Olson, Loren (Committee member) / Wolf, Steven L. (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Stroke is a leading cause of disability with varying effects across stroke survivors, necessitating comprehensive approaches to rehabilitation. Interactive neurorehabilitation (INR) systems represent promising technological solutions that can provide an array of sensing, feedback, and analysis tools which hold the potential to maximize clinical therapy as well as extend therapy to the home. Currently, there are a variety of approaches to INR design, which, coupled with minimal large-scale clinical data, has led to a lack of cohesion in INR design. INR design presents an inherently complex space, as these systems have multiple users, including stroke survivors, therapists, and designers, each with their own user experience needs. This dissertation proposes that comprehensive INR design, which can address this complex user space, requires and benefits from the application of interdisciplinary research that spans motor learning and interactive learning. A methodology for integrated and iterative design of INR task experience, assessment, hardware, software, and interactive training protocols is proposed within the comprehensive example of the design and implementation of a mixed reality rehabilitation system for minimally supervised environments. This system was tested with eight stroke survivors, who showed promising results in both functional and movement quality improvement. The results of testing the system with stroke survivors, as well as observations of user experiences, are presented along with suggested improvements to the proposed design methodology. This integrative design methodology is proposed to benefit not only comprehensive INR design but also complex interactive system design in general.
Contributors: Baran, Michael (Author) / Rikakis, Thanassis (Thesis advisor) / Olson, Loren (Thesis advisor) / Wolf, Steven L. (Committee member) / Ingalls, Todd (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
As digital technology promises immediacy and interactivity in communication, sight and sound in motion graphics have expanded the range of design possibilities in advertising, social networking, and telecommunication beyond the visual realm. The experience of seeing has been greatly enriched by sound as visual solutions become dynamic and multi-dimensional. The ability to record and transfer sight and sound with new media has granted the designer more control in manipulating a viewer's experience of time and space. This control allows time-based form to become the foundation for many interactive, multisensory, and interdisciplinary applications. Is conventional design theory for print media adequate to effectively approach time-based form? If not, what is the core element required to balance the static and dynamic aspects of time in new media? Should time-related theories and methodologies from other disciplines be adopted into our design principles? If so, how would this knowledge be integrated? How can this experience in time be effectively transferred to paper? Unless the role of the time dimension in sight is operationally deconstructed and retained with sound, it is very challenging to control design in this fugitive form. Time activation refers to how time and the perception of time can be manipulated for design and communication purposes. Sound, as a shortcut to the active time design element, not only encapsulates the structure of its "invisible" time-based form, but also makes changes in time conspicuously measurable and comparable. Two experiments reflect the influence of sound on imagery, a slideshow and a video, as well as how the dynamics in time are represented across all design media. A cyclical time-based model is established to reconnect the conventional design principles learned in print media with time-based media. This knowledge helps expand static images into motion and encapsulate motion in stasis.
The findings provide creative methods for approaching visualization, interactivity, and design education.
Contributors: Cheung, Hoi Yan Patrick (Author) / Giard, Jacques (Thesis advisor) / Sanft, Alfred C. (Committee member) / Kelliher, Aisling (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
This century has brought about incredible advancements in technology and academia, changing the workforce and the future leaders that will drive it: students. However, the integration of digital literacy and digital tools in many United States K–12 schools is often overlooked. Through "Exploring the Digital World," students, parents, and teachers can follow the creatures of this story-driven program as they learn the importance of digital literacy in the 21st century.
Contributors: Raiton, Joseph Michael (Author) / Fehler, Michelle (Thesis director) / Heywood, William (Committee member) / Barrett, The Honors College (Contributor) / The Design School (Contributor)
Created: 2015-05
Description

This project is intended to fill gaps in the professional knowledge of music educators in the state of Arizona concerning the pedagogy, content, and importance of a visual education program in the scholastic marching band. It also aims to contribute to the general pool of knowledge surrounding visual education. While music educators are often expected to begin teaching marching band immediately following their graduation, many never receive proper training in the visual aspect of the marching arts. The marching band is the most visible element of a holistic educational music program, and it often represents the school to the community and the educator to their administrators. While significant music training is given at the collegiate level, many educators have not had further experience in the marching arts. The author uses his experience in Drum Corps International, as well as in teaching marching band, to synthesize research-based practices into a handbook of visual pedagogical information that is immediately useful to any music educator.

Contributors: Gerald, Thomas (Author) / Swoboda, Deanna (Thesis director) / Quamo, Jeff (Committee member) / Barrett, The Honors College (Contributor) / School of Music, Dance and Theatre (Contributor)
Created: 2023-05
Description
Visual odometry is one of the key aspects of robotic localization and mapping. Visual odometry comprises many geometry-based approaches that convert visual data (images) into pose estimates of where the robot is in space. Classical geometric methods have shown promising results; they are carefully crafted and built explicitly for these tasks. However, such geometric methods require extensive fine-tuning and prior knowledge to set up for different scenarios, as well as significant post-processing and optimization to minimize the error between the estimated pose and the ground truth. In this work, a deep learning model was formed by combining SuperPoint and SuperGlue. The resulting model does not require any prior fine-tuning and has been trained to operate in both outdoor and indoor settings. The proposed deep learning model is applied to the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) dataset alongside classical geometric visual odometry models. The deep learning model was not trained on the KITTI dataset; it encounters it for the first time during experimentation. Using the monocular grayscale images from the visual odometry files of the KITTI dataset, the experiment tests the viability of the models across different sequences. The experiment was performed on eight different sequences, recording the Absolute Trajectory Error and the computation time for each sequence. From the obtained results, inferences are drawn comparing the classical and deep learning approaches.
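The Absolute Trajectory Error reported above is a standard trajectory-level metric for visual odometry. A minimal sketch of how it is commonly computed, assuming a rigid (rotation + translation, no scale) Kabsch alignment of the estimated trajectory to ground truth before taking the RMSE; the abstract does not specify the exact alignment used in the dissertation:

```python
import numpy as np

def absolute_trajectory_error(gt, est):
    """RMSE Absolute Trajectory Error between two (N, 3) position
    trajectories, after rigidly aligning `est` to `gt` with the
    Kabsch algorithm (rotation + translation, no scale)."""
    gt = np.asarray(gt, dtype=float)
    est = np.asarray(est, dtype=float)
    gt_mean, est_mean = gt.mean(axis=0), est.mean(axis=0)
    gt_c, est_c = gt - gt_mean, est - est_mean
    # Optimal rotation from the SVD of the cross-covariance matrix
    H = est_c.T @ gt_c
    U, _, Vt = np.linalg.svd(H)
    V = Vt.T
    d = np.sign(np.linalg.det(V @ U.T))  # guard against reflections
    R = V @ np.diag([1.0, 1.0, d]) @ U.T
    # Apply the alignment, then take the RMSE of per-pose position errors
    aligned = est_c @ R.T + gt_mean
    return float(np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1))))
```

Because of the alignment step, a trajectory that differs from ground truth only by a rigid transform scores (numerically) zero error; ATE therefore measures global drift rather than the choice of reference frame.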
Contributors: Vaidyanathan, Venkatesh (Author) / Venkateswara, Hemanth (Thesis advisor) / McDaniel, Troy (Thesis advisor) / Michael, Katina (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
In this study, the role of attention in facial expression processing is investigated, especially as it relates to fearful facial expressions compared to happy facial expressions. Facial fear processing plays a critical role in human social interactions and survival, and has previously been studied mainly in animal models. This study, however, presented images of actors with happy and fearful facial expressions in three spatial frequency formats, as it is hypothesized that images at different spatial frequencies may be processed via different pathways. These images were presented to human participants in two experiments. In Experiment I, facial expression was task-relevant: participants were asked to discriminate between “happy” and “fear” expressions, with reaction time (measured in seconds) and accuracy recorded. In Experiment II, facial expression was task-irrelevant: participants were asked simply to discriminate between photographs of males and females, again with reaction time and accuracy recorded. Overall, the results comparing happy and fearful facial expressions in Experiment I were not significant. The results in Experiment II were similarly non-significant, except for accuracy at certain spatial frequencies, which was found to be significant. These results suggest that fearful facial expressions are processed more accurately than happy facial expressions when attention is focused on other variables in the image rather than on the facial expressions themselves.
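The three spatial frequency formats mentioned above are typically produced by low- and high-pass filtering of the original broadband photograph. A minimal sketch of that kind of stimulus preprocessing, assuming a Gaussian low-pass filter in the Fourier domain with an illustrative cutoff; the study's actual filter type and cutoff values are not specified here:

```python
import numpy as np

def spatial_frequency_versions(image, sigma=8.0):
    """Return (broadband, low_sf, high_sf) versions of a grayscale image.

    Low spatial frequencies are retained by a Gaussian low-pass transfer
    function applied in the Fourier domain; the high-spatial-frequency
    version is the residual. `sigma` (in cycles per image) is an
    illustrative cutoff, not a value from the study."""
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    # Frequency coordinates, in cycles per image, for each FFT bin
    fy = np.fft.fftfreq(h) * h
    fx = np.fft.fftfreq(w) * w
    radius2 = fy[:, None] ** 2 + fx[None, :] ** 2
    # Gaussian low-pass transfer function (1.0 at DC, falling with frequency)
    lowpass = np.exp(-radius2 / (2.0 * sigma ** 2))
    low_sf = np.real(np.fft.ifft2(np.fft.fft2(img) * lowpass))
    high_sf = img - low_sf  # complementary high-pass residual (zero mean)
    return img, low_sf, high_sf
```

Because the two filtered versions are complementary, they sum back to the broadband image, which makes it easy to verify that no image energy is lost in the split.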
Contributors: McMaster, Hope (Author) / Bae, Gi-Yeul (Thesis director) / Corbin, William (Committee member) / Barrett, The Honors College (Contributor) / School of Life Sciences (Contributor) / Department of Psychology (Contributor)
Created: 2024-05