Matching Items (7)
Description
Stroke is a leading cause of disability, with effects that vary widely across stroke survivors, necessitating comprehensive approaches to rehabilitation. Interactive neurorehabilitation (INR) systems represent promising technological solutions that can provide an array of sensing, feedback, and analysis tools with the potential to maximize clinical therapy as well as extend therapy to the home. Currently, there are a variety of approaches to INR design, which, coupled with minimal large-scale clinical data, has led to a lack of cohesion in INR design. INR design presents an inherently complex space, as these systems have multiple users, including stroke survivors, therapists, and designers, each with their own user experience needs. This dissertation proposes that comprehensive INR design, which can address this complex user space, requires and benefits from interdisciplinary research spanning motor learning and interactive learning. A methodology for integrated and iterative design of INR task experience, assessment, hardware, software, and interactive training protocols is proposed and illustrated through the comprehensive example of the design and implementation of a mixed reality rehabilitation system for minimally supervised environments. This system was tested with eight stroke survivors, who showed promising results in both functional and movement quality improvement. The results of testing the system with stroke survivors, along with observations of user experiences, are presented together with suggested improvements to the proposed design methodology. This integrative design methodology is proposed to benefit not only comprehensive INR design but also complex interactive system design in general.
Contributors: Baran, Michael (Author) / Rikakis, Thanassis (Thesis advisor) / Olson, Loren (Thesis advisor) / Wolf, Steven L. (Committee member) / Ingalls, Todd (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Virtual Reality (hereafter VR) and Mixed Reality (hereafter MR) have opened a new line of applications and possibilities. Amidst a vast network of potential applications, little research has been done to provide real-time collaboration capability between users of VR and MR. The idea of this thesis study is to develop and test a real-time collaboration system between VR and MR. The system works similarly to a Google document, where two or more users can see what others are doing, e.g., writing, modifying, or viewing. In the same way, the system developed during this study enables users in VR and MR to collaborate in real time.

The study of developing a real-time cross-platform collaboration system between VR and MR considers a scenario in which users on multiple devices are connected to a multiplayer network and guided to perform various tasks concurrently.

Usability testing was conducted to evaluate participant perceptions of the system. Participants were asked to assemble a chair in alternating turns; afterwards they completed a survey and gave an audio interview. Results collected from the participants showed positive feedback toward using VR and MR for collaboration. However, several limitations of the current generation of devices hinder mass adoption; devices with better performance will lead to wider adoption.
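As a rough sketch of the shared-document analogy described in this abstract, the following Python server rebroadcasts each object update it receives to every other connected client, so VR and MR participants see one another's changes. The message format, port, and all names here are illustrative assumptions, not details taken from the thesis.

```python
import json
import socket
import threading

# Hypothetical shared-state relay: any client (VR or MR) that moves an object
# sends its new transform as a JSON line; the server echoes it to all peers,
# much like concurrent edits propagating in a shared document.
clients = []
lock = threading.Lock()

def handle_client(conn):
    with lock:
        clients.append(conn)
    try:
        for line in conn.makefile():
            update = json.loads(line)  # e.g. {"object": "chair_leg_1", "pos": [0.1, 0.0, 0.3]}
            with lock:
                for peer in clients:
                    if peer is not conn:  # relay to every other participant
                        peer.sendall((json.dumps(update) + "\n").encode())
    finally:
        with lock:
            clients.remove(conn)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 9000))
server.listen()
while True:
    conn, _ = server.accept()
    threading.Thread(target=handle_client, args=(conn,), daemon=True).start()
```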
Contributors: Seth, Nayan Sateesh (Author) / Nelson, Brian (Thesis advisor) / Walker, Erin (Committee member) / Atkinson, Robert (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Human-robot collaboration can be a challenging exercise, especially when both the human and the robot want to work simultaneously on a given task, as it becomes difficult for the human to understand the intentions of the robot and vice versa. To overcome this problem, a novel approach using the concept of mixed reality has been proposed, which uses the surrounding space as a canvas on which to project information on and around 3D objects. A vision-based tracking algorithm precisely detects the pose and state of the 3D objects, and human-skeleton tracking is performed to create a system that is both human-aware and context-aware. Additionally, the system can warn humans about the intentions of the robot, thereby creating a safer environment to work in. An easy-to-use and universal visual language has been created which could form the basis for interaction in various human-robot collaborations in manufacturing industries.

An objective and subjective user study was conducted to test the hypothesis that executing a human-robot collaborative task with this system would result in higher performance than with traditional methods such as printed instructions or instructions delivered through mobile devices. Multiple measuring tools were devised to analyze the data, leading to the conclusion that the proposed mixed-reality projection system improves the human-robot team's efficiency and effectiveness and hence will be a better alternative in the future.
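As a hedged illustration of the human-aware, context-aware behavior this abstract describes, the sketch below chooses between projecting a task instruction and projecting a warning, based on the distance between a tracked hand and the robot's next target. The threshold, data types, and function names are hypothetical and are not the thesis implementation.

```python
from dataclasses import dataclass
import math

@dataclass
class Pose:
    x: float
    y: float
    z: float

def distance(a: Pose, b: Pose) -> float:
    return math.dist((a.x, a.y, a.z), (b.x, b.y, b.z))

# Illustrative safety logic: if the robot's next target is close to the
# tracked human hand, project a warning onto the workspace instead of the
# normal assembly instruction. The 0.4 m radius is an assumed value.
SAFETY_RADIUS_M = 0.4

def choose_projection(hand: Pose, robot_target: Pose) -> str:
    if distance(hand, robot_target) < SAFETY_RADIUS_M:
        return "WARNING: robot will move here next"
    return "Place part at highlighted location"

print(choose_projection(Pose(0.1, 0.2, 0.0), Pose(0.15, 0.25, 0.0)))
```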
Contributors: Rathore, Yash K (Author) / Amor, Hani Ben (Thesis advisor) / Nelson, Brian (Committee member) / Atkinson, Robert (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Emerging information and communication technology (ICT) has had an enormous effect on the building architecture, engineering, construction, and operation (AECO) fields in recent decades. The effects have resonated in several disciplines, such as project information flow, design representation and communication, and Building Information Modeling (BIM) approaches. However, these effects can potentially impact the communication and coordination of virtual design content in both the design and construction phases. Given the great potential of emerging technologies in construction projects, it is therefore essential to understand how these technologies influence virtual design information within organizations as well as individuals' behaviors. This research focuses on understanding current emerging technologies and their impacts on projects' virtual design information and on communication among project stakeholders within AECO organizations.
Contributors: Alsafouri, Suleiman (Author) / Ayer, Steven (Thesis advisor) / Tang, Pingbo (Committee member) / Atkinson, Robert (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Realistic lighting is important to improve immersion and make mixed reality applications seem more plausible. To properly blend AR objects into a real scene, it is important to study the lighting of the environment. The existing illumination frameworks provided by Google's ARCore (Google's Augmented Reality Software Development Kit) and Apple's ARKit (Apple's Augmented Reality Software Development Kit) are computationally expensive and have very slow refresh rates, which makes them poorly suited to dynamic environments and low-end mobile devices. Recently, other illumination estimation frameworks, such as GLEAM and Xihe, have aimed to provide better illumination with faster refresh rates. GLEAM is an illumination estimation framework that understands the real scene by collecting pixel data from a reflective spherical light probe. GLEAM uses this data to form environment cubemaps, which are later mapped onto a reflection probe to generate illumination for AR objects. From a single viewpoint, only one half of the light probe can be observed at a time, which does not give complete information about the environment. This leads to the idea of multi-viewpoint estimation for better performance. This thesis analyzes the multi-viewpoint capabilities of AR illumination frameworks that use physical light probes to understand the environment. The current work adds networking to GLEAM using TCP and UDP protocols. This thesis also documents how processor load is shared among networked devices and how that benefits GLEAM's performance on mobile devices. Some enhancements using multi-threading have also been made to the existing GLEAM model to improve its performance.
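Since a single viewpoint observes only one hemisphere of the probe, the multi-viewpoint idea amounts to merging partial observations into one environment map. Below is a minimal sketch of one plausible merging scheme, assuming each device contributes cubemap texels plus a validity mask; the array layout and resolution are illustrative assumptions, not GLEAM's actual format.

```python
import numpy as np

FACES, H, W = 6, 16, 16  # toy cubemap resolution

def merge_viewpoints(samples, masks):
    """Average cubemap texels across viewpoints, weighting by each
    viewpoint's validity mask (1 where that device saw the probe)."""
    acc = np.zeros((FACES, H, W, 3))
    weight = np.zeros((FACES, H, W, 1))
    for s, m in zip(samples, masks):
        acc += s * m[..., None]
        weight += m[..., None]
    # Texels no viewpoint observed fall back to a neutral gray ambient term.
    return np.where(weight > 0, acc / np.maximum(weight, 1), 0.5)

# Two hypothetical devices, each observing roughly opposite hemispheres.
view_a = np.random.rand(FACES, H, W, 3)
view_b = np.random.rand(FACES, H, W, 3)
mask_a = np.zeros((FACES, H, W)); mask_a[:3] = 1.0  # faces seen by device A
mask_b = np.zeros((FACES, H, W)); mask_b[3:] = 1.0  # faces seen by device B
cubemap = merge_viewpoints([view_a, view_b], [mask_a, mask_b])
print(cubemap.shape)  # (6, 16, 16, 3)
```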
Contributors: Gurram, Sahithi (Author) / LiKamWa, Robert (Thesis advisor) / Jayasuriya, Suren (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Millions of Americans live with motor impairments resulting from a stroke, and the best way to administer rehabilitative therapy to achieve recovery is not well understood. Adaptive mixed reality rehabilitation (AMRR) is a novel integration of motion capture technology and high-level media computing that provides precise kinematic measurements and engaging multimodal feedback for self-assessment during a therapeutic task. The AMRR system was evaluated in a small (N = 3) cohort of stroke survivors to determine best practices for administering adaptive, media-based therapy. A proof-of-concept study followed, examining changes in clinical scale and kinematic performance among a group of stroke survivors who received either a month of AMRR therapy (N = 11) or matched dosing of traditional repetitive task therapy (N = 10). Both groups demonstrated statistically significant improvements in Wolf Motor Function Test and upper-extremity Fugl-Meyer Assessment scores, indicating increased function after the therapy. However, only participants who received AMRR therapy showed consistent improvement in their kinematic measurements, both in the trained reaching task (reaching to grasp a cone) and in an untrained reaching task (reaching to push a lighted button). These results suggest that the AMRR system can be used as a therapy tool to enhance both function and the reaching kinematics that quantify movement quality. Additionally, the AMRR concepts are currently being transitioned to a home-based training application. An inexpensive, easy-to-use toolkit of tangible objects has been developed to sense, assess, and provide feedback on hand function during different functional activities. These objects have been shown to accurately and consistently track hand function in people with unimpaired movement and will be tested with stroke survivors in the future.
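One standard way to quantify reaching movement quality from motion-capture data, in the spirit of the kinematic measures this abstract describes, is a dimensionless normalized-jerk score (lower is smoother). The sketch below computes one common variant from the motor-control literature; it is not necessarily the exact measure used by the AMRR system.

```python
import numpy as np

def normalized_jerk(positions: np.ndarray, dt: float) -> float:
    """Dimensionless jerk cost of a reach: lower = smoother movement.

    positions: (T, 3) array of hand positions sampled at interval dt.
    """
    jerk = np.diff(positions, n=3, axis=0) / dt**3  # third derivative of position
    duration = dt * (len(positions) - 1)
    path_length = np.sum(np.linalg.norm(np.diff(positions, axis=0), axis=1))
    # One common normalization: scale integrated squared jerk by movement
    # duration and extent so reaches of different sizes are comparable.
    return np.sqrt(0.5 * np.sum(jerk**2) * dt) * duration**2.5 / path_length

# Toy reach: a straight line with a little noise, sampled at 100 Hz.
t = np.linspace(0, 1, 101)[:, None]
reach = t * np.array([0.3, 0.1, 0.2]) + 0.001 * np.random.randn(101, 3)
print(normalized_jerk(reach, dt=0.01))
```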
Contributors: Duff, Margaret Rose (Author) / Rikakis, Thanassis (Thesis advisor) / He, Jiping (Thesis advisor) / Herman, Richard (Committee member) / Kleim, Jeffrey (Committee member) / Santos, Veronica (Committee member) / Towe, Bruce (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Evidence suggests that Augmented Reality (AR) may be a powerful tool for alleviating certain, lightly held scientific misconceptions. However, many misconceptions surrounding the theory of evolution are deeply held and resistant to change. This study examines whether AR can serve as an effective tool for alleviating these misconceptions by comparing the change in the number of misconceptions expressed by users of a tablet-based version of a well-established classroom simulation to the change in the number of misconceptions expressed by users of AR versions of the simulation.

The use of realistic representations of objects is common among AR developers. However, this contradicts well-tested practices of multimedia design that argue against the addition of unnecessary elements. This study therefore also compared the use of representational visualizations in AR, in this case models of ladybug beetles, to symbolic representations, in this case colored circles.

To address both research questions, a one-factor, between-subjects experiment was conducted with 189 participants randomly assigned to one of three conditions: non-AR, symbolic AR, and representational AR. Measures of change in the number and types of misconceptions expressed, motivation, and time on task were examined using a pair of planned orthogonal contrasts designed to test the study's two research questions.

Participants in the AR-based conditions showed a significantly smaller change in the number of total misconceptions expressed after the treatment, as well as in the number of misconceptions related to intentionality; none of the other misconceptions examined showed a significant difference. No significant differences were found in the total number of misconceptions expressed between participants in the representational and symbolic AR-based conditions, or on motivation. Contrary to the expectation that the simulation would alleviate misconceptions, the average change in the number of misconceptions expressed by participants increased. This is theorized to be due to the juxtaposition of virtual and real-world entities resulting in a reduction in assumed intentionality.
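The analysis above relies on a pair of planned orthogonal contrasts over three conditions. As a hedged illustration with made-up data (the study's actual data are not reproduced here), the first contrast pools the two AR conditions against non-AR, and the second compares the two AR conditions to each other:

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant changes in misconception counts by condition.
non_ar = np.array([1.2, 0.8, 1.5, 0.9, 1.1])
symbolic_ar = np.array([0.4, 0.6, 0.2, 0.5, 0.3])
representational_ar = np.array([0.5, 0.3, 0.4, 0.6, 0.2])

groups = [non_ar, symbolic_ar, representational_ar]
means = [g.mean() for g in groups]
ns = [len(g) for g in groups]
# Pooled error variance (MSE) from a one-way ANOVA.
df_error = sum(ns) - len(groups)
mse = sum(((g - g.mean())**2).sum() for g in groups) / df_error

def planned_contrast(weights):
    """t-test for a planned contrast; weight vector must sum to zero."""
    estimate = sum(w * m for w, m in zip(weights, means))
    se = np.sqrt(mse * sum(w**2 / n for w, n in zip(weights, ns)))
    t = estimate / se
    p = 2 * stats.t.sf(abs(t), df_error)
    return t, p

# The weight vectors are orthogonal: (2)(0) + (-1)(1) + (-1)(-1) = 0.
print(planned_contrast([2, -1, -1]))  # non-AR vs pooled AR conditions
print(planned_contrast([0, 1, -1]))   # symbolic vs representational AR
```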
Contributors: Henry, Matthew McClellan (Author) / Atkinson, Robert K (Thesis advisor) / Johnson-Glenberg, Mina C (Committee member) / Nelson, Brian C (Committee member) / Arizona State University (Publisher)
Created: 2019