Matching Items (3)

Description
Technological advances in the past decade alone have called for changes in how usable various devices must be. Physical human interaction is becoming a popular way to communicate with user interfaces, ranging from touch-based devices such as an iPad or tablet to free-space gesture systems such as the Microsoft Kinect. As these devices grow in popularity, more of them appear in public areas, which frequently use walk-up-and-use displays that give many people the opportunity to interact with them. Walk-up-and-use displays are intended to be simple enough that any individual, regardless of experience with similar technology, can successfully operate the system. While this should make the displays easy for the people using them, it complicates the task for the designers, who must create an interface that is simple to use while still accomplishing the tasks it was built to complete. A central issue I address in this thesis is how a system designer knows which gestures the interface should be programmed to recognize. Gesture elicitation is one widely used method for discovering common, intuitive gestures that can be used with public walk-up-and-use interactive displays. In this paper, I present a study that elicits common, intuitive gestures for various tasks, an analysis of the responses, and suggestions for future designs of interactive, public, walk-up-and-use displays.
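
The abstract does not say how the elicited responses were analyzed, but gesture elicitation studies conventionally summarize them with an agreement score (in the style of Wobbrock et al.): for each task, sum the squared share of each group of identical proposed gestures, so that 1.0 means every participant proposed the same gesture. A minimal sketch of that conventional metric, with hypothetical gesture labels rather than the thesis's data:

```python
from collections import Counter

def agreement_score(proposals):
    """Agreement score for one referent (task): sum over each group of
    identical gesture proposals of (group size / total proposals) squared.
    Returns 1.0 when all participants agree, approaching 0 as they diverge."""
    counts = Counter(proposals)
    total = len(proposals)
    return sum((n / total) ** 2 for n in counts.values())

# Hypothetical elicitation data for a "scroll" task on a public display.
print(agreement_score(["swipe-up", "swipe-up", "swipe-up", "point", "wave"]))
# -> 0.44  (moderate agreement: 3 of 5 participants proposed the same gesture)
```
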
Contributors: Van Horn, Sarah Elizabeth (Author) / Walker, Erin (Thesis director) / Danielescu, Andreea (Committee member) / Economics Program in CLAS (Contributor) / Department of Finance (Contributor) / Barrett, The Honors College (Contributor)
Created: 2015-05
Description
Virtual Reality (hereafter VR) and Mixed Reality (hereafter MR) have opened a new line of applications and possibilities. Amid this vast range of potential applications, little research has been done on real-time collaboration between users of VR and MR. The aim of this thesis study is to develop and test a real-time collaboration system between VR and MR. The system works much like a Google document, where two or more users can see what the others are doing, i.e., writing, modifying, or viewing; in the same way, the system developed in this study enables users in VR and MR to collaborate in real time.

The study of developing a real-time cross-platform collaboration system between VR and MR considers a scenario in which multiple device users are connected to a multiplayer network and guided to perform various tasks concurrently.
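
The abstract does not describe the underlying network protocol. As a rough illustration of the relay logic any such shared-scene system needs, the sketch below rebroadcasts each client's state change to every other connected client, the way a Google document propagates edits; all class names, message fields, and device labels here are assumptions for illustration, not the thesis's actual implementation:

```python
import json
from dataclasses import dataclass, field

@dataclass
class Session:
    clients: dict = field(default_factory=dict)   # client_id -> inbox of pending messages
    scene: dict = field(default_factory=dict)     # object_id -> latest transform

    def join(self, client_id):
        self.clients[client_id] = []
        # Late joiners receive the current scene so every view converges.
        self.clients[client_id].append(
            json.dumps({"type": "snapshot", "scene": self.scene}))

    def update(self, sender_id, object_id, transform):
        self.scene[object_id] = transform
        msg = json.dumps({"type": "update", "object": object_id,
                          "transform": transform, "from": sender_id})
        for cid, inbox in self.clients.items():
            if cid != sender_id:  # the sender already applied the change locally
                inbox.append(msg)

# Usage: a VR user moves a part; the MR user's inbox receives the update.
s = Session()
s.join("vr-user"); s.join("mr-user")
s.update("vr-user", "chair_leg_1", {"pos": [0.2, 0.0, 1.5], "rot": [0, 90, 0]})
print(s.clients["mr-user"][-1])
```
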

Usability testing was conducted to evaluate participant perceptions of the system. Participants were asked to assemble a chair in alternating turns; afterward they completed a survey and gave an audio interview. The results showed positive feedback toward using VR and MR for collaboration. However, the current generation of devices has several limitations that hinder mass adoption; devices with better performance should lead to wider adoption.
Contributors: Seth, Nayan Sateesh (Author) / Nelson, Brian (Thesis advisor) / Walker, Erin (Committee member) / Atkinson, Robert (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Electronic books, or eBooks, have the potential to revolutionize the way humans read and learn. eBooks offer many advantages, such as simplicity, ease of use, eco-friendliness, and portability. Advances in technology have introduced many forms of multimedia objects into eBooks, which may help people learn from them. To help readers understand and comprehend the concepts an eBook's author puts forward, there is ongoing research on the use of augmented reality (AR) in education. This study explores how AR and three-dimensional interactive models can be integrated into eBooks to help readers comprehend the content quickly, and it compares people's reading activities when they experience these two visual representations within an eBook.

This study required participants to interact with instructional material presented in an eBook and complete a learning measure. While interacting with the eBook, participants wore a set of physiological devices, namely an ABM EEG headset and an eye tracker, to collect biometric data that could be used to objectively measure their user experience. Fifty college students participated in this study. The data collected from each participant were used to compare the reading activities of the two groups with an independent-samples t-test.
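
For readers unfamiliar with the analysis, an independent-samples t-test compares the means of two unrelated groups, here the two eBook conditions. A minimal sketch using scipy, with made-up placeholder scores rather than the study's data:

```python
from scipy import stats

# Placeholder learning-measure scores for the two conditions (not study data).
ar_scores = [78, 85, 90, 72, 88, 81, 79, 84]       # AR condition
model_scores = [70, 75, 82, 68, 77, 73, 80, 71]    # interactive 3D model condition

# Welch's variant does not assume equal variances between groups.
t, p = stats.ttest_ind(ar_scores, model_scores, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")
```
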
Contributors: Juluru, Kalyan Kumar (Author) / Atkinson, Robert K. (Thesis advisor) / Chen, Yinong (Thesis advisor) / Walker, Erin (Committee member) / Arizona State University (Publisher)
Created: 2017