Matching Items (5)

Description

Technological advances in the past decade alone are calling for modifications to the usability of various devices. Physical human interaction is becoming a popular method of communicating with user interfaces, ranging from touch-based devices such as an iPad or tablet to free-space gesture systems such as the Microsoft Kinect. With the rise in popularity of these devices comes an increased number of them in public areas. Public areas frequently use walk-up-and-use displays, which give many people the opportunity to interact with them. Walk-up-and-use displays are intended to be simple enough that any individual, regardless of experience with similar technology, can successfully maneuver the system. While this should be easy enough for the people using it, it is a more complicated task for the designers, who must create an interface that is simple to use while also accomplishing the tasks it was built to complete. A central issue I address in this thesis is how a system designer knows which gestures to program the interface to respond to. Gesture elicitation is one widely used method for discovering common, intuitive gestures that can be used with public walk-up-and-use interactive displays. In this paper, I present a study to extract common intuitive gestures for various tasks, an analysis of the responses, and suggestions for the design of future interactive, public, walk-up-and-use displays.
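A standard analysis in gesture elicitation studies (not specific to this thesis) is the agreement rate: for each referent, the proportion of participant pairs who proposed the same gesture. A minimal Python sketch, with hypothetical gesture labels, following the Vatavu and Wobbrock (2015) formulation:

```python
from collections import Counter

def agreement_rate(proposals):
    """Agreement rate AR(r) for one referent: the fraction of participant
    pairs whose elicited gestures fall in the same equivalence class."""
    n = len(proposals)
    if n < 2:
        return 0.0
    groups = Counter(proposals)
    # Same-gesture pairs over all possible pairs of participants.
    return sum(k * (k - 1) for k in groups.values()) / (n * (n - 1))

# Hypothetical data: one gesture label per participant, per referent.
responses = {
    "next page": ["swipe-left"] * 7 + ["tap-right"] * 3,
    "zoom in":   ["pinch-out"] * 9 + ["double-tap"],
}
for referent, labels in responses.items():
    print(f"{referent}: AR = {agreement_rate(labels):.2f}")
```

Referents with high agreement (like "zoom in" above) are the safest candidates to program a walk-up-and-use display to recognize.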
Contributors: Van Horn, Sarah Elizabeth (Author) / Walker, Erin (Thesis director) / Danielescu, Andreea (Committee member) / Economics Program in CLAS (Contributor) / Department of Finance (Contributor) / Barrett, The Honors College (Contributor)
Created: 2015-05
Description

Over the course of computing history there have been many ways for humans to pass information to computers. These input types, at first, tended to be used one or two at a time by users interfacing with computers. As time has progressed, however, many devices have begun to make use of multiple input types, and will likely continue to do so. With this happening, users need to be able to interact with a single application in a variety of ways without having to change the design or suffer a loss of functionality. This is important because having a single user interface (UI) across all input types makes the application easier to learn and keeps all interactions consistent. Some of the main input types in use today are touch screens, mice, microphones, and keyboards, all seen in Figure 1 below. Current design methods tend to focus on how well users are able to learn and use a computing system. It is good to focus on those aspects, but it is also important to address the issues that come with using different input types, or in this case, multiple input types. UI design for touch screens, mice, microphones, and keyboards each requires satisfying a different set of needs. Because single devices are increasingly used in many different input configurations, a "fully functional" UI design will need to address the needs of multiple input configurations. In this work, clashing concerns are described for the primary input sources for computers, and methodologies and techniques are suggested for designing a single UI that is reasonable for all of the input configurations.
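To make the idea of one UI serving many input types concrete, here is a minimal, hypothetical sketch (not from the thesis) of an input-abstraction layer: each device maps its raw events onto a shared set of semantic actions, so application code never branches on the input device.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable

class Action(Enum):
    """Semantic actions the application understands, device-independent."""
    ACTIVATE = auto()
    SCROLL = auto()
    DICTATE = auto()

@dataclass
class InputEvent:
    source: str  # "touch", "mouse", "keyboard", "microphone"
    raw: str     # device-specific payload, e.g. "tap", "click", "Enter"

# Each input type contributes its own mapping onto the same action set.
MAPPINGS: dict[tuple[str, str], Action] = {
    ("touch", "tap"):         Action.ACTIVATE,
    ("mouse", "click"):       Action.ACTIVATE,
    ("keyboard", "Enter"):    Action.ACTIVATE,
    ("touch", "drag"):        Action.SCROLL,
    ("mouse", "wheel"):       Action.SCROLL,
    ("microphone", "speech"): Action.DICTATE,
}

def dispatch(event: InputEvent, handlers: dict[Action, Callable[[], None]]) -> None:
    action = MAPPINGS.get((event.source, event.raw))
    if action is not None and action in handlers:
        handlers[action]()

handlers = {Action.ACTIVATE: lambda: print("activated")}
dispatch(InputEvent("keyboard", "Enter"), handlers)  # -> activated
dispatch(InputEvent("touch", "tap"), handlers)       # -> activated
```

The design choice here is that only the mapping table grows when a new input type is added; the UI logic above it stays identical across all configurations.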
Contributors: Johnson, David Bradley (Author) / Calliss, Debra (Thesis director) / Wilkerson, Kelly (Committee member) / Walker, Erin (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2013-05
Description

We propose the Bio-HCI framework, which focuses on three major components: biological materials, intermediate platforms, and interaction with the user. In this context, "biological materials" is meant to broadly cover biological matter (DNA, RNA, enzymes), biological information (genes, epigenetics), biological processes (mutation, reproduction, self-assembly), and biological form. These biological materials serve as design elements for designers to use in the same way as digital materials. The intermediate platform covers methods of connecting biological materials to a user, or to a digital platform that connects to users. In most current use cases, biological materials need an intermediate platform to transfer information to the user and to transfer the user's response back to the biological materials. Examples include a DNA sequencer, a microscope, or a petri dish. User interaction emphasizes the interactivity between a user and the biological machine (biological materials + intermediate platform). The interaction ranges from basic human-computer interaction, such as using a biological machine as file storage, to unique interactions, such as a biological machine that evolves to solve the user's task. To examine this framework further, we present four experiments, each focusing on a different aspect of the Bio-HCI framework.
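Purely as an illustration (the framework is conceptual, not code), the three components can be read as an interface contract: materials expose state, a platform digitizes it, and the user's response is written back. All names below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class BiologicalMaterial:
    """A design element: biological matter, information, process, or form."""
    name: str   # e.g. "DNA", "enzyme"
    kind: str   # "matter" | "information" | "process" | "form"

class IntermediatePlatform:
    """Bridges biological materials and the user, like a DNA sequencer."""
    def read(self, material: BiologicalMaterial) -> str:
        # Placeholder: a real platform would digitize the material's state.
        return f"state of {material.name}"

    def write(self, material: BiologicalMaterial, command: str) -> None:
        print(f"applying '{command}' to {material.name}")

def interaction_turn(material, platform, user_respond):
    """One turn of user <-> biological machine interaction."""
    observation = platform.read(material)
    platform.write(material, user_respond(observation))

interaction_turn(
    BiologicalMaterial("DNA", "matter"),
    IntermediatePlatform(),
    lambda obs: "sequence",  # stand-in for an actual user decision
)
```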
Contributors: Pataranutaporn, Pat (Author) / Finn, Edward (Thesis director) / Kusumi, Kenro (Committee member) / Ingalls, Todd (Committee member) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-12
Description

My creative project is an extension of my Computer Science capstone project, a Tamagotchi-style game in which the user takes care of an ocean animal. It focuses specifically on expanding upon two of the project's design goals: improving user retention and fostering a bond between the user and the virtual character they are taking care of. The project consists of researching Human-Computer Interaction (HCI) principles, selecting those most relevant to my project, and integrating them into the design of mechanics for the game. The goal of this project is to demonstrate how integrating HCI design principles into game design can foster new ideas and improve the experience of the game for its users.
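As an illustrative sketch only (the project's actual mechanics are not specified in this abstract), a common retention-and-bonding pattern in this genre combines decaying needs with a bond variable and an occasional surprise reward; all names and constants below are hypothetical:

```python
import random

class OceanPet:
    """Minimal care loop: needs decay over time, and tending them
    strengthens the user-pet bond (a common retention mechanic)."""
    def __init__(self):
        self.hunger = 0.0  # 0 = satisfied, 1 = starving
        self.bond = 0.5    # 0..1, proxy for user-character attachment

    def tick(self, hours: float) -> None:
        self.hunger = min(1.0, self.hunger + 0.05 * hours)
        if self.hunger > 0.8:          # prolonged neglect erodes the bond
            self.bond = max(0.0, self.bond - 0.1)

    def feed(self) -> None:
        self.hunger = 0.0
        self.bond = min(1.0, self.bond + 0.05)
        if random.random() < 0.2:      # intermittent reward: rarer, more memorable
            print("your pet does a happy spin!")

pet = OceanPet()
pet.tick(hours=6)
pet.feed()
print(f"bond: {pet.bond:.2f}")
```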

Contributors: Spence, Collin (Author) / Carter, Lynn (Thesis director) / Niebelschuetz, Malte (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor) / Computing and Informatics Program (Contributor)
Created: 2023-05
Description

The burden of adaptation has been a major limiting factor in the adoption rates of new wearable assistive technologies. This burden has created a necessity for the exploration and combination of two key concepts in the development of upcoming wearables: anticipation and invisibility. The combination of these two topics has created the field of Anticipatory and Invisible Interfaces (AII).

In this dissertation, a novel framework is introduced for the development of anticipatory devices that augment the proprioceptive system in individuals with neurodegenerative disorders in a seamless way that scaffolds off existing cognitive feedback models. The framework suggests three main categories of consideration in the development of devices that are anticipatory and invisible:

• Idiosyncratic Design: How can a design encapsulate the unique characteristics of the individual in the design of assistive aids?

• Adaptation to Intrapersonal Variations: As individuals progress through the various stages of a disability or neurological disorder, how can the technology adapt thresholds for feedback over time to address these shifts in ability? (One way to realize this adaptation is sketched after this list.)

• Context Aware Invisibility: How can the mechanisms of interaction be modified in order to reduce cognitive load?
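As one plausible, hypothetical reading of the second category (not the dissertation's actual algorithm), the feedback threshold can track a slow baseline of the user's own signal, so the trigger level drifts with ability rather than staying fixed to a population norm:

```python
class AdaptiveThreshold:
    """Feedback trigger that adapts to intrapersonal variation: the
    threshold follows a slow moving baseline of the user's signal."""
    def __init__(self, initial: float, margin: float = 1.5, alpha: float = 0.05):
        self.baseline = initial  # slow estimate of the user's typical signal
        self.margin = margin     # how far above baseline triggers feedback
        self.alpha = alpha       # small alpha -> adapts over days, not seconds

    def update(self, reading: float) -> bool:
        triggered = reading > self.baseline * self.margin
        # Fold only non-triggered readings into the baseline, so the
        # threshold drifts with ability rather than with anomalies.
        if not triggered:
            self.baseline += self.alpha * (reading - self.baseline)
        return triggered

monitor = AdaptiveThreshold(initial=1.0)
for reading in [1.0, 1.1, 0.9, 2.2, 1.0]:
    print(reading, monitor.update(reading))
```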

The concepts proposed in this framework can be generalized to a broad range of domains; however, there are two primary applications for this work: rehabilitation and assistive aids. In preliminary studies, the framework is applied in the areas of Parkinsonian freezing of gait anticipation and the anticipation of body non-compliance during rehabilitative exercise.
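The abstract does not specify the signal processing used for the gait study; one widely cited building block for freezing-of-gait detection is the freeze index of Moore et al., the power in the 3-8 Hz "freeze" band relative to the 0.5-3 Hz locomotor band. A minimal sketch with synthetic accelerometer data:

```python
import numpy as np

def freeze_index(accel: np.ndarray, fs: float) -> float:
    """Freeze index over one window of shank acceleration: power in the
    freeze band (3-8 Hz) over the locomotor band (0.5-3 Hz)."""
    spectrum = np.abs(np.fft.rfft(accel - accel.mean())) ** 2
    freqs = np.fft.rfftfreq(len(accel), d=1.0 / fs)
    freeze = spectrum[(freqs >= 3.0) & (freqs < 8.0)].sum()
    locomotor = spectrum[(freqs >= 0.5) & (freqs < 3.0)].sum()
    return freeze / max(locomotor, 1e-12)  # guard against division by zero

fs = 64.0                                 # Hz, hypothetical sample rate
t = np.arange(0, 4, 1 / fs)               # one 4-second analysis window
walking = np.sin(2 * np.pi * 1.5 * t)     # normal gait: ~1.5 Hz cadence
freezing = 0.3 * np.sin(2 * np.pi * 1.5 * t) + np.sin(2 * np.pi * 5.0 * t)
print(f"walking FI:  {freeze_index(walking, fs):.2f}")   # low
print(f"freezing FI: {freeze_index(freezing, fs):.2f}")  # high
```

An anticipatory device in this framework would watch the index trend against an adapted, user-specific threshold and cue the wearer before a full freeze develops.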
Contributors: Tadayon, Arash (Author) / Panchanathan, Sethuraman (Thesis advisor) / McDaniel, Troy (Committee member) / Krishnamurthi, Narayanan (Committee member) / Davulcu, Hasan (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2020