Matching Items (8)

Bio-HCI: Toward Interfaces between People, Computers, and Bio/Digital Systems

Description

We propose the Bio-HCI framework, which focuses on three major components: biological materials, intermediate platforms, and interaction with the user. In this context, "biological materials" is meant to broadly cover biological matter (DNA, RNA, enzymes), biological information (genes, epigenetics), biological processes (mutation, reproduction, self-assembly), and biological form. These biological materials serve as design elements for designers to use in the same way as digital materials. The intermediate platform focuses on methods of connecting biological materials to a user, or to a digital platform that connects to users. In most current use cases, biological materials need an intermediate platform to transfer information to the user and to transfer the user's response back to the biological materials. Examples include a DNA sequencer, a microscope, or a petri dish. User interaction emphasizes the interactivity between a user and the biological machine (biological materials plus intermediate platform). This interaction ranges from basic human-computer interaction, such as using a biological machine as file storage, to unique interactions, such as a biological machine that evolves to solve a user's task. To examine this framework further, we present four experiments that focus on different aspects of the Bio-HCI framework.

Date Created
  • 2017-12

INTERFACE DESIGN WITH MULTIPLE DEVICES IN MIND

Description

Over the course of computing history there have been many ways for humans to pass information to computers. At first, these different input types tended to be used one or two at a time by users interfacing with computers. As time has progressed, however, many devices have begun to make use of multiple input types, and will likely continue to do so. As this happens, users need to be able to interact with a single application in a variety of ways without having to change the design or suffer a loss of functionality. This is important because having only one user interface (UI) across all input types makes it easier for the user to learn and keeps all interactions consistent across the application. Some of the main input types in use today are touch screens, mice, microphones, and keyboards, all seen in Figure 1 below. Current design methods tend to focus on how well users are able to learn and use a computing system. It is good to focus on those aspects, but it is also important to address the issues that come with using different input types, or in this case, multiple input types. UI design for touch screens, mice, microphones, and keyboards each requires satisfying a different set of needs. Given this trend toward single devices being used in many different input configurations, a "fully functional" UI design will need to address the needs of multiple input configurations. This work describes clashing concerns among the primary input sources for computers and suggests methodologies and techniques for designing a single UI that is reasonable for all of the input configurations.

Date Created
  • 2013-05

Intuitive Gesture Responses to Public Walk-Up-and-Use Interactions

Description

Technological advances in the past decade alone are calling for modifications to the usability of various devices. Physical human interaction is becoming a popular method of communicating with user interfaces, ranging from touch-based devices such as an iPad or tablet to free-space gesture systems such as the Microsoft Kinect. With the rise in popularity of these devices comes an increased presence in public areas. Public areas frequently use walk-up-and-use displays, which give many people the opportunity to interact with them. Walk-up-and-use displays are intended to be simple enough that any individual, regardless of experience with similar technology, will be able to successfully maneuver the system. While this should be easy enough for the people using it, it is a more complicated task for the designers, who are in charge of creating an interface simple enough to use while also accomplishing the tasks it was built to complete. A serious issue that I address in this thesis is how a system designer knows what gestures the interface should be programmed to respond to. Gesture elicitation is one widely used method to discover common, intuitive gestures that can be used with public walk-up-and-use interactive displays. In this paper, I present a study to extract common intuitive gestures for various tasks, an analysis of the responses, and suggestions for future designs of interactive, public, walk-up-and-use interactions.

Date Created
  • 2015-05

Zazzer: forming friendships on digital social networks

Description

Strong communities are important for society. One of the most important community builders, making friends, is poorly supported online. Dating sites support it, but only in romantic contexts. Other major social networks seem not to encourage it, either because their purpose isn't compatible with introducing strangers or because the prevalent methods of introduction aren't effective enough to merit use over real-world alternatives. This paper presents a novel digital social network emphasizing the creation of friendships. Research has shown that video chat communication can reach in-person levels of trust; coupled with a game environment to ease the discomfort people often have interacting with strangers and a recommendation engine, the presented system, Zazzer, allows people to meet and get to know each other in a manner much truer to real life than traditional methods. Its network also allows players to continue to communicate afterwards. The evaluation looks at real-world use, measuring the frequency with which players choose the video chat game over alternative, more traditional methods of online introduction. It also looks at interactions after the initial meeting to discover how effective video chat games are at creating sticky social connections. After initial use, it became apparent that a critical mass of users would be necessary to draw strong conclusions; however, the collected data seemed to give preliminary support to the idea that video chat games are more effective than traditional ways of meeting online in creating new relationships.

Date Created
  • 2011

Anticipatory and Invisible Interfaces to Address Impaired Proprioception in Neurological Disorders

Description

The burden of adaptation has been a major limiting factor in the adoption rates of new wearable assistive technologies. This burden has created a necessity for the exploration and combination of two key concepts in the development of upcoming wearables: anticipation and invisibility. The combination of these two topics has created the field of Anticipatory and Invisible Interfaces (AII).

In this dissertation, a novel framework is introduced for the development of anticipatory devices that augment the proprioceptive system in individuals with neurodegenerative disorders in a seamless way that scaffolds off of existing cognitive feedback models. The framework suggests three main categories of consideration in the development of devices that are anticipatory and invisible:

• Idiosyncratic Design: How can a design encapsulate the unique characteristics of the individual in the design of assistive aids?

• Adaptation to Intrapersonal Variations: As individuals progress through the various stages of a disability or neurological disorder, how can the technology adapt thresholds for feedback over time to address these shifts in ability?

• Context Aware Invisibility: How can the mechanisms of interaction be modified in order to reduce cognitive load?

The concepts proposed in this framework can be generalized to a broad range of domains; however, there are two primary applications for this work: rehabilitation and assistive aids. In preliminary studies, the framework is applied in the areas of Parkinsonian freezing of gait anticipation and the anticipation of body non-compliance during rehabilitative exercise.

Date Created
  • 2020

GALLAG Strip: a mobile programming-with-demonstration environment for sensor-based context-aware application programming

Description

The Game As Life - Life As Game (GALLAG) project investigates how people might change their lives if they think of and/or experience their life as a game. The GALLAG system aims to help people reach their personal goals through the use of context-aware computing and tailored games and applications. To accomplish this, the GALLAG system uses a combination of sensing technologies, remote audio/video feedback, mobile devices, and an application programming interface (API) to empower users to create their own context-aware applications. However, the API requires programming through source code, a task that is too complicated and abstract for many users. This thesis presents GALLAG Strip, a novel approach to programming sensor-based context-aware applications that combines the Programming With Demonstration technique and a mobile device to enable users to experience their applications as they program them. GALLAG Strip lets users create sensor-based context-aware applications in an intuitive and appealing way without the need for computer programming skills; instead, they program their applications by physically demonstrating their envisioned interactions within a space using the same interface that they will later use to interact with the system, that is, using GALLAG-compatible sensors and mobile devices. GALLAG Strip was evaluated through a study with end users in a real-world setting, measuring their ability to program simple and complex applications accurately and in a timely manner. The evaluation also includes a benchmark against expert GALLAG system programmers creating the same applications. Data and feedback collected from the study show that GALLAG Strip successfully allows users to create sensor-based context-aware applications easily and accurately, without the prior programming skills currently required by the GALLAG system, and enables them to create almost all of their envisioned applications.

Date Created
  • 2012

Improving Usability and Adoption of Tablet-based Electronic Health Record (EHR) Applications

Description

The technological revolution has caused the entire world to migrate to a digital environment, and health care is no exception. Electronic Health Records (EHRs) or Electronic Medical Records (EMRs) are the digital repository for patients' health data. Nationwide efforts have been made by the federal government to promote the usage of EHRs, as they have been found to improve the quality of health services. Although EHR systems have been implemented almost everywhere, active use of EHR applications has not replaced paper documentation. Rather, they are often used to store data transcribed from paper documentation after each clinical procedure. This process is prone to errors such as data omission and incomplete documentation, and is also time-consuming. This research aims to improve the adoption of real-time EHR usage during data documentation by improving the usability of an iPad-based EHR application used during the resuscitation process in the intensive care unit. Using cognitive theories and HCI frameworks, this research identified areas of improvement and customizations in the application that were required to match the workflow of the resuscitation team at the Mayo Clinic. In addition, a Handwriting Recognition Engine (HRE) was integrated into the application to support stylus-based information input into the EHR, which resembles our target users' traditional pen-and-paper documentation process. The EHR application was updated and then evaluated with end users at the Mayo Clinic. The users found the application to be efficient and usable, and they preferred using it over paper-based documentation.

Date Created
  • 2018

A maker's mechanological paradigm: seeing experiential media systems as structurally determined

Description

Wittgenstein’s claim, that anytime something is seen it is necessarily seen as something, forms the philosophical foundation of this research. I synthesize theories and philosophies from Simondon, Maturana, Varela, Wittgenstein, Pye, Sennett, and Reddy in a research process I identify as a paradigm construction project. My personal studio practice of inventing experiential media systems is a key part of this research and illustrates, with practical examples, my philosophical arguments from a range of points of observation. I see media systems as technical objects, and technical objects as structurally determined systems, in which the structure of the system determines its organization. I identify making, the process of determining structure, as a form of structural coupling, and see structural coupling as a means of knowing material. I introduce my theory of conceptual plurifunctionality as an extension of Simondon’s theory. Aspects of materiality are presented as a means of seeing material and immaterial systems, including cultural systems. I seek to answer the questions: How is structure seen as determining the organization of systems, and how is making seen as a process in which the resulting structures of technical objects and the maker are co-determined? How might an understanding of structure and organization be applied to the invention of contemporary experiential media systems?

Date Created
  • 2015