Matching Items (34)

Input-Elicitation Methods for Crowdsourced Human Computation

Description

Collecting accurate collective decisions via crowdsourcing is challenging due to cognitive biases, varying worker expertise, and varying subjective scales. This work investigates new ways to determine collective decisions by prompting users to provide input in multiple formats. A crowdsourced task is created that aims to determine ground truth by collecting information in two different ways: rankings and numerical estimates. Results indicate that accurate collective decisions can be achieved with fewer people when ordinal and cardinal information is collected and aggregated together using consensus-based, multimodal models. We also show that presenting users with larger problems produces more valuable ordinal information and is a more efficient way to collect an aggregate ranking. As a result, we suggest that input elicitation be more widely considered in future crowdsourcing work and incorporated into future platforms to improve accuracy and efficiency.
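
The consensus-based, multimodal models referenced above are not specified in this abstract, so the following Python sketch is only a toy illustration of the general idea of fusing ordinal and cardinal crowd input: per-worker rankings are combined via a Borda count, numerical estimates via rescaled medians, and the two signals are merged with an assumed equal weighting. The names and the weighting scheme are illustrative assumptions, not the thesis's models.

```python
from statistics import median

def aggregate(rankings, estimates):
    """Toy fusion of ordinal (rankings) and cardinal (numerical estimates) input.

    rankings:  list of per-worker rankings, each a list of item ids, best first
    estimates: dict mapping item id -> list of per-worker numerical estimates
               (larger estimates are assumed to mean "better")
    """
    items = list(estimates)
    n = len(items)

    # Ordinal signal: Borda count (higher means ranked better overall).
    borda = {item: 0 for item in items}
    for ranking in rankings:
        for position, item in enumerate(ranking):
            borda[item] += n - 1 - position

    # Cardinal signal: median worker estimate, rescaled to [0, 1].
    med = {item: median(vals) for item, vals in estimates.items()}
    lo, hi = min(med.values()), max(med.values())
    scaled = {item: (m - lo) / ((hi - lo) or 1) for item, m in med.items()}

    # Equal-weight combination of the two signals (an assumption, not the thesis model).
    max_borda = max(borda.values()) or 1
    score = {item: borda[item] / max_borda + scaled[item] for item in items}
    return sorted(items, key=score.get, reverse=True)
```

For example, aggregate([["a", "b", "c"], ["a", "c", "b"]], {"a": [9, 8], "b": [5, 6], "c": [4, 4]}) returns ["a", "b", "c"].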

Date Created
2020-05

The Future of Brain-Computer Interaction: A Potential Brain-Aiding Device of the Future

Description

Brains and computers have been interacting since the invention of the computer. These two entities have worked together to accomplish a monumental set of goals, from landing man on the moon to helping to understand how the universe works on the most microscopic levels, and everything in between. As the years have gone on, the extent and depth of interaction between brains and computers have consistently widened, to the point where computers help brains with their thinking in countless everyday situations around the world. The first purpose of this research project was to conduct a brief review to gain a sound understanding of how both brains and computers operate at fundamental levels, and what it is about these two entities that allows them to work ever more seamlessly as the years go on. Next, a history of interaction between brains and computers was developed, which expanded upon the first task and helped to contribute to visions of future brain-computer interaction (BCI). The subsequent and primary task of this research project was to develop a theoretical framework for a potential brain-aiding device of the future. This was done by conducting an extensive literature review regarding the most advanced BCI technology in modern times and expanding upon the findings to argue the feasibility of the future device and its components. Next, social predictions regarding the acceptance and use of the new technology were made by designing and executing a survey based on the Unified Theory of Acceptance and Use of Technology (UTAUT). Finally, general economic predictions were inferred by examining several relationships between money and computers over time.

Date Created
2017-05

A Call to Action: Embodied Thinking and Human-Computer Interaction Design

Description

This chapter is not a guide to embodied thinking, but rather a critical call to action. It highlights the deep history of embodied practice within the fields of dance and somatics, and outlines the value of embodied thinking within human-computer interaction (HCI) design and, more specifically, wearable technology (WT) design. What this chapter does not do is provide a guide or framework for embodied practice. As a practitioner and scholar grounded in the fields of dance and somatics, I argue that a guide to embodiment cannot be written in a book. To fully understand embodied thinking, one must act, move, and do. Terms such as embodiment and embodied thinking are often discussed and analyzed in writing; but if the purpose is to learn how to engage in embodied thinking, then the answers will not come from a text. The answers come from movement-based exploration, active trial-and-error, and improvisation practices crafted to cultivate physical attunement to one's own body. To this end, my "call to action" is for the reader to move beyond a text-based understanding of embodiment to active engagement in embodied methodologies. Only then, I argue, can one understand how to apply embodied thinking to a design process.

Date Created
2018

Mobile User Interface for Palliative Care Patients

Description

Palliative care is a field that serves to benefit enormously from the introduction of mobile medical applications. Doctors at the Mayo Clinic intend to address a recurring dilemma, in which palliative care patients visit the emergency room during situations that are not urgent or life-threatening. Doing so unnecessarily drains the hospital’s resources, and it prevents the patient’s physician from applying specialized care that would better suit the patient’s individual needs. This scenario is detrimental to all involved. A mobile medical application seeks to foster doctor-patient communication while simultaneously decreasing the frequency of these excessive E.R. visits. In order to provide a sufficient standard of usefulness and convenience, the design of such a mobile application must be tailored to accommodate the needs of palliative care patients. Palliative care is focused on establishing long-term comfort for people who are often terminally ill, elderly, handicapped, or otherwise severely disadvantaged. Therefore, a UI intended for palliative care patients must be devoted to simplicity and ease of use. The application must also be robust enough that users feel they have been provided with sufficient capabilities. The majority of this paper is dedicated to overhauling an existing palliative care application, the product of a previous honors thesis project, and implementing a user interface that establishes a simple, positive, and advantageous environment. This is accomplished through techniques such as color-coding, optimizing page layout, increasing customization capabilities, and more. Above all else, this user interface is intended to make the patient’s experience satisfying and trouble-free. Patients should be able to log in, navigate the application’s features with a few taps of their finger, and log out, all without undergoing any frustration or difficulties.

Date Created
2015-12

Efficient Gestures In Users' Preference, Health, And Natural Inclination For Non-Touch-Based Interface

Description

Among the many technologies introduced over the past few years, gesture-based human-computer interaction is becoming a new phase in how users creatively communicate and interact with devices. Because the way free-space gestures are defined influences users' preferences and how long gesture-driven devices remain usable, it is necessary to consider low-stress, intuitive gestures for interacting with gesture recognition systems. To measure stress, a Galvanic Skin Response instrument was used as a primary indicator, which provided evidence of the relationship between stress and intuitive gestures, as well as user preferences toward certain tasks and gestures during performance. Fifteen participants created and performed their own gestures for tasks that would be required during the use of free-space gesture-driven devices. The tasks included "activation of the display," scroll, page, selection, undo, and "return to main menu." Participants were also asked to repeat each gesture for around ten seconds, giving them time and further insight into how appropriate the gesture was for them and for the given task. Surveys were given at two different times: one after participants had defined their gestures and another after they had repeated them. In the surveys, participants ranked their gestures based on comfort, intuition, and ease of communication. From those rankings, the highest-ranked gestures, judged on comfort and intuition, were chosen as the most health-efficient.

Date Created
2015-05

INTERFACE DESIGN WITH MULTIPLE DEVICES IN MIND

Description

Over the course of computing history there have been many ways for humans to pass information to computers. At first, these different input types tended to be used one or two at a time by users interfacing with computers. As time has progressed, however, many devices have begun to make use of multiple different input types, and will likely continue to do so. With this happening, users need to be able to interact with single applications through a variety of input types without having to change the design or suffer a loss of functionality. This is important because having only one user interface (UI) across all input types makes it easier for the user to learn and keeps all interactions consistent across the application. Some of the main input types in use today are touch screens, mice, microphones, and keyboards, all seen in Figure 1 below. Current design methods tend to focus on how well users are able to learn and use a computing system. It is good to focus on those aspects, but it is also important to address the issues that come with using different input types, or in this case, multiple input types. UI design for touch screens, mice, microphones, and keyboards each requires satisfying a different set of needs. Given this trend toward single devices being used in many different input configurations, a "fully functional" UI design will need to address the needs of multiple input configurations. In this work, clashing concerns are described for the primary input sources for computers, and methodologies and techniques are suggested for designing a single UI that is reasonable for all of the input configurations.

Date Created
2013-05

We built this town: raising activity awareness through the workplace using gamification

Description

The wide adoption and continued advancement of information and communications technologies (ICT) have made it easier than ever for individuals and groups to stay connected over long distances. These advances have greatly contributed to dramatically changing the dynamics of the modern-day workplace, to the point where it is now commonplace to see large, distributed multidisciplinary teams working together on a daily basis. However, in this environment, motivating, understanding, and valuing the diverse contributions of individual workers in collaborative enterprises becomes challenging. To address these issues, this thesis presents the goals, design, and implementation of Taskville, a distributed workplace game played by teams on large, public displays. Taskville uses a city-building metaphor to represent the completion of individual and group tasks within an organization. Promising results from two usability studies and two longitudinal studies at a multidisciplinary school demonstrate that Taskville supports personal reflection and improves team awareness through an engaging workplace activity.

Date Created
2013

GALLAG strip: a mobile, programming with demonstration environment for sensor-based context-aware application programming

Description

The Game As Life - Life As Game (GALLAG) project investigates how people might change their lives if they think of and/or experience their life as a game. The GALLAG system aims to help people reach their personal goals through the use of context-aware computing and tailored games and applications. To accomplish this, the GALLAG system uses a combination of sensing technologies, remote audio/video feedback, mobile devices, and an application programming interface (API) to empower users to create their own context-aware applications. However, the API requires programming through source code, a task that is too complicated and abstract for many users. This thesis presents GALLAG Strip, a novel approach to programming sensor-based context-aware applications that combines the Programming With Demonstration technique and a mobile device to enable users to experience their applications as they program them. GALLAG Strip lets users create sensor-based context-aware applications in an intuitive and appealing way without the need for computer programming skills; instead, they program their applications by physically demonstrating their envisioned interactions within a space using the same interface that they will later use to interact with the system, that is, using GALLAG-compatible sensors and mobile devices. GALLAG Strip was evaluated through a study with end users in a real-world setting, measuring their ability to program simple and complex applications accurately and in a timely manner. The evaluation also included a benchmark in which expert GALLAG system programmers created the same applications. Data and feedback collected from the study show that GALLAG Strip successfully allows users to create sensor-based context-aware applications easily and accurately, without the prior programming skills currently required by the GALLAG system, and enables them to create almost all of their envisioned applications.
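
GALLAG's actual API is not shown in the abstract, so purely as a hypothetical illustration of the programming-with-demonstration idea, the Python sketch below turns a recorded sequence of sensor events (the user's physical demonstration) into a simple trigger-action rule that fires feedback when the same sequence is later observed. The SensorEvent type, sensor names, and rule_from_demonstration function are invented for this example and are not part of GALLAG.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SensorEvent:
    sensor_id: str   # e.g. a contact sensor on a cupboard door (hypothetical id)
    state: str       # e.g. "opened" or "closed"

def rule_from_demonstration(demonstrated_events, feedback):
    """Turn a demonstrated event sequence into a simple trigger-action rule.

    demonstrated_events: SensorEvents recorded while the user acted out the
                         envisioned interaction in the space
    feedback:            a zero-argument callable, e.g. playing an audio clip
    Returns a handler that consumes live events and fires the feedback once
    the demonstrated sequence has been observed in order.
    """
    progress = 0

    def on_event(event):
        nonlocal progress
        if event == demonstrated_events[progress]:
            progress += 1
            if progress == len(demonstrated_events):
                feedback()
                progress = 0
        else:
            progress = 0

    return on_event
```

A rule built from two demonstrated events, for instance, would invoke its feedback only after both events have been seen in the demonstrated order.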

Date Created
2012

Is interactive computation a superset of Turing computation?

Description

Modern computers interact with the external environment in complex ways — for instance, they interact with human users via keyboards, mice, monitors, etc., and with other computers via networking. Existing models of computation — Turing machines, λ-calculus functions, etc. — cannot model these behaviors completely. Some additional conceptual apparatus is required in order to model processes of interactive computation.

Date Created
2013-05

Equating user experience and Fitts law in gesture based input modalities

Description

The International Standards Organization (ISO) documentation utilizes Fitts’ law to determine the usability of traditional input devices like mice and touchscreens for one- or two-dimensional operations. To test the hypothesis that Fitts’ law can be applied to hand/air gesture-based computing inputs, Fitts’ multi-directional target acquisition task is applied to three gesture-based input devices that utilize different technologies and two baseline devices, a mouse and a touchscreen. Three target distances and three target sizes were each tested six times in randomized order, with the order of the five input technologies also randomized. A total of 81 participants’ data were collected for the within-subjects design study. Participants were instructed to perform the task as quickly and accurately as possible according to traditional Fitts’ testing procedures. Movement time, error rate, and throughput for each input technology were calculated.
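
The abstract does not include the study's analysis code, so the following is only a minimal Python sketch of how ISO 9241-9-style effective throughput is commonly computed from such trials; the function and argument names are illustrative, and 4.133 is the standard effective-width constant used in Fitts'-law studies.

```python
import math
import statistics

def effective_throughput(distances, movement_times, endpoint_deviations):
    """Illustrative effective throughput for one participant and input device.

    distances:           nominal target distance D for each trial
    movement_times:      measured movement time for each trial, in seconds
    endpoint_deviations: signed deviation of each selection endpoint from the
                         target centre, in the same units as the distances
    """
    # Effective target width from the spread of selection endpoints (We = 4.133 * SD).
    effective_width = 4.133 * statistics.stdev(endpoint_deviations)
    throughputs = []
    for distance, mt in zip(distances, movement_times):
        index_of_difficulty = math.log2(distance / effective_width + 1)  # bits
        throughputs.append(index_of_difficulty / mt)                     # bits per second
    return statistics.mean(throughputs)
```

Error rate in such studies is then simply the fraction of trials in which the selection landed outside the target.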

Additionally, no standards exist for equating user experience with Fitts’ measures such as movement time, throughput, and error count. To test the hypothesis that a user’s experience can be predicted using Fitts’ measures of movement time, throughput and error count, an ease of use rating using a 5-point scale for each input type was collected from each participant. The calculated Mean Opinion Scores (MOS) were regressed on Fitts’ measures of movement time, throughput, and error count to understand the extent to which they can predict a user’s subjective rating.
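
The regression procedure itself is not detailed in the abstract; as a hedged illustration only, a plain ordinary-least-squares fit of the mean opinion scores on the three Fitts' measures could look like the following Python sketch (numpy assumed, names are placeholders).

```python
import numpy as np

def fit_mos_model(movement_time, throughput, error_count, mos):
    """Ordinary least squares fit of MOS on movement time, throughput, and error count.

    Each argument is a sequence with one value per participant-device condition.
    Returns the intercept followed by the three slope coefficients.
    """
    X = np.column_stack([np.ones(len(mos)), movement_time, throughput, error_count])
    coefficients, *_ = np.linalg.lstsq(X, np.asarray(mos, dtype=float), rcond=None)
    return coefficients
```

The fitted coefficients then indicate how strongly each Fitts' measure is associated with the subjective ease-of-use ratings.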

Date Created
2015