Matching Items (33)

A Call to Action: Embodied Thinking and Human-Computer Interaction Design

Description

This chapter is not a guide to embodied thinking, but rather a critical call to action. It highlights the deep history of embodied practice within the fields of dance and somatics, and outlines the value of embodied thinking within human-computer interaction (HCI) design and, more specifically, wearable technology (WT) design. What this chapter does not do is provide a guide or framework for embodied practice. As a practitioner and scholar grounded in the fields of dance and somatics, I argue that a guide to embodiment cannot be written in a book. To fully understand embodied thinking, one must act, move, and do. Terms such as embodiment and embodied thinking are often discussed and analyzed in writing, but if the purpose is to learn how to engage in embodied thinking, then the answers will not come from a text. The answers come from movement-based exploration, active trial-and-error, and improvisation practices crafted to cultivate physical attunement to one's own body. To this end, my "call to action" is for the reader to move beyond a text-based understanding of embodiment to active engagement in embodied methodologies. Only then, I argue, can one understand how to apply embodied thinking to a design process.

Contributors

Agent

Created

Date Created
  • 2018

The Future of Brain-Computer Interaction: A Potential Brain-Aiding Device of the Future

Description

Brains and computers have been interacting since the invention of the computer. These two entities have worked together to accomplish a monumental set of goals, from landing man on the moon to helping to understand how the universe works at the most microscopic levels, and everything in between. As the years have gone on, the extent and depth of interaction between brains and computers have consistently grown, to the point where computers help brains with their thinking in virtually infinite everyday situations around the world. The first purpose of this research project was to conduct a brief review to gain a sound understanding of how both brains and computers operate at a fundamental level, and what it is about these two entities that allows them to work ever more seamlessly as the years go on. Next, a history of interaction between brains and computers was developed, which expanded upon the first task and helped to contribute to visions of future brain-computer interaction (BCI). The subsequent and primary task of this research project was to develop a theoretical framework for a potential brain-aiding device of the future. This was done by conducting an extensive literature review of the most advanced BCI technology in modern times and expanding upon the findings to argue for the feasibility of the future device and its components. Next, social predictions regarding the acceptance and use of the new technology were made by designing and executing a survey based on the Unified Theory of Acceptance and Use of Technology (UTAUT). Finally, general economic predictions were inferred by examining several relationships between money and computers over time.

Contributors

Agent

Created

Date Created
  • 2017-05

Voice Reconfigurable Networks

Description

The software element of home and small business networking solutions has failed to keep pace with the annual development of newer and faster hardware. The software running on these devices is an afterthought, oftentimes equipped with minimal features, an obtuse user interface, or both. At the same time, this past year has seen the rise of smart home assistants that represent the next step in human-computer interaction with their advanced use of natural language processing. This project seeks to quell the issues with the former by exploring a possible fusion of a powerful, feature-rich software-defined networking stack and the incredible natural language processing tools of smart home assistants. To accomplish these ends, a piece of software was developed to leverage the powerful natural language processing capabilities of one such smart home assistant, the Amazon Echo. On one end, this software interacts with Amazon Web Services to retrieve information about a user's speech patterns and key information contained in their speech. On the other end, the software joins that information with its previous session state to intelligently translate speech into a series of commands for the separate components of a networking stack. The software developed for this project empowers a user to quickly make changes to several facets of their networking gear or acquire information about it with just their language — no terminals, Java applets, or web configuration interfaces needed, thus circumventing clunky UIs or jumping from shell to shell. It is the author's hope that showing how networking equipment can be configured in this innovative way will draw more attention to the current failings of networking equipment and inspire a new series of intuitive user interfaces.

Contributors

Agent

Created

Date Created
  • 2016-12

Mobile User Interface for Palliative Care Patients

Description

Palliative care is a field that serves to benefit enormously from the introduction of mobile medical applications. Doctors at the Mayo Clinic intend to address a recurring dilemma, in which palliative care patients visit the emergency room during situations that are not urgent or life-threatening. Doing so unnecessarily drains the hospital’s resources, and it prevents the patient’s physician from applying specialized care that would better suit the patient’s individual needs. This scenario is detrimental to all involved. A mobile medical application seeks to foster doctor-patient communication while simultaneously decreasing the frequency of these excessive E.R. visits. In order to provide a sufficient standard of usefulness and convenience, the design of such a mobile application must be tailored to accommodate the needs of palliative care patients. Palliative care is focused on establishing long-term comfort for people who are often terminally ill, elderly, handicapped, or otherwise severely disadvantaged. Therefore, a UI intended for palliative care patients must be devoted to simplicity and ease of use. The application must also be robust enough that the user feels that they have been provided with enough capabilities. The majority of this paper is dedicated to overhauling an existing palliative care application, the product of a previous honors thesis project, and implementing a user interface that establishes a simple, positive, and advantageous environment. This is accomplished through techniques such as color-coding, optimizing page layout, increasing customization capabilities, and more. Above all else, this user interface is intended to make the patient’s experience satisfying and trouble-free. They should be able to log in, navigate the application’s features with a few taps of their finger, and log out — all without undergoing any frustration or difficulties.

Contributors

Agent

Created

Date Created
  • 2015-12

Input-Elicitation Methods for Crowdsourced Human Computation

Description

Collecting accurate collective decisions via crowdsourcing is challenging due to cognitive biases, varying worker expertise, and varying subjective scales. This work investigates new ways to determine collective decisions by prompting users to provide input in multiple formats. A crowdsourced task is created that aims to determine ground-truth by collecting information in two different ways: rankings and numerical estimates. Results indicate that accurate collective decisions can be achieved with fewer people when ordinal and cardinal information is collected and aggregated together using consensus-based, multimodal models. We also show that presenting users with larger problems produces more valuable ordinal information and is a more efficient way to collect an aggregate ranking. As a result, we suggest that input-elicitation methods be more widely considered for future work in crowdsourcing and incorporated into future platforms to improve accuracy and efficiency.

Contributors

Agent

Created

Date Created
  • 2020-05

Is interactive computation a superset of Turing computation?

Description

Modern computers interact with the external environment in complex ways — for instance, they interact with human users via keyboards, mice, monitors, etc., and with other computers via networking. Existing models of computation — Turing machines, λ-calculus functions, etc. — cannot model these behaviors completely. Some additional conceptual apparatus is required in order to model processes of interactive computation.

Contributors

Created

Date Created
  • 2013-05

INTERFACE DESIGN WITH MULTIPLE DEVICES IN MIND

Description

Over the course of computing history there have been many ways for humans to pass information to computers. These different input types, at first, tended to be used one or two at a time by the users interfacing with computers. As time has progressed toward the present, however, many devices have begun to make use of multiple different input types, and will likely continue to do so. With this happening, users need to be able to interact with single applications through a variety of input types without having to change the design or suffer a loss of functionality. This is important because having only one user interface, UI, across all input types makes it easier for the user to learn and keeps all interactions consistent across the application. Some of the main input types in use today are touch screens, mice, microphones, and keyboards, all seen in Figure 1 below. Current design methods tend to focus on how well users are able to learn and use a computing system. It is good to focus on those aspects, but it is important to address the issues that come along with using different input types, or in this case, multiple input types. UI design for touch screens, mice, microphones, and keyboards each requires satisfying a different set of needs. Due to this trend of single devices being used in many different input configurations, a "fully functional" UI design will need to address the needs of multiple input configurations. This work describes the clashing concerns among the primary input sources for computers and suggests methodologies and techniques for designing a single UI that is reasonable for all of the input configurations.

Contributors

Agent

Created

Date Created
  • 2013-05

Efficient Gestures In Users' Preference, Health, And Natural Inclination For Non-Touch-Based Interface

Description

With the wave of technologies introduced over the past few years, gesture-based human-computer interactions are becoming the new phase in encompassing the creativity and abilities for users to communicate and interact with devices. Because the nature of free-space gestures influences users' preferences and the long-term usability of gesture-driven devices, it is necessary to define low-stress and intuitive gestures for users to interact with gesture recognition systems. To measure stress, a Galvanic Skin Response instrument was used as a primary indicator, which provided evidence of the relationship between stress and intuitive gestures, as well as user preferences towards certain tasks and gestures during performance. Fifteen participants engaged in creating and performing their own gestures for specified tasks that would be required during the use of free-space gesture-driven devices. The tasks include "activation of the display," scroll, page, selection, undo, and "return to main menu." Participants were also asked to repeat their gestures for around ten seconds each, which gave them time and further insight into how appropriate their gestures would be for them and any given task. Surveys were given to the users at different times: one after they had defined their gestures and another after they had repeated their gestures. In the surveys, they ranked their gestures based on comfort, intuition, and ease of communication. From those user-ranked gestures, the highest-ranked gestures were chosen as health-efficient gestures, given that the participants' rankings were based on comfort and intuition.

Contributors

Created

Date Created
  • 2015-05

Mere exposure effect on uncanny feelings toward virtual characters and robots

Description

As technology advances, so does the concern that the humanlike virtual characters and android robots being created today will fall into the uncanny valley. The current study aims to determine whether uncanny feelings toward modern virtual characters and robots can be significantly affected by the mere exposure effect. Previous research shows that mere exposure can increase positive feelings toward novel stimuli (Zajonc, 1968). It is predicted that repeated exposure to virtual characters and robots can cause a significant decrease in uncanny feelings. The current study aimed to show that modern virtual characters and robots possessing uncanny traits would be rated significantly less uncanny after being viewed multiple times.

Contributors

Agent

Created

Date Created
  • 2014

A tool for empathetic user experience design

Description

Research in user experience design indicates that there is a considerable gap between users and designers. Collaborative design and empathetic design methods attempt to build a strong relationship between the two. In participatory design activities, projective 'make tools' are required for users to show their thoughts. This research is designed to apply an empathetic way of using 'make tools' in user experience design for website clients, users, and designers.

A magnetic wireframe tool was used as a 'make tool', and a sample project was defined in order to see how the tool can create empathy among stakeholders. In this study, fourth-year graphic design students at Arizona State University (ASU), USA, participated as users, faculty members had the role of clients, and Forty, Inc., a design firm in the Phoenix area, served as the design team for the study. All three groups cooperated on redesigning the homepage of The Design School in the Herberger Institute for Design and the Arts (HIDA) at ASU.

A method for applying the magnetic tool was designed and used for each group. Results of the users' and clients' activities were shared with the design team, and they designed a final prototype for the wireframe of the sample project. Observation and interviews were done to see how participants worked with the tool. Also, follow-up questionnaires were used in order to evaluate all groups' experiences with the magnetic wireframe. Lastly, as part of the questionnaires, a sentence completion method was used in order to collect the participants' exact thoughts about the magnetic tool.

Observations and results of data analysis in this research show that the tool was a helpful 'make tool' for users and clients. They could talk about their ideas, and designers could learn more about people. The entire series of activities created an empathetic relationship among the stakeholders of the sample project. This method of using 'make tools' in user experience design for websites can be useful for collaborative UX design activities and further research in user experience design with empathy.

Contributors

Agent

Created

Date Created
  • 2014