Matching Items (8)

Description
Among the many technologies introduced over the past several years, gesture-based human-computer interaction has become a new avenue for users to communicate and interact with devices creatively. Because the way free-space gestures are defined influences both user preference and how long a gesture-driven device remains comfortable to use, it is necessary to define low-stress, intuitive gestures for interacting with gesture recognition systems. To measure stress, a Galvanic Skin Response instrument was used as the primary indicator; it provided evidence of the relationship between stress and gesture intuitiveness, as well as of user preferences toward certain tasks and gestures during performance. Fifteen participants created and performed their own gestures for tasks that would be required when using free-space gesture-driven devices: "activation of the display," scroll, page, selection, undo, and "return to main menu." They were also asked to repeat each gesture for roughly ten seconds, giving them time and further insight into whether the gesture suited them and the given task. Surveys were administered at two points: one after participants had defined their gestures and another after they had repeated them. In the surveys, participants ranked their gestures by comfort, intuitiveness, and ease of communication. From these rankings, which were grounded in comfort and intuition, the highest-ranked gestures were selected as the most health-efficient ones.
ContributorsLam, Christine (Author) / Walker, Erin (Thesis director) / Danielescu, Andreea (Committee member) / Barrett, The Honors College (Contributor) / Ira A. Fulton School of Engineering (Contributor) / School of Arts, Media and Engineering (Contributor) / Department of English (Contributor) / Computing and Informatics Program (Contributor)
Created2015-05
Description
Over the course of computing history there have been many ways for humans to pass information to computers. At first, these different input types tended to be used one or two at a time by users interfacing with computers. As time has progressed, however, many devices have begun to make use of multiple input types, and will likely continue to do so. With this happening, users need to be able to interact with a single application in a variety of ways without having to change the design or suffer a loss of functionality. This matters because having only one user interface (UI) across all input types makes the application easier to learn and keeps all interactions consistent. Some of the main input types in use today are touch screens, mice, microphones, and keyboards. Current design methods tend to focus on how well users are able to learn and use a computing system. Those aspects are worth focusing on, but it is also important to address the issues that come with using different input types, or in this case, multiple input types. UI design for touch screens, mice, microphones, and keyboards each requires satisfying a different set of needs. Because single devices are increasingly used in many different input configurations, a "fully functional" UI design will need to address the needs of multiple input configurations. This work describes the clashing concerns of the primary input sources for computers and suggests methodologies and techniques for designing a single UI that is reasonable for all of the input configurations.
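A minimal sketch of one way such a single UI can stay consistent across input types (this is an illustrative assumption, not the design from the thesis): every device-specific event is normalized into the same application-level command before the interface reacts, so touch, mouse, and voice all reach one handler. The class and command names below are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Command:
    """Device-independent action the UI understands."""
    name: str      # e.g. "select", "scroll"
    payload: dict  # extra detail such as coordinates or spoken text

class InputRouter:
    """Maps raw events from any input device onto shared UI commands."""

    def __init__(self) -> None:
        self._handlers: Dict[str, Callable[[Command], None]] = {}

    def on(self, command_name: str, handler: Callable[[Command], None]) -> None:
        self._handlers[command_name] = handler

    def dispatch(self, command: Command) -> None:
        handler = self._handlers.get(command.name)
        if handler:
            handler(command)

    # Each adapter translates a device-specific event into the same Command.
    def from_mouse_click(self, x: int, y: int) -> None:
        self.dispatch(Command("select", {"x": x, "y": y}))

    def from_touch_tap(self, x: int, y: int) -> None:
        self.dispatch(Command("select", {"x": x, "y": y}))

    def from_voice(self, utterance: str) -> None:
        if "select" in utterance.lower():
            self.dispatch(Command("select", {"spoken": utterance}))

router = InputRouter()
router.on("select", lambda cmd: print(f"selected via {cmd.payload}"))
router.from_mouse_click(120, 48)             # mouse and touch converge on one handler
router.from_voice("select the first item")   # voice reaches the same handler
```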
ContributorsJohnson, David Bradley (Author) / Calliss, Debra (Thesis director) / Wilkerson, Kelly (Committee member) / Walker, Erin (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created2013-05
Description
Brains and computers have been interacting since the invention of the computer. These two entities have worked together to accomplish a monumental set of goals, from landing man on the moon to helping to understand how the universe works on the most microscopic levels, and everything in between. As the years have gone on, the extent and depth of interaction between brains and computers have steadily grown, to the point where computers help brains with their thinking in countless everyday situations around the world. The first purpose of this research project was to conduct a brief review to gain a sound understanding of how both brains and computers operate at a fundamental level, and of what it is about these two entities that allows them to work ever more seamlessly as the years go on. Next, a history of interaction between brains and computers was developed, which expanded upon the first task and helped to contribute to visions of future brain-computer interaction (BCI). The subsequent and primary task of this research project was to develop a theoretical framework for a potential brain-aiding device of the future. This was done by conducting an extensive literature review of the most advanced BCI technology in modern times and expanding upon the findings to argue for the feasibility of the future device and its components. Next, social predictions regarding the acceptance and use of the new technology were made by designing and executing a survey based on the Unified Theory of Acceptance and Use of Technology (UTAUT). Finally, general economic predictions were inferred by examining several relationships between money and computers over time.
ContributorsThum, Giuseppe Edwardo (Author) / Gaffar, Ashraf (Thesis director) / Gonzalez-Sanchez, Javier (Committee member) / College of Integrative Sciences and Arts (Contributor) / Barrett, The Honors College (Contributor)
Created2017-05
Description
The software element of home and small business networking solutions has failed to keep pace with the annual development of newer and faster hardware. The software running on these devices is an afterthought, oftentimes equipped with minimal features, an obtuse user interface, or both. At the same time, this past year has seen the rise of smart home assistants that represent the next step in human-computer interaction with their advanced use of natural language processing. This project seeks to quell the issues with the former by exploring a possible fusion of a powerful, feature-rich software-defined networking stack and the incredible natural language processing tools of smart home assistants. To accomplish these ends, a piece of software was developed to leverage the powerful natural language processing capabilities of one such smart home assistant, the Amazon Echo. On one end, this software interacts with Amazon Web Services to retrieve information about a user's speech patterns and the key information contained in their speech. On the other end, the software joins that information with its previous session state to intelligently translate speech into a series of commands for the separate components of a networking stack. The software developed for this project empowers a user to quickly make changes to several facets of their networking gear, or to acquire information about it, with just their language: no terminals, Java applets, or web configuration interfaces are needed, which circumvents clunky UIs and jumping from shell to shell. It is the author's hope that showing how networking equipment can be configured in this innovative way will draw more attention to the current failings of networking equipment and inspire a new series of intuitive user interfaces.
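A minimal sketch of the translation step described above (not the author's actual code): once the assistant has resolved speech into an intent with slots, the remaining work is to combine that intent with session state and emit a command for the networking stack. The intent names, slot names, and command strings here are assumptions for illustration only.

```python
# Hypothetical intent-to-command translation; all names are illustrative.
SESSION_STATE = {"selected_interface": "eth0"}  # carried over from earlier requests

def translate_intent(intent_name: str, slots: dict, state: dict) -> str:
    """Turn a resolved voice intent into a command for the networking stack."""
    if intent_name == "BlockDeviceIntent":
        return f"firewall deny from {slots['device_ip']}"
    if intent_name == "SetBandwidthLimitIntent":
        # Fall back to session state when the user did not name an interface.
        iface = slots.get("interface", state["selected_interface"])
        return f"qos limit {iface} {slots['mbps']}mbit"
    if intent_name == "ShowStatusIntent":
        return "status summary"
    raise ValueError(f"unrecognized intent: {intent_name}")

# Example: "limit the guest network to 5 megabits"
print(translate_intent("SetBandwidthLimitIntent",
                       {"interface": "guest0", "mbps": 5},
                       SESSION_STATE))
```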
ContributorsHermens, Ryan Joseph (Author) / Meuth, Ryan (Thesis director) / Burger, Kevin (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2016-12
Description
Modern computers interact with the external environment in complex ways — for instance, they interact with human users via keyboards, mice, monitors, etc., and with other computers via networking. Existing models of computation — Turing machines, λ-calculus functions, etc. — cannot model these behaviors completely. Some additional conceptual apparatus is required in order to model processes of interactive computation.
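As a rough illustration of the distinction (not drawn from the thesis itself): a classical function computes one fixed mapping from input to output, whereas an interactive process maintains state across an open-ended exchange with its environment, so no single input-output pair captures its behavior.

```python
def classical(x: int) -> int:
    # A fixed input-to-output mapping: once x is given, the result is determined.
    return x * x

def interactive_process():
    # An ongoing dialogue: each response depends on everything seen so far,
    # and the exchange has no predetermined end.
    history = []
    response = "ready"
    while True:
        stimulus = yield response
        history.append(stimulus)
        response = f"{len(history)} inputs so far, last was {stimulus!r}"

proc = interactive_process()
print(next(proc))               # "ready"
print(proc.send("key press"))   # depends on the whole history, not one argument
print(proc.send("mouse move"))
```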
ContributorsThomas, Nicholas Woodlief (Author) / Armendt, Brad (Thesis director) / Kobes, Bernard (Committee member) / Blackson, Thomas (Committee member) / Barrett, The Honors College (Contributor) / School of Historical, Philosophical and Religious Studies (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Psychology (Contributor)
Created2013-05
Description
Palliative care is a field that stands to benefit enormously from the introduction of mobile medical applications. Doctors at the Mayo Clinic intend to address a recurring dilemma in which palliative care patients visit the emergency room during situations that are not urgent or life-threatening. Doing so unnecessarily drains the hospital's resources, and it prevents the patient's physician from applying specialized care that would better suit the patient's individual needs. This scenario is detrimental to all involved. A mobile medical application seeks to foster doctor-patient communication while simultaneously decreasing the frequency of these excessive E.R. visits. In order to provide a sufficient standard of usefulness and convenience, the design of such a mobile application must be tailored to the needs of palliative care patients. Palliative care is focused on establishing long-term comfort for people who are often terminally ill, elderly, handicapped, or otherwise severely disadvantaged. Therefore, a UI intended for palliative care patients must be devoted to simplicity and ease of use. The application must also be robust enough that users feel they have been provided with sufficient capabilities. The majority of this paper is dedicated to overhauling an existing palliative care application, the product of a previous honors thesis project, and implementing a user interface that establishes a simple, positive, and advantageous environment. This is accomplished through techniques such as color-coding, optimizing page layout, increasing customization capabilities, and more. Above all else, this user interface is intended to make the patient's experience satisfying and trouble-free. Patients should be able to log in, navigate the application's features with a few taps of their finger, and log out, all without undergoing any frustration or difficulties.
ContributorsWilkes, Jarrett Matthew (Co-author) / Ganey, David (Co-author) / Dao, Lelan (Co-author) / Balasooriya, Janaka (Thesis director) / Faucon, Christophe (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2015-12
Description
Collecting accurate collective decisions via crowdsourcing is challenging due to cognitive biases, varying worker expertise, and varying subjective scales. This work investigates new ways to determine collective decisions by prompting users to provide input in multiple formats. A crowdsourced task is created that aims to determine ground truth by collecting information in two different ways: rankings and numerical estimates. Results indicate that accurate collective decisions can be achieved with fewer people when ordinal and cardinal information is collected and aggregated together using consensus-based, multimodal models. We also show that presenting users with larger problems produces more valuable ordinal information and is a more efficient way to collect an aggregate ranking. As a result, we suggest that this form of input elicitation be more widely considered in future crowdsourcing work and incorporated into future platforms to improve accuracy and efficiency.
ContributorsKemmer, Ryan Wyeth (Author) / Escobedo, Adolfo (Thesis director) / Maciejewski, Ross (Committee member) / Computing and Informatics Program (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2020-05
Description

Engaging users is essential for the designers of any exhibit, whether through the human-computer interface, the visual effects, or the informational content. The need to understand users' experiences and learning gains has motivated a focus on user engagement across computer science. However, there has been limited review of how human-computer interaction research interprets and employs these concepts in museum and exhibit settings, and specifically of their joint effects. The purpose of this study is to assess users' experience and learning outcomes while interacting with a web application that is part of an exhibit showcasing the NASA Psyche spacecraft model. The web application provides an interactive menu on the touch panel installed within the Psyche Spacecraft Exhibit; pressing a menu button lights up the corresponding parts of the model and displays a detailed description on the panel. For this study, participants were required to take a questionnaire, a pretest, and a posttest. They were also required to interact with the web application while wearing an Emotiv EPOC+ EEG headset that measured their emotions as they visited the exhibit. During the study, questionnaire results, emotions sensed by the EEG headset, and pretest and posttest scores were collected. Using the information gathered, the study explores user experience and learning gains through both biometrics and traditional tools. The findings show that engagement and frustration were the emotions users felt most, and that users gained knowledge from the interaction, though to varying degrees. Future work can lower the levels of frustration and keep learning gains more consistent by improving the exhibit design to better meet various learning needs and visitor profiles.

ContributorsMa, Yumeng (Author) / Chavez-Echeagaray, Maria Elena (Thesis director) / Gonzalez Sanchez, Javier (Committee member) / Barrett, The Honors College (Contributor) / Department of Psychology (Contributor) / Computer Science and Engineering Program (Contributor)
Created2022-05