Matching Items (10)

Description
The Game As Life - Life As Game (GALLAG) project investigates how people might change their lives if they think of and/or experience their life as a game. The GALLAG system aims to help people reach their personal goals through the use of context-aware computing and tailored games and applications. To accomplish this, the GALLAG system uses a combination of sensing technologies, remote audio/video feedback, mobile devices, and an application programming interface (API) to empower users to create their own context-aware applications. However, the API requires programming through source code, a task that is too complicated and abstract for many users. This thesis presents GALLAG Strip, a novel approach to programming sensor-based context-aware applications that combines the Programming With Demonstration technique and a mobile device to enable users to experience their applications as they program them. GALLAG Strip lets users create sensor-based context-aware applications in an intuitive and appealing way without the need for computer programming skills; instead, they program their applications by physically demonstrating their envisioned interactions within a space, using the same interface that they will later use to interact with the system, that is, GALLAG-compatible sensors and mobile devices. GALLAG Strip was evaluated through a study with end users in a real-world setting, measuring their ability to program simple and complex applications accurately and in a timely manner. The evaluation also includes a benchmark against expert GALLAG system programmers creating the same applications. Data and feedback collected from the study show that GALLAG Strip allows users to create sensor-based context-aware applications easily and accurately, without the prior programming skills currently required by the GALLAG system, and enables them to create almost all of their envisioned applications.
Contributors: Garduno Massieu, Luis (Author) / Burleson, Winslow (Thesis advisor) / Hekler, Eric (Committee member) / Gupta, Sandeep (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Technology in the modern day has ensured that the learning of skills and behavior may be both widely disseminated and cheaply available. An example of this is virtual reality (VR) training. VR training ensures that learning can be provided often, in a safe simulated setting, and it may be delivered in a manner that makes it engaging while negating the need to purchase special equipment. This thesis presents a case study in the form of a time-critical, team-based medical scenario known as Advanced Cardiac Life Support (ACLS). A framework and methodology associated with the design of a VR trainer for ACLS is detailed. In addition, in order to potentially provide an engaging experience, the simulator was designed to incorporate immersive elements and a multimodal interface (haptic, visual, and auditory). A study was conducted to test two primary hypotheses: that a meaningful transfer of skill is achieved from VR training to real-world mock codes, and that the presence of immersive components in VR leads to an increase in the performance gained. The participant pool consisted of 54 clinicians divided into 9 teams of 6 members each. The teams were categorized into three treatment groups: immersive VR (3 teams), minimally immersive VR (3 teams), and control (3 teams). The study was conducted in 4 phases, from a real-world mock-code pretest to assess baselines, through a 30-minute VR training session, to a final mock code to assess the performance change from the baseline. The minimally immersive team was treated as control for the immersive components. The teams were graded, in both the VR and mock-code sessions, using the evaluation metric used in real-world mock codes. The study revealed that the immersive VR groups saw a greater performance gain from pretest to posttest than the minimally immersive and control groups in the VFib/VTach scenario (~20% vs. ~5%). The immersive VR groups also had a greater performance gain than the minimally immersive groups from the first to the final session for VFib/VTach (29% vs. -13%) and PEA (27% vs. 15%).
Contributors: Vankipuram, Akshay (Author) / Li, Baoxin (Thesis advisor) / Burleson, Winslow (Committee member) / Kahol, Kanav (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Over the course of computing history there have been many ways for humans to pass information to computers. These different input types, at first, tended to be used one or two at a time by users interfacing with computers. As time has progressed, however, many devices have begun to make use of multiple different input types, and will likely continue to do so. With this happening, users need to be able to interact with single applications through a variety of input types without having to change the design or suffer a loss of functionality. This is important because having only one user interface (UI) across all input types makes it easier for the user to learn and keeps all interactions consistent across the application. Some of the main input types in use today are touch screens, mice, microphones, and keyboards, all seen in Figure 1 below. Current design methods tend to focus on how well users are able to learn and use a computing system. It is good to focus on those aspects, but it is also important to address the issues that come along with using different input types, or in this case, multiple input types. UI design for touch screens, mice, microphones, and keyboards each requires satisfying a different set of needs. Due to this trend of single devices being used in many different input configurations, a "fully functional" UI design will need to address the needs of multiple input configurations. This work describes the clashing concerns among the primary input sources for computers and suggests methodologies and techniques for designing a single UI that is reasonable for all of the input configurations.
Contributors: Johnson, David Bradley (Author) / Calliss, Debra (Thesis director) / Wilkerson, Kelly (Committee member) / Walker, Erin (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2013-05
Description
Collecting accurate collective decisions via crowdsourcing is challenging due to cognitive biases, varying worker expertise, and varying subjective scales. This work investigates new ways to determine collective decisions by prompting users to provide input in multiple formats. A crowdsourced task is created that aims to determine ground truth by collecting information in two different ways: rankings and numerical estimates. Results indicate that accurate collective decisions can be achieved with fewer people when ordinal and cardinal information is collected and aggregated together using consensus-based, multimodal models. We also show that presenting users with larger problems produces more valuable ordinal information and is a more efficient way to collect an aggregate ranking. As a result, we suggest that this form of input elicitation be more widely considered for future work in crowdsourcing and incorporated into future platforms to improve accuracy and efficiency.
Contributors: Kemmer, Ryan Wyeth (Author) / Escobedo, Adolfo (Thesis director) / Maciejewski, Ross (Committee member) / Computing and Informatics Program (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
The software element of home and small-business networking solutions has failed to keep pace with the annual development of newer and faster hardware. The software running on these devices is an afterthought, oftentimes equipped with minimal features, an obtuse user interface, or both. At the same time, this past year has seen the rise of smart home assistants that represent the next step in human-computer interaction with their advanced use of natural language processing. This project seeks to quell the issues with the former by exploring a possible fusion of a powerful, feature-rich software-defined networking stack and the incredible natural language processing tools of smart home assistants. To accomplish these ends, a piece of software was developed to leverage the powerful natural language processing capabilities of one such smart home assistant, the Amazon Echo. On one end, this software interacts with Amazon Web Services to retrieve information about a user's speech patterns and key information contained in their speech. On the other end, the software joins that information with its previous session state to intelligently translate speech into a series of commands for the separate components of a networking stack. The software developed for this project empowers a user to quickly make changes to several facets of their networking gear, or acquire information about it, with just their language: no terminals, Java applets, or web configuration interfaces needed, thus circumventing clunky UIs or jumping from shell to shell. It is the author's hope that showing how networking equipment can be configured in this innovative way will draw more attention to the current failings of networking equipment and inspire a new series of intuitive user interfaces.
Contributors: Hermens, Ryan Joseph (Author) / Meuth, Ryan (Thesis director) / Burger, Kevin (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-12
Description
Engaging users is essential for the designers of any exhibit, whether through the human-computer interface, the visual effects, or the informational content. The need to understand users’ experiences and learning gains has motivated a focus on user engagement across computer science. However, there has been limited review of how human-computer interaction research interprets and employs these concepts in museum and exhibit settings, specifically their joint effects. The purpose of this study is to assess users’ experience and learning outcomes while interacting with a web application that is part of an exhibit showcasing the NASA Psyche spacecraft model. This web application provides an interactive menu that allows the user to navigate on the touch panel installed within the Psyche Spacecraft Exhibit. The user can press a button on the menu, which lights up the corresponding parts of the model, with a detailed description displayed on the panel. For this study, participants were required to take a questionnaire, a pretest, and a posttest. They were also required to interact with the web application while wearing an Emotiv EPOC+ EEG headset that measured their emotions while they were visiting the exhibit. During the study, data such as questionnaire results, sensed emotions from the EEG headset, and pretest and posttest scores were collected. Using the information gathered, the study explores user experience and learning gains through both biometrics and traditional tools. The findings show that users felt engaged and frustrated the most, and that users gained knowledge, though at varying degrees, from the interaction. Future work can be done to lower the levels of frustration and keep learning gains at a more consistent rate by improving the exhibit design to better meet various learning needs and visitor profiles.

Contributors: Ma, Yumeng (Author) / Chavez-Echeagaray, Maria Elena (Thesis director) / Gonzalez Sanchez, Javier (Committee member) / Barrett, The Honors College (Contributor) / Department of Psychology (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2022-05
Description
Research has shown that learning processes can be enriched and enhanced by the presence of affective interventions. The goal of this dissertation was to design, implement, and evaluate an affective agent that provides affective support in real time in order to enrich the student’s learning experience and performance by inducing and/or maintaining a productive learning path. This work combined research and best practices from affective computing, intelligent tutoring systems, and educational technology to address the design and implementation of an affective agent and corresponding pedagogical interventions. It included the incorporation of the affective agent into an Exploratory Learning Environment (ELE) adapted for this research.

A gendered, three-dimensional, animated, human-like character accompanied by text- and speech-based dialogue visually represented the proposed affective agent. The agent’s pedagogical interventions considered inputs from the ELE (interface, model building, and performance events) and from the user (emotional and cognitive events). The user’s emotional events captured by biometric sensors and processed by a decision-level fusion algorithm for a multimodal system in combination with the events from the ELE informed the production-rule-based behavior engine to define and trigger pedagogical interventions. The pedagogical interventions were focused on affective dimensions and occurred in the form of affective dialogue prompts and animations.

An experiment was conducted to assess the impact of the affective agent, Hope, on the student’s learning experience and performance. In terms of the student’s learning experience, the effect of the agent was analyzed in four components: perception of the instructional material, perception of the usefulness of the agent, ELE usability, and the affective responses from the agent triggered by the student’s affective states.

Additionally, in terms of the student’s performance, the effect of the agent was analyzed in five components: tasks completed, time spent solving a task, planning time while solving a task, usage of the provided help, and attempts to successfully complete a task. The findings from the experiment did not provide the anticipated results related to the effect of the agent; however, the results provided insights to improve diverse components in the design of affective agents as well as for the design of the behavior engines and algorithms to detect, represent, and handle affective information.
Contributors: Chavez Echeagaray, Maria Elena (Author) / Atkinson, Robert K (Thesis advisor) / Burleson, Winslow (Thesis advisor) / Graesser, Arthur C. (Committee member) / VanLehn, Kurt (Committee member) / Walker, Erin A (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
An old proverb claims that “two heads are better than one”. Crowdsourcing research and practice have taken this to heart, attempting to show that thousands of heads can be even better. This is not limited to leveraging a crowd’s knowledge, but also their creativity—the ability to generate something not only useful, but also novel. In practice, there are initiatives such as Free and Open Source Software communities developing innovative software. In research, the field of crowdsourced creativity, which attempts to design scalable support mechanisms, is blooming. However, both contexts still present many opportunities for advancement.

In this dissertation, I seek to advance both knowledge of the limitations in current technologies used in practice and the mechanisms that can be used for large-scale support. The overall research question I explore is: “How can we support large-scale creative collaboration in distributed online communities?” I first advance existing support techniques by evaluating the impact of active support on brainstorming performance. Furthermore, I leverage existing theoretical models of individual idea generation as well as recommender system techniques to design CrowdMuse, a novel adaptive large-scale idea generation system. CrowdMuse models users in order to adapt itself to each individual. I evaluate the system’s efficacy through two large-scale studies. I also advance knowledge of current large-scale practices by examining common communication channels under the lens of Creativity Support Tools, yielding a list of creativity bottlenecks brought about by the affordances of these channels. Finally, I connect both ends of this dissertation by deploying CrowdMuse in an Open Source online community for two weeks. I evaluate the community’s usage of the system as well as its perceived benefits and issues compared to traditional communication tools.

This dissertation makes the following contributions to the field of large-scale creativity: 1) the design and evaluation of a first-of-its-kind adaptive brainstorming system; 2) the evaluation of the effects of active inspirations compared to simple idea exposure; 3) the development and application of a set of creativity support design heuristics to uncover creativity bottlenecks; and 4) an exploration of large-scale brainstorming systems’ usefulness to online communities.
Contributors: da Silva Girotto, Victor Augusto (Author) / Walker, Erin A (Thesis advisor) / Burleson, Winslow (Thesis advisor) / Maciejewski, Ross (Committee member) / Hsiao, Sharon (Committee member) / Bigham, Jeffrey (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Affect signals what humans care about and is involved in rational decision-making and action selection. Many technologies may be improved by the capability to recognize human affect and to respond adaptively by appropriately modifying their operation. This capability, named affect-driven self-adaptation, benefits systems as diverse as learning environments, healthcare applications, and video games, and indeed has the potential to improve systems that interact intimately with users across all sectors of society. The main challenge is that existing approaches to advancing affect-driven self-adaptive systems typically limit their applicability by supporting the creation of one-of-a-kind systems with hard-wired affect recognition and self-adaptation capabilities, which are brittle, costly to change, and difficult to reuse. A solution to this limitation is to leverage the development of affect-driven self-adaptive systems with a manufacturing vision.

This dissertation demonstrates how using a software product line paradigm can jumpstart the development of affect-driven self-adaptive systems with that manufacturing vision. Applying a software product line approach to the affect-driven self-adaptive domain provides a comprehensive, flexible and reusable infrastructure of components with mechanisms to monitor a user’s affect and his/her contextual interaction with a system, to detect opportunities for improvements, to select a course of action, and to effect changes. It also provides a domain-specific architecture and well-documented process guidelines, which facilitate an understanding of the organization of affect-driven self-adaptive systems and their implementation by systematically customizing the infrastructure to effectively address the particular requirements of specific systems.

The software product line approach is evaluated by applying it in the development of learning environments and video games that demonstrate the significant potential of the solution, across diverse development scenarios and applications.

The key contributions of this work include extending self-adaptive system modeling, implementing a reusable infrastructure, and leveraging the use of patterns to exploit the commonalities between systems in the affect-driven self-adaptation domain.
Contributors: Gonzalez-Sanchez, Javier (Author) / Burleson, Winslow (Thesis advisor) / Collofello, James (Thesis advisor) / Garlan, David (Committee member) / Sarjoughian, Hessam S. (Committee member) / Atkinson, Robert (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
Palliative care is a field that stands to benefit enormously from the introduction of mobile medical applications. Doctors at the Mayo Clinic intend to address a recurring dilemma in which palliative care patients visit the emergency room during situations that are not urgent or life-threatening. Doing so unnecessarily drains the hospital’s resources, and it prevents the patient’s physician from applying specialized care that would better suit the patient’s individual needs. This scenario is detrimental to all involved. A mobile medical application seeks to foster doctor-patient communication while simultaneously decreasing the frequency of these excessive E.R. visits. In order to provide a sufficient standard of usefulness and convenience, the design of such a mobile application must be tailored to accommodate the needs of palliative care patients. Palliative care is focused on establishing long-term comfort for people who are often terminally ill, elderly, handicapped, or otherwise severely disadvantaged. Therefore, a UI intended for palliative care patients must be devoted to simplicity and ease of use. The application must also be robust enough that the user feels they have been provided with sufficient capabilities. The majority of this paper is dedicated to overhauling an existing palliative care application, the product of a previous honors thesis project, and implementing a user interface that establishes a simple, positive, and advantageous environment. This is accomplished through techniques such as color-coding, optimizing page layout, increasing customization capabilities, and more. Above all else, this user interface is intended to make the patient’s experience satisfying and trouble-free. They should be able to log in, navigate the application’s features with a few taps of their finger, and log out, all without undergoing any frustration or difficulties.
Contributors: Wilkes, Jarrett Matthew (Co-author) / Ganey, David (Co-author) / Dao, Lelan (Co-author) / Balasooriya, Janaka (Thesis director) / Faucon, Christophe (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2015-12