Matching Items (28)

LudoNarrare: A Model for Verb Based Interactive Storytelling

Description

Instead of providing the illusion of agency to a reader via a tree or network of prewritten, branching paths, an interactive story should treat the reader as a player who has meaningful influence on the story. An interactive story can accomplish this by giving the player a large toolset for expression in the plot. LudoNarrare, an engine for interactive storytelling, puts "verbs" in this toolset. Verbs are contextual choices of action given to agents in a story that result in narrative events. This paper begins with an analysis and statement of the problem of creating interactive stories. From there, various attempts to solve this problem, ranging from commercial video games to academic research, are briefly surveyed to give context to the paths that have already been forged. With the background set, the model of interactive storytelling that the research behind LudoNarrare produced is presented in detail. The section exploring this model explains what storyworlds are and how they are structured, discusses how these storyworlds can be brought to life, and concludes by considering how storyworlds built on this model can be designed. After the concepts of LudoNarrare are explored in the abstract, the story of the engine's research and development and the specifics of its software implementation are given. With LudoNarrare fully explained, the focus then turns to plans for evaluating its quality in terms of entertainment value, robustness, and performance. To conclude, possible further paths of investigation for LudoNarrare and its model of interactive storytelling are proposed to inspire those who wish to continue in the spirit of the project.
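
As a concrete illustration of the verb concept (a minimal sketch, not the engine's actual data structures; all names below are hypothetical), a verb can be modeled as a precondition that gates when an agent may act, plus an effect that rewrites the storyworld and emits a narrative event:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# The storyworld here is just a bag of boolean facts; verbs test and rewrite it.
State = Dict[str, bool]

@dataclass
class Verb:
    name: str
    precondition: Callable[[State, str], bool]  # is this verb offered to the agent?
    effect: Callable[[State, str], str]         # apply it; returns a narrative event

def available_verbs(verbs: List[Verb], state: State, agent: str) -> List[Verb]:
    """Contextual choice: only verbs whose preconditions hold are offered."""
    return [v for v in verbs if v.precondition(state, agent)]

def steal_gem(state: State, agent: str) -> str:
    state["gem_stolen"] = True
    return f"{agent} pockets the gem."

verbs = [
    Verb("steal gem", lambda s, a: not s.get("gem_stolen", False), steal_gem),
    Verb("confess", lambda s, a: s.get("gem_stolen", False),
         lambda s, a: f"{a} confesses to the theft."),
]

state: State = {}
for _ in range(2):  # each turn the agent picks from the verbs available in context
    verb = available_verbs(verbs, state, "Mira")[0]
    print(verb.name, "->", verb.effect(state, "Mira"))
```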

Date Created
  • 2015-12

Synthesis and Facilitation: Designing for Secure User Actions

Description

We discuss processes involved in user-centric security design, including the synthesis of goals based on security and usability tasks. We suggest the use of implicit security and the facilitation of secure user actions. We propose a process for evaluating usability flaws by treating them as security threats and adapting traditional HCI methods. We discuss how to correct these flaws once they are discovered. Finally, we discuss the Usable Security Development Model for developing usable secure systems.

Date Created
  • 2013-05

Intelligent Input Parser for Organic Chemistry Nomenclature Questions

Description

For many pre-health and graduate programs, organic chemistry is often the most difficult prerequisite course that students will take. To alleviate this difficulty, an intelligent tutoring system was developed to provide valuable feedback on practice problems in organic chemistry. This paper focuses on the design and use of an intelligent input parser for nomenclature questions within this system. Students in Dr. Gould's Fall 2014 organic chemistry class used the system, and their data were collected to analyze the effectiveness of the input parser. Overall, student feedback was positive, and there was a positive relationship between test scores and student use of the system.
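
To illustrate the kind of checking such an input parser might perform (a toy sketch; the normalization rules and feedback messages below are invented, not the system's actual implementation), a student's typed name can be canonicalized and compared against the expected answer so that the feedback points at the likely error:

```python
import re

def normalize(name: str) -> str:
    """Canonicalize a typed name: lowercase, unify dashes, drop whitespace."""
    name = name.lower().strip()
    name = re.sub(r"[\u2010-\u2015]", "-", name)  # unicode dashes -> plain hyphen
    return re.sub(r"\s+", "", name)               # '2 - methyl' -> '2-methyl'

def check_name(student: str, expected: str) -> str:
    s, e = normalize(student), normalize(expected)
    if s == e:
        return "Correct."
    # Compare the locant digits and the alphabetic skeleton separately so the
    # feedback can distinguish numbering errors from naming errors.
    if re.sub(r"\d", "", s) == re.sub(r"\d", "", e):
        return "Substituents look right, but check your locant numbering."
    return "Check the parent chain and substituent names."

print(check_name("2-Methyl butane", "2-methylbutane"))  # Correct.
print(check_name("3-methylbutane", "2-methylbutane"))   # locant hint
```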

Date Created
  • 2015-05

Studying, Teaching and Applying Sustainability Visions Using Systems Modeling

Description

The objective of articulating sustainability visions through modeling is to enhance the outcomes and process of visioning in order to successfully move the system toward a desired state. Models emphasize approaches to develop visions that are viable and resilient and are crafted to adhere to sustainability principles. This approach is largely assembled from visioning processes (resulting in descriptions of desirable future states generated from stakeholder values and preferences) and participatory modeling processes (resulting in systems-based representations of future states co-produced by experts and stakeholders). Vision modeling is distinct from normative scenarios and backcasting processes in that the structure and function of the future desirable state is explicitly articulated as a systems model. Crafting, representing and evaluating the future desirable state as a systems model in participatory settings is intended to support compliance with sustainability visioning quality criteria (visionary, sustainable, systemic, coherent, plausible, tangible, relevant, nuanced, motivational and shared) in order to develop rigorous and operationalizable visions. We provide two empirical examples to demonstrate the incorporation of vision modeling in research practice and education settings. In both settings, vision modeling was used to develop, represent, simulate and evaluate future desirable states. This allowed participants to better identify, explore and scrutinize sustainability solutions.

Date Created
  • 2014-07-01

Automatic Classification of Small Group Dynamics using Speech and Collaborative Writing

Description

Students seldom spontaneously collaborate with each other. A system that can measure collaboration in real time could be useful, for example, by helping the teacher locate a group requiring guidance. To address this challenge, the research presented here focuses on building and comparing collaboration detectors for different types of classroom problem solving activities, such as card sorting and handwriting.

Transfer learning using different representations was also studied, with the goal that collaboration detectors built for one task could be applied to a new task. Data for building such detectors were collected in the form of verbal interaction and user action logs from students' tablets. Three qualitative levels of interactivity were distinguished: Collaboration, Cooperation, and Asymmetric Contribution. Machine learning was used to induce a classifier that assigns one of these codes to every episode based on a set of features. The results indicate that the machine-learned classifiers were reliable and could transfer.
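
A minimal sketch of such an episode classifier, assuming hypothetical feature names and a generic scikit-learn model rather than the study's actual pipeline:

```python
# Each episode is summarized by speech and action-log features and mapped to
# one of the three interactivity codes. Feature values here are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# [speech_overlap, turn_taking_rate, simultaneous_edits, talk_while_writing]
X = np.array([
    [0.62, 0.8, 5, 0.7],   # heavy overlap and joint editing
    [0.10, 0.5, 1, 0.2],   # divided labor, little joint talk
    [0.05, 0.1, 0, 0.9],   # one student does nearly everything
    [0.55, 0.7, 4, 0.6],
    [0.12, 0.4, 2, 0.1],
    [0.03, 0.2, 0, 0.8],
])
y = ["Collaboration", "Cooperation", "Asymmetric Contribution"] * 2

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([[0.5, 0.75, 3, 0.65]]))  # -> likely 'Collaboration'
```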

Date Created
  • 2020

An intelligent co-reference resolver for Winograd schema sentences containing resolved semantic entities

Description

There has been a lot of research in the field of artificial intelligence about thinking machines. Alan Turing proposed a test to observe a machine's intelligent behaviour with respect to natural language conversation. The Winograd schema challenge has been suggested as an alternative to the Turing test. It requires inferencing capabilities, reasoning abilities, and background knowledge to get the answer right. It involves a coreference resolution task in which a machine is given a sentence describing a situation that involves two entities, one pronoun, and some additional information, and the machine has to resolve the pronoun to the correct entity. The complexity of the task is increased by the fact that Winograd sentences are not constrained to one domain or a specific sentence structure, and they contain many human proper names. This makes it difficult to associate entities with particular words in the sentence in order to derive the answer. I have developed a pronoun resolver system for a confined domain of Winograd sentences. I have developed a classifier, or filter, which takes input sentences and decides to accept or reject them based on particular criteria. Once a sentence is accepted, I run parsers on it to obtain a detailed analysis. Furthermore, I have developed four answering modules which use world knowledge and inferencing mechanisms to try to resolve the pronoun. The four techniques I use are: the ConceptNet knowledge base, search engine pattern counts, narrative event chains, and sentiment analysis. I have developed an aggregation mechanism for the answers from these modules to arrive at a final answer. I have used a caching technique for the association relations obtained by the different modules in order to boost performance. I run my system on the standard 'nyu dataset' of Winograd sentences and questions. This dataset is restricted by my classifier to 90 sentences, and I evaluate my system on this 90-sentence dataset. When I compare my results against the state-of-the-art system on the same dataset, I get nearly a 4.5% improvement in the restricted domain.
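
A sketch of how the aggregation and caching steps might fit together (module weights and scores here are invented stand-ins; the real modules query ConceptNet, a search engine, narrative event chains, and a sentiment analyzer):

```python
# Each module scores the two candidate entities; the resolver combines the
# weighted scores, and a cache avoids recomputing association lookups.
from functools import lru_cache

MODULE_WEIGHTS = {"conceptnet": 1.0, "search_counts": 0.8,
                  "event_chains": 1.2, "sentiment": 0.6}

@lru_cache(maxsize=None)
def module_score(module: str, entity: str) -> float:
    # Stand-in for an expensive knowledge-base or search-engine query.
    toy = {("event_chains", "the trophy"): 0.9, ("event_chains", "the suitcase"): 0.3,
           ("conceptnet", "the trophy"): 0.7, ("conceptnet", "the suitcase"): 0.4}
    return toy.get((module, entity), 0.5)

def resolve(entities):
    def total(entity):
        return sum(w * module_score(m, entity) for m, w in MODULE_WEIGHTS.items())
    return max(entities, key=total)

print(resolve(("the trophy", "the suitcase")))  # -> 'the trophy'
```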

Date Created
  • 2013

Discoverable Free Space Gesture Sets for Walk-Up-and-Use Interactions

Description

Advances in technology are fueling a movement toward ubiquity for beyond-the-desktop systems. Novel interaction modalities, such as free-space or full-body gestures, are becoming more common, as demonstrated by the rise of systems such as the Microsoft Kinect. However, much of the interaction design research for such systems is still focused on desktop and touch interactions. Current thinking in free-space gestures is limited in capability and imagination, and most gesture studies have not attempted to identify gestures appropriate for public walk-up-and-use applications. A walk-up-and-use display must be discoverable, such that first-time users can use the system without any training; flexible; and not fatiguing, especially in the case of longer-term interactions. One mechanism for defining gesture sets for walk-up-and-use interactions is a participatory design method called gesture elicitation. This method has been used to identify several user-generated gesture sets and has shown that user-generated sets are preferred by users over those defined by system designers. However, for these studies to be successfully implemented in walk-up-and-use applications, there is a need to understand which components of these gestures are semantically meaningful (i.e., do users distinguish between using their left and right hand, or are those semantically the same?). Thus, defining a standardized gesture vocabulary for coding, characterizing, and evaluating gestures is critical. This dissertation presents three gesture elicitation studies for walk-up-and-use displays that employ a novel gesture elicitation methodology, alongside a novel coding scheme for gesture elicitation data that focuses on the features most important to users' mental models. Generalizable design principles, based on the three studies, are then derived and presented (e.g., changes in speed are meaningful for scroll actions in walk-up-and-use displays, but not for paging or selection). The major contributions of this work are: (1) an elicitation methodology that aids users in overcoming biases from existing interaction modalities; (2) a better understanding of the gestural features that matter, i.e., those that capture the intent of the gestures; and (3) generalizable design principles for walk-up-and-use public displays.
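
To make the idea of a feature-based coding vocabulary concrete, here is a toy sketch (the coding dimensions are illustrative, not the dissertation's actual scheme) that codes elicited gestures on a few features and measures per-feature agreement for each referent:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class GestureCode:
    referent: str      # the command the participant was asked to invoke
    handedness: str    # 'left' / 'right' / 'either'
    motion: str        # 'swipe' / 'push' / 'circle'
    speed: str         # 'slow' / 'fast'

codes = [
    GestureCode("scroll", "either", "swipe", "fast"),
    GestureCode("scroll", "either", "swipe", "fast"),
    GestureCode("scroll", "either", "swipe", "slow"),
    GestureCode("select", "right", "push", "slow"),
    GestureCode("select", "either", "push", "slow"),
]

def agreement(referent: str, feature: str) -> float:
    """Share of participants who agree on the modal value of one feature."""
    values = [getattr(c, feature) for c in codes if c.referent == referent]
    return Counter(values).most_common(1)[0][1] / len(values)

print(agreement("scroll", "speed"))   # ~0.67 -> speed varies for scroll
print(agreement("select", "motion"))  # 1.0  -> motion is the stable feature
```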

Date Created
  • 2019

Automatic Programming Code Explanation Generation with Structured Translation Models

Description

Learning programming involves a variety of complex cognitive activities, from abstract knowledge construction to structural operations, which include program design, modification, debugging, and documentation tasks. In this work, the objective was to explore and investigate the barriers and obstacles that novice programming learners encounter and how they overcome them. Several lab and classroom studies were designed and conducted; the results showed that novice students had different behavior patterns than experienced learners, indicating the obstacles they encountered. The studies also showed that proper assistance could help novices find helpful materials to read. However, novices still suffered from a lack of background knowledge and limited cognitive capacity while learning, which made it challenging for them to understand programming-related materials, especially code examples. Therefore, I further proposed using a natural language generator (NLG) to generate code explanations for educational purposes. The natural language generator is designed based on Long Short-Term Memory (LSTM), a deep-learning translation model. To build the model, a data set was collected from Amazon Mechanical Turk (AMT), recording explanations from human experts for lines of programming code.

To evaluate the model, a pilot study was conducted; it showed that the readability of the machine-generated (MG) explanations was comparable to that of human explanations, while their accuracy was still not ideal, especially for complicated code lines. Furthermore, a code-example-based learning platform was developed to apply the explanation-generating model to programming teaching. To examine the effect of code example explanations on different learners, two lab-class experiments were conducted, one in a class of programming novices and one in a class of advanced students. The results indicated that, when learning programming concepts, the MG code explanations significantly improved learning predictability for novices compared to a control group, and the explanations also extended the novices' learning time by giving them more material to read, which potentially leads to a better learning gain. In addition, a correlation model was constructed from the experimental results to illustrate the connections between different factors and the learning effect.
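
A minimal sketch of the encoder-decoder LSTM setup described above, with toy vocabulary sizes and random token ids standing in for a model actually trained on the AMT corpus:

```python
# An encoder LSTM reads a tokenized code line; a decoder LSTM, initialized
# with the encoder's final state, emits explanation tokens step by step.
import torch
import torch.nn as nn

CODE_VOCAB, TEXT_VOCAB, HIDDEN = 50, 80, 64  # toy sizes

class CodeExplainer(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed_code = nn.Embedding(CODE_VOCAB, HIDDEN)
        self.embed_text = nn.Embedding(TEXT_VOCAB, HIDDEN)
        self.encoder = nn.LSTM(HIDDEN, HIDDEN, batch_first=True)
        self.decoder = nn.LSTM(HIDDEN, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, TEXT_VOCAB)

    def forward(self, code_ids, text_ids):
        _, state = self.encoder(self.embed_code(code_ids))  # summarize the code line
        dec, _ = self.decoder(self.embed_text(text_ids), state)
        return self.out(dec)  # per-step logits over explanation tokens

model = CodeExplainer()
code = torch.randint(0, CODE_VOCAB, (1, 12))  # e.g. tokens of 'for i in range(n):'
text = torch.randint(0, TEXT_VOCAB, (1, 8))   # teacher-forced explanation prefix
print(model(code, text).shape)                # torch.Size([1, 8, 80])
```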

Date Created
  • 2020

Zazzer: forming friendships on digital social networks

Description

Strong communities are important for society. One of the most important community builders, making friends, is poorly supported online. Dating sites support it, but only in romantic contexts. Other major social networks seem not to encourage it, because either their purpose isn't compatible with introducing strangers or the prevalent methods of introduction aren't effective enough to merit use over real-world alternatives. This paper presents a novel digital social network that emphasizes creating friendships. Research has shown that video chat communication can reach in-person levels of trust. Coupled with a game environment, which eases the discomfort people often feel when interacting with strangers, and a recommendation engine, Zazzer, the presented system, allows people to meet and get to know each other in a manner much truer to real life than traditional methods. Its network also allows players to continue to communicate afterwards. The evaluation looks at real-world use, measuring the frequency with which players choose the video chat game over alternative, more traditional methods of online introduction. It also looks at interactions after the initial meeting to discover how effective video chat games are at creating sticky social connections. After initial use it became apparent that a critical mass of users would be necessary to draw strong conclusions; however, the collected data gave preliminary support to the idea that video chat games are more effective than traditional ways of meeting online in creating new relationships.

Date Created
  • 2011

Spoken dialogue in face-to-face and remote collaborative learning environments

Description

Research in the learning sciences suggests that students learn better by collaborating with their peers than by learning individually. Students working together as a group tend to generate new ideas more frequently and exhibit a higher level of reasoning. In this internet age, with the advent of massive open online courses (MOOCs), students across the world are able to access and learn material remotely. This creates a need for tools that support distant or remote collaboration. In order to build such tools, we need to understand the basic elements of remote collaboration and how it differs from traditional face-to-face collaboration.

The main goal of this thesis is to explore how spoken dialogue varies between face-to-face and remote collaborative learning settings. Speech data were collected from student participants solving mathematical problems collaboratively on a tablet. Spoken dialogue was analyzed based on conversational and acoustic features in both settings. Looking for collaborative differences in transactivity and dialogue initiative, the two settings are compared in detail using machine learning classification techniques based on acoustic and prosodic features of speech. Transactivity is defined as the joint construction of knowledge by peers. The main contributions of this thesis are: a speech corpus for analyzing spoken dialogue in face-to-face and remote settings, and an empirical analysis of conversation, collaboration, and speech prosody in the two settings. The results from the experiments show that the amount of overlap is lower in remote dialogue than in the face-to-face setting. There is also a significant difference in transactivity among strangers. My research benefits the computer-supported collaborative learning community by providing an analysis that can be used to build more effective tools for supporting remote collaborative learning.
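
A sketch of the classification step, assuming invented prosodic feature values and a generic logistic-regression classifier rather than the thesis's exact setup:

```python
# Each dialogue segment is summarized by prosodic/conversational features and
# labeled as transactive (builds on a partner's idea) or not.
import numpy as np
from sklearn.linear_model import LogisticRegression

# [mean_pitch_hz, pitch_variance, energy, overlap_ratio, turn_length_s]
X = np.array([
    [210.0, 45.0, 0.70, 0.30, 6.2],   # elaborates on partner's reasoning
    [190.0, 12.0, 0.40, 0.05, 2.1],   # isolated remark
    [225.0, 50.0, 0.65, 0.25, 5.8],
    [180.0, 10.0, 0.35, 0.02, 1.5],
])
y = [1, 0, 1, 0]  # 1 = transactive contribution

clf = LogisticRegression().fit(X, y)
print(clf.predict([[215.0, 40.0, 0.6, 0.28, 5.0]]))  # -> [1]
```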

Date Created
  • 2014