Matching Items (60)
Description
Programming is quickly becoming as ubiquitous and essential a skill as general mathematics. However, many elementary and high school students are still not aware of what the computer science field entails. To make matters worse, students who are introduced to computer science are frequently shown only part of what the field is about rather than the whole picture. Consequently, they feel out of their depth when they reach college. Research has found that teaching computer science and programming through a problem-driven approach, with a focus on both syntax and computational thinking, better prepares students for higher levels of computer science education.

This thesis describes the design, development, and early user testing of a theory-based virtual world for computer science instruction called System Dot. System Dot was designed to visually manifest programming instructions as interactable objects, giving players a way to see code as tangible entities rather than text on a screen. To help System Dot convey the true nature of computer science, a custom predictive recursive descent parser was embedded in the program to validate user-generated solutions to pre-defined logical platforming puzzles.
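
The abstract does not reproduce System Dot's grammar or parser code, so the following is only a minimal, hypothetical C++ sketch of a predictive (single-token-lookahead) recursive descent parser for a toy puzzle-command grammar; the grammar, token names, and error messages are illustrative assumptions, not the thesis's implementation.

```cpp
// Toy grammar (hypothetical):
//   program -> stmt*
//   stmt    -> "move" number ";" | "if" cond "{" stmt* "}"
//   cond    -> "blocked" | "clear"
#include <cctype>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

struct Parser {
    std::vector<std::string> tokens;
    size_t pos = 0;

    explicit Parser(const std::string& src) {
        std::istringstream in(src);
        for (std::string t; in >> t; ) tokens.push_back(t);  // whitespace-separated tokens
    }

    const std::string& peek() const {
        static const std::string kEnd = "<end>";
        return pos < tokens.size() ? tokens[pos] : kEnd;
    }
    bool accept(const std::string& t) { return peek() == t ? (++pos, true) : false; }
    bool expect(const std::string& t) {
        if (accept(t)) return true;
        std::cerr << "expected '" << t << "' but saw '" << peek() << "'\n";
        return false;
    }

    // program -> stmt* ; the single lookahead token predicts which production to use.
    bool parseProgram() {
        while (pos < tokens.size())
            if (!parseStmt()) return false;
        return true;
    }
    bool parseStmt() {
        if (accept("move")) return parseNumber() && expect(";");
        if (accept("if"))   return parseCond() && parseBlock();
        std::cerr << "unexpected token '" << peek() << "'\n";
        return false;
    }
    bool parseBlock() {
        if (!expect("{")) return false;
        while (!accept("}"))
            if (!parseStmt()) return false;
        return true;
    }
    bool parseCond() {
        if (accept("blocked") || accept("clear")) return true;
        std::cerr << "expected a condition, saw '" << peek() << "'\n";
        return false;
    }
    bool parseNumber() {
        const std::string& t = peek();
        if (!t.empty() && std::isdigit(static_cast<unsigned char>(t[0]))) { ++pos; return true; }
        std::cerr << "expected a number, saw '" << t << "'\n";
        return false;
    }
};

int main() {
    bool ok  = Parser("move 3 ; if blocked { move 1 ; }").parseProgram();
    bool bad = Parser("if { move ; }").parseProgram();
    std::cout << "solution 1 valid: " << ok  << "\n"   // 1
              << "solution 2 valid: " << bad << "\n";  // 0
}
```

Keeping one production per nonterminal keeps error reporting simple: on failure the parser can always name the token it expected, which is the kind of signal a game could surface to the player.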

Steps were taken to adapt the virtual world to player behavior by creating a system that detects a player's learning style while they play the game. Through a dynamic Bayesian network, System Dot aims to classify a player's learning style based on the Felder-Silverman Learning Style Model (FSLSM). Testers played through the first half of System Dot, which was enough to exercise the Bayesian network and produce an initial learning style classification. This classification was then compared to the assessment given by Felder's Index of Learning Styles Questionnaire (ILSQ). Lastly, this thesis discusses how the user-testing results could be used to implement a personalized feedback system for the virtual world in the future, and what has been learned about the learning style method.
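
As a rough illustration of the classification idea only, not System Dot's actual dynamic Bayesian network, the sketch below performs a Bayesian belief update over a single FSLSM dimension (active versus reflective) from hypothetical in-game observations with assumed likelihoods.

```cpp
// Minimal sketch: a Bayesian belief over one FSLSM dimension is updated from
// in-game evidence. The observation types and likelihoods are hypothetical.
#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>

int main() {
    // P(observation | style): assumed values for illustration only.
    std::map<std::string, std::pair<double, double>> likelihood = {
        //  observation              P(obs|active)  P(obs|reflective)
        {"retried_immediately",        {0.7,          0.3}},
        {"read_hint_fully",            {0.3,          0.7}},
        {"experimented_with_objects",  {0.8,          0.2}},
    };

    double pActive = 0.5, pReflective = 0.5;  // uniform prior

    // Observations collected while the player works through a puzzle.
    std::vector<std::string> observed = {
        "retried_immediately", "experimented_with_objects", "read_hint_fully"};

    for (const auto& obs : observed) {
        const auto [la, lr] = likelihood.at(obs);
        pActive *= la;
        pReflective *= lr;
        const double z = pActive + pReflective;  // renormalize after each update
        pActive /= z;
        pReflective /= z;
    }

    std::cout << "P(active)     = " << pActive << "\n"
              << "P(reflective) = " << pReflective << "\n";
}
```
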
Contributors: Kury, Nizar (Author) / Nelson, Brian C (Thesis advisor) / Hsiao, Ihan (Committee member) / Kobayashi, Yoshihiro (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Reinforcement learning (RL) is a powerful methodology for teaching autonomous agents complex behaviors and skills. A critical component in most RL algorithms is the reward function -- a mathematical function that provides numerical estimates for desirable and undesirable states. Typically, the reward function must be hand-designed by a human expert and, as a result, the scope of a robot's autonomy and ability to safely explore and learn in new and unforeseen environments is constrained by the specifics of the designed reward function. In this thesis, I design and implement a stateful collision anticipation model with powerful predictive capability based upon my research into sequential data modeling and modern recurrent neural networks. I also develop deep reinforcement learning methods whose rewards are generated by self-supervised training and intrinsic signals. The main objective is to work towards the development of resilient robots that can learn to anticipate and avoid damaging interactions by combining visual and proprioceptive cues from internal sensors. The introduced solutions are inspired by pain pathways in humans and animals, because such pathways are known to guide decision-making processes and promote self-preservation. A new "robot dodge ball" benchmark is introduced in order to test the validity of the developed algorithms in dynamic environments.
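
As a hedged illustration of how an anticipated collision signal might shape a reward, the sketch below folds a predicted collision probability into a pain-inspired penalty. In the thesis the probability comes from a learned recurrent model over visual and proprioceptive cues; here it is simply a function argument, and the penalty weight is an arbitrary placeholder.

```cpp
// Minimal sketch of combining a task reward with an intrinsic, pain-inspired
// penalty derived from an anticipated collision probability.
#include <iostream>

// Shaped reward: the more likely the agent believes a damaging contact is,
// the more the intrinsic penalty dominates the task reward.
double shapedReward(double taskReward, double collisionProb,
                    double penaltyWeight = 5.0) {  // illustrative weight
    return taskReward - penaltyWeight * collisionProb;
}

int main() {
    std::cout << shapedReward(1.0, 0.05) << "\n";  // mostly task reward: 0.75
    std::cout << shapedReward(1.0, 0.90) << "\n";  // anticipated hit: -3.5
}
```
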
Contributors: Richardson, Trevor W (Author) / Ben Amor, Heni (Thesis advisor) / Yang, Yezhou (Committee member) / Srivastava, Siddharth (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
In this thesis, a new approach to learning-based planning is presented where critical regions of an environment with low probability measure are learned from a given set of motion plans. Critical regions are learned using convolutional neural networks (CNN) to improve sampling processes for motion planning (MP).

In addition to an identification network, a new sampling-based motion planner, Learn and Link, is introduced. This planner leverages critical regions to overcome the limitations of uniform sampling while still maintaining guarantees of correctness inherent to sampling-based algorithms. Learn and Link is evaluated against planners from the Open Motion Planning Library (OMPL) on an extensive suite of challenging navigation planning problems. This work shows that critical areas of an environment are learnable, and can be used by Learn and Link to solve MP problems with far less planning time than existing sampling-based planners.
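
Learn and Link's internals are not reproduced in this abstract. A common way to use learned critical regions in a sampling-based planner is to mix region-biased samples with uniform samples so the usual sampling-based guarantees are retained; the sketch below illustrates that mixing on a hypothetical 2-D workspace with made-up region centers and parameters, not the thesis's actual pipeline.

```cpp
// Minimal sketch of biasing a sampler toward learned critical regions while
// keeping a uniform fallback.
#include <iostream>
#include <random>
#include <vector>

struct Point { double x, y; };

std::mt19937 rng{42};

Point sampleConfiguration(const std::vector<Point>& criticalRegions,
                          double bias = 0.5) {
    std::uniform_real_distribution<double> coin(0.0, 1.0);
    if (!criticalRegions.empty() && coin(rng) < bias) {
        // Gaussian sample around a randomly chosen critical-region center.
        std::uniform_int_distribution<size_t> pick(0, criticalRegions.size() - 1);
        const Point& c = criticalRegions[pick(rng)];
        std::normal_distribution<double> jitter(0.0, 0.2);
        return {c.x + jitter(rng), c.y + jitter(rng)};
    }
    // Otherwise, uniform sample over a [0, 10] x [0, 10] workspace.
    std::uniform_real_distribution<double> uni(0.0, 10.0);
    return {uni(rng), uni(rng)};
}

int main() {
    // Pretend these centers were predicted by the CNN (e.g., narrow doorways).
    std::vector<Point> critical = {{2.0, 7.5}, {6.3, 1.2}};
    for (int i = 0; i < 5; ++i) {
        Point q = sampleConfiguration(critical);
        std::cout << q.x << ", " << q.y << "\n";
    }
}
```
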
Contributors: Molina, Daniel, M.S (Author) / Srivastava, Siddharth (Thesis advisor) / Li, Baoxin (Committee member) / Zhang, Yu (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Knowledge Representation (KR) is one of the prominent approaches to Artificial Intelligence (AI) that is concerned with representing knowledge in a form that computer systems can utilize to solve complex problems. Answer Set Programming (ASP), based on the stable model semantics, is a widely-used KR framework that facilitates elegant and efficient representations for many problem domains that require complex reasoning.

However, while ASP is effective on deterministic problem domains, it is not suitable for applications involving quantitative uncertainty, for example, those that require probabilistic reasoning. Furthermore, it is hard to incorporate information that can be statistically induced from data into ASP problem modeling.

This dissertation presents the language LP^MLN, a probabilistic extension of the stable model semantics with the concept of weighted rules, inspired by Markov Logic. An LP^MLN program defines a probability distribution over "soft" stable models, which may not satisfy all rules: the more rules with larger weights a stable model satisfies, the higher its probability. LP^MLN takes advantage of both ASP and Markov Logic in a single framework, allowing representation of problems that require both logical and probabilistic reasoning in an intuitive and elaboration tolerant way.
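
A sketch of the weighted-rule semantics described above, following the usual presentation of LP^MLN (with weighted rules written "w : R" and \Pi_I denoting the rules of program \Pi satisfied by interpretation I):

```latex
W_\Pi(I) =
  \begin{cases}
    \exp\Bigl(\sum_{w:R \,\in\, \Pi_I} w\Bigr) & \text{if $I$ is a stable model of $\Pi_I$,}\\
    0 & \text{otherwise,}
  \end{cases}
\qquad
P_\Pi(I) = \frac{W_\Pi(I)}{\sum_{J} W_\Pi(J)}.
```

In the full language, rules meant to be hard constraints carry a distinguished weight (often written \alpha); that refinement is omitted in this sketch.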

This dissertation establishes formal relations between LP^MLN and several other formalisms, discusses inference and weight learning algorithms under LP^MLN, and presents systems implementing the algorithms. LP^MLN systems can be used to compute other languages translatable into LP^MLN.

The advantage of LP^MLN for probabilistic reasoning is illustrated by a probabilistic extension of the action language BC+, called pBC+, defined as a high-level notation of LP^MLN for describing transition systems. Various probabilistic reasoning tasks about transition systems, especially probabilistic diagnosis, can be modeled in pBC+ and computed using LP^MLN systems. pBC+ is further extended with the notion of utility through a decision-theoretic extension of LP^MLN, and related to Markov Decision Processes (MDPs) in terms of policy optimization problems. pBC+ can be used to represent (PO)MDPs in a succinct and elaboration tolerant way, which enables planning with (PO)MDP algorithms in action domains whose descriptions require rich KR constructs, such as recursive definitions and indirect effects of actions.
Contributors: Wang, Yi (Author) / Lee, Joohyung (Thesis advisor) / Baral, Chitta (Committee member) / Kambhampati, Subbarao (Committee member) / Natarajan, Sriraam (Committee member) / Srivastava, Siddharth (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Recent advancements in external memory based neural networks have shown promise in solving tasks that require precise storage and retrieval of past information. Researchers have applied these models to a wide range of tasks that have algorithmic properties but have not applied these models to real-world robotic tasks. In this thesis, we present memory-augmented neural networks that synthesize robot navigation policies which a) encode long-term temporal dependencies, b) make decisions in partially observed environments, and c) quantify the uncertainty inherent in the task. We extract information about the temporal structure of a task via imitation learning from human demonstration and evaluate the performance of the models on control policies for a robot navigation task. Experiments are performed in partially observed environments in both simulation and the real world.
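
The toy C++ sketch below is not the thesis's external-memory architecture (whose learned weights and imitation-learning pipeline are not reproduced here); it only illustrates why a recurrent memory helps under partial observability: the chosen action depends on a memory vector accumulated over past observations rather than on the current observation alone. The dimensions, fixed weights, and action readout are placeholders.

```cpp
// Toy recurrent-memory policy: memory' = tanh(Wm * memory + Wo * observation).
#include <array>
#include <cmath>
#include <iostream>

constexpr int kObs = 2;   // e.g., left and right range-sensor clearances
constexpr int kMem = 3;   // size of the recurrent memory

using Obs = std::array<double, kObs>;
using Mem = std::array<double, kMem>;

Mem updateMemory(const Mem& m, const Obs& o) {
    // Fixed placeholder weights; a real policy would learn these.
    static const double Wm[kMem][kMem] = {{0.5, 0, 0}, {0, 0.5, 0}, {0, 0, 0.5}};
    static const double Wo[kMem][kObs] = {{1.0, 0.0}, {0.0, 1.0}, {0.5, -0.5}};
    Mem out{};
    for (int i = 0; i < kMem; ++i) {
        double s = 0.0;
        for (int j = 0; j < kMem; ++j) s += Wm[i][j] * m[j];
        for (int j = 0; j < kObs; ++j) s += Wo[i][j] * o[j];
        out[i] = std::tanh(s);
    }
    return out;
}

// Trivial readout: steer toward the side whose accumulated clearance is larger.
const char* chooseAction(const Mem& m) {
    if (m[2] > 0.2)  return "turn_left";
    if (m[2] < -0.2) return "turn_right";
    return "go_straight";
}

int main() {
    Mem memory{};  // zero-initialized memory at the start of an episode
    const Obs trajectory[] = {{1.0, 1.0}, {0.2, 1.0}, {0.1, 1.2}};
    for (const Obs& o : trajectory) {
        memory = updateMemory(memory, o);
        std::cout << chooseAction(memory) << "\n";
    }
}
```
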
Contributors: Srivatsav, Nambi (Author) / Ben Amor, Heni (Thesis advisor) / Srivastava, Siddharth (Committee member) / Tong, Hanghang (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
The original version of Helix, the one I pitched when first deciding to make a video game for my thesis, is an action-platformer, with the intent of metroidvania-style progression and an interconnected world map.

The current version of Helix is a turn-based role-playing game, with the intent of roguelike gameplay and a dark fantasy theme. We will first explore the challenges that came with programming my own game - not quite from scratch, but also without a prebuilt engine - then transition into game design and how Helix has evolved from its original form to what we see today.
Contributors: Discipulo, Isaiah K (Author) / Meuth, Ryan (Thesis director) / Kobayashi, Yoshihiro (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
The objective of this project concentrates on the game Defense of the Ancients 2 (Dota 2). In this game, players constantly strive to improve their skills, an effort fueled by the competitive nature of the game. The design influences the community to engage in this interaction as they play the game cooperatively. This thesis illustrates the importance of player interaction in influencing design as well as how imperative design is in affecting player interaction. These two concepts are not separate but deeply entwined. Every action performed within a game has to interact with some element of design. Both determine how games become defined as competitive, casual, or creative. Game designers can benefit from this study as it reinforces the basics of developing a game for players to interact with. However, it is impossible to predict exactly how players will react to a designed element. Designers should remember to tailor the game toward their audience, but also react and change the game depending on how players are using the elements of design. In addition, players should continue to push the boundaries of games to help designers adapt their product to their audience. If there is not constant communication between players and designers, games will not be tailored appropriately. Pushing the limits of a game benefits the players as well as the designers to make a more complete game. Designers do not solely create a game for the players. Rather, players design the game for themselves.
Keywords: game design, player interaction, affinity space, emergent behavior, Dota 2
Contributors: Larsen, Austin James (Author) / Gee, James Paul (Thesis director) / Holmes, Jeffrey (Committee member) / Kobayashi, Yoshihiro (Committee member) / Barrett, The Honors College (Contributor) / Computing and Informatics Program (Contributor) / School of Arts, Media and Engineering (Contributor)
Created: 2015-05
Description
This project is a game engine for 2D fighting games that uses Simple DirectMedia Layer (SDL) and C++. The engine's goal is to model the conventions the genre has for dynamically handling combat between two characters. The characters can be in a variety of different states that animate certain features while also responding to the environment based on key statuses. There is a playable test game that is the subject of a user study. The engine's capabilities are shown by the test game, and the limitations and missing features are discussed.
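
The engine's state-handling code is not included in this description; the C++ sketch below is a minimal, hypothetical character state machine of the kind such an engine needs, where states gate which actions are legal and frame timers return the character to idle. In the real engine these states would also drive animation and hitbox data.

```cpp
// Minimal sketch of a fighting-game character state machine.
#include <iostream>

enum class State { Idle, Attacking, Blocking, HitStun };

struct Character {
    State state = State::Idle;
    int framesLeft = 0;  // frames until the current state expires

    void tryAttack() {
        if (state == State::Idle) { state = State::Attacking; framesLeft = 12; }
    }
    void takeHit() {
        // Blocking prevents hit-stun; any other state is interrupted.
        if (state != State::Blocking) { state = State::HitStun; framesLeft = 20; }
    }
    void update() {  // called once per frame
        if (state != State::Idle && --framesLeft <= 0) state = State::Idle;
    }
};

int main() {
    Character p1;
    p1.tryAttack();
    std::cout << "p1 attacking for " << p1.framesLeft << " frames\n";
    p1.takeHit();  // interrupted mid-attack
    std::cout << "p1 in hit-stun for " << p1.framesLeft << " frames\n";
    for (int f = 0; f < 20; ++f) p1.update();
    std::cout << "p1 back to idle: " << (p1.state == State::Idle) << "\n";
}
```
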
Contributors: Stanton, Nicholas Scott (Author) / Kobayashi, Yoshihiro (Thesis director) / Hansford, Dianne (Committee member) / Computer Science and Engineering Program (Contributor) / Sanford School of Social and Family Dynamics (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
This paper details the process of designing both a simulation of the board game Jaipur and an artificial intelligence (AI) agent that can play the game against a human player. When designing an AI for a card game, two major problems arise. The first is the difficulty of using a search space to analyze every possible set of future moves: due to the randomized nature of the deck of cards, the search space leads to an exponentially growing set of potential game states to analyze when one tries to look more than one turn ahead. The second is the uncertainty that comes from opponent feedback: certain moves are weak to specific opponent reactions, which are difficult to predict because of hidden information. To circumvent these problems, the AI uses a greedy approach to decision making, attempting to maximize the value of its plays immediately rather than planning for future turns. The agent uses conditional statements to evaluate the game state, applies a heuristic that places an expected value (EV) on each of the goods it can choose from, and selects the best option based on this evaluation. Initial implementation of the simulation was done in C++ as a terminal application and was then translated to a graphical interface using Unity and C#.
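
The thesis's exact heuristic is not reproduced here; the C++ sketch below only illustrates the greedy step described above, scoring the goods available this turn with placeholder expected values and taking the single best one with no lookahead.

```cpp
// Minimal sketch of a greedy expected-value pick for a Jaipur-like market.
#include <iostream>
#include <map>
#include <string>
#include <vector>

int main() {
    // Hypothetical EV per good type (placeholders, not the thesis's tuned values).
    const std::map<std::string, double> expectedValue = {
        {"diamond", 6.0}, {"gold", 5.5}, {"silver", 5.0},
        {"cloth", 2.5},   {"spice", 2.5}, {"leather", 1.5}, {"camel", 1.0}};

    // Goods face up in the market this turn.
    const std::vector<std::string> market = {"leather", "spice", "gold",
                                             "camel", "cloth"};

    std::string best;
    double bestEv = -1.0;
    for (const auto& good : market) {
        const double ev = expectedValue.at(good);
        if (ev > bestEv) { bestEv = ev; best = good; }
    }
    std::cout << "greedy pick: " << best << " (EV " << bestEv << ")\n";  // gold
}
```
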
Contributors: Orr, James Christopher (Author) / Kobayashi, Yoshihiro (Thesis director) / Selgrad, Justin (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
Speech recognition in games is rarely seen. This work presents "The Emblems," a 2D computer game that uses speech recognition as input. The game itself is a two-person strategy game whose goal is to defeat the opposing player's army. This report focuses on the speech-recognition aspect of the project. The players interact on a turn-by-turn basis by speaking commands into the computer's microphone. When the computer recognizes a command, it responds by having the player's unit perform the corresponding action on screen.
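
The recognizer itself is not shown in this description; the C++ sketch below illustrates only the dispatch step that would follow recognition, mapping a recognized phrase to a unit action. The phrases and actions are hypothetical stand-ins for the game's real command set.

```cpp
// Minimal sketch of dispatching recognized voice commands to unit actions.
#include <functional>
#include <iostream>
#include <map>
#include <string>

int main() {
    const std::map<std::string, std::function<void()>> commands = {
        {"move forward", [] { std::cout << "unit advances one tile\n"; }},
        {"attack",       [] { std::cout << "unit attacks the nearest enemy\n"; }},
        {"end turn",     [] { std::cout << "turn passes to the other player\n"; }},
    };

    // Pretend these strings came back from the speech recognizer.
    for (const std::string& heard : {"move forward", "attack", "dance"}) {
        auto it = commands.find(heard);
        if (it != commands.end()) it->second();
        else std::cout << "unrecognized command: " << heard << "\n";
    }
}
```
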
Contributors: Nguyen, Jordan Ngoc (Author) / Kobayashi, Yoshihiro (Thesis director) / Maciejewski, Ross (Committee member) / Barrett, The Honors College (Contributor) / Computing and Informatics Program (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2014-05