Matching Items (36)

Behavior Trees + Finite State Machines: A Hybrid Game AI Framework

Description

One of the core components of many video games is their artificial intelligence. Through AI, a game can tell stories, generate challenges, and create encounters for the player to overcome. Even though AI in general has continued to advance through neural networks and machine learning, game AI tends instead to implement a series of states or decisions to give the illusion of intelligence. Despite this limitation, games can still generate a wide range of experiences for the player. The Hybrid Game AI Framework is an AI system that combines the benefits of two commonly used approaches to developing game AI: Behavior Trees and Finite State Machines. Developed in the Unity Game Engine and the C# programming language, this AI framework represents the research that went into studying modern approaches to game AI and my own attempt at implementing the techniques learned. Object-oriented programming concepts such as inheritance, abstraction, and low coupling are used with the intent of creating game AI that is easy to implement and expand upon. The final goal was to combine Behavior Trees and Finite State Machines into a flexible yet structured AI data structure while minimizing the drawbacks of each approach.
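The abstract does not reproduce the framework's actual class design, but the central idea of hybridizing the two structures can be sketched as a behavior-tree leaf that wraps a small finite state machine. All type and member names below are hypothetical, not the thesis's own code.

```csharp
using System;
using System.Collections.Generic;

// Minimal sketch of a behavior tree whose leaves may wrap finite state machines,
// so an FSM can sit anywhere an ordinary BT node can.
public enum NodeStatus { Success, Failure, Running }

public abstract class BTNode
{
    public abstract NodeStatus Tick();
}

// Standard BT "sequence": ticks children in order, stops at the first non-success.
public class Sequence : BTNode
{
    private readonly List<BTNode> children;
    public Sequence(params BTNode[] nodes) { children = new List<BTNode>(nodes); }

    public override NodeStatus Tick()
    {
        foreach (var child in children)
        {
            var status = child.Tick();
            if (status != NodeStatus.Success) return status;
        }
        return NodeStatus.Success;
    }
}

// Leaf node that runs a small FSM; the tree treats the whole machine as one node.
public class StateMachineNode : BTNode
{
    private readonly Dictionary<string, Func<string>> states; // state name -> update, returns next state
    private readonly string doneState;
    private string current;

    public StateMachineNode(Dictionary<string, Func<string>> states, string start, string done)
    {
        this.states = states;
        current = start;
        doneState = done;
    }

    public override NodeStatus Tick()
    {
        if (current == doneState) return NodeStatus.Success;
        current = states[current]();   // run the current state and take its transition
        return current == doneState ? NodeStatus.Success : NodeStatus.Running;
    }
}
```

A patrol behavior, for example, could then be expressed as a Sequence whose first child is a StateMachineNode cycling between "walk" and "wait" states, keeping the tree's structure while reusing FSM logic inside a single node.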

Date Created
2018-05

Procedural Scene Generation from Natural Language

Description

While there are many existing systems that take natural language descriptions and use them to generate images or text, few systems exist to generate 3D renderings or environments from natural language. Most of those systems are very limited in scope and require precise, predefined language to work, or large, well-tagged datasets for their models. In this project I apply concepts from NLP and procedural generation to build a system that can generate a rough scene estimation of a natural language description in a 3D environment, drawing on a free-use database of models. The primary objective of this system is to generate a useful or interesting result rather than a completely accurate representation. Such a system could assist designers who use 3D scenes or environments in their work.
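The abstract describes the pipeline only at a high level. A rough, hypothetical sketch of that flow is shown below: pull object keywords out of a description, match them against a local model catalog, and assign approximate positions. The catalog entries, file names, and placement logic are placeholders, not the project's real data or method.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch: description text -> keyword matches -> rough object placements.
public static class SceneSketcher
{
    // Stand-in for the free-use model database: keyword -> model file.
    private static readonly Dictionary<string, string> Catalog = new Dictionary<string, string>
    {
        ["table"] = "table.obj",
        ["chair"] = "chair.obj",
        ["lamp"] = "lamp.obj",
    };

    public static List<(string ModelFile, float X, float Z)> FromDescription(string description)
    {
        var placements = new List<(string ModelFile, float X, float Z)>();
        var words = description.ToLowerInvariant()
                                .Split(new[] { ' ', ',', '.' }, StringSplitOptions.RemoveEmptyEntries);
        var rng = new Random();

        // Naive keyword "NLP": any catalog word found in the text becomes an object in the scene.
        foreach (var word in words.Distinct())
        {
            if (Catalog.TryGetValue(word, out var model))
            {
                // Scatter objects roughly; a fuller system would infer spatial relations
                // ("next to", "on top of") from the sentence structure instead.
                placements.Add((model, rng.Next(-5, 5), rng.Next(-5, 5)));
            }
        }
        return placements;
    }
}
```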

Date Created
2019-05

Technology Transformations: SmartAid - An Intelligent First Aid Kit

Description

SmartAid aims to target a small yet relevant issue in a cost-effective, easily replicable, and innovative manner. This paper outlines how to replicate the design and building process to create an intelligent first aid kit. SmartAid utilizes Alexa Voice Service technologies to provide a new and improved way to teach users about the different types of first aid kit items and how to treat minor injuries, step by step. Using Alexa and a Raspberry Pi, SmartAid was designed as an added attachment to first aid kits. Alexa services were installed onto a Raspberry Pi to create a custom Amazon device, and from there, using the Alexa Interaction Model and the Lambda function services, SmartAid was developed. After the designing and coding of the application, a user guide was created to provide users with information on what items are included in the first aid kit, what types of injuries can be treated through first aid, and how to use SmartAid. The application was tested for its usability and practicality by a small sample of students. Users provided suggestions on how to make the application more versatile and functional, and confirmed that the application made first aid easier and was something that they could see themselves using. While this application is not aimed to replace the current physical guide solution completely, the findings of this project show that SmartAid has potential to stand in as an improved, easy-to-use, and convenient alternative for first aid guidance.
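The paper's actual skill code is not reproduced in the abstract. The sketch below only illustrates, in plain C#, the general shape of the handler an interaction model drives: a recognized intent plus a slot value mapped to a spoken first-aid instruction. The intent, slot, and treatment text are invented for the example and do not come from the project.

```csharp
using System.Collections.Generic;

// Illustrative only: the shape of an intent handler like the one an Alexa
// Interaction Model + Lambda setup would call. Names and content are invented.
public static class SmartAidHandler
{
    private static readonly Dictionary<string, string[]> TreatmentSteps = new Dictionary<string, string[]>
    {
        ["minor cut"] = new[]
        {
            "Wash your hands before touching the wound.",
            "Rinse the cut with clean water and apply gentle pressure with gauze.",
            "Apply an adhesive bandage from the kit."
        },
        ["minor burn"] = new[]
        {
            "Hold the burn under cool running water for several minutes.",
            "Cover it loosely with the sterile dressing from the kit."
        }
    };

    // intentName and injurySlot stand in for the values the voice service would pass along.
    public static string Handle(string intentName, string injurySlot)
    {
        if (intentName == "TreatInjuryIntent" && TreatmentSteps.TryGetValue(injurySlot, out var steps))
        {
            return string.Join(" Then, ", steps);
        }
        return "I can walk you through treating minor cuts and burns. Which one do you need help with?";
    }
}
```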

Date Created
2019-05

Instructional Design with Natural Language Processing in a Virtual Reality Environment

Description

Natural Language Processing and Virtual Reality are both hot topics at present. How can we synthesize the two to make a cohesive experience? The game focuses on users issuing vocal commands, building structures, and memorizing spatial objects. In order to handle vocal commands properly, the IBM Watson API for Natural Language Processing was incorporated into our game system. User experience elements such as gestures, UI color changes, and images were used to help guide users in memorizing and building structures. The process of creating these elements was streamlined through the VRTK library in Unity. The game has two segments. The first segment is a tutorial level where the user learns to perform motions and in-game actions. The second segment is a game where the user must correctly create a structure by utilizing vocal commands and spatial recognition. A standardized usability test, the System Usability Scale, was used to evaluate the effectiveness of the game. A survey was also created in order to gather more descriptive user opinions. Overall, users gave a positive score on the System Usability Scale and slightly positive reviews in the custom survey.
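The abstract does not spell out how recognized speech is turned into in-game actions. A minimal sketch of that step is given below, assuming the speech service returns a plain transcript string; the command vocabulary, shape names, and types are illustrative placeholders rather than the project's actual grammar.

```csharp
// Hypothetical sketch: map a speech-to-text transcript (e.g. as returned by the
// NLP service) to a build command the VR scene can act on.
public enum BuildAction { PlaceBlock, RemoveBlock, Unknown }

public struct BuildCommand
{
    public BuildAction Action;
    public string Shape;
}

public static class VoiceCommandParser
{
    private static readonly string[] KnownShapes = { "cube", "sphere", "pyramid" };

    public static BuildCommand Parse(string transcript)
    {
        var text = transcript.ToLowerInvariant();
        var command = new BuildCommand { Action = BuildAction.Unknown, Shape = null };

        if (text.Contains("place") || text.Contains("build"))
            command.Action = BuildAction.PlaceBlock;
        else if (text.Contains("remove") || text.Contains("delete"))
            command.Action = BuildAction.RemoveBlock;

        // Pick out which object the user named, if any.
        foreach (var shape in KnownShapes)
        {
            if (text.Contains(shape)) { command.Shape = shape; break; }
        }
        return command;
    }
}
```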

Date Created
2018-05

Jaipur Simulation and AI

Description

This paper details the process of designing both a simulation of the board game Jaipur and an artificial intelligence (AI) agent that can play the game against a human player. When designing an AI for a card game, there are two major problems that can arise. The first is the difficulty of using a search space to analyze every possible set of future moves. Due to the randomized nature of the deck of cards, the search space rapidly leads to an exponentially growing set of potential game states to analyze when one tries to look more than one turn ahead. The second difficulty is the element of uncertainty introduced by the opponent's responses. Certain moves are weak to specific opponent reactions, and these are difficult to predict due to hidden information. To circumvent these problems, the AI uses a greedy approach to decision making, attempting to maximize the value of its plays immediately rather than playing for future turns. The agent utilizes conditional statements to evaluate the game state and choose a game action that it deems optimal, along with a heuristic that places an expected value (EV) on the goods it can choose from, selecting the best option based on this evaluation. Initial implementation of the simulation was done in C++ as a terminal application, and it was then translated to a graphical interface using Unity and C#.
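The abstract only names the greedy, one-turn evaluation; a minimal sketch of that decision rule is shown below. The good values and the way actions are represented are invented for illustration and are not the thesis's actual heuristic.

```csharp
using System.Collections.Generic;
using System.Linq;

// Minimal sketch of greedy, one-turn-lookahead decision making: score every
// candidate action by the immediate value it adds, ignore future turns.
public class GreedyJaipurAgent
{
    // Heuristic expected value (EV) of taking one unit of each good (illustrative numbers).
    private static readonly Dictionary<string, double> GoodValue = new Dictionary<string, double>
    {
        ["diamond"] = 6.5, ["gold"] = 5.5, ["silver"] = 5.0,
        ["cloth"] = 2.5, ["spice"] = 2.5, ["leather"] = 1.5, ["camel"] = 1.0
    };

    private double Evaluate(IEnumerable<string> goodsGained)
    {
        return goodsGained.Sum(g => GoodValue.TryGetValue(g, out var v) ? v : 0.0);
    }

    // Each candidate action is represented here simply as the set of goods it would yield.
    public List<string> ChooseAction(List<List<string>> candidateActions)
    {
        return candidateActions.OrderByDescending(a => Evaluate(a)).First();
    }
}
```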

Date Created
2018-05

Development of an Educational Video Game

Description

The objective of this creative project was to gain experience in digital modeling, animation, coding, shader development and implementation, model integration techniques, and the application of gaming principles and design through developing a professional educational game. The team collaborated with Glendale Community College (GCC) to produce an interactive product intended to supplement educational instruction regarding nutrition. The educational game developed, "Nutribots," features the player acting as a nutrition-based nanobot sent to the small intestine to help the body. Throughout the game the player is asked nutrition-based questions to test their knowledge of proteins, carbohydrates, and lipids. If the player is unable to answer a question, they must use game mechanics to progress and receive the information as a reward. The level is completed as soon as the question is answered correctly. If the player answers questions incorrectly twenty times over the course of the game, the team loses faith in the player, and the player must restart from the title screen. This is to limit guessing and to make sure the player retains the information through repetition once it is demonstrated that they do not know the answers. The team was split into two groups for the development of this game. The first part of the team developed models, animations, and textures using Autodesk Maya 2016 and Marvelous Designer. The second part of the team developed code and shaders, and implemented the first team's assets using Unity and Visual Studio. Once a prototype of the game was developed, it was showcased among peers to gain feedback. Upon receiving feedback, the team implemented the desired changes accordingly. Development for this project began in November 2015 and ended in April 2017. Special thanks to Laura Avila, Department Chair, and Jennifer Nolz from the Glendale Community College Technology and Consumer Sciences, Food and Nutrition Department.
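The twenty-incorrect-answers rule described above is the game's main anti-guessing mechanic. A small illustrative sketch of that bookkeeping is shown here; the class and method names are invented, not taken from the project's code.

```csharp
// Illustrative sketch (names invented) of the rule described above: twenty
// incorrect answers across the whole game forces a restart from the title screen.
public class AnswerTracker
{
    private const int MaxIncorrect = 20;
    private int incorrectCount;

    // Returns true if the game should reset to the title screen.
    public bool RecordAnswer(bool wasCorrect)
    {
        if (!wasCorrect) incorrectCount++;
        return incorrectCount >= MaxIncorrect;
    }
}
```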

Date Created
2017-05

Web-Based Classroom Tool for Beginner Java Classes

Description

Learning to program is no easy task, and many students have their first programming experience during their university education. Unfortunately, programming classes have a large number of students enrolled, so it is nearly impossible for professors to engage with students at an individual level and provide the personal attention each student needs. This project aims to provide professors with a tool to quickly gauge and respond to the current understanding of their students. This web-based application gives professors the ability to quickly pose Java programming questions and to see aggregate data on how many of the students have successfully completed the assigned questions. With this system, students are provided with extra programming practice in a controlled environment, and if there is an error in their program, the system provides feedback describing what the error means and what steps the student can take to fix it.
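The abstract does not say how the error feedback is produced; one simple approach, sketched here with invented patterns and messages, is to map fragments of Java compiler or runtime output to plain-language hints.

```csharp
using System.Collections.Generic;

// Illustrative sketch: map fragments of Java compiler/runtime output to
// beginner-friendly hints. The patterns and messages are invented examples.
public static class FeedbackMapper
{
    private static readonly List<(string Pattern, string Hint)> Rules = new List<(string, string)>
    {
        ("';' expected", "It looks like a statement is missing a semicolon at the end of a line."),
        ("cannot find symbol", "A variable or method name is used before it is declared, or it is misspelled."),
        ("NullPointerException", "Your program tried to use an object that was never assigned a value.")
    };

    public static string Explain(string compilerOutput)
    {
        foreach (var (pattern, hint) in Rules)
        {
            if (compilerOutput.Contains(pattern)) return hint;
        }
        return "The program did not compile or run as expected; compare your code with the question prompt.";
    }
}
```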

Date Created
2017-05

Virtual Reality Drum Training System

Description

Can a skill taught in a virtual environment be utilized in the physical world? This idea is explored by creating a Virtual Reality game for the HTC Vive to teach users how to play the drums. The game focuses on developing the user's muscle memory, improving the user's ability to play music as they hear it in their head, and refining the user's sense of rhythm. Several features were included to achieve this, such as a score, different levels, a demo feature, and a metronome. The game was tested for its ability to teach and for its overall enjoyability using a small sample group. Most participants in the sample group noted that they felt as if their sense of rhythm and drumming skill level would improve by playing the game. Through the findings of this project, it can be concluded that while it should not be considered a complete replacement for traditional instruction, a virtual environment can be successfully used as a learning aid and practicing tool.
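Of the features listed, the metronome is the most mechanical, and its core logic is simple enough to sketch: accumulate frame time and fire a beat callback every 60/BPM seconds. The class below is an illustrative stand-in, not the project's implementation.

```csharp
using System;

// Simple metronome sketch: given a tempo, it reports when each beat should occur
// so the game can play a click or flash a visual cue. Names are illustrative.
public class Metronome
{
    private readonly double secondsPerBeat;
    private double elapsed;

    public Metronome(double beatsPerMinute)
    {
        secondsPerBeat = 60.0 / beatsPerMinute;
    }

    // Call once per frame with the frame's delta time; fires the callback on each beat.
    public void Update(double deltaTime, Action onBeat)
    {
        elapsed += deltaTime;
        while (elapsed >= secondsPerBeat)
        {
            elapsed -= secondsPerBeat;
            onBeat();
        }
    }
}
```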

Date Created
2017-12

An Introduction to Fractal Geometry and its Application in the Simulation of Nature

Description

Once labeled mathematical "monsters," fractals are a compelling concept that dates back hundreds of years. The idea that a shape or set of information could be infinitely deconstructed into multiple copies of itself is both confusing and brilliant. However, throughout its history, many scientists and mathematicians have repeatedly dismissed the applicability of self-similarity. The purpose of this study is to explore the path of development of fractal geometry and demonstrate its widely ignored usefulness. While many students and professionals are unaware of this alternate system for describing natural processes and shapes, several disciplines can benefit from applying fractal geometry to their work.
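The abstract stays at the conceptual level; one concrete instance of self-similarity is the Koch curve, where every segment is repeatedly replaced by four smaller copies of itself. The short recursion below generates its points and is offered purely as an illustration, not code from the thesis.

```csharp
using System;
using System.Collections.Generic;

// Illustrative example of self-similarity: the Koch curve. Each segment is split
// into thirds and the middle third is replaced by two sides of an equilateral bump.
public static class KochCurve
{
    public static List<(double X, double Y)> Generate((double X, double Y) a, (double X, double Y) b, int depth)
    {
        var points = new List<(double X, double Y)> { a };
        Subdivide(a, b, depth, points);   // appends every point after 'a', up to and including 'b'
        return points;
    }

    private static void Subdivide((double X, double Y) a, (double X, double Y) b, int depth,
                                  List<(double X, double Y)> points)
    {
        if (depth == 0)
        {
            points.Add(b);
            return;
        }

        double dx = (b.X - a.X) / 3.0, dy = (b.Y - a.Y) / 3.0;
        var p1 = (X: a.X + dx, Y: a.Y + dy);
        var p3 = (X: a.X + 2 * dx, Y: a.Y + 2 * dy);

        // Peak of the bump: the middle third rotated by 60 degrees.
        double angle = Math.PI / 3.0;
        var p2 = (X: p1.X + dx * Math.Cos(angle) - dy * Math.Sin(angle),
                  Y: p1.Y + dx * Math.Sin(angle) + dy * Math.Cos(angle));

        // Recurse into the four smaller self-similar segments.
        Subdivide(a, p1, depth - 1, points);
        Subdivide(p1, p2, depth - 1, points);
        Subdivide(p2, p3, depth - 1, points);
        Subdivide(p3, b, depth - 1, points);
    }
}
```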

Date Created
2017-05

The Emblems: Speech-Recognition in Games

Description

Speech recognition in games is rarely seen. This work presents "The Emblems," a 2D computer game that uses speech recognition as input. The game itself is a two-person strategy game whose goal is to defeat the opposing player's army. This report focuses on the speech-recognition aspect of the project. The players interact on a turn-by-turn basis by speaking commands into the computer's microphone. When the computer recognizes a command, it responds accordingly by having the player's unit perform an action on screen.
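The report's actual command grammar is not given in this abstract. The sketch below only shows one plausible shape for the dispatch step: a recognized phrase, assumed to arrive as plain text from whatever recognizer the game uses, is scanned for a unit name and an action word. All unit and action names are placeholders.

```csharp
using System.Collections.Generic;

// Placeholder sketch of command dispatch from a recognized phrase such as "archer attack".
public static class CommandDispatcher
{
    private static readonly HashSet<string> Units = new HashSet<string> { "archer", "knight", "mage" };
    private static readonly HashSet<string> Actions = new HashSet<string> { "move", "attack", "defend" };

    public static (string Unit, string Action)? Parse(string recognizedPhrase)
    {
        string unit = null, action = null;
        foreach (var word in recognizedPhrase.ToLowerInvariant().Split(' '))
        {
            if (Units.Contains(word)) unit = word;
            if (Actions.Contains(word)) action = word;
        }

        // Only a complete "unit + action" pair triggers an on-screen action.
        if (unit != null && action != null) return (unit, action);
        return null;
    }
}
```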

Date Created
2014-05