Barrett, The Honors College at Arizona State University proudly showcases the work of undergraduate honors students by sharing this collection exclusively with the ASU community.

Barrett accepts high-performing, academically engaged undergraduate students and works with them in collaboration with all of the other academic units at Arizona State University. All Barrett students complete a thesis or creative project, which is an opportunity to explore an intellectual interest and produce an original piece of scholarly research. The thesis or creative project is supervised by and defended in front of a faculty committee. Students are able to engage with professors who are nationally recognized in their fields and committed to working with honors students. Completing a Barrett thesis or creative project is an opportunity for undergraduate honors students to contribute to the ASU academic community in a meaningful way.

Description
Video games often feature agents that the human player interacts with and must overcome. Designing these agents to cover every case of human interaction is difficult and usually imperfect, as human players are capable of learning to defeat them in unintended ways. Artificial intelligence is a growing field that seeks to solve problems by simulating learning in specific environments. The aim of this paper is to explore the applications that the self-play branch of artificial intelligence may have for game development in the future, and to attempt to implement a working self-play agent that learns to play a Pokemon battle. The originally designed Pokemon battle behavior is often suboptimal, getting stuck making ineffective or incorrect choices, so training a self-play model to learn the strategy and structure of Pokemon battles from a clean slate should produce an organic agent that outperforms the original behavior of the computer-controlled agents. Though my implementation was unsuccessful, this paper serves as a record of my exploration of this field and a log of what worked and what did not, for the benefit of anyone interested in the same topics.
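To illustrate the self-play idea described in the abstract, the following minimal sketch trains a single agent against a copy of itself using tabular Q-learning on a toy turn-based battle. The ToyBattle environment, its two moves, and the reward scheme are hypothetical simplifications invented for this example; they are not the thesis's actual Pokemon battle simulator or learning model.

# A minimal self-play sketch: one shared Q-table, updated from both sides of a
# toy alternating-turn battle. Everything here is a simplified stand-in, not
# the thesis's actual implementation.
import random
from collections import defaultdict

class ToyBattle:
    """Two sides alternate turns; each move trades damage for accuracy."""
    MOVES = {0: (40, 1.0), 1: (90, 0.6)}  # move id -> (damage, hit chance)

    def reset(self):
        self.hp = [100, 100]
        self.turn = 0
        return self._state()

    def _state(self):
        # State from the acting player's perspective: (own HP, opponent HP).
        return (self.hp[self.turn], self.hp[1 - self.turn])

    def step(self, move):
        damage, accuracy = self.MOVES[move]
        if random.random() < accuracy:
            self.hp[1 - self.turn] = max(0, self.hp[1 - self.turn] - damage)
        done = self.hp[1 - self.turn] == 0
        reward = 1.0 if done else 0.0       # +1 to whoever lands the knockout
        self.turn = 1 - self.turn           # hand the turn to the other side
        return self._state(), reward, done

def train_self_play(episodes=20000, alpha=0.1, gamma=0.95, epsilon=0.1):
    # One shared Q-table: the agent is effectively its own opponent.
    q = defaultdict(lambda: [0.0, 0.0])
    env = ToyBattle()
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            moves = list(ToyBattle.MOVES)
            action = (random.choice(moves) if random.random() < epsilon
                      else max(moves, key=lambda a: q[state][a]))
            next_state, reward, done = env.step(action)
            # Zero-sum backup: the opponent's best value in the next state
            # counts against the player who just moved.
            target = reward if done else -gamma * max(q[next_state])
            q[state][action] += alpha * (target - q[state][action])
            state = next_state
    return q

if __name__ == "__main__":
    q = train_self_play()
    print("Q-values at the full-HP opening state:", q[(100, 100)])

The key self-play ingredient is the single shared value table combined with the zero-sum backup, so every game the agent plays against itself improves both sides at once; the same structure carries over to neural-network policies in larger self-play systems.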
Contributors: Ciudad, Erick Marcel (Author) / Meuth, Ryan (Thesis director) / Kobayashi, Yoshihiro (Committee member) / Computing and Informatics Program (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-12