Matching Items (36)
Description
This project aspires to develop an AI capable of playing on a variety of maps in a Risk-like board game. While AI has been successfully applied to many other board games, such as Chess and Go, most research is confined to a single board and is inflexible to topological changes. Further, almost all of these games are played on a rectangular grid. In contrast, this project develops an AI player, referred to as GG-net, to play the online strategy game Warzone, which is based on the classic board game Risk. Warzone is played on a wide variety of irregularly shaped maps. Prior research has struggled to create an effective AI for Risk-like games due to the immense branching factor. The most successful attempts tended to rely on manually restricting the set of actions the AI considered while also engineering useful features for the AI to consider. GG-net uses no human knowledge, but rather a genetic algorithm combined with a graph neural network. Together, these methods allow GG-net to perform competitively across a multitude of maps. GG-net outperformed the built-in rule-based AI by 413 Elo (representing an 80.7% chance of winning) and an approach based on AlphaZero using graph neural networks by 304 Elo (representing a 74.2% chance of winning). This advantage holds across both seen and unseen maps. GG-net is a strong opponent on both small and medium maps; however, on large maps with hundreds of territories, its inefficiencies become more significant and it struggles against the rule-based approach. Overall, GG-net successfully learned the game and generalized across maps of similar size, although further work is required for it to become more successful on large maps.
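The Elo gaps quoted above map to win probabilities through a logistic curve. A minimal sketch of the conventional conversion follows; note that the thesis's quoted percentages (413 Elo → 80.7%) imply a larger scale constant than the standard 400 used here, so this is illustrative of the general relationship rather than a reproduction of the thesis's exact figures.

```python
import math

def elo_win_probability(elo_diff: float, scale: float = 400.0) -> float:
    """Expected score for a player rated `elo_diff` points above the opponent.

    Standard logistic Elo curve; `scale` controls how quickly the win
    probability saturates as the rating difference grows.
    """
    return 1.0 / (1.0 + 10.0 ** (-elo_diff / scale))

# An even match is a coin flip; a +400 edge wins about 10 times in 11.
print(round(elo_win_probability(0.0), 3))    # 0.5
print(round(elo_win_probability(400.0), 3))  # 0.909
```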
ContributorsBauer, Andrew (Author) / Yang, Yezhou (Thesis director) / Harrison, Blake (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created2022-05
Description
This thesis analyzes the potential issues of using ChatGPT: despite its benefits, it raises concerns that may hinder societal progress. The thesis first explains how ChatGPT generates text and how that generation process can lead to a variety of issues in the output, such as hallucinated and biased text. After explaining how these issues arise, the thesis examines their impact in important industries such as medicine, education, and security, comparing ChatGPT to popular open-source models such as Llama and Falcon.
ContributorsTsai, Brandon (Author) / Martin, Thomas (Thesis director) / Shakarian, Paulo (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created2024-05
Description
Manually determining the health of a plant requires time and expertise from a human. Automating this process with machine learning could provide significant benefits to the agricultural field. The detection and classification of health defects in crops, by analyzing visual data with computer vision tools, can accomplish this. In this paper, the task is completed using two existing machine learning architectures, ResNet-50 and CapsNet, which take images of crops as input and return a classification denoting the health defect the crop suffers from. Specifically, the models analyze the images to determine whether a nutritional deficiency or disease is present and, if so, identify it. The purpose of this project is to apply the proven deep learning architecture, ResNet-50, to the data, which serves as a baseline for comparison with the less researched architecture, CapsNet. This comparison highlights differences in the performance of the two architectures when applied to a complex dataset with a multitude of classes. This report details the data pipeline, including dataset collection and validation as well as preprocessing and application to the models. Additionally, methods of improving the accuracy of the models are recorded and analyzed to provide further insight into the comparison of the two architectures. The ResNet-50 model achieved an accuracy of 100% after being trained on the nutritional deficiency dataset and 88.5% on the disease dataset. The CapsNet model achieved an accuracy of 90% on the nutritional deficiency dataset but only 70% on the disease dataset. In comparing the two models, ResNet-50 outperformed CapsNet; however, the CapsNet model shows promise for future implementations. With larger, more complete datasets and improvements to their design, capsule networks will likely provide strong performance for complex image classification tasks.
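The defining idea behind ResNet-50's trainable depth is the residual (skip) connection. A toy sketch of one fully connected residual block follows; it is illustrative only, since the real architecture uses convolutions, batch normalization, and bottleneck layers rather than these hand-rolled dense layers.

```python
import numpy as np

def residual_block(x: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """out = relu(x + W2 @ relu(W1 @ x)).

    The block learns a residual correction to its input rather than a full
    transformation, which keeps gradients flowing through very deep networks.
    """
    h = np.maximum(w1 @ x, 0.0)          # inner transform + ReLU
    return np.maximum(x + w2 @ h, 0.0)   # skip connection, then ReLU

x = np.array([1.0, 2.0])
zeros = np.zeros((2, 2))
# With zero weights the block is the identity on nonnegative inputs,
# which is why residual layers are easy to optimize from initialization.
print(residual_block(x, zeros, zeros))   # [1. 2.]
```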
ContributorsChristner, Drew (Author) / Carter, Lynn (Thesis director) / Ghayekhloo, Samira (Committee member) / Barrett, The Honors College (Contributor) / Computing and Informatics Program (Contributor) / Computer Science and Engineering Program (Contributor)
Created2024-05
Description
The rapid growth of published research has increased the time and energy researchers invest in literature review to stay updated in their field. While existing research tools assist with organizing papers, providing basic summaries, and improving search, there is a need for an assistant that copilots researchers to drive innovation. In response, we introduce buff, a research assistant framework employing large language models to summarize papers, identify research gaps and trends, and recommend future directions based on semantic analysis of the literature landscape, Wikipedia, and the broader internet. We demonstrate buff through a user-friendly chat interface, powered by a citation network encompassing over 5600 research papers, amounting to over 133 million tokens of textual information. buff utilizes this network structure to fetch and analyze factual scientific information semantically. By streamlining the literature review and scientific knowledge discovery process, buff empowers researchers to concentrate their efforts on pushing the boundaries of their fields, driving innovation, and optimizing the scientific research landscape.
ContributorsBalamurugan, Neha (Author) / Arani, Punit (Co-author) / LiKamWa, Robert (Thesis director) / Bhattacharjee, Amrita (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor) / Economics Program in CLAS (Contributor)
Created2024-05
Description
In this thesis, I propose a framework for automatically generating custom orthotic insoles for Intelligent Mobility and ANDBOUNDS. Combined with human quality checks, the components of the framework work together to ensure users receive the highest-quality insoles. Three machine learning models were assembled: the Quality Model, the Meta-point Model, and the Multimodal Model. The Quality Model ensures that user-uploaded foot scans are high quality. The Meta-point Model ensures that the meta-point coordinates assigned to a foot scan are within the tolerance required to align an insole mesh onto the scan. The Multimodal Model uses a customer's foot-pain descriptors together with the foot scan to customize an insole to the customer's ailments. The results demonstrate that this is a viable option for insole creation and has the potential to aid or replace human insole designers.
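The three models act as sequential gates before an insole reaches a human quality check. A schematic sketch of that control flow follows; the function names, tolerance value, and stub models are hypothetical stand-ins (the real components are trained models, not thresholds or lambdas).

```python
def generate_insole(scan, pain_description, quality_model, metapoint_model,
                    multimodal_model, tolerance_mm=2.0):
    """Run a foot scan through the three-stage pipeline, failing fast
    so low-quality inputs never reach insole customization."""
    if not quality_model(scan):
        return None, "rejected: low-quality scan"
    error_mm = metapoint_model(scan)          # meta-point alignment error
    if error_mm > tolerance_mm:
        return None, "rejected: meta-points out of tolerance"
    insole = multimodal_model(scan, pain_description)
    return insole, "ok: ready for human quality check"

# Stub callables stand in for the trained Quality / Meta-point / Multimodal models.
insole, status = generate_insole(
    scan="scan.obj", pain_description="heel pain",
    quality_model=lambda s: True,
    metapoint_model=lambda s: 1.2,
    multimodal_model=lambda s, d: {"arch_support": "high"},
)
print(status)  # ok: ready for human quality check
```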
ContributorsNucuta, Raymond (Author) / Osburn, Steven (Thesis director) / Joseph, Jeshua (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created2024-05
Description
Machine learning continues to grow in applications, and its influence is felt across the world. This paper builds on the foundations of machine learning for sports analysis, and its specific implementations in tennis, by attempting to predict the winner of ATP men’s singles tennis matches. Tennis provides a unique challenge due to the individual nature of singles play and the varying career lengths, experiences, and backgrounds of players from around the globe. Related work has explored prediction with features such as rank differentials, physical characteristics, and past performance. This work expands on those studies by including raw player statistics and relevant environment features. State-of-the-art models such as LightGBM and XGBoost, as well as a standard logistic regression, are trained and evaluated on a dataset containing matches from 1991 to 2023. All models surpassed the baseline, and each has its own strengths and weaknesses. Future work may involve expanding the feature space to include more robust features such as player profiles and Elo ratings, as well as utilizing deep neural networks to better capture past player performance and the context of a given match.
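The logistic-regression baseline reduces to a sigmoid over a weighted sum of match features. A self-contained sketch trained on synthetic data follows; the single feature, rank differential, is one of those mentioned above, but the data itself is made up, so the accuracy is only indicative of the mechanism, not of the paper's results.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic matches: a positive rank differential (opponent ranked worse)
# makes a win more likely; noise keeps the classes overlapping.
rank_diff = rng.uniform(-100, 100, size=n)
win = (rank_diff + rng.normal(0, 30, size=n) > 0).astype(float)

X = np.column_stack([np.ones(n), rank_diff / 100.0])  # bias + scaled feature
w = np.zeros(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain batch gradient descent on the logistic loss.
for _ in range(2000):
    p = sigmoid(X @ w)
    w -= 0.1 * X.T @ (p - win) / n

accuracy = np.mean((sigmoid(X @ w) > 0.5) == win)
print(f"train accuracy: {accuracy:.2f}")
```

LightGBM and XGBoost replace the single linear boundary with ensembles of gradient-boosted trees, which is why they can exploit interactions among the raw player statistics the paper adds.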
ContributorsBandemer, Nathaniel (Author) / De Luca, Gennaro (Thesis director) / Chen, Yinong (Committee member) / Barrett, The Honors College (Contributor) / Dean, W.P. Carey School of Business (Contributor) / Computer Science and Engineering Program (Contributor)
Created2024-05