Barrett, The Honors College at Arizona State University, proudly showcases the work of undergraduate honors students by sharing this collection exclusively with the ASU community.

Barrett accepts high-performing, academically engaged undergraduate students and works with them in collaboration with all of the other academic units at Arizona State University. All Barrett students complete a thesis or creative project, which is an opportunity to explore an intellectual interest and produce an original piece of scholarly research. The thesis or creative project is supervised by and defended before a faculty committee. Students are able to engage with professors who are nationally recognized in their fields and committed to working with honors students. Completing a Barrett thesis or creative project is an opportunity for undergraduate honors students to contribute to the ASU academic community in a meaningful way.

Description

Breast cancer is one of the most common types of cancer worldwide. Early detection and diagnosis are crucial for improving the chances of successful treatment and survival. In this thesis, several machine learning algorithms were evaluated and compared for predicting breast cancer malignancy from diagnostic features extracted from digitized images of breast tissue samples obtained by fine-needle aspiration. Breast cancer diagnosis typically involves a combination of mammography, ultrasound, and biopsy; machine learning algorithms, however, can assist in detection and diagnosis by analyzing large amounts of data and identifying patterns that may not be discernible to the human eye. By using these algorithms, healthcare professionals can potentially detect breast cancer at an earlier stage, leading to more effective treatment and better patient outcomes. The results showed that the gradient boosting classifier performed best, achieving an accuracy of 96% on the test set, indicating that it can be a useful tool for healthcare professionals in the early detection and diagnosis of breast cancer.
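A minimal sketch of the kind of experiment described above, using the Wisconsin Diagnostic Breast Cancer features bundled with scikit-learn (a standard fine-needle aspirate dataset); the split ratio and hyperparameters are illustrative assumptions, not the thesis's exact configuration.

```python
# Sketch: gradient boosting on the Wisconsin Diagnostic Breast Cancer features
# (fine-needle aspirate measurements). Split ratio and hyperparameters are
# illustrative assumptions, not the thesis's exact setup.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

clf = GradientBoostingClassifier(n_estimators=200, learning_rate=0.1, max_depth=3)
clf.fit(X_train, y_train)
print(f"Test accuracy: {accuracy_score(y_test, clf.predict(X_test)):.3f}")
```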

Contributors: Mallya, Aatmik (Author) / De Luca, Gennaro (Thesis director) / Chen, Yinong (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2023-05
Description

This paper explores the intersection of user experience and museums through interactive and immersive exhibits. It discusses the background and history of the art museum and the field of UX, and describes how interactivity and immersion impact visitors and change the exhibit development process. The implications of interactive and immersive exhibits for the museum space are detailed, including social media, the authenticity of objects, and the commodification of experience. It is argued that, despite the drawbacks of interactivity and immersion in the museum, the potential benefits of audience engagement and social connection make them worth pursuing.

Contributors: Hong, Harrison (Author) / Boyce-Jacino, Katherine (Thesis director) / Carrasquilla, Christina (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2023-05
Description

The purpose of this thesis is to contextualise Hindsight, a sustainability-focused, historically based city-simulation and resource-management game built by the author. The game and game engine were coded from scratch using the C# programming language and the Unity game development suite of tools. The game focuses on the management of the city of London in two time periods, one set in 1850 and the other in 2050. Both versions of the city are divided into 21 zones, each of which can be managed by the player through the construction, upgrading, or destruction of various buildings within the zone. The player must manage both the city's resources and the resources of the environment upon which the city depends in order to bring about a more sustainable future and bring the 2050-era version of the city back from the brink of environmental devastation. Along the way, the player must address the cultural views of the society they are managing to ensure their reforms will be accepted, and can also see those views slowly change over time. The goal of the game is to provide an interactive learning experience covering both the history of London and the importance of making sustainable choices.
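The zone-and-resource loop described above can be illustrated with a small sketch. The game itself is written in C# for Unity, so the Python below is only a hypothetical model of the bookkeeping; all class and field names are invented for illustration.

```python
# Hypothetical sketch of the zone/resource bookkeeping described above.
# The actual game is implemented in C# with Unity; names here are invented.
from dataclasses import dataclass, field

@dataclass
class Building:
    name: str
    upkeep: int        # city resources consumed per turn
    emissions: int     # environmental cost per turn

@dataclass
class Zone:
    buildings: list = field(default_factory=list)

    def construct(self, building: Building):
        self.buildings.append(building)

@dataclass
class City:
    year: int                                # e.g. the 1850 or 2050 era
    resources: int = 1000
    environment: int = 1000
    zones: list = field(default_factory=lambda: [Zone() for _ in range(21)])

    def advance_turn(self):
        for zone in self.zones:
            for b in zone.buildings:
                self.resources -= b.upkeep
                self.environment -= b.emissions

london_2050 = City(year=2050)
london_2050.zones[0].construct(Building("coal plant", upkeep=5, emissions=20))
london_2050.advance_turn()
print(london_2050.resources, london_2050.environment)
```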

Contributors: Meling, Kristian (Author) / Jakubczak, Laura (Thesis director) / Selgrad, Justin (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor) / School of Historical, Philosophical & Religious Studies (Contributor)
Created: 2023-05
Description

This research paper explores the effects of data variance on the quality of artificial intelligence image generation models and the impact on a viewer's perception of the generated images. The study examines how the quality and accuracy of the images produced by these models are influenced by factors such as the size, labeling, and format of the training data. The findings suggest that reducing the size of the training dataset leads to a decrease in image coherence, indicating that model output degrades as the training dataset shrinks. The study also makes surprising discoveries regarding image generation models trained on highly varied datasets. In addition, the study includes a survey in which participants were asked to rate the subjective realism of the generated images on a scale from 1 to 5 and to sort the images into their respective classes. The findings emphasize the importance of dataset variance and size as critical aspects of improving image generation models, as well as the implications of using AI technology in the future.
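One way to probe the dataset-size effect reported above is to subsample the training set at several fractions and retrain the generator each time. The sketch below shows only that subsampling scaffold; the two stub functions stand in for the thesis's actual model training and image-quality metric, which are not specified in the abstract.

```python
# Sketch of a dataset-size ablation: train on progressively smaller subsets and
# record an image-quality score. The two stubs stand in for the thesis's actual
# generator training and evaluation, which are not described in the abstract.
import random

def train_generator(subset):
    return {"n_train": len(subset)}      # stub: stands in for training an image model

def score_images(model):
    return model["n_train"] ** 0.5       # stub: stands in for a coherence/realism metric

def run_size_ablation(dataset, fractions=(1.0, 0.5, 0.25, 0.1), seed=0):
    rng = random.Random(seed)
    results = {}
    for frac in fractions:
        subset = rng.sample(dataset, int(len(dataset) * frac))
        results[frac] = score_images(train_generator(subset))
    return results

print(run_size_ablation(list(range(10_000))))
```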

Contributors: Punyamurthula, Rushil (Author) / Carter, Lynn (Thesis director) / Sarmento, Rick (Committee member) / Barrett, The Honors College (Contributor) / School of Sustainability (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2023-05
Description

With the increasing popularity of AI and machine learning, human-AI teaming has a wide range of applications in transportation, healthcare, the military, manufacturing, and people's everyday lives. Measurement of human-AI team effectiveness is essential for guiding the design of AI and evaluating human-AI teams. To develop suitable measures of human-AI teamwork effectiveness, we created a search and rescue task environment in Minecraft, in which Artificial Social Intelligence (ASI) agents inferred human teams' mental states, predicted their actions, and intervened to improve their teamwork (Huang et al., 2022). As a comparison, we also collected data from teams with a human advisor and with no advisor, and investigated the effects of human advisor interventions on team performance. In this study, we examined intervention data and compliance in a human-AI teaming experiment to gain insights into the efficacy of advisor interventions; the analysis categorized the types of interventions provided by a human advisor and the corresponding compliance. The findings of this paper are a preliminary step toward a comprehensive study of ASI agents, in which results from the human advisor study can provide valuable comparisons and insights. Future research will focus on analyzing ASI agents' interventions to determine their effectiveness, identifying the best measurements of human-AI teamwork effectiveness, and facilitating the development of ASI agents.
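A compliance analysis like the one described can be sketched as a grouped rate per intervention category. The categories and column names below are hypothetical placeholders, not the study's actual coding scheme or data.

```python
# Sketch: compliance rate by advisor-intervention category. The categories and
# column names are hypothetical, not the study's coding scheme or data.
import pandas as pd

interventions = pd.DataFrame({
    "category": ["prioritize victims", "coordinate roles", "share map info",
                 "coordinate roles", "prioritize victims"],
    "complied": [True, False, True, True, False],
})

compliance_by_category = (
    interventions.groupby("category")["complied"].mean().rename("compliance_rate")
)
print(compliance_by_category)
```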

Contributors: He, Xiaoyi (Author) / Huang, Lixiao (Thesis director) / Cooke, Nancy (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2023-05
Description

Climate is a critical determinant of agricultural productivity, and the ability to accurately predict this productivity is necessary to provide guidance regarding food security and agricultural management. Previous predictions vary in approach due to the myriad of factors influencing agricultural productivity but generally suggest long-term declines in productivity and agricultural land suitability under climate change. In this paper, I relate predicted climate changes to yield for three major United States crops, namely corn, soybeans, and wheat, using a moderate emissions scenario. Adopting a data-driven approach, I used three machine learning methods, random forest (RF), extreme gradient boosting (XGB), and artificial neural networks (ANN), for comparative analysis and an ensemble methodology. I omitted the western US due to the region's susceptibility to water stress and the prevalence of artificial irrigation as a means to compensate for dry conditions. Considering climate alone, the model results suggest an ensemble mean decline in crop yield of 23.4% for corn, 19.1% for soybeans, and 7.8% for wheat between 2017 and 2100. These results emphasize potential negative impacts of climate change on the current agricultural industry as a result of shifting bioclimatic conditions.
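The comparative/ensemble setup can be sketched as averaging predictions from the three model families named above; the feature and target arrays, hyperparameters, and the assumption that the xgboost package is available are placeholders, not the thesis's configuration.

```python
# Sketch: ensemble-mean yield prediction from RF, XGB, and ANN regressors.
# Features, targets, and hyperparameters are placeholders, not the thesis's setup.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from xgboost import XGBRegressor  # assumes the xgboost package is installed

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))     # placeholder climate predictors (e.g. temperature, precipitation)
y = X @ rng.normal(size=6) + rng.normal(scale=0.1, size=500)   # placeholder yields

models = [
    RandomForestRegressor(n_estimators=200, random_state=0),
    XGBRegressor(n_estimators=200, learning_rate=0.1),
    MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
]
for m in models:
    m.fit(X, y)

# Ensemble mean: average the three models' predictions
ensemble_mean = np.mean([m.predict(X) for m in models], axis=0)
print(ensemble_mean[:5])
```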

Contributors: Swarup, Shray (Author) / Eikenberry, Steffen (Thesis director) / Mahalov, Alex (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2023-05
Description

For my Thesis Project, I worked to operationalize an algorithmic trading application called Trading Dawg. Over the year, I was able to implement several analysis models, including accuracy, performance, volume, and hyperparameter analysis. With these improvements, we are in a strong position to create valuable tools in the algorithmic trading space.

Contributors: Payne, Colton (Author) / Shakarian, Paulo (Thesis director) / Brandt, William (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor) / Department of Finance (Contributor)
Created: 2023-05
Description

Machine learning has a nearly infinite number of applications, whose potential has yet to be fully harnessed and realized. This thesis will outline two domains in which machine learning can be utilized and will demonstrate the execution of one methodology in each. The first domain is self-play in video games, where a neural model that teaches a computer to complete a level of Super Mario World (1990) on its own will be described. The model in question was inspired by the academic paper “Evolving Neural Networks through Augmenting Topologies”, written by Kenneth O. Stanley and Risto Miikkulainen of the University of Texas at Austin; the model actually described is from YouTuber SethBling of the California Institute of Technology. The second domain is cybersecurity, where an algorithm is described from the academic paper “Process Based Volatile Memory Forensics for Ransomware Detection”, written by Asad Arfeen, Muhammad Asim Khan, Obad Zafar, and Usama Ahsan. This algorithm utilizes Python and the Volatility framework to detect malicious software in an infected system.
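The NEAT idea referenced above (evolving both connection weights and network topology under a fitness-driven selection loop) can be sketched in a few lines. This is a toy illustration only: it is not SethBling's script or the original NEAT implementation, and it omits speciation, crossover, and the game emulator, using a stand-in fitness function instead.

```python
# Toy sketch of a NEAT-style loop: genomes encode connections, mutation can
# perturb weights or add structure ("augmenting topologies"), and selection
# keeps the fittest genomes. Illustrative only; not SethBling's model or full NEAT.
import random

def new_genome():
    # connections: (input_idx, output_idx) -> weight; start minimal, as NEAT does
    return {(i, 0): random.uniform(-1, 1) for i in range(3)}

def mutate(genome):
    child = dict(genome)
    if random.random() < 0.8:                        # perturb an existing weight
        key = random.choice(list(child))
        child[key] += random.gauss(0, 0.5)
    else:                                            # add a new connection (topology growth)
        child[(random.randint(0, 2), random.randint(1, 4))] = random.uniform(-1, 1)
    return child

def fitness(genome):
    # stand-in for "distance reached in the level"; real use would run the game
    return sum(genome.values())

population = [new_genome() for _ in range(20)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]
print("best fitness:", max(fitness(g) for g in population))
```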

Contributors: Ballecer, Joshua (Author) / Yang, Yezhou (Thesis director) / Luo, Yiran (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2023-05
Description

Through my work with the Arizona State University Blockchain Research Lab (BRL) and JennyCo, one of the first HIPAA-compliant decentralized exchanges for healthcare information (HCI), I have had the opportunity to explore a unique cross-section of some of the most up-and-coming DLTs, including both DAGs and blockchains. During this research, four major technologies (including JennyCo's own systems) presented themselves as prime candidates for the comparative analysis of two models for implementing JennyCo's system architecture for the monetization of healthcare information exchanges (HIEs). These four technologies and their underlying mechanisms are explored throughout this paper and are listed with brief definitions as follows.

Polygon: "Polygon is a 'layer two' or 'sidechain' scaling solution that runs alongside the Ethereum blockchain. MATIC is the network's native cryptocurrency, which is used for fees, staking, and more" [8]. Polygon is the scalable layer involved in the Layer-2 Scalability Protocol (L2SP) architecture.

Ethereum: "Ethereum is a decentralized blockchain platform that establishes a peer-to-peer network that securely executes and verifies application code, called smart contracts" [9]. This foundational Layer-1 runs thousands of nodes and creates a unique decentralized ecosystem governed by Turing-complete automated programs. Ethereum is the foundational layer involved in the L2SP.

Constellation: A novel Layer-0, data-centric peer-to-peer network that utilizes the "Hypergraph Transfer Protocol or HGTP, a DLT known as a [DAG] protocol with a novel reputation-based consensus model called Proof of Reputable Observation (PRO). Hypergraph is a feeless decentralized network that supports the transfer of $DAG cryptocurrency" [10].

JennyCo Protocol: Acts as a HIPAA-compliant decentralized HIE by allowing consumers, big businesses, and brands to access and exchange user health data on a secure, interoperable, and accessible platform via DLT. The JennyCo Protocol implements utility tokens to reward buyers and sellers for exchanging data. Its protocol nature comes from its DLT implementation, which governs the functioning of on-chain actions (e.g., smart contracts); in this case, these actions consist of secure and transparent health data exchange and monetization to reconstitute data ownership to those who generate that data [11].

Through direct experience working closely with the companies behind these technologies, I have been exposed to the benefits and deficits of each technology and its corresponding approach. In this paper, I use that experience to explore two distributed ledger architecture protocols in order to determine the more effective model for implementing JennyCo's health data exchange. I begin with an exploration of blockchain and directed acyclic graph (DAG) technologies to better understand their innate architectures and features. I then take an in-depth look at layered protocols and at healthcare data in the form of EHRs. Additionally, I address the main challenges EHRs and HIEs face, to present a deeper understanding of the problems JennyCo is attempting to solve.
Finally, I argue for my hypothesis: the Hypergraph Transfer Protocol (HGTP) model by Constellation presents significant advantages in scalability, interoperability, and external data security over the Layer-2 Scalability Protocol (L2SP) used by Polygon and Ethereum in implementing the JennyCo protocol. This is done through a thorough breakdown of each protocol, along with an analysis of relevant criteria including, but not limited to, security, interoperability, and scalability. In doing so, I hope to determine the best framework for running JennyCo's HIE protocol.
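The structural contrast at the heart of this comparison, a linear chain in which each block references a single parent versus a DAG in which each transaction can reference several, can be sketched as follows. This is a conceptual illustration only, not the actual data model of Ethereum, Polygon, Constellation, or the JennyCo Protocol.

```python
# Conceptual sketch of the structural difference discussed above: a blockchain
# block references one parent, while a DAG vertex can reference several.
# Illustrative only; not any of the named networks' actual data models.
import hashlib
from dataclasses import dataclass

def digest(*parts: str) -> str:
    return hashlib.sha256("|".join(parts).encode()).hexdigest()

@dataclass
class ChainBlock:
    payload: str
    parent_hash: str                    # exactly one parent: a linear chain

    @property
    def hash(self) -> str:
        return digest(self.payload, self.parent_hash)

@dataclass
class DagVertex:
    payload: str
    parent_hashes: tuple = ()           # zero or more parents: a directed acyclic graph

    @property
    def hash(self) -> str:
        return digest(self.payload, *self.parent_hashes)

genesis = ChainBlock("genesis", parent_hash="0" * 64)
block_1 = ChainBlock("tx batch 1", parent_hash=genesis.hash)

v_a = DagVertex("tx A")
v_b = DagVertex("tx B")
v_c = DagVertex("tx C", parent_hashes=(v_a.hash, v_b.hash))   # confirms two prior txs
print(block_1.hash[:12], v_c.hash[:12])
```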

Contributors: Van Bussum, Alexander (Author) / Boscovic, Dragan (Thesis director) / Grando, Adela (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2023-05
Description

For my Spring 2022-23 Barrett Honors College creative project, I designed and created my own analog game. The game, Plash, is a tile-management game for 2-4 players in which players collect tiles and manipulate the board to complete goals and win. The paper for this project covers the inspirations and research behind the game's design, the design journey, and detailed instructions on how to play.

Contributors: Davis, Jordan (Author) / Loebenberg, Abby (Thesis director) / Mack, Robert (Committee member) / Barrett, The Honors College (Contributor) / Computing and Informatics Program (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2023-05