The classical double copy maps exact solutions of general relativity to exact solutions of U(1) Yang-Mills theory and suggests a hitherto unknown connection between gravity and gauge theory. In this thesis I study three problems using the Kerr-Schild and Weyl formulations of the classical double copy. Using the Kerr-Schild double copy, I construct the single copy of a rotating nonsingular black hole and analyze its horizon structure to probe the relationship between the presence of horizons on the gravity side and the single copy field on the gauge theory side. In the second problem I describe the mapping between the surface gravity of static spherically symmetric black holes and the force on a test particle due to the single copy field of the black hole. I also describe potential routes to extending this map to rotating black holes. Finally, inspired by the extended Weyl double copy for spacetimes possessing sources, I reinterpret the single copy of the Taub-NUT metric as comprising two terms, each sourced by a separate parameter (the mass and the NUT charge).
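The Kerr-Schild construction referred to in this abstract can be sketched with the standard relations from the classical double copy literature; the conventions below (flat background, null geodesic vector) are the usual ones and are assumed rather than taken from the thesis itself:

```latex
% Kerr-Schild form: flat background plus a scalar profile times a null vector
g_{\mu\nu} = \eta_{\mu\nu} + \varphi\, k_\mu k_\nu ,
\qquad k^\mu k_\mu = 0 .

% Single copy: a gauge field built from the same Kerr-Schild data,
% which for stationary solutions satisfies the Maxwell equations
A_\mu = \varphi\, k_\mu .

% Zeroth copy: the scalar profile itself, satisfying a flat-space
% wave equation away from sources
\Box\, \varphi = 0 \quad \text{(in source-free regions)} .
```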
The development of novel aqueous cross-coupling strategies has emerged as a rapidly expanding area of research within organic synthesis. However, many of these cross-coupling reactions require the pre-formation of an organohalide substrate, which often involves toxic halogenating reagents and harsh reaction conditions. This work details the development of a tandem halogenation/cross-coupling procedure in which an electron-rich arene or heteroarene is brominated through an enzymatic halogenation reaction catalyzed by a vanadium-dependent haloperoxidase (VHPO) and then used without workup in a subsequent aqueous Suzuki cross-coupling reaction. This sequential process allows the arylated product to be accessed in a single pot from the unfunctionalized substrate via the brominated intermediate. Optimization of the enzymatic halogenation step was performed for three different substrates, resulting in the discovery of conditions for the bromination of 2,3-dihydrobenzofuran, chromane, and anisole in high yield (>95%). The scope of the reaction was then investigated for a range of electron-rich arene and heteroarene substrates. Next, Suzuki cross-coupling conditions were developed in a reaction mixture of pH 5 citrate buffer and acetonitrile and applied to the arylation of 2,3-dihydrobenzofuran utilizing an array of arylboronic acid coupling partners. Finally, the two procedures were combined to perform a tandem enzymatic halogenation/aqueous Suzuki cross-coupling of 2,3-dihydrobenzofuran to give the arylated product in 74% yield.
The focus of my honors thesis is to find ways to use deep learning in tandem with tools from statistical mechanics to derive new ways to solve problems in biophysics. More specifically, I have been interested in finding transition pathways between two known states of a biomolecule, because understanding the mechanisms by which proteins fold and ligands bind is crucial to creating new medicines and understanding biological processes. In this thesis, I work with members of the Singharoy lab to develop a formulation that uses reinforcement learning and sampling-based robotics planning to derive low free energy transition pathways between two known states. Our formulation uses Jarzynski's equality and the stiff-spring approximation to obtain point estimates of energy and to construct an informed path search with atomistic resolution. At the core of this framework, for the first time, we use policy-driven adaptive steered molecular dynamics (SMD) to control our molecular dynamics simulations. We show that both the reinforcement learning (RL) and robotics planning realizations of the framework can solve for pathways on toy analytical surfaces and on alanine dipeptide.
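The Jarzynski point estimate mentioned in this abstract can be sketched in a few lines: the equality ⟨exp(−W/kT)⟩ = exp(−ΔF/kT) turns an ensemble of nonequilibrium work samples (e.g. from SMD pulls) into a free-energy difference. This is a minimal, generic illustration of the estimator, not the thesis's actual pipeline; the function name and a log-sum-exp stabilization are my own choices:

```python
import numpy as np

def jarzynski_free_energy(work_samples, kT=1.0):
    """Point estimate of Delta F from nonequilibrium work samples W
    via Jarzynski's equality:  <exp(-W/kT)> = exp(-Delta F / kT).
    Computes -kT * log(mean(exp(-W/kT))) using log-sum-exp for stability."""
    w = np.asarray(work_samples, dtype=float)
    a = -w / kT
    m = a.max()                               # shift to avoid overflow
    log_mean = m + np.log(np.exp(a - m).mean())
    return -kT * log_mean

# For a reversible pull, every work sample equals Delta F,
# so the estimator recovers that value exactly.
print(jarzynski_free_energy([2.0, 2.0, 2.0]))  # -> 2.0
```

By Jensen's inequality the estimate never exceeds the mean work, which is a quick sanity check on any implementation.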
For my Honors Thesis, I created an artificial intelligence project to predict fantasy NFL football points for players and team defenses. I built a TensorFlow Keras regression model, a Flask API that serves the model, and a Django try-it page that lets the user run it. These services are hosted on ASU's AWS service. The Flask API actively gathers data from Pro-Football-Reference and then calculates the fantasy points. For example, if the current year is 2022, then for each player the model trains on all available data from 2000 to 2020, tests on 2021 data, and predicts for 2022. The Django website asks the user to input the current year; clicking the submit button runs the AI model and the process described above. Next, the user enters a player's name for the point prediction, and the website displays the last five rows, with four showing the player's previous fantasy points and the fifth showing the prediction.
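The fantasy-point calculation step described above can be sketched as a simple scoring function. The abstract does not give the exact formula the Flask API uses, so the rules below are an assumption based on common standard (non-PPR) fantasy scoring:

```python
def fantasy_points(pass_yds=0, pass_td=0, interceptions=0,
                   rush_yds=0, rush_td=0,
                   rec_yds=0, rec_td=0, fumbles_lost=0):
    """Standard (non-PPR) fantasy scoring -- assumed rules, since the
    abstract does not list the exact formula used by the API:
    1 pt per 25 passing yds, 4 per passing TD, -2 per INT,
    1 pt per 10 rushing/receiving yds, 6 per rushing/receiving TD,
    -2 per fumble lost."""
    return (pass_yds / 25.0 + pass_td * 4 - interceptions * 2
            + rush_yds / 10.0 + rush_td * 6
            + rec_yds / 10.0 + rec_td * 6
            - fumbles_lost * 2)

# Example QB game: 250 pass yds (10) + 2 pass TDs (8) - 1 INT (2)
print(fantasy_points(pass_yds=250, pass_td=2, interceptions=1))  # -> 16.0
```

In the described pipeline, a function like this would label each historical season row before the regression model trains on the 2000-2020 data.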
This thesis is an exploration of my imaginary world through a short narrative with a focus on placemaking in fiction. The narrative follows Dengar, a civil servant estranged from the central government, as he investigates disappearances occurring at the edges of the empire, uncovering secrets related to the empire's past and the past of the conquered people of Thron. He must navigate a bitter, cold landscape and a dangerous resistance group as he learns more about the real reason he was sent there. Schemes are uncovered and foiled as he makes his way into the core base of the resistance, a towering mountain called Diran. Following the narrative, I explain my inspirations and analyze my narrative from the perspective of placemaking, referring to placemaking scholars such as Basso and Whitridge.
Visualizations can be an incredibly powerful tool for communicating data. Data visualizations can summarize large data sets into one view, allow easy comparisons between variables, and show trends or relationships in the data that cannot be seen by looking at the raw data. Empirical information, and by extension data visualizations, are often seen as objective and honest. Unfortunately, data visualizations are susceptible to errors that may make them misleading. When visualizations are made for public audiences that do not have the statistical training or subject-matter expertise to identify misleading or misrepresented data, these errors can have very negative effects. There is a good deal of research on guidelines for creating, and systems for evaluating, data visualizations. Many of the existing guidelines take contradictory approaches to designing visuals, or they stress that best practices depend on the context. The goal of this work is to define guidelines for making visualizations in the context of a public audience and to show how context-specific guidelines can be used to effectively evaluate and critique visualizations. The guidelines created here are a starting point to show that there is a need for best practices specific to public media. Data visualization for the public lies at the intersection of statistics, graphic design, journalism, cognitive science, and rhetoric. Because of this, future conversations to create guidelines should include representatives of all these fields.