Filtering by
- All Subjects: MATLAB
- Creators: School of Mathematical and Statistical Sciences
- Creators: Woodbury, Neal
- Member of: Barrett, The Honors College Thesis/Creative Project Collection
- Resource Type: Text
This thesis attempts to explain Everettian quantum mechanics from the ground up, so that readers with little to no experience in quantum physics can understand it. First, we introduce the history of quantum theory and some of the concepts that make up the framework of quantum physics. Through these concepts, we show why interpretations are necessary to map the quantum world onto our classical world. We then introduce the Copenhagen interpretation and explain how many-worlds differs from it. From there, we dive into the concepts of entanglement and decoherence, explaining how worlds branch in an Everettian universe and how such a universe can nonetheless appear as the classical world we observe. Next, we attempt to answer common questions about many-worlds and discuss whether there are philosophical ramifications to believing such a theory. Finally, we look at whether the many-worlds interpretation can be proven, and why one might choose to believe it.
The purpose of this paper is to provide an analysis of entanglement and the particular problems it poses for some physicists. In addition to looking at the history of entanglement and non-locality, this paper will use the Bell test, which measures the correlated behavior of electron pairs whose combined internal angular momentum is zero, as a means of demonstrating how entanglement works. This paper will go over Dr. Bell's famous inequality, which shows why entanglement cannot be explained by traditional, local processes. Entanglement will be viewed initially through the Copenhagen Interpretation, but this paper will also look at two particular models of quantum mechanics, de Broglie-Bohm theory and Everett's Many-Worlds Interpretation, and observe how they explain the behavior of spin and entangled particles compared to the Copenhagen Interpretation.
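For reference, Bell's bound is commonly quoted in its CHSH form; the sketch below is a standard statement of that variant (the abstract does not specify which form the paper derives). For detector settings a, a' and b, b', with E(a,b) denoting the correlation between spin measurements along those directions:

```latex
% Any local hidden-variable theory must obey the CHSH bound
\[
  \bigl\lvert E(a,b) - E(a,b') + E(a',b) + E(a',b') \bigr\rvert \le 2 .
\]
% For the spin-singlet state (total angular momentum zero),
% quantum mechanics predicts E(a,b) = -\cos\theta_{ab}, where
% \theta_{ab} is the angle between the settings; at suitable angles
% the left-hand side reaches 2\sqrt{2}, violating the bound.
```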
The field of biomedical research relies on knowledge of binding interactions between various proteins of interest to create novel molecular targets for therapeutic purposes. While many of these interactions remain a mystery, knowledge of these properties and interactions could have significant medical applications for understanding cell signaling and immunological defenses. Furthermore, there is evidence that machine learning and peptide microarrays can be used to make reliable predictions of where proteins could interact with each other without definitive knowledge of the interactions. In this case, a neural network was used to predict the unknown binding interactions of TNFR2 onto LT-α and TRAF2, and PD-L1 onto CD80, based on the binding data from a sampling of protein-peptide interactions on a microarray. The accuracy and reliability of these predictions will rely on future research to confirm the interactions of these proteins, but the knowledge from these methods and predictions could have a future impact on rational and structure-based drug design.
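The general approach described above can be sketched as follows. This is a minimal illustration of the technique, not the thesis's actual model: the peptide encoding, network architecture, and training data here are all hypothetical, with synthetic "binding" values standing in for real microarray measurements.

```python
# Sketch: a small feed-forward network that maps peptide sequences to
# predicted binding signals, in the spirit of the microarray approach above.
# All data here are synthetic; the encoding and architecture are assumptions.
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def encode(peptide):
    """Encode a peptide as a 20-dim amino-acid composition vector."""
    v = np.zeros(len(AMINO_ACIDS))
    for aa in peptide:
        v[AMINO_ACIDS.index(aa)] += 1.0
    return v / max(len(peptide), 1)

rng = np.random.default_rng(0)

# Hypothetical training set: random 12-mer peptides with synthetic,
# noisy binding signals generated from a hidden weight vector.
peptides = ["".join(rng.choice(list(AMINO_ACIDS), size=12)) for _ in range(200)]
X = np.array([encode(p) for p in peptides])
true_w = rng.normal(size=20)
y = X @ true_w + 0.05 * rng.normal(size=len(X))

# One hidden layer (tanh), trained by gradient descent on squared error.
W1 = rng.normal(scale=0.1, size=(20, 16))
b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=16)
b2 = 0.0

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

lr = 0.1
for _ in range(2000):
    h, pred = forward(X)
    err = pred - y
    # Backpropagate the squared-error gradient through both layers.
    gW2 = h.T @ err / len(X)
    gb2 = err.mean()
    dh = np.outer(err, W2) * (1 - h**2)
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2
    b2 -= lr * gb2
    W1 -= lr * gW1
    b1 -= lr * gb1

_, pred = forward(X)
mse = float(np.mean((pred - y) ** 2))
```

In practice, the predictions for unmeasured protein-peptide pairs would then need experimental confirmation, as the abstract notes.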