Matching Items (15)

Description

This thesis addresses certain quantum aspects of the event horizon using the AdS/CFT correspondence. This correspondence is profound since it describes a quantum theory of gravity in d + 1 dimensions from the perspective of a dual quantum field theory living in d dimensions. We begin by considering Rindler space, which is the part of Minkowski space seen by an observer with constant proper acceleration. Because it has an event horizon, Rindler space has been studied in great detail within the context of quantum field theory. However, a quantum gravitational treatment of Rindler space is handicapped by the fact that quantum gravity in flat space is poorly understood. By contrast, quantum gravity in anti-de Sitter space (AdS) is relatively well understood via the AdS/CFT correspondence. Taking this cue, we construct Rindler coordinates for AdS (Rindler-AdS space) in d + 1 spacetime dimensions. In three spacetime dimensions, we find novel one-parameter families of stationary vacua labeled by a rotation parameter β. The interesting thing about these rotating Rindler-AdS spaces is that they possess an observer-dependent ergoregion in addition to an event horizon. Turning next to the application of the AdS/CFT correspondence to Rindler-AdS space, we posit that the two Rindler wedges in AdS_(d+1) are dual to an entangled conformal field theory (CFT) that lives on two boundaries with geometry R × H^(d-1). Specializing to three spacetime dimensions, we derive the thermodynamics of Rindler-AdS space using the boundary CFT. We then probe the causal structure of the spacetime by sending in a time-like source and observe that the CFT “knows” when the source has fallen past the Rindler horizon. We conclude by proposing an alternate foliation of Rindler-AdS which is dual to a CFT living in de Sitter space. Towards the end, we consider the concept of weak measurements in quantum mechanics, wherein the measuring instrument is weakly coupled to the system being measured.
We consider such measurements in the context of two examples, viz. the decay of an excited atom and the tunneling of a particle trapped in a well, and discuss the salient features of such measurements.
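The thermal character of the Rindler horizon discussed above is conventionally expressed through the temperature registered by a uniformly accelerated observer. As a point of reference (the standard flat-space Unruh result, not the thesis's Rindler-AdS generalization):

```latex
T_{\mathrm{Unruh}} = \frac{\hbar\, a}{2\pi c\, k_B},
```

where a is the observer's proper acceleration. The boundary-CFT derivation of Rindler-AdS thermodynamics in the thesis plays the analogous role for the Rindler-AdS horizon.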
Contributors: Samantray, Prasant (Author) / Parikh, Maulik (Thesis advisor) / Davies, Paul (Committee member) / Vachaspati, Tanmay (Committee member) / Easson, Damien (Committee member) / Alarcon, Ricardo (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

This thesis explores different aspects of higher curvature gravity. The "membrane paradigm" of black holes in Einstein gravity is extended to black holes in f(R) gravity, and it is shown that the higher curvature effects of f(R) gravity cause the membrane fluid to become non-Newtonian. Next, a modification of the null energy condition in gravity is provided. The purpose of the null energy condition is to filter out ill-behaved theories containing ghosts. Conformal transformations, which are simple redefinitions of the spacetime, introduce serious violations of the null energy condition. This violation is shown to be spurious, and a prescription for obtaining a modified null energy condition, based on the universality of the second law of thermodynamics, is provided. The thermodynamic properties of black holes are further explored using mergers of extremal black holes whose horizon entropy has topological contributions coming from the higher curvature Gauss-Bonnet term. The analysis refutes the prevalent belief in the literature that the second law of black hole thermodynamics is violated in the presence of the Gauss-Bonnet term in four dimensions. Subsequently, a specific class of higher derivative scalar field theories called galileons is obtained from a Kaluza-Klein reduction of Gauss-Bonnet gravity. Galileons are null energy condition violating theories which lead to violations of the second law of thermodynamics of black holes. These higher derivative scalar field theories, which are non-minimally coupled to gravity, require the development of a generalized method for obtaining the equations of motion. Utilizing this generalized method, it is shown that the inclusion of the Gauss-Bonnet term makes the theory of gravity higher derivative, which makes it difficult to make any statements about the connection between the violation of the second law of thermodynamics and the galileon fields.
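For reference, the null energy condition discussed above is conventionally stated as

```latex
T_{\mu\nu}\, k^{\mu} k^{\nu} \ge 0 \quad \text{for every null vector } k^{\mu},
```

which, via Einstein's equations, is equivalent to R_{μν} k^μ k^ν ≥ 0. A conformal transformation, g̃_{μν} = Ω² g_{μν}, does not in general preserve this inequality; that is the spurious violation the modified condition is designed to address.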
Contributors: Chatterjee, Saugata (Author) / Parikh, Maulik K (Thesis advisor) / Easson, Damien (Committee member) / Davies, Paul (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

The nucleon resonance spectrum consists of many overlapping excitations. Polarization observables are an important tool for understanding and clarifying these spectra. While there is a large database of differential cross sections for such processes, very few data exist for polarization observables. A program of double polarization experiments has been conducted at Jefferson Lab using a tagged polarized photon beam and a frozen spin polarized target (FROST). The results presented here were taken during the first running period of FROST using the CLAS detector at Jefferson Lab with photon energies ranging from 329 MeV to 2.35 GeV. Data are presented for the E polarization observable for eta meson photoproduction on the proton from threshold (W = 1500 MeV) to W = 1900 MeV. Comparisons are made to the partial wave analyses of SAID and Bonn-Gatchina, along with the isobar analysis of eta-MAID. These results will help distinguish between current theoretical predictions and refine future theories.
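For context, the double-polarization observable E measured here is the standard helicity asymmetry formed with a circularly polarized beam on a longitudinally polarized target:

```latex
E = \frac{\sigma_{1/2} - \sigma_{3/2}}{\sigma_{1/2} + \sigma_{3/2}},
```

where σ_{1/2} and σ_{3/2} are the cross sections for total helicity 1/2 (beam and target spins antiparallel) and 3/2 (parallel), respectively.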
Contributors: Morrison, Brian (Author) / Ritchie, Barry (Thesis advisor) / Dugger, Michael (Committee member) / Shovkovy, Igor (Committee member) / Davies, Paul (Committee member) / Alarcon, Ricardo (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

The longstanding issue of how much time it takes a particle to tunnel through quantum barriers is discussed; in particular, the phenomenon known as the Hartman effect is reviewed. A calculation of the dwell time for two successive rectangular barriers in the opaque limit is given and the result depends on the barrier widths and hence does not lead to superluminal tunneling or the Hartman effect.
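The saturation at issue can already be seen in the single-barrier case. The following sketch (illustrative only, in units ħ = m = 1; not the thesis's two-barrier dwell-time calculation) computes the Wigner phase time from the standard transmission phase of a rectangular barrier and shows it becoming independent of the width L in the opaque limit, which is the Hartman effect:

```python
import numpy as np

# For a rectangular barrier of height V0 on [0, L] and energy E < V0, the
# transmission phase (with the free propagation phase k*L removed) is
#   phi(E) = -arctan( ((kappa^2 - k^2) / (2 k kappa)) * tanh(kappa L) ),
# with k = sqrt(2E) and kappa = sqrt(2(V0 - E)).  The Wigner phase time is
# tau = d(phi)/dE, estimated here by a central finite difference.

def phase_time(E, L, V0=1.0, h=1e-6):
    def phi(energy):
        k = np.sqrt(2.0 * energy)
        kappa = np.sqrt(2.0 * (V0 - energy))
        return -np.arctan((kappa**2 - k**2) / (2.0 * k * kappa) * np.tanh(kappa * L))
    return (phi(E + h) - phi(E - h)) / (2.0 * h)

# Once tanh(kappa * L) ~ 1 (the opaque limit), tau no longer depends on L.
tau_8 = phase_time(0.3, 8.0)
tau_12 = phase_time(0.3, 12.0)
```

Doubling the barrier width barely changes the phase time, in contrast with the two-barrier dwell time computed in the thesis, which retains its dependence on the widths.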
Contributors: McDonald, Scott (Author) / Davies, Paul (Thesis director) / Comfort, Joseph (Committee member) / McCartney, M. R. (Committee member) / Barrett, The Honors College (Contributor)
Created: 2009-05
Description

Despite the 40-year war on cancer, very limited progress has been made in developing a cure for the disease. This failure has prompted the reevaluation of the causes and development of cancer. One resulting model, coined the atavistic model of cancer, posits that cancer is a default phenotype of the cells of multicellular organisms which arises when the cell is subjected to an unusual amount of stress. Since this default phenotype is similar across cell types and even organisms, it seems it must be an evolutionarily ancestral phenotype. We take a phylostratigraphical approach, but systematically add species divergence time data to estimate gene ages numerically and use these ages to investigate the ages of genes involved in cancer. We find that ancient disease-recessive cancer genes are significantly enriched for DNA repair and SOS activity, which seems to imply that a core component of cancer development is not the regulation of growth, but the regulation of mutation. Verification of this finding could drastically improve cancer treatment and prevention.
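The gene-age estimation idea described above can be sketched as follows (hypothetical divergence times and species for illustration; real analyses use ortholog databases and calibrated phylogenies): a gene's age is taken to be the divergence time of the most distantly related species in which an ortholog is detected.

```python
# Hypothetical divergence times from human, in millions of years (MY).
divergence_my = {
    "chimpanzee": 6,
    "mouse": 90,
    "zebrafish": 430,
    "fruit_fly": 700,
    "yeast": 1100,
    "bacterium": 3000,
}

def gene_age_my(ortholog_species):
    """Age = divergence time of the most distant species with an ortholog."""
    return max(divergence_my[s] for s in ortholog_species)

# A DNA-repair gene conserved down to bacteria is "ancient": older than
# the ~1000 MY emergence of multicellularity.
age = gene_age_my({"chimpanzee", "mouse", "yeast", "bacterium"})
is_ancient = age > 1000
```

With numeric ages in hand, gene sets (e.g., frequently mutated cancer genes) can then be compared against the genome-wide age distribution, as the abstract describes.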
Contributors: Orr, Adam James (Author) / Davies, Paul (Thesis director) / Bussey, Kimberly (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Chemistry and Biochemistry (Contributor) / School of Life Sciences (Contributor)
Created: 2015-05
Description


This thesis attempts to explain Everettian quantum mechanics from the ground up, such that those with little to no experience in quantum physics can understand it. First, we introduce the history of quantum theory and some concepts that make up the framework of quantum physics. Through these concepts, we reveal why interpretations are necessary to map the quantum world onto our classical world. We then introduce the Copenhagen interpretation and how many-worlds differs from it. From there, we dive into the concepts of entanglement and decoherence, explaining how worlds branch in an Everettian universe and how an Everettian universe can appear as our classical observed world. We then attempt to answer common questions about many-worlds and discuss whether there are philosophical ramifications to believing such a theory. Finally, we look at whether the many-worlds interpretation can be proven, and why one might choose to believe it.
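The branching described above is often illustrated with a schematic measurement interaction (a standard textbook sketch, not specific to this thesis): a system in superposition becomes entangled with the measuring apparatus M,

```latex
\left(\alpha\,|{\uparrow}\rangle + \beta\,|{\downarrow}\rangle\right)\otimes|M_{0}\rangle
\;\longrightarrow\;
\alpha\,|{\uparrow}\rangle|M_{\uparrow}\rangle + \beta\,|{\downarrow}\rangle|M_{\downarrow}\rangle .
```

Each term on the right is a "branch"; decoherence suppresses interference between them, which is why each branch appears classical to observers within it.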

Contributors: Secrest, Micah (Author) / Foy, Joseph (Thesis director) / Hines, Taylor (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

Much attention has been given to the behavior of quantum fields in expanding Friedmann-Lemaître-Robertson-Walker (FLRW) spacetimes, and de Sitter spacetime in particular. In such spacetimes, the S-matrix is ill-defined, so new observables must be constructed that are accessible to both computation and measurement. The most common observable in theories of inflation is an equal-time correlation function, typically computed in the in-in formalism. Weinberg improved upon in-in perturbation theory by reducing the perturbative expansion to a series of nested commutators. Several authors noted a technical difference between Weinberg's formula and standard in-in perturbation theory. In this work, a proof of the order-by-order equivalence of Weinberg's commutators to traditional in-in perturbation theory is presented for all masses and commonly studied spins in a broad class of FLRW spacetimes. Then, a study of the effects of a sector of conformal matter coupled solely to gravity is given. The results can constrain N-naturalness as a complete solution of the hierarchy problem, given a measurement of the tensor fluctuations from inflation. The next part of this work focuses on the thermodynamics of de Sitter space. It has been known for decades that there is a temperature associated with a cosmological horizon, which matches the thermal response of a comoving particle detector in de Sitter space. A model of a perfectly reflecting cavity with fixed physical size is constructed in two-dimensional de Sitter spacetime. The natural ground state inside the box yields no response from a comoving particle detector, implying that the box screens out the thermal effects of the de Sitter horizon. The total energy inside the box is also shown to be smaller than that of an equivalent volume of the Bunch-Davies vacuum state. The temperature difference across the wall of the box might drive a heat engine, so an analytical model of the Szilárd engine is constructed and studied.
It is found that all relevant thermodynamical quantities can be computed exactly at all stages of the engine cycle.
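For reference, the cosmological-horizon temperature mentioned above is the Gibbons-Hawking temperature of de Sitter space (in units ħ = c = k_B = 1):

```latex
T_{\mathrm{dS}} = \frac{H}{2\pi},
```

where H is the Hubble rate. A comoving detector in the Bunch-Davies state responds thermally at this temperature; that is the effect the reflecting box is shown to screen out.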
Contributors: Thomas, Logan (Author) / Baumgart, Matthew (Thesis advisor) / Davies, Paul (Committee member) / Easson, Damien (Committee member) / Keeler, Cynthia (Committee member) / Arizona State University (Publisher)
Created: 2023
Description

A swarm describes a group of interacting agents exhibiting complex collective behaviors. Higher-level behavioral patterns of the group are believed to emerge from simple low-level rules of decision making at the agent level. With the potential application of swarms of aerial drones, underwater robots, and other multi-robot systems, there has been increasing interest in approaches for specifying complex, collective behavior for artificial swarms. Traditional methods for creating artificial multi-agent behaviors inspired by known swarms analyze the underlying dynamics and hand-craft the low-level control logic that constitutes the emerging behaviors. Deep learning methods offer an approach to approximate the behaviors through optimization without much human intervention.

This thesis proposes a graph-based neural network architecture, SwarmNet, for learning the swarming behaviors of multi-agent systems. Given observations of only the trajectories of an expert multi-agent system, SwarmNet is able to learn sensible representations of the internal low-level interactions, on top of being able to approximate the high-level behaviors and make long-term predictions of the motion of the system. Challenges in scaling SwarmNet, and graph neural networks in general, are discussed in detail, and measures to alleviate the scaling issue in generalization are proposed. Using the trained network as a control policy, it is shown that the combination of imitation learning and reinforcement learning improves the policy more efficiently. To some extent, it is shown that the low-level interactions are successfully identified and separated, and that the separated functionality enables finely controlled custom training.
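The kind of graph message passing such an architecture builds on can be sketched in a few lines (illustrative only, with hypothetical state sizes and randomly initialized weights; not the actual SwarmNet architecture): each agent aggregates "messages" computed from its pairwise interactions with every other agent, then updates its own state.

```python
import numpy as np

rng = np.random.default_rng(0)
W_edge = rng.normal(size=(8, 4))      # edge (pairwise interaction) parameters
W_node = rng.normal(size=(4 + 4, 4))  # node-update parameters

def message_passing_step(states):
    """One message-passing step.  states: (n_agents, 4) array of [x, y, vx, vy]."""
    n = states.shape[0]
    messages = np.zeros((n, 4))
    for i in range(n):
        for j in range(n):
            if i != j:
                # Edge function on the (sender j, receiver i) pair.
                pair = np.concatenate([states[j], states[i]])
                messages[i] += np.tanh(pair @ W_edge)
    # Node function: combine each agent's state with its aggregated messages.
    return np.tanh(np.concatenate([states, messages], axis=1) @ W_node)

next_states = message_passing_step(rng.normal(size=(5, 4)))
```

In a trained model, the edge and node functions would be multi-layer networks whose parameters are fit to the observed expert trajectories; the point here is only the all-pairs aggregation structure.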
Contributors: Zhou, Siyu (Author) / Ben Amor, Heni (Thesis advisor) / Walker, Sara I (Thesis advisor) / Davies, Paul (Committee member) / Pavlic, Ted (Committee member) / Presse, Steve (Committee member) / Arizona State University (Publisher)
Created: 2020
Description

Scientific research encompasses a variety of objectives, including measurement, making predictions, identifying laws, and more. The advent of advanced measurement technologies and computational methods has largely automated the processes of big data collection and prediction. However, the discovery of laws, particularly universal ones, still heavily relies on human intellect. Even with human intelligence, complex systems present a unique challenge in discerning the laws that govern them; even the preliminary step of system description poses a substantial challenge. Numerous metrics have been developed, but universally applicable laws remain elusive. Due to the cognitive limitations of human comprehension, a direct understanding of big data derived from complex systems is impractical. Therefore, simplification becomes essential for identifying hidden regularities, enabling scientists to abstract observations or draw connections with existing knowledge. As a result, the concept of macrostates -- simplified, lower-dimensional representations of high-dimensional systems -- proves indispensable. Macrostates serve a role beyond simplification. They are integral in deciphering reusable laws for complex systems. In physics, macrostates form the foundation for constructing laws and provide building blocks for studying relationships between quantities, rather than pursuing case-by-case analysis. The concept of macrostates thus facilitates the discovery of regularities across diverse systems. Recognizing the importance of macrostates, I propose the relational macrostate theory and a machine learning framework, MacroNet, to identify macrostates and design microstates. The relational macrostate theory defines a macrostate based on the relationships between observations, enabling abstraction from microscopic details.
In MacroNet, I propose an architecture to encode microstates into macrostates, allowing for the sampling of microstates associated with a specific macrostate. My experiments on simulated systems demonstrate the effectiveness of this theory and method in identifying macrostates such as energy. Furthermore, I apply this theory and method to a complex chemical system, analyzing oil droplets with intricate movement patterns in a Petri dish, to answer the question, ``which combinations of parameters control which behavior?'' The macrostate theory allows me to identify a two-dimensional macrostate, establish a mapping between the chemical compound and the macrostate, and decipher the relationship between oil droplet patterns and the macrostate.
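The core idea of a macrostate, many distinct microstates mapping to one low-dimensional description, can be illustrated with a toy example (not MacroNet itself): here the microstate is a set of particle velocities and the macrostate is the total kinetic energy, computed with unit masses.

```python
import numpy as np

def macrostate(velocities):
    """Map a microstate (n_particles, dim velocity array) to a 1-D macrostate:
    total kinetic energy with unit masses."""
    return 0.5 * np.sum(velocities**2)

v1 = np.array([[1.0, 0.0], [0.0, 1.0]])    # two particles moving along axes
v2 = np.array([[0.0, -1.0], [-1.0, 0.0]])  # a different microstate...
same = np.isclose(macrostate(v1), macrostate(v2))  # ...with the same macrostate
```

MacroNet learns such a mapping from data rather than assuming it, and additionally supports the inverse direction, sampling microstates consistent with a chosen macrostate.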
Contributors: Zhang, Yanbo (Author) / Walker, Sara I (Thesis advisor) / Anbar, Ariel (Committee member) / Daniels, Bryan (Committee member) / Das, Jnaneshwar (Committee member) / Davies, Paul (Committee member) / Arizona State University (Publisher)
Created: 2023
Description


Cancer is sometimes depicted as a reversion to single-cell behavior in cells adapted to live in a multicellular assembly. If this is the case, one would expect that mutation in cancer disrupts functional mechanisms that suppress cell-level traits detrimental to multicellularity. Such mechanisms should have evolved with or after the emergence of multicellularity. This leads to two related, but distinct hypotheses: 1) somatic mutations in cancer will occur in genes that are younger than the emergence of multicellularity (1000 million years [MY]); and 2) genes that are frequently mutated in cancer, and whose mutations are functionally important for the emergence of the cancer phenotype, evolved within the past 1000 million years and thus would exhibit an age distribution that is skewed to younger genes. To investigate these hypotheses, we estimated the evolutionary ages of all human genes and then studied the probability of mutation and the biological function of genes in relation to their age and genomic location, in both normal germline and cancer contexts.

We observed that under a model of uniform random mutation across the genome, controlled for gene size, genes less than 500 MY were more frequently mutated in both cases. Paradoxically, causal genes, defined in the COSMIC Cancer Gene Census, were depleted in this age group. When we used functional enrichment analysis to explain this unexpected result we discovered that COSMIC genes with recessive disease phenotypes were enriched for DNA repair and cell cycle control. The non-mutated genes in these pathways are orthologous to those underlying stress-induced mutation in bacteria, which results in the clustering of single nucleotide variations. COSMIC genes were less common in regions where the probability of observing mutational clusters is high, although they are approximately 2-fold more likely to harbor mutational clusters compared to other human genes. Our results suggest this ancient mutational response to stress that evolved among prokaryotes was co-opted to maintain diversity in the germline and immune system, while the original phenotype is restored in cancer. Reversion to a stress-induced mutational response is a hallmark of cancer that allows for effectively searching “protected” genome space where genes causally implicated in cancer are located and underlies the high adaptive potential and concomitant therapeutic resistance that is characteristic of cancer.

Created: 2017-04-25