Matching Items (16)
Description
We develop the mathematical tools necessary to describe the interaction between a resonant pole and a threshold energy. Using these tools, we analyze the effect an opening threshold has on the resonant pole mass (the "cusp effect"), which leads to a phenomenon called "pole-dragging." We consider two models for resonances: a molecular, mesonic model, and a color-nonsinglet diquark plus antidiquark model. We then compare the magnitude of the pole-dragging effect these models produce on the masses of the f0(980), the X(3872), and the Zb(10610). We find that, while for lower masses, such as the f0(980), the pole-dragging effect that arises from the molecular model is more significant, the diquark model's pole-dragging effect becomes dominant at higher masses such as those of the X(3872) and the Zb(10610). This indicates that for lower threshold energies, diquark models may have less significant effects on predicted resonant masses than mesonic models, but for higher threshold energies, it is necessary to include the pole-dragging effect due to a diquark threshold in high-precision QCD calculations.
ContributorsBlitz, Samuel Harris (Author) / Lebed, Richard (Thesis director) / Comfort, Joseph (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Department of Physics (Contributor) / Barrett, The Honors College (Contributor)
Created2015-05
Description
Preliminary feasibility studies for two possible experiments with the GlueX detector, installed in Hall D of Jefferson Laboratory, are presented. First, a general study of the feasibility of detecting the ηC at the current hadronic rate is discussed, without regard for detector or reconstruction efficiency. Second, a study of the use of statistical methods in studying exotic meson candidates is outlined, describing methods and providing preliminary data on their efficacy.
ContributorsPrather, Benjamin Scott (Author) / Ritchie, Barry G. (Thesis director) / Dugger, Michael (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Physics (Contributor)
Created2015-05
Description
In this paper, optimal control routines are applied to an existing problem of electron state transfer to determine whether spin information can successfully be moved across a chain of donor atoms in silicon. The additional spin degrees of freedom are introduced into the formulation of the problem as well as into the control optimization algorithm. We find that the timescale for transferring spin quantum information across the chain is consistent with transfer pulse times of t > π/A and t > 2π/A, corresponding to rotations of states on the electron Bloch sphere, where A is the electron-nuclear coupling constant. Introducing a magnetic field weakens transfer efficiencies at high field strengths and prevents anti-aligned nuclear states from transferring. We also develop a rudimentary theoretical model based on the simulated results and partially validate the characteristic transfer times for spin states. This model also establishes a framework for future work, including the introduction of a magnetic field.
ContributorsMorgan, Eric Robert (Author) / Treacy, Michael (Thesis director) / Whaley, K. Birgitta (Committee member) / Greenman, Loren (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Physics (Contributor)
Created2015-05
Description
In this experiment an electrodynamic ion ring trap was constructed and tested. Because purely electrostatic fields cannot stably confine charged particles, the setup required an oscillating voltage source. It was built in a safe manner: the power supply was kept in a project box to avoid incidental contact and was connected to a small copper wire in the shape of a ring, and the maximum current that could be drawn through incidental contact, 0.3 mA, was well within safe limits. Within minutes of its completion, the trap was able to hold small Lycopodium powder spores (mass of approximately 1.7×10^-11 kg) in clusters of 15-30 for long timescales. The oscillations of these spores were observed to be roughly 1.01 mm at their maximum, and in an attempt to understand the dynamics of the ion trap, a concept called the pseudo-potential of the trap was used. This method proved fairly inaccurate, involving much estimation: using a static field estimate of 9.39×10^8 N/C and a charge estimate on the particles of ~1e, a maximum oscillation distance of 1.37 m was calculated. Though the derived static field strength was not far from the field strength required to achieve the correct oscillation distance (a percent error of 9.92%), the small discrepancy caused major calculation errors. The trap's intended purpose, however, is to eventually trap protein molecules for mapping via an XFEL laser, and after its successful construction that goal is fairly achievable. The trap was also housed in a vacuum chamber so that it could be more effectively integrated with the XFEL.
ContributorsNicely, Ryan Joseph (Author) / Kirian, Richard (Thesis director) / Weiterstall, Uwe (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Department of Physics (Contributor) / Barrett, The Honors College (Contributor)
Created2019-05
Description
Multiple-channel detection is considered in the context of a sensor network where data can be exchanged directly between sensor nodes that share a common edge in the network graph. Optimal statistical tests used for signal source detection with multiple noisy sensors, such as the Generalized Coherence (GC) estimate, use pairwise measurements from every pair of sensors in the network and are thus only applicable when the network graph is completely connected, or when data are accumulated at a common fusion center. This thesis presents and exploits a new method that uses maximum-entropy techniques to estimate measurements between pairs of sensors that are not in direct communication, thereby enabling the use of the GC estimate in incompletely connected sensor networks. The research in this thesis culminates in a main conjecture supported by statistical tests regarding the topology of the incomplete network graphs.
ContributorsCrider, Lauren Nicole (Author) / Cochran, Douglas (Thesis director) / Renaut, Rosemary (Committee member) / Kosut, Oliver (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created2014-05
Description
Within the context of the Finite-Difference Time-Domain (FDTD) method of simulating interactions between electromagnetic waves and matter, we adapt a known absorbing boundary condition, the Convolutional Perfectly-Matched Layer (CPML), to a background Drude-dispersive medium. The purpose of this CPML is to terminate the virtual grid of scattering simulations by absorbing all outgoing radiation. In this thesis, we exposit the method of simulation, establish the Perfectly-Matched Layer as a domain which houses a spatial-coordinate transform to the complex plane, construct the CPML in vacuum, adapt the CPML to the Drude medium, and conclude with tests of the adapted CPML for two different scattering geometries.
ContributorsThornton, Brandon Maverick (Author) / Sukharev, Maxim (Thesis director) / Goodnick, Stephen (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Department of Physics (Contributor) / Barrett, The Honors College (Contributor)
Created2018-05
Description
A working knowledge of mathematics is a vital requirement for introductory university physics courses. However, there is mounting evidence that many incoming introductory physics students do not have the necessary mathematical ability to succeed in physics. The investigation reported in this thesis used pre-instruction diagnostics and interviews to examine this problem in depth. It was found that in some cases, over 75% of students could not solve the most basic mathematics problems. We asked questions involving right triangles, vector addition, vector direction, systems of equations, and arithmetic, to give a few examples. The correct response rates were typically between 25% and 75%, which is worrying because these problems are far simpler than the typical problem encountered in an introductory quantitative physics course. This thesis uncovered a few common problem-solving strategies that were not particularly effective. When solving trigonometry problems, 13% of students wrote down the mnemonic "SOH CAH TOA," but a chi-squared test revealed that this was not a statistically significant factor in getting the correct answer, and it was actually detrimental in certain situations. Also, about 50% of students used a tip-to-tail method to add vectors, but there is evidence to suggest that this method is not as effective as using components. There are also a number of problem-solving strategies that successful students use to solve mathematics problems. Using the components of a vector increases student success when adding vectors and examining their direction. Preliminary evidence also suggests that repetitive trigonometry practice may be the best way to improve student performance on trigonometry problems. In addition, teaching students to use a wide variety of algebraic techniques, such as the distributive property, may keep them from getting stuck when working through problems. Finally, evidence suggests that checking work could eliminate up to a third of student errors.
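The component method that this abstract finds more effective than tip-to-tail addition can be sketched in a few lines; the two example vectors below are hypothetical, chosen only to illustrate the procedure.

```python
# Sketch of the component method for vector addition: resolve each vector
# into x- and y-components, add componentwise, then recover magnitude and
# direction. The example vectors are hypothetical illustrations.
import math

def add_vectors(vectors):
    """Add 2-D vectors given as (magnitude, angle-in-degrees) pairs."""
    x = sum(m * math.cos(math.radians(a)) for m, a in vectors)
    y = sum(m * math.sin(math.radians(a)) for m, a in vectors)
    magnitude = math.hypot(x, y)
    direction = math.degrees(math.atan2(y, x))  # measured from the +x axis
    return magnitude, direction

# Two 5-unit vectors at 0 and 90 degrees sum to about 7.07 units at 45 degrees.
mag, ang = add_vectors([(5, 0), (5, 90)])
print(f"{mag:.2f} units at {ang:.1f} degrees")
```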
ContributorsJones, Matthew Isaiah (Author) / Meltzer, David (Thesis director) / Peng, Xihong (Committee member) / Department of Physics (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created2016-12
Description

Lossy compression is a form of compression that slightly degrades a signal, ideally in ways that are not detectable to the human ear. This is the opposite of lossless compression, in which the sample is not degraded at all. While lossless compression may seem like the better option, lossy compression, which is used in most audio and video, reduces transmission time and results in much smaller file sizes. However, this compression can affect quality if it goes too far: the more compression is applied to a waveform, the more degradation there is, and once a file is lossy-compressed, the process is not reversible. This project will observe the degradation of an audio signal after the application of Singular Value Decomposition compression, a lossy compression that eliminates singular values from a signal's matrix.
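The truncated-SVD idea described above can be sketched in a few lines of NumPy. The frame length, retained rank, and test tone here are illustrative assumptions, not parameters from the thesis.

```python
# Sketch of lossy compression via truncated SVD: reshape a signal into a
# matrix, then keep only the k largest singular values. The frame size (64)
# and rank (8) are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
# A fake "audio" signal: a 440 Hz tone plus a little noise.
t = np.linspace(0, 1, 4096, endpoint=False)
signal = np.sin(2 * np.pi * 440 * t) + 0.01 * rng.standard_normal(t.size)
X = signal.reshape(64, 64)           # one row per frame

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 8                                # keep only the k largest singular values
X_lossy = U[:, :k] * s[:k] @ Vt[:k]  # rank-k approximation of the signal

# Degradation grows as k shrinks; the discarded singular values are gone
# for good, which is what makes the compression irreversible.
error = np.linalg.norm(X - X_lossy) / np.linalg.norm(X)
print(f"relative error at rank {k}: {error:.4f}")
```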

ContributorsHirte, Amanda (Author) / Kosut, Oliver (Thesis director) / Bliss, Daniel (Committee member) / Electrical Engineering Program (Contributor, Contributor) / Barrett, The Honors College (Contributor)
Created2021-05
Description

The self-assembly of strongly-coupled nanocrystal superlattices, as a convenient bottom-up synthesis technique featuring a wide parameter space, is at the forefront of next-generation material design. To realize the full potential of such tunable, functional materials, a more complete understanding of the self-assembly process and the artificial crystals it produces is required. In this work, we discuss the results of a hard coherent X-ray scattering experiment at the Linac Coherent Light Source, observing superlattices long after their initial nucleation. The resulting scattering intensity correlation functions have dispersion suggestive of a disordered crystalline structure and indicate the occurrence of rapid, strain-relieving events therein. We also present real space reconstructions of individual superlattices obtained via coherent diffractive imaging. Through this analysis we thus obtain high-resolution structural and dynamical information of self-assembled superlattices in their native liquid environment.

ContributorsHurley, Matthew (Author) / Teitelbaum, Samuel (Thesis director) / Ginsberg, Naomi (Committee member) / Kirian, Richard (Committee member) / Barrett, The Honors College (Contributor) / Department of Physics (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Historical, Philosophical & Religious Studies, Sch (Contributor)
Created2023-05
Description

We implemented the well-known Ising model in one dimension as a computer program and simulated its behavior with four algorithms: (i) the seminal Metropolis algorithm; (ii) the microcanonical algorithm described by Creutz in 1983; (iii) a variation on Creutz’s time-reversible algorithm allowing for bonds between spins to change dynamically; and (iv) a combination of the latter two algorithms in a manner reflecting the different timescales on which these two processes occur (“freezing” the bonds in place for part of the simulation). All variations on Creutz’s algorithm were symmetrical in time, and thus reversible. The first three algorithms all favored low-energy states of the spin lattice and generated the Boltzmann energy distribution after reaching thermal equilibrium, as expected, while the last algorithm broke from the Boltzmann distribution while the bonds were “frozen.” The interpretation of this result as a net increase to the system’s total entropy is consistent with the second law of thermodynamics, which leads to the relationship between maximum entropy and the Boltzmann distribution.
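Algorithm (i) above, the Metropolis algorithm, can be sketched for a one-dimensional Ising chain as follows. The lattice size, temperature, and sweep count are illustrative assumptions, not values from the thesis.

```python
# Minimal sketch of Metropolis sampling for a 1-D Ising chain with periodic
# boundaries. Lattice size, temperature, and sweep count are illustrative.
import math
import random

random.seed(1)
N, J, T = 100, 1.0, 2.0                 # spins, coupling, temperature (k_B = 1)
spins = [random.choice((-1, 1)) for _ in range(N)]

def delta_E(s, i):
    """Energy change from flipping spin i (periodic boundary conditions)."""
    left, right = s[(i - 1) % N], s[(i + 1) % N]
    return 2.0 * J * s[i] * (left + right)

for sweep in range(2000):
    for _ in range(N):
        i = random.randrange(N)
        dE = delta_E(spins, i)
        # Accept a flip that lowers the energy, else with Boltzmann probability:
        # this acceptance rule is what drives the chain toward the Boltzmann
        # distribution at thermal equilibrium.
        if dE <= 0 or random.random() < math.exp(-dE / T):
            spins[i] = -spins[i]

energy = -J * sum(spins[i] * spins[(i + 1) % N] for i in range(N))
print(f"energy per spin after equilibration: {energy / N:.3f}")
```

Unlike the Creutz-style algorithms discussed above, this update rule uses random acceptance and is not time-reversible, which is exactly the contrast the thesis explores.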

ContributorsLewis, Aiden (Author) / Chamberlin, Ralph (Thesis director) / Beckstein, Oliver (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Physics (Contributor)
Created2023-05