Matching Items (7)
Description
Covering subsequences with sets of permutations arises in many applications, including event-sequence testing. Given a set of subsequences to cover, one is often interested in knowing the minimum number of permutations required to cover every subsequence, and in finding an explicit construction of such a set of permutations whose size is close to or equal to the minimum possible. The construction of such permutation coverings has proven to be computationally difficult. While many examples for permutations of small length have been found, and strong asymptotic behavior is known, there are few explicit constructions for permutations of intermediate lengths; most of these are generated from scratch using greedy algorithms. We explore a different approach here. Starting with a set of permutations with the desired coverage properties, we compute local changes to individual permutations that retain the total coverage of the set. By choosing these local changes so as to make one permutation less "essential" in maintaining the coverage of the set, our method attempts to make a permutation completely non-essential, so that it can be removed without sacrificing total coverage. We develop a post-optimization method to do this and present results on sequence covering arrays and other types of permutation covering problems, demonstrating that it is surprisingly effective.
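
As a sketch of the removal step this abstract describes, the following Python fragment tests whether a permutation in a sequence covering array is non-essential (every t-subsequence it covers is also covered by another permutation) and drops it if so. The representation, the names covered and post_optimize, and the single greedy pass are illustrative assumptions; the thesis's local-change step that makes a permutation non-essential is not reproduced here.

```python
from collections import Counter
from itertools import combinations

def covered(perm, t):
    """All length-t subsequences (ordered t-tuples) covered by one permutation."""
    return {tuple(perm[i] for i in idx)
            for idx in combinations(range(len(perm)), t)}

def post_optimize(sca, t):
    """Remove permutations that are non-essential to the coverage of the set.

    sca: list of permutations (tuples of distinct symbols); t: subsequence length.
    """
    # How many permutations in the set cover each t-tuple.
    counts = Counter(c for p in sca for c in covered(p, t))
    kept = []
    for p in sca:
        mine = covered(p, t)
        if all(counts[c] > 1 for c in mine):  # every tuple covered elsewhere
            for c in mine:
                counts[c] -= 1                # drop p without losing coverage
        else:
            kept.append(p)
    return kept
```
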
Contributors: Murray, Patrick Charles (Author) / Colbourn, Charles (Thesis director) / Czygrinow, Andrzej (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Physics (Contributor)
Created: 2014-12
Description
Error-correcting codes are fundamental in modern digital communication, with applications in data storage and data transmission. Interest in a class of error-correcting codes called low-density parity-check (LDPC) codes has been growing since their recent rediscovery because of their low decoding complexity and their high performance. However, practical applications have been limited due to the difficulty of finding good LDPC codes for practical parameters. This paper proposes an exhaustive and a randomized algorithm for constructing a family of LDPC codes with practical parameters whose matrix representations meet the following requirements: each pair of rows in the LDPC code matrix shares exactly one common nonzero position; each row has odd weight, with a minimum weight of one; and each column has a weight of at least two. These conditions improve the performance of the resulting codes and simplify conversion into codes for quantum systems. Both algorithms utilize a maximal clique algorithm to construct LDPC matrices from graphs whose vertices are possible rows within said matrices and whose edges join pairs of rows for which the first condition holds. While these algorithms were found to be suitable for small parameters, future work that optimizes the resulting codes for their expected applications could also dramatically increase the performance of the algorithms themselves.
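
A minimal sketch of the clique-based construction described above, with candidate rows represented as frozensets of nonzero column indices. The adjacency test, the greedy clique growth, and the small instance are illustrative assumptions; the thesis's exhaustive and randomized algorithms, and the final column-weight check, are not reproduced.

```python
from itertools import combinations

def shares_exactly_one(r, s):
    """Adjacency condition: two candidate rows (sets of nonzero
    column indices) intersect in exactly one position."""
    return len(r & s) == 1

def greedy_maximal_clique(rows):
    """Grow a clique greedily; the result is maximal for this ordering,
    since any rejected row conflicts with a member of the final clique."""
    clique = []
    for r in rows:
        if all(shares_exactly_one(r, s) for s in clique):
            clique.append(r)
    return clique

# Hypothetical small instance: 6 columns, candidate rows of odd weight.
n_cols = 6
candidates = [frozenset(c) for w in (1, 3, 5)
              for c in combinations(range(n_cols), w)]
rows = greedy_maximal_clique(candidates)
# 'rows' become rows of a parity-check matrix; a full construction would
# also enforce the column-weight >= 2 condition on the assembled matrix.
```
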
Contributors: Shurman, Andrew Christian (Author) / Colbourn, Charles (Thesis director) / Bazzi, Rida (Committee member) / Computer Science and Engineering Program (Contributor) / Department of Physics (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-12
Description
This thesis attempts to explain Everettian quantum mechanics from the ground up, such that those with little to no experience in quantum physics can understand it. First, we introduce the history of quantum theory and some of the concepts that make up the framework of quantum physics. Through these concepts, we reveal why interpretations are necessary to map the quantum world onto our classical world. We then introduce the Copenhagen interpretation and explain how many-worlds differs from it. From there, we dive into the concepts of entanglement and decoherence, explaining how worlds branch in an Everettian universe and how an Everettian universe can appear as our classical observed world. We then attempt to answer common questions about many-worlds and discuss whether there are philosophical ramifications to believing such a theory. Finally, we look at whether the many-worlds interpretation can be proven, and why one might choose to believe it.

Contributors: Secrest, Micah (Author) / Foy, Joseph (Thesis director) / Hines, Taylor (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description
The purpose of this paper is to provide an analysis of entanglement and the particular problems it poses for some physicists. In addition to looking at the history of entanglement and non-locality, this paper will use the Bell test, which measures the behavior of electrons whose combined internal angular momentum is zero, as a means of demonstrating how entanglement works. This paper will go over Bell's famous inequality, which shows why entanglement cannot be explained by traditional local processes. Entanglement will be viewed initially through the Copenhagen Interpretation, but this paper will also look at two particular models of quantum mechanics, de Broglie-Bohm theory and Everett's Many-Worlds Interpretation, and observe how they explain the behavior of spin and entangled particles compared to the Copenhagen Interpretation.
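
For reference, one common form of the inequality this abstract mentions is the CHSH version; whether the thesis uses this exact form is an assumption. For a pair of spin-1/2 particles in the total-angular-momentum-zero singlet state, local hidden-variable theories bound a combination of correlations that quantum mechanics violates:

```latex
% S combines spin-correlation measurements E(a,b) at detector settings a, b.
\[
  S = E(a,b) - E(a,b') + E(a',b) + E(a',b'),
  \qquad |S| \le 2 \ \text{(local realism)},
  \qquad |S|_{\mathrm{QM}} \le 2\sqrt{2}.
\]
```
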

Contributors: Wood, Keaten Lawrence (Author) / Foy, Joseph (Thesis director) / Hines, Taylor (Committee member) / Department of Physics (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description
We implemented the well-known Ising model in one dimension as a computer program and simulated its behavior with four algorithms: (i) the seminal Metropolis algorithm; (ii) the microcanonical algorithm described by Creutz in 1983; (iii) a variation on Creutz’s time-reversible algorithm allowing for bonds between spins to change dynamically; and (iv) a combination of the latter two algorithms in a manner reflecting the different timescales on which these two processes occur (“freezing” the bonds in place for part of the simulation). All variations on Creutz’s algorithm were symmetrical in time, and thus reversible. The first three algorithms all favored low-energy states of the spin lattice and generated the Boltzmann energy distribution after reaching thermal equilibrium, as expected, while the last algorithm broke from the Boltzmann distribution while the bonds were “frozen.” The interpretation of this result as a net increase to the system’s total entropy is consistent with the second law of thermodynamics, which leads to the relationship between maximum entropy and the Boltzmann distribution.
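
As a sketch of the first of the four algorithms, here is a minimal single-spin-flip Metropolis update for a 1D Ising chain with periodic boundaries, in units with k_B = 1 and J = 1. The function names and parameters are illustrative, not the thesis code, and the microcanonical Creutz variants and the dynamic-bond extension are not reproduced.

```python
import math
import random

def metropolis_sweep(spins, T, J=1.0):
    """One Metropolis sweep of a 1D Ising chain with periodic boundaries.
    A flip costing energy dE is accepted with probability min(1, exp(-dE/T)),
    which drives the chain toward the Boltzmann distribution at temperature T."""
    n = len(spins)
    for _ in range(n):
        i = random.randrange(n)
        # Energy change from flipping spin i against its two neighbors
        # (E = -J * sum of s_i * s_{i+1}).
        dE = 2.0 * J * spins[i] * (spins[(i - 1) % n] + spins[(i + 1) % n])
        if dE <= 0 or random.random() < math.exp(-dE / T):
            spins[i] = -spins[i]

# Equilibrate a 100-spin chain at a hypothetical temperature T = 2.0.
spins = [random.choice((-1, 1)) for _ in range(100)]
for _ in range(1000):
    metropolis_sweep(spins, T=2.0)
```
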

Contributors: Lewis, Aiden (Author) / Chamberlin, Ralph (Thesis director) / Beckstein, Oliver (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Physics (Contributor)
Created: 2023-05
Description
The photodissociation of 1-bromobutane is explored using pump-probe spectroscopy and time-of-flight mass spectrometry. Fragments of bromobutane are constructed computationally, and theoretical energies are calculated using Gaussian 16 software. It is determined that the dissociation of bromine from the parent molecule is the most frequently observed fragmentation pathway, arising from the excitation of the ground-state parent molecule to a dissociative A state using two 400 nm, 3.1 eV pump photons. The dissociation energy of this pathway is 2.91 eV, leaving 3.3 eV of energy that is redistributed into the product fragments as vibrational energy. C4H9 is the most intense peak in the mass spectrum, with a normalized relative intensity of 1.00. It is followed by C2H5 and C2H4 at relative intensities of 0.73 and 0.29, respectively. Because of the negative correlation between C4H9 and these two fragments at positive time delays, it is concluded that most of these smaller molecules are formed from the further dissociation of the C4H9 fragment rather than through alternative pathways from the parent molecule. Thermodynamic analysis of these pathways demonstrates both the power of thermodynamic prediction and its limitations, as it fails to consider kinetic constraints in dissociation reactions.
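
The energy bookkeeping in this abstract can be made explicit, using only the values quoted above:

```latex
% Two 400 nm pump photons supply the excitation energy; subtracting the
% dissociation energy of the C-Br pathway leaves the excess that is
% redistributed into product vibration.
\[
  E_{\text{excess}} = 2\,E_{\text{photon}} - E_{\text{diss}}
                    = 2(3.1\ \text{eV}) - 2.91\ \text{eV}
                    \approx 3.3\ \text{eV}.
\]
```
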

Contributors: Gosman, Robert (Author) / Sayres, Scott (Thesis director) / Chizmeshya, Andrew (Committee member) / Barrett, The Honors College (Contributor) / Chemical Engineering Program (Contributor) / Department of Physics (Contributor)
Created: 2023-05
Description
Carbon allotropes are the basis for many exciting advancements in technology. While sp² and sp³ hybridizations are well understood, sp¹-hybridized carbon has been elusive. However, with recent advances made using a pulsed laser ablation in liquid technique, sp¹-hybridized carbon allotropes have been created. The fabricated carbon chain is composed of sp¹- and sp³-hybridized bonds, but it also incorporates nanoparticles, such as gold or possibly silver, to stabilize the chain. The polyyne generated in this process is called pseudocarbyne due to its striking resemblance to theoretical carbyne. The formation of these carbon chains is not yet fully understood, but significant progress has been made in determining the temperature of the plasma in which the pseudocarbyne forms. When a 532 nm pulsed laser with a pulse energy of 250 mJ and a pulse length of 10 ns is used to ablate a gold target, a peak temperature of 13400 K is measured. When measured using laser-induced breakdown spectroscopy (LIBS), the average temperature of the neutral carbon plasma over one second was 4590 ± 172 K. This temperature strongly suggests that the current theoretical model used to describe the temperature at which pseudocarbyne forms is accurate.
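
As context for the LIBS temperature quoted above, excitation temperatures are commonly extracted from emission-line intensities via a Boltzmann plot; whether the thesis uses exactly this procedure is an assumption:

```latex
% For a line of wavelength lambda, measured intensity I, upper-level energy
% E_k, degeneracy g_k, and transition probability A_ki, the points fall on
% a straight line of slope -1/(k_B T):
\[
  \ln\!\left(\frac{I\,\lambda}{g_k A_{ki}}\right)
    = -\frac{E_k}{k_B T} + C .
\]
```
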
Contributors: Wala, Ryland Gerald (Author) / Sayres, Scott (Thesis director) / Steimle, Timothy (Committee member) / Drucker, Jeffery (Committee member) / Historical, Philosophical & Religious Studies (Contributor) / Dean, W.P. Carey School of Business (Contributor) / Department of Physics (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05