Description

Covering subsequences with sets of permutations arises in many applications, including event-sequence testing. Given a set of subsequences to cover, one is often interested in knowing the minimum number of permutations required to cover every subsequence, and in finding an explicit construction of such a set of permutations whose size is close to or equal to that minimum. The construction of such permutation coverings has proven to be computationally difficult. While many examples for permutations of small length have been found, and strong asymptotic behavior is known, there are few explicit constructions for permutations of intermediate lengths. Most of these are generated from scratch using greedy algorithms. We explore a different approach here. Starting with a set of permutations with the desired coverage properties, we compute local changes to individual permutations that retain the total coverage of the set. By choosing these local changes so as to make one permutation less "essential" in maintaining the coverage of the set, our method attempts to make a permutation completely non-essential, so that it can be removed without sacrificing total coverage. We develop a post-optimization method to do this and present results on sequence covering arrays and other types of permutation covering problems, demonstrating that it is surprisingly effective.
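
To make the key idea concrete, here is a minimal Python sketch, written for this summary rather than taken from the thesis, of the redundancy test that post-optimization relies on: a permutation is non-essential exactly when every length-t subsequence it covers is also covered by another permutation in the set. The function names and the small example set are illustrative assumptions.

```python
from itertools import permutations

def covers(perm, subseq):
    """True if subseq occurs as a (not necessarily contiguous) subsequence of perm."""
    pos = {v: i for i, v in enumerate(perm)}
    return all(pos[a] < pos[b] for a, b in zip(subseq, subseq[1:]))

def coverage(perm_set, t):
    """All length-t sequences of distinct symbols covered by some permutation."""
    symbols = perm_set[0]
    return {s for s in permutations(symbols, t)
            if any(covers(p, s) for p in perm_set)}

def is_nonessential(perm_set, i, t):
    """True if dropping perm_set[i] leaves the t-subsequence coverage unchanged."""
    rest = perm_set[:i] + perm_set[i + 1:]
    return coverage(perm_set, t) == coverage(rest, t)

# Illustrative set of permutations on 5 symbols (not a verified covering array);
# print the indices of any permutations that could be removed outright.
sca = [(0, 1, 2, 3, 4), (4, 3, 2, 1, 0), (2, 0, 4, 1, 3),
       (3, 1, 4, 0, 2), (1, 4, 0, 3, 2), (2, 3, 0, 4, 1)]
print([i for i in range(len(sca)) if is_nonessential(sca, i, 3)])
```

The local changes described in the abstract would then be moves that shrink the set of subsequences only one permutation covers, driving is_nonessential toward true for that permutation.
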
Contributors: Murray, Patrick Charles (Author) / Colbourn, Charles (Thesis director) / Czygrinow, Andrzej (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Physics (Contributor)
Created: 2014-12
Description

Error-correcting codes are fundamental in modern digital communication, with applications in data storage and data transmission. Interest in a class of error-correcting codes called low-density parity-check (LDPC) codes has been growing since their recent rediscovery, because of their low decoding complexity and their high performance. However, practical applications have been limited by the difficulty of finding good LDPC codes for practical parameters. This paper proposes an exhaustive and a randomized algorithm for constructing a family of LDPC codes with practical parameters whose matrix representations meet the following requirements: each pair of rows in the LDPC code matrix shares exactly one common nonzero element, each row has a weight of at least one and that weight must be odd, and each column has a weight of at least two. These conditions improve the performance of the resulting codes and simplify their conversion into codes for quantum systems. Both algorithms utilize a maximal clique algorithm to construct LDPC matrices from graphs whose vertices are possible rows within said matrices and are adjacent when the first condition holds. While these algorithms were found to be suitable for small parameters, future work that optimizes the resulting codes for their expected applications could also dramatically increase the performance of the algorithms themselves.
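
The graph construction lends itself to a short sketch. The following Python fragment is our illustrative reading of the setup, not the paper's code: candidate rows are vertices, two rows are adjacent when they share exactly one nonzero position, and a clique gives the rows of a candidate parity-check matrix. The weight bound, the greedy heuristic (standing in for a true maximal clique algorithm), and all names are assumptions.

```python
from itertools import combinations

def candidate_rows(n, max_weight=5):
    """Enumerate supports of 0/1 rows of length n with odd weight (1, 3, 5, ...)."""
    return [frozenset(s)
            for w in range(1, max_weight + 1, 2)
            for s in combinations(range(n), w)]

def compatible(r1, r2):
    """Adjacency rule: the two rows share exactly one common nonzero position."""
    return len(r1 & r2) == 1

def greedy_clique(rows):
    """Greedy stand-in for a maximal clique search over the compatibility graph."""
    clique = []
    for r in rows:
        if all(compatible(r, c) for c in clique):
            clique.append(r)
    return clique

n = 9
clique = greedy_clique(candidate_rows(n))
# The column-weight >= 2 condition would be enforced as a separate filter on
# the final clique; here we simply print the implied parity-check matrix.
for r in clique:
    print([1 if j in r else 0 for j in range(n)])
```
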
Contributors: Shurman, Andrew Christian (Author) / Colbourn, Charles (Thesis director) / Bazzi, Rida (Committee member) / Computer Science and Engineering Program (Contributor) / Department of Physics (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-12
Description

This thesis attempts to explain Everettian quantum mechanics from the ground up, so that those with little to no experience in quantum physics can understand it. First, we introduce the history of quantum theory and some concepts that make up the framework of quantum physics. Through these concepts, we reveal why interpretations are necessary to map the quantum world onto our classical world. We then introduce the Copenhagen interpretation and how many-worlds differs from it. From there, we dive into the concepts of entanglement and decoherence, explaining how worlds branch in an Everettian universe, and how such a universe can appear as the classical world we observe. Next, we attempt to answer common questions about many-worlds and discuss whether there are philosophical ramifications to believing such a theory. Finally, we look at whether the many-worlds interpretation can be proven, and why one might choose to believe it.

Contributors: Secrest, Micah (Author) / Foy, Joseph (Thesis director) / Hines, Taylor (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

The purpose of this paper is to provide an analysis of entanglement and the particular problems it poses for some physicists. In addition to looking at the history of entanglement and non-locality, this paper will use the Bell test, which measures the behavior of pairs of electrons whose combined internal angular momentum is zero, as a means of demonstrating how entanglement works. This paper will go over John Bell's famous inequality, which shows why entanglement cannot be explained by traditional local means. Entanglement will be viewed initially through the Copenhagen Interpretation, but this paper will also look at two particular models of quantum mechanics, de Broglie-Bohm theory and Everett's Many-Worlds Interpretation, and observe how they explain the behavior of spin and entangled particles compared to the Copenhagen Interpretation.
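
As a small numeric aside, not drawn from the thesis itself, the violation that Bell-type inequalities forbid classically can be shown in a few lines using the CHSH form of the inequality and the singlet-state correlation E(a, b) = -cos(a - b); the angle choices below are the standard ones that maximize the quantum value.

```python
import math

def E(a, b):
    """Singlet-state spin correlation for measurement angles a and b (radians)."""
    return -math.cos(a - b)

# CHSH combination: any local hidden-variable model obeys |S| <= 2.
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # ~2.828 = 2*sqrt(2), above the classical bound of 2
```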

Contributors: Wood, Keaten Lawrence (Author) / Foy, Joseph (Thesis director) / Hines, Taylor (Committee member) / Department of Physics (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

We consider programmable matter as a collection of simple computational elements (or particles) that self-organize to solve system-wide problems of movement, configuration, and coordination. Here, we focus on the compression problem, in which the particle system gathers as tightly together as possible, as in a sphere or its equivalent in the presence of some underlying geometry. Within this model, a configuration of particles can be represented as a unique closed self-avoiding walk on the triangular lattice. In this paper we will examine the bias parameter of a Markov chain based algorithm that solves the compression problem under the geometric amoebot model, for particle systems that begin in a connected configuration with no holes. This bias parameter, $\lambda$, determines the behavior of the algorithm. It has been shown that for $\lambda > 2+\sqrt{2}$, the algorithm achieves compression with all but exponentially small probability. Additionally, the same algorithm can be used for expansion for small values of $\lambda$; in particular, for all $0 < \lambda < \sqrt{\tau}$, where $\lim_{n\to\infty} (p_n)^{1/n} = \tau$. This research will focus on improving approximations of the lower bound of $\tau$. Toward this end we will examine algorithmic enumeration and series analysis for self-avoiding polygons.
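
For readers unfamiliar with the algorithm, the bias enters as a Metropolis-style acceptance probability. The sketch below is a simplified illustration under stated assumptions: it omits the local checks the actual amoebot algorithm performs to keep the configuration connected and hole-free, and all names are ours.

```python
import random

# Triangular-lattice neighbor offsets in axial coordinates.
NEIGHBORS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def neighbor_count(pos, occupied):
    return sum((pos[0] + dx, pos[1] + dy) in occupied for dx, dy in NEIGHBORS)

def step(occupied, lam):
    """One simplified move: a random particle tries a random adjacent empty
    site and accepts with probability min(1, lam**(new - old)), so lam > 1
    favors gaining particle-particle contacts (compression) and lam < 1
    favors losing them (expansion)."""
    p = random.choice(sorted(occupied))
    dx, dy = random.choice(NEIGHBORS)
    q = (p[0] + dx, p[1] + dy)
    if q in occupied:
        return
    old = neighbor_count(p, occupied - {p})
    new = neighbor_count(q, occupied - {p})
    if random.random() < min(1.0, lam ** (new - old)):
        occupied.remove(p)
        occupied.add(q)

# Example: 1000 biased steps in the proven compression regime lam > 2 + sqrt(2).
occ = {(0, 0), (1, 0), (0, 1), (1, -1), (-1, 1), (-1, 0)}
for _ in range(1000):
    step(occ, lam=4.0)
```
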
Contributors: Lough, Kevin James (Author) / Richa, Andrea (Thesis director) / Fishel, Susanna (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05