Matching Items (37)

Description

Covering subsequences with sets of permutations arises in many applications, including event-sequence testing. Given a set of subsequences to cover, one is often interested in knowing the minimum number of permutations required to cover each subsequence, and in finding an explicit construction of such a set of permutations that has size close to or equal to the minimum possible. The construction of such permutation coverings has proven to be computationally difficult. While many examples for permutations of small length have been found, and strong asymptotic behavior is known, there are few explicit constructions for permutations of intermediate lengths. Most of these are generated from scratch using greedy algorithms. We explore a different approach here. Starting with a set of permutations with the desired coverage properties, we compute local changes to individual permutations that retain the total coverage of the set. By choosing these local changes so as to make one permutation less "essential" in maintaining the coverage of the set, our method attempts to make a permutation completely non-essential, so it can be removed without sacrificing total coverage. We develop a post-optimization method to do this and present results on sequence covering arrays and other types of permutation covering problems demonstrating that it is surprisingly effective.
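As a rough illustration of the coverage property discussed above (not the thesis's post-optimization algorithm itself), the sketch below checks which length-t subsequences a set of permutations covers and flags permutations that are individually non-essential, i.e. removable without losing coverage. The function names and the brute-force enumeration are illustrative assumptions.

```python
from itertools import combinations, permutations as orderings

def covers(perm, seq):
    """True if seq appears, in order, as a subsequence of perm."""
    pos = {v: i for i, v in enumerate(perm)}
    return all(pos[seq[k]] < pos[seq[k + 1]] for k in range(len(seq) - 1))

def coverage_counts(pset, n, t=3):
    """For every length-t sequence of distinct symbols, count covering permutations."""
    return {seq: sum(covers(p, seq) for p in pset)
            for combo in combinations(range(n), t)
            for seq in orderings(combo)}

def non_essential(pset, n, t=3):
    """Permutations whose removal (considered one at a time) still leaves
    every t-sequence covered by some other permutation in the set."""
    counts = coverage_counts(pset, n, t)
    return [p for p in pset
            if all(c >= 2 for seq, c in counts.items() if covers(p, seq))]
```

A post-optimization loop in the spirit of the abstract would then perturb individual permutations until some permutation lands in non_essential and can be dropped.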

Contributors: Murray, Patrick Charles (Author) / Colbourn, Charles (Thesis director) / Czygrinow, Andrzej (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Physics (Contributor)
Created: 2014-12
Description

Deconvolution of noisy data is an ill-posed problem, and requires some form of regularization to stabilize its solution. Tikhonov regularization is the most common method used, but it depends on the choice of a regularization parameter λ which must generally be estimated using one of several common methods. These methods can be computationally intensive, so I consider their behavior when only a portion of the sampled data is used. I show that the results of these methods converge as the sampling resolution increases, and use this to suggest a method of downsampling to estimate λ. I then present numerical results showing that this method can be feasible, and propose future avenues of inquiry.
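A minimal numerical sketch of the idea (illustrative only; the thesis may use a different parameter-choice criterion): solve the Tikhonov problem min ||Ax − b||² + λ²||x||² and pick λ by minimizing a generalized cross-validation (GCV) score computed on a row-downsampled version of the system. The function names and the GCV choice are assumptions for illustration.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min ||Ax - b||^2 + lam^2 ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

def gcv(A, b, lam):
    """Generalized cross-validation score for a given lambda."""
    n = A.shape[1]
    influence = A @ np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T)
    resid = A @ tikhonov_solve(A, b, lam) - b
    return (resid @ resid) / np.trace(np.eye(len(b)) - influence) ** 2

def estimate_lambda_downsampled(A, b, step=4, grid=np.logspace(-4, 1, 50)):
    """Pick lambda by minimizing GCV using only every step-th row of the system."""
    As, bs = A[::step], b[::step]
    return min(grid, key=lambda lam: gcv(As, bs, lam))
```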

Contributors: Hansen, Jakob Kristian (Author) / Renaut, Rosemary (Thesis director) / Cochran, Douglas (Committee member) / Barrett, The Honors College (Contributor) / School of Music (Contributor) / Economics Program in CLAS (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2015-05
Description

Honey bees (Apis mellifera) are responsible for pollinating nearly 80% of all pollinated plants, meaning humans depend on honey bees to pollinate many staple crops. The success or failure of a colony is vital to global food production. There are various complex factors that can contribute to a colony's failure, including pesticides. Neonicotinoids are a class of pesticides that has seen widespread use in recent years. In this study we concern ourselves with pesticides and their impact on honey bee colonies. Previous investigations from which we draw significant inspiration include Khoury et al.'s "A Quantitative Model of Honey Bee Colony Population Dynamics," Henry et al.'s "A Common Pesticide Decreases Foraging Success and Survival in Honey Bees," and Brown's "Mathematical Models of Honey Bee Populations: Rapid Population Decline." In this project we extend a mathematical model to investigate the impact of pesticides on a honey bee colony, with birth and death rates depending on pesticide exposure, and we examine how these rates influence the growth of the colony. Our studies find an equilibrium point that depends on the pesticide level. Trace amounts of pesticide are detrimental, as they affect not only death rates but birth rates as well.
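A schematic two-compartment model in the spirit of Khoury et al., with hypothetical pesticide-dependent birth and death rates, might look like the sketch below. The functional forms and parameter values are illustrative assumptions, not the thesis's calibrated model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameter values (not from the thesis); p is a pesticide level
# assumed to lower the eclosion (birth) rate and raise forager mortality.
L, w, alpha, sigma = 2000.0, 27000.0, 0.25, 0.75

def colony(t, y, p):
    H, F = y                              # hive bees, foragers
    N = H + F
    R = alpha - sigma * F / N             # recruitment of hive bees to foraging
    birth = L * (1 - p) * N / (w + N)     # pesticide-reduced eclosion
    m = 0.24 * (1 + 4 * p)                # pesticide-increased forager death rate
    return [birth - H * R, H * R - m * F]

sol = solve_ivp(colony, (0, 250), [15000, 5000], args=(0.05,))
print(sol.y[:, -1])   # hive-bee and forager populations after 250 days
```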

Contributors: Salinas, Armando (Author) / Vaz, Paul (Thesis director) / Jones, Donald (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / School of International Letters and Cultures (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description

This paper focuses on the Szemerédi regularity lemma, a result in the field of extremal graph theory. The lemma says that every graph can be partitioned into a bounded number of roughly equal parts such that most edges of the graph go between these parts, and these edges are distributed in a fairly uniform way. Definitions and notation will be established, leading to explorations of three proofs of the regularity lemma: a version of the original proof, a "Pythagoras" proof using elementary geometry, and a proof using concepts from spectral graph theory. This paper is intended to supplement the proofs with background information about the concepts involved. Furthermore, it is hoped that this paper will serve as another resource for students and others beginning to study the regularity lemma.
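For context, the standard statement that the three proofs establish can be written as follows (textbook formulation, not quoted from the thesis):

```latex
% Edge density and epsilon-regularity of a pair of disjoint vertex sets X, Y:
\[
  d(X,Y) = \frac{e(X,Y)}{|X|\,|Y|}, \qquad
  (X,Y)\ \text{is } \varepsilon\text{-regular if }
  |d(A,B) - d(X,Y)| \le \varepsilon
  \ \text{for all } A \subseteq X,\ B \subseteq Y
  \ \text{with } |A| \ge \varepsilon|X|,\ |B| \ge \varepsilon|Y|.
\]
% Regularity lemma: for every eps > 0 there is M(eps) such that every graph
% admits an equitable partition V_1, ..., V_k with k <= M(eps) in which all
% but at most eps * k^2 of the pairs (V_i, V_j) are eps-regular.
```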

Contributors: Byrne, Michael John (Author) / Czygrinow, Andrzej (Thesis director) / Kierstead, Hal (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Chemistry and Biochemistry (Contributor)
Created: 2015-05
Description

Many programmable matter systems have been proposed and realized recently, each often tailored toward a particular task or physical setting. In our work on self-organizing particle systems, we abstract away from specific settings and instead describe programmable matter as a collection of simple computational elements (to be referred to as particles) with limited computational power that each perform fully distributed, local, asynchronous algorithms to solve system-wide problems of movement, configuration, and coordination. In this thesis, we focus on the compression problem, in which the particle system gathers as tightly together as possible, as in a sphere or its equivalent in the presence of some underlying geometry. While there are many ways to formalize what it means for a particle system to be compressed, we address three different notions of compression: (1) local compression, in which each individual particle utilizes local rules to create an overall convex structure containing no holes, (2) hole elimination, in which the particle system seeks to detect and eliminate any holes it contains, and (3) alpha-compression, in which the particle system seeks to shrink its perimeter to be within a constant factor of the minimum possible value. We analyze the behavior of each of these algorithms, examining correctness and convergence where appropriate. In the case of the Markov Chain Algorithm for Compression, we provide improvements to the original bounds for the bias parameter lambda which influences the system to either compress or expand. Lastly, we briefly discuss contributions to the problem of leader election--in which a particle system elects a single leader--since it acts as an important prerequisite for compression algorithms that use a predetermined seed particle.
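The bias parameter lambda mentioned above enters through a Metropolis-style acceptance rule; a stripped-down sketch of that rule (omitting the distributed locking, connectivity, and validity checks the actual algorithm requires) might look like:

```python
import random

def accept_move(neighbors_before, neighbors_after, lam):
    """Accept a proposed particle move with probability min(1, lam**gain),
    where gain is the change in the number of occupied neighbors.
    lam > 1 biases the system toward compression; lam < 1 toward expansion."""
    gain = neighbors_after - neighbors_before
    return gain >= 0 or random.random() < lam ** gain
```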

Contributors: Daymude, Joshua Jungwoo (Author) / Richa, Andrea (Thesis director) / Kierstead, Henry (Committee member) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description

Analytic research on basketball games is growing quickly, specifically in the National Basketball Association. This paper explored the development of this analytic research and discovered that there has been a focus on individual player metrics and a dearth of quantitative team characterizations and evaluations. Consequently, this paper continued the exploratory research of Fewell and Armbruster's "Basketball teams as strategic networks" (2012), which modeled basketball teams as networks and used metrics to characterize team strategy in the NBA's 2010 playoffs. Individual players and outcomes were nodes and passes and actions were the links. This paper used data that was recorded from playoff games of the two 2012 NBA finalists: the Miami Heat and the Oklahoma City Thunder. The same metrics that Fewell and Armbruster used were explained, then calculated using this data. The offensive networks of these two teams during the playoffs were analyzed and interpreted by using other data and qualitative characterization of the teams' strategies; the paper found that the calculated metrics largely matched with our qualitative characterizations of the teams. The validity of the metrics in this paper and Fewell and Armbruster's paper was then discussed, and modeling basketball teams as multiple-order Markov chains rather than as networks was explored.
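As a toy version of the network construction (player labels, pass counts, and the specific metrics below are illustrative placeholders, not the paper's data), one could build a weighted directed pass graph and compute a couple of centrality-style metrics with networkx:

```python
import networkx as nx

# Hypothetical pass counts for one lineup: (passer, receiver, count).
passes = [("PG", "SG", 42), ("PG", "C", 18), ("SG", "PG", 30),
          ("SG", "SF", 21), ("SF", "C", 12), ("C", "PF", 9)]

G = nx.DiGraph()
for u, v, w in passes:
    G.add_edge(u, v, weight=w)

# Two network metrics of the kind used to characterize team strategy:
print(nx.degree_centrality(G))                            # ball-movement hubs
print(nx.clustering(G.to_undirected(), weight="weight"))  # local triangle density
```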

Contributors: Mohanraj, Hariharan (Co-author) / Choi, David (Co-author) / Armbruster, Dieter (Thesis director) / Fewell, Jennifer (Committee member) / Brooks, Daniel (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2013-05
Description

Dividing the plane in half leaves every border point of one region a border point of both regions. Can we divide up the plane into three or more regions such that any point on the boundary of at least one region is on the border of all the regions? In fact, it is possible to design a dynamical system for which the basins of attractions have this Wada property. In certain circumstances, both the Hénon map, a simple system, and the forced damped pendulum, a physical model, produce Wada basins.
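A minimal sketch of the kind of basin computation involved, using the Hénon map with its classical parameters (the parameter choices and attractor tests that actually produce Wada basins are the subject of the thesis, so treat these values as placeholders):

```python
import numpy as np

def henon(x, y, a=1.4, b=0.3):
    """One step of the Henon map: x' = 1 - a x^2 + y, y' = b x."""
    return 1 - a * x**2 + y, b * x

def escapes(x0, y0, max_iter=200, bound=1e6):
    """Rough basin test: does the orbit of (x0, y0) diverge?"""
    x, y = x0, y0
    for n in range(max_iter):
        x, y = henon(x, y)
        if x * x + y * y > bound:
            return n          # escaped at iteration n
    return -1                 # stayed bounded

# Classify a grid of initial conditions; a basin plot colors each cell by outcome.
grid = [[escapes(x, y) for x in np.linspace(-2, 2, 200)]
        for y in np.linspace(-2, 2, 200)]
```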

Contributors: Whitehurst, Ryan David (Author) / Kostelich, Eric (Thesis director) / Jones, Donald (Committee member) / Armbruster, Dieter (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Chemistry and Biochemistry (Contributor)
Created: 2013-05
Description

Previous research discusses students' difficulties in grasping an operational understanding of covariational reasoning. In this study, I interviewed four undergraduate students in calculus and pre-calculus classes to determine their ways of thinking when working on an animated covariation problem. With previous studies in mind and with the use of technology, I devised an interview method, which I structured using multiple phases of pre-planned support. With these interviews, I gathered information about two main aspects about students' thinking: how students think when attempting to reason covariationally and which of the identified ways of thinking are most propitious for the development of an understanding of covariational reasoning. I will discuss how, based on interview data, one of the five identified ways of thinking about covariational reasoning is highly propitious, while the other four are somewhat less propitious.

Contributors: Whitmire, Benjamin James (Author) / Thompson, Patrick (Thesis director) / Musgrave, Stacy (Committee member) / Moore, Kevin C. (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / T. Denny Sanford School of Social and Family Dynamics (Contributor)
Created: 2014-05
Description

A semi-implicit, fourth-order time-filtered leapfrog numerical scheme is investigated for accuracy and stability, and applied to several test cases, including one-dimensional advection and diffusion, the anelastic equations to simulate the Kelvin-Helmholtz instability, and the global shallow water spectral model to simulate the nonlinear evolution of twin tropical cyclones. The leapfrog scheme leads to computational modes in the solutions to highly nonlinear systems, and time-filters are often used to damp these modes. The proposed filter damps the computational modes without appreciably degrading the physical mode. Its performance in these metrics is superior to the second-order time-filtered leapfrog scheme developed by Robert and Asselin.
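For reference, a sketch of the baseline scheme the proposed filter is compared against, the leapfrog method with the second-order Robert-Asselin time filter (the thesis's fourth-order filter is not reproduced here; the oscillator example and parameter values are illustrative assumptions):

```python
import numpy as np

def leapfrog_ra(f, y0, dt, nsteps, nu=0.05):
    """Leapfrog integration of dy/dt = f(y) with the Robert-Asselin filter."""
    y_prev = y0
    y_curr = y0 + dt * f(y0)              # forward Euler start-up step
    out = [y_prev, y_curr]
    for _ in range(nsteps - 1):
        y_next = y_prev + 2 * dt * f(y_curr)
        # The filter damps the leapfrog computational mode (and, unlike the
        # fourth-order filter, also degrades the physical mode somewhat).
        y_curr_filt = y_curr + nu * (y_prev - 2 * y_curr + y_next)
        y_prev, y_curr = y_curr_filt, y_next
        out.append(y_curr)
    return np.array(out)

# Example: linear oscillator dy/dt = i * omega * y with omega = 2.
sol = leapfrog_ra(lambda y: 1j * 2.0 * y, 1.0 + 0j, 0.01, 1000)
```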

Created: 2016-05
Description

Lights Out is a puzzle game where the goal is to turn off all the lights on an n×n board starting from a random configuration. In order to find the solution of a configuration, the game is modeled with linear algebra over the field Z mod 2. Thus the game can be described by the system Ap = s, which is the center of the investigation when determining solvability for any n×n board, since A is not always invertible, leading to some interesting cases. The goal of this thesis was to construct a model that allows the player to solve for the pushes that attain the zero state of an n×n system. Constructing the model yielded a procedure for solving the puzzle. The procedure presented here first uses a simple clearing technique (valid for any board size) to turn off all the lights except in the last row, which we call the standard-clear. The heart of the technique is to use the information about which lights remain lit in the last row to determine which switches in the first row need to be pushed before the standard-clear. This part of the solution algorithm we call the first row adjustment, and it depends heavily on the specific board size n of the problem. Finally, after these first row pushes are made, the standard-clear will now turn off all the lights, including (seemingly magically) the last row. Thus the solution to the Lights Out puzzle of a given size is reduced to finding a first row adjustment for that size. (Please refer to the actual thesis for the full abstract)
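A small sketch of the standard-clear (light-chasing) step described above; the first row adjustment, which depends on the board size n, is the part worked out in the thesis and is not reproduced here. The board is a NumPy 0/1 array and the function names are illustrative.

```python
import numpy as np

def press(board, i, j):
    """Toggle cell (i, j) and its orthogonal neighbours."""
    n = board.shape[0]
    for di, dj in [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]:
        r, c = i + di, j + dj
        if 0 <= r < n and 0 <= c < n:
            board[r, c] ^= 1

def standard_clear(board):
    """Chase lights downward: clear every row except possibly the last.
    Applying the thesis's first row adjustment beforehand is what makes
    the last row come out dark as well."""
    n = board.shape[0]
    pushes = []
    for i in range(n - 1):
        for j in range(n):
            if board[i, j]:
                press(board, i + 1, j)
                pushes.append((i + 1, j))
    return pushes
```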

Created: 2015-05