Matching Items (7)

Description
Honey bees (Apis mellifera) are responsible for pollinating nearly 80% of all pollinated plants, meaning humans depend on honey bees to pollinate many staple crops. The success or failure of a colony is vital to global food production. Many complex factors can contribute to a colony's failure, including pesticides. Neonicotinoids are a class of pesticides that have been widely used in recent years. In this study we concern ourselves with pesticides and their impact on honey bee colonies. Previous investigations from which we draw significant inspiration include Khoury et al.'s "A Quantitative Model of Honey Bee Colony Population Dynamics," Henry et al.'s "A Common Pesticide Decreases Foraging Success and Survival in Honey Bees," and Brown's "Mathematical Models of Honey Bee Populations: Rapid Population Decline." In this project we extend a mathematical model to investigate the impact of pesticides on a honey bee colony, with birth and death rates dependent on pesticide exposure, and we examine how these rates influence the growth of the colony. Our studies have found an equilibrium point that depends on the pesticide level. Even trace amounts of pesticide are detrimental, as they affect not only death rates but birth rates as well.
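As a rough illustration of the kind of model extended here, the sketch below simulates a Khoury-style hive/forager system in which a pesticide parameter p raises the forager death rate and lowers the birth rate. The functional forms, parameter values, and names are illustrative assumptions, not the thesis's actual model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters for a Khoury-style two-compartment colony model.
L0, w, alpha, sigma, m0 = 2000.0, 27000.0, 0.25, 0.75, 0.24

def colony(t, y, p):
    H, F = y                            # hive bees, foragers
    N = H + F                           # total colony size
    birth = L0 * (1 - p) * N / (w + N)  # eclosion (birth) rate, reduced by pesticide p
    R = alpha - sigma * F / N           # recruitment to foraging, with social inhibition
    death = m0 * (1 + p)                # forager death rate, raised by pesticide p
    return [birth - H * R, H * R - death * F]

# Simulate 250 days at a trace pesticide level p = 0.1.
sol = solve_ivp(colony, (0, 250), [12000.0, 4000.0], args=(0.1,))
print("colony size after 250 days:", sol.y[:, -1].sum())
```

Sweeping p and reading off the long-time colony size locates the pesticide-dependent equilibrium the abstract describes.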
Contributors: Salinas, Armando (Author) / Vaz, Paul (Thesis director) / Jones, Donald (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / School of International Letters and Cultures (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
Deconvolution of noisy data is an ill-posed problem and requires some form of regularization to stabilize its solution. Tikhonov regularization is the most commonly used method, but it depends on the choice of a regularization parameter λ, which must generally be estimated using one of several common methods. These methods can be computationally intensive, so I consider their behavior when only a portion of the sampled data is used. I show that the results of these methods converge as the sampling resolution increases, and use this to suggest a method of downsampling to estimate λ. I then present numerical results showing that this method can be feasible, and propose future avenues of inquiry.
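For concreteness, the sketch below solves a Tikhonov-regularized least-squares problem and scores candidate λ values with generalized cross-validation (GCV), one common parameter-choice method; the downsampling idea is then to run the scoring on every k-th sample. The toy blur problem and the particular GCV variant are illustrative assumptions, not the thesis's exact setup.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min ||A x - b||^2 + lam^2 ||x||^2 via an augmented least-squares system."""
    n = A.shape[1]
    A_aug = np.vstack([A, lam * np.eye(n)])
    b_aug = np.concatenate([b, np.zeros(n)])
    return np.linalg.lstsq(A_aug, b_aug, rcond=None)[0]

def gcv_score(A, b, lam):
    """GCV score for one candidate lam (up to constants that don't move the argmin)."""
    x = tikhonov_solve(A, b, lam)
    residual = np.linalg.norm(A @ x - b) ** 2
    s = np.linalg.svd(A, compute_uv=False)
    dof = A.shape[0] - np.sum(s**2 / (s**2 + lam**2))  # trace of I - influence matrix
    return residual / dof**2

# Toy deconvolution problem: Gaussian blur of a smooth signal plus noise.
rng = np.random.default_rng(0)
n = 200
t = np.linspace(0, 1, n)
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / 1e-3) / n
b = A @ np.sin(6 * np.pi * t) + 0.01 * rng.standard_normal(n)

# Downsampling idea: estimate lambda on every k-th sample, then solve at full resolution.
lams = np.logspace(-4, 1, 30)
k = 4
lam_hat = min(lams, key=lambda lam: gcv_score(A[::k], b[::k], lam))
x_hat = tikhonov_solve(A, b, lam_hat)
print("estimated lambda from downsampled data:", lam_hat)
```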
Contributors: Hansen, Jakob Kristian (Author) / Renaut, Rosemary (Thesis director) / Cochran, Douglas (Committee member) / Barrett, The Honors College (Contributor) / School of Music (Contributor) / Economics Program in CLAS (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2015-05
Description
A thorough understanding of the key concepts of logic is critical for student success. Logic is often not explicitly taught as its own subject in modern curricula, which results in misconceptions among students as to what constitutes logical reasoning. In addition, current standardized testing schemes often promote teaching styles that emphasize students' ability to memorize set problem-solving methods over their capacity to reason abstractly and creatively. These phenomena, together with the halting progress of United States education compared to other developed nations, suggest that introducing logic courses into public schools and universities could better prepare students for professional careers and beyond. In particular, logic is essential for mathematics students as they transition from calculation-based courses to theoretical, proof-based classes. Many students find this adjustment difficult, and existing university-level courses that emphasize the technical aspects of symbolic logic do not fully bridge the gap between these two approaches to mathematics. As a step toward resolving this problem, this project proposes a logic course that integrates historical, technical, and interdisciplinary investigations to present logic as a robust and meaningful subject warranting independent study. The course is designed with mathematics students in mind, with particular emphasis on different formulations of deductively valid proof schemes. It can either be taught before existing logic classes, gradually exposing students to logic over an extended period of time, or replace current logic courses as a more holistic introduction to the subject. The first section of the course investigates historical developments in the study of argumentation and logic across civilizations; specifically, the works of ancient China, ancient India, ancient Greece, medieval Europe, and modernity are investigated. Along the way, several important themes are highlighted within appropriate historical contexts; these are often presented in an ad hoc way in courses emphasizing the technical features of symbolic logic. After the motivations for modern symbolic logic are established, its key technical features are presented, including logical connectives, truth tables, logical equivalence, derivations, predicates, and quantifiers. Potential obstacles to students' understanding of these ideas are anticipated, and resolution methods are proposed. Finally, examples of how the ideas of symbolic logic manifest in many modern disciplines are presented; in particular, key concepts in game theory, computer science, biology, grammar, and mathematics are reformulated in the context of symbolic logic. By combining the three perspectives of historical context, technical aspects, and practical applications, this course aims to make logic a more meaningful and accessible subject for students.
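As a small example of the technical material listed above (truth tables and logical equivalence), the snippet below verifies one of De Morgan's laws by exhaustively checking every truth-table row; it illustrates the course content and is not drawn from the thesis itself.

```python
from itertools import product

def equivalent(f, g, n_vars):
    """Check that two boolean formulas agree on every row of the truth table."""
    return all(f(*vals) == g(*vals)
               for vals in product([True, False], repeat=n_vars))

# De Morgan's law: not (p and q)  is equivalent to  (not p) or (not q)
lhs = lambda p, q: not (p and q)
rhs = lambda p, q: (not p) or (not q)
print(equivalent(lhs, rhs, 2))   # True
```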
Contributors: Ryba, Austin (Author) / Vaz, Paul (Thesis director) / Jones, Donald (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / School of Historical, Philosophical and Religious Studies (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
Multiple-channel detection is considered in the context of a sensor network where data can be exchanged directly between sensor nodes that share a common edge in the network graph. Optimal statistical tests for signal source detection with multiple noisy sensors, such as the Generalized Coherence (GC) estimate, use pairwise measurements from every pair of sensors in the network and are thus only applicable when the network graph is completely connected or when data are accumulated at a common fusion center. This thesis presents and exploits a new method that uses maximum-entropy techniques to estimate measurements between pairs of sensors that are not in direct communication, thereby enabling the use of the GC estimate in incompletely connected sensor networks. The research culminates in a conjecture, supported by statistical tests, regarding the topology of the incomplete network graphs.
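For concreteness, the sketch below computes the GC estimate for a fully connected set of channels, using the standard form γ² = 1 − det(Ĉ), where Ĉ is the Gram matrix of unit-normalized channel vectors and so encodes all pairwise coherences. The maximum-entropy completion of missing pairwise entries that the thesis develops is not shown, and the names and test data here are illustrative.

```python
import numpy as np

def gc_estimate(X):
    """Generalized coherence estimate for channels in the rows of X:
    1 minus the determinant of the Gram matrix of unit-normalized channels."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    C = Xn @ Xn.conj().T          # entries are the pairwise sample coherences
    return 1.0 - np.real(np.linalg.det(C))

rng = np.random.default_rng(0)
noise = rng.standard_normal((4, 256))                 # 4 sensors, 256 samples each
common = np.tile(rng.standard_normal(256), (4, 1))    # same signal on every sensor
print("noise only:   ", gc_estimate(noise))           # small when no common signal
print("common signal:", gc_estimate(common + 0.5 * noise))  # close to 1
```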
Contributors: Crider, Lauren Nicole (Author) / Cochran, Douglas (Thesis director) / Renaut, Rosemary (Committee member) / Kosut, Oliver (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2014-05
Description
Cryptocurrencies have become one of the most fascinating forms of currency and economics due to their fluctuating values and lack of centralization. This project attempts to use machine learning methods to effectively model in-sample data for Bitcoin and Ethereum using rule induction methods. The dataset is cleaned by removing entries with missing data. A new column is created to measure the price difference, allowing a more accurate analysis of the change in price. Eight relevant variables are selected using cross-validation: the total number of bitcoins, the total size of the blockchain, the hash rate, mining difficulty, revenue from mining, transaction fees, the cost of transactions, and the estimated transaction volume. The in-sample data is modeled using a simple tree fit, first with one variable and then with eight. Using all eight variables, the in-sample model and data have a correlation of 0.6822657. The in-sample model is improved by first applying bootstrap aggregation (also known as bagging) to fit 400 decision trees to the in-sample data using one variable. Then the random forests technique is applied to the data using all eight variables. This results in a correlation between the model and data of 0.9443413. The random forests technique is then applied to an Ethereum dataset, resulting in a correlation of 0.6904798. Finally, an out-of-sample model is created for Bitcoin and Ethereum using random forests, with a benchmark correlation of 0.03 for financial data. The correlation between the training model and the testing data for Bitcoin was 0.06957639, while for Ethereum the correlation was -0.171125. In conclusion, cryptocurrencies can be modeled accurately in-sample by applying the random forests method to a dataset. Out-of-sample modeling is more difficult, but in some cases performs better than is typical for financial data. It should also be noted that cryptocurrency data has properties similar to other related financial datasets, suggesting future potential for modeling cryptocurrency systems within the financial world.
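A minimal sketch of the in-sample workflow described above, using scikit-learn's random forest regressor over the eight blockchain statistics; the file name and column names are hypothetical stand-ins for whatever the thesis's dataset actually uses.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Hypothetical CSV of daily blockchain statistics; column names are assumptions.
df = pd.read_csv("bitcoin_stats.csv").dropna()   # remove entries with missing data
df["price_diff"] = df["market_price"].diff()     # new column: change in price
df = df.dropna()                                 # first row has no difference

features = ["total_bitcoins", "blockchain_size", "hash_rate", "difficulty",
            "miners_revenue", "transaction_fees", "cost_per_transaction",
            "estimated_transaction_volume"]
model = RandomForestRegressor(n_estimators=400, random_state=0)
model.fit(df[features], df["price_diff"])

# In-sample fit quality measured as a correlation, as in the abstract.
in_sample = np.corrcoef(model.predict(df[features]), df["price_diff"])[0, 1]
print("in-sample correlation:", in_sample)
```

An out-of-sample check would instead hold out a test split before fitting and correlate predictions against the unseen rows.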
Contributors: Browning, Jacob Christian (Author) / Meuth, Ryan (Thesis director) / Jones, Donald (Committee member) / McCulloch, Robert (Committee member) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
Dividing the plane in half leaves every border point of one region a border point of both regions. Can we divide the plane into three or more regions such that any point on the boundary of at least one region is on the border of all the regions? In fact, it is possible to design a dynamical system whose basins of attraction have this Wada property. In certain circumstances, both the Hénon map, a simple mathematical system, and the forced damped pendulum, a physical model, produce Wada basins.
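As an illustration of how such basins can be computed numerically, the sketch below classifies initial conditions of the forced damped pendulum by the average rotation rate the orbit settles into; the parameter values and the winding-number classifier are illustrative assumptions rather than the thesis's exact setup.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Forced damped pendulum: theta'' + b*theta' + sin(theta) = F*cos(t).
# Illustrative parameters near regimes reported to produce Wada basins.
b, F, T = 0.2, 1.66, 2 * np.pi

def pendulum(t, y):
    theta, omega = y
    return [omega, -b * omega - np.sin(theta) + F * np.cos(t)]

def basin_label(theta0, omega0, settle=60, measure=20):
    """Integrate past transients, then label the orbit by its average number
    of full rotations per forcing period (e.g. -1, 0, +1)."""
    t_pts = [settle * T, (settle + measure) * T]
    sol = solve_ivp(pendulum, (0, t_pts[-1]), [theta0, omega0],
                    t_eval=t_pts, rtol=1e-8, atol=1e-8)
    rotations = (sol.y[0, 1] - sol.y[0, 0]) / (2 * np.pi * measure)
    return int(np.round(rotations))

# Coloring a grid of initial conditions by basin_label exposes the finely
# interleaved basin boundaries characteristic of the Wada property.
print(basin_label(0.0, 0.0), basin_label(2.0, 0.0))
```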
Contributors: Whitehurst, Ryan David (Author) / Kostelich, Eric (Thesis director) / Jones, Donald (Committee member) / Armbruster, Dieter (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Chemistry and Biochemistry (Contributor)
Created: 2013-05
Description
Lossy compression is a form of compression that slightly degrades a signal, ideally in ways that are not detectable to the human ear. This is in contrast to lossless compression, in which the signal is not degraded at all. While lossless compression may seem like the best option, lossy compression, which is used for most audio and video, reduces transmission time and results in much smaller file sizes. However, this compression can affect quality if taken too far: the more a waveform is compressed, the more it degrades, and once a file has been lossy-compressed, the process is not reversible. This project observes the degradation of an audio signal after the application of Singular Value Decomposition (SVD) compression, a lossy compression technique that eliminates singular values from a signal's matrix.
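A minimal sketch of the compression step, assuming the audio samples are framed into a matrix whose smallest singular values are then discarded; the frame length, retained rank, and test signal are illustrative choices, not the thesis's settings.

```python
import numpy as np

def svd_compress(samples, frame=512, keep=32):
    """Frame the signal into a matrix, zero out all but the largest `keep`
    singular values, and return the lossy reconstruction."""
    n = len(samples) // frame * frame
    M = samples[:n].reshape(-1, frame)          # one frame per row
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s[keep:] = 0.0                              # eliminate small singular values
    return (U * s) @ Vt                         # reconstruct from the truncated SVD

# Test signal: a 440 Hz tone at 44.1 kHz with a little noise.
rng = np.random.default_rng(1)
audio = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)
audio += 0.01 * rng.standard_normal(44100)
approx = svd_compress(audio).ravel()

# Quantify the degradation as a signal-to-noise ratio in dB.
err = audio[:len(approx)] - approx
snr = 10 * np.log10(np.sum(audio[:len(approx)]**2) / np.sum(err**2))
print("reconstruction SNR (dB):", snr)
```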

Contributors: Hirte, Amanda (Author) / Kosut, Oliver (Thesis director) / Bliss, Daniel (Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05