Matching Items (65)

Description
A semi-implicit, fourth-order time-filtered leapfrog numerical scheme is investigated for accuracy and stability, and applied to several test cases, including one-dimensional advection and diffusion, the anelastic equations to simulate the Kelvin-Helmholtz instability, and the global shallow water spectral model to simulate the nonlinear evolution of twin tropical cyclones. The leapfrog scheme leads to computational modes in the solutions to highly nonlinear systems, and time-filters are often used to damp these modes. The proposed filter damps the computational modes without appreciably degrading the physical mode. Its performance in these metrics is superior to the second-order time-filtered leapfrog scheme developed by Robert and Asselin.
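For context, the baseline that the proposed filter is compared against can be sketched directly. Below is a minimal Python illustration of the classic Robert-Asselin-filtered leapfrog scheme applied to one-dimensional linear advection; the parameter values are illustrative, and the thesis's fourth-order filter is not reproduced here.

```python
import numpy as np

# Parameters for 1D linear advection u_t + c u_x = 0 on a periodic domain.
c, L, nx, nt = 1.0, 1.0, 100, 500
dx = L / nx
dt = 0.5 * dx / c              # CFL number 0.5
nu = 0.05                      # Robert-Asselin filter coefficient (illustrative)

x = np.linspace(0.0, L, nx, endpoint=False)
u_old = np.sin(2 * np.pi * x)            # u^{n-1}
u_now = np.sin(2 * np.pi * (x - c * dt)) # u^n, exact one-step start

for _ in range(nt):
    # Centered-in-time, centered-in-space leapfrog step.
    u_new = u_old - c * dt / dx * (np.roll(u_now, -1) - np.roll(u_now, 1))
    # Robert-Asselin filter: smooth u^n to damp the computational mode.
    u_now = u_now + nu * (u_new - 2.0 * u_now + u_old)
    u_old, u_now = u_now, u_new
```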
Created: 2016-05

Description
Honey bees (Apis mellifera) are responsible for pollinating nearly 80% of all pollinated plants, meaning humans depend on honey bees to pollinate many staple crops. The success or failure of a colony is vital to global food production. There are various complex factors that can contribute to a colony's failure, including pesticides. Neonicotinoids are a class of pesticides that has seen widespread recent use. In this study we concern ourselves with pesticides and their impact on honey bee colonies. Previous investigations from which we draw significant inspiration include Khoury et al.'s "A Quantitative Model of Honey Bee Colony Population Dynamics," Henry et al.'s "A Common Pesticide Decreases Foraging Success and Survival in Honey Bees," and Brown's "Mathematical Models of Honey Bee Populations: Rapid Population Decline." In this project we extend a mathematical model to investigate the impact of pesticides on a honey bee colony, with birth and death rates dependent on pesticide exposure, and we examine how these rates influence the growth of the colony. Our analysis finds an equilibrium point that depends on pesticide levels. Trace amounts of pesticide are detrimental, as they affect not only death rates but birth rates as well.
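As an illustration of the kind of model being extended, here is a minimal sketch in the spirit of Khoury et al.'s hive-bee/forager system, with a hypothetical pesticide level p raising forager mortality and reducing brood success. The functional forms and parameter values are assumptions for illustration, not the thesis's actual model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hive-bee/forager model in the spirit of Khoury et al. (2011).
# p is a hypothetical pesticide level; how it enters the mortality and
# eclosion terms below is an illustrative assumption.
L_max, w = 2000.0, 27000.0   # max eclosion rate, brood saturation constant
alpha, sigma = 0.25, 0.75    # recruitment and social-inhibition rates
m0, p = 0.24, 0.1            # baseline forager death rate, pesticide level

def rhs(t, y):
    H, F = y                                     # hive bees, foragers
    N = H + F
    eclosion = L_max * N / (N + w) * (1.0 - p)   # brood success reduced by p
    recruit = H * (alpha - sigma * F / N)        # hive bees become foragers
    death = m0 * (1.0 + 4.0 * p) * F             # forager mortality raised by p
    return [eclosion - recruit, recruit - death]

sol = solve_ivp(rhs, (0.0, 250.0), [15000.0, 5000.0])
print(sol.y[:, -1])   # approximate long-run (H, F) population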
Contributors: Salinas, Armando (Author) / Vaz, Paul (Thesis director) / Jones, Donald (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / School of International Letters and Cultures (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05

Description
A Guide to Financial Mathematics is a comprehensive and easy-to-use study guide for students preparing for one of the first actuarial exams, Exam FM. While there are many resources available for studying for these exams, this guide is free to students and approaches the material in a way similar to how it is presented in class at ASU. The guide is available to students and professors in the new Actuarial Science degree program offered by ASU. There are twelve chapters, including financial calculator tips, detailed notes, examples, and practice exercises. A list of referenced material is included at the end of the guide.
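To give a flavor of the material Exam FM covers, here is a short sketch of two standard time-value-of-money computations; the formulas are standard, and the numbers are illustrative.

```python
# Standard Exam FM time-value-of-money formulas (illustrative numbers).

def annuity_pv(payment, i, n):
    """Present value of an annuity-immediate: payment * (1 - v^n) / i."""
    v = 1.0 / (1.0 + i)            # annual discount factor
    return payment * (1.0 - v**n) / i

def accumulated_value(principal, i, n):
    """Accumulated value under compound interest: principal * (1 + i)^n."""
    return principal * (1.0 + i) ** n

print(annuity_pv(100.0, 0.05, 10))          # ~772.17
print(accumulated_value(1000.0, 0.05, 10))  # ~1628.89
```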
Contributors: Dougher, Caroline Marie (Author) / Milovanovic, Jelena (Thesis director) / Boggess, May (Committee member) / Barrett, The Honors College (Contributor) / Department of Information Systems (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2015-05

Description
This paper explores how marginalist economics defines and inevitably constrains Victorian sensation fiction's content and composition. I argue that economic intuition implies that sensationalist heroes and antagonists, writers and readers all pursued a fundamental, "rational" aim: the attainment of pleasure. So although "sensationalism" took on connotations of moral impropriety in the Victorian age, sensation fiction primarily involves experiences of pain on the page that excite the reader's pleasure. As such, sensationalism as a whole can be seen as a conformist product, one which mirrors the effects of all commodities on the market, rather than as a rebellious one. Indeed, contrary to modern and contemporary critics' assumptions, sensation fiction may not be as scandalous as it seems.
Contributors: Fischer, Brett Andrew (Author) / Bivona, Daniel (Thesis director) / Looser, Devoney (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Economics Program in CLAS (Contributor) / School of Politics and Global Studies (Contributor) / Department of English (Contributor)
Created: 2014-12

Description
Using survey data on the characteristics of college debaters, this study measures disparities in participation and success for women and racial and ethnic minorities. It then uses econometric tools to assess whether there is an in-group judging bias in college debate that systematically disadvantages female and minority participants. Debate serves as a testing ground for competing economic theories of taste-based and statistical discrimination, applied to a higher education context. The study finds persistent disparities in participation and success for female participants, and judges are more likely to vote for debaters who share their gender. There is also a significant disparity in the participation of racial and ethnic minority debaters and judges, as well as female judges.
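The thesis's exact econometric specification is not reproduced here; as a sketch of one natural test, a linear probability model regresses a judge's vote on an indicator for sharing the debater's gender. The data below are simulated purely for demonstration, since the survey data are not public.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated round-level data: one row per judge-debater decision,
# with a built-in 5-point bias toward same-gender debaters.
rng = np.random.default_rng(0)
n = 2000
same_gender = rng.integers(0, 2, n)
experience = rng.integers(0, 5, n)
vote = (rng.random(n) < 0.50 + 0.05 * same_gender).astype(int)
rounds = pd.DataFrame({"vote": vote,
                       "same_gender": same_gender,
                       "experience": experience})

# Linear probability model with robust (HC1) standard errors:
# a positive coefficient on same_gender is consistent with in-group bias.
model = smf.ols("vote ~ same_gender + experience",
                data=rounds).fit(cov_type="HC1")
print(model.params["same_gender"])   # should recover roughly 0.05
```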
Contributors: Vered, Michelle Nicole (Author) / Silverman, Daniel (Thesis director) / Symonds, Adam (Committee member) / Dillon, Eleanor (Committee member) / Barrett, The Honors College (Contributor) / Economics Program in CLAS (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / School of Politics and Global Studies (Contributor)
Created: 2014-12

Description
Covering subsequences with sets of permutations arises in many applications, including event-sequence testing. Given a set of subsequences to cover, one is often interested in knowing the minimum number of permutations required to cover every subsequence, and in finding an explicit construction of a covering set whose size is close to or equal to that minimum. The construction of such permutation coverings has proven to be computationally difficult. While many examples for permutations of small length have been found, and strong asymptotic behavior is known, there are few explicit constructions for permutations of intermediate lengths; most of these are generated from scratch using greedy algorithms. We explore a different approach here. Starting with a set of permutations with the desired coverage properties, we compute local changes to individual permutations that retain the total coverage of the set. By choosing these local changes so as to make one permutation less "essential" in maintaining the coverage of the set, our method attempts to make a permutation completely non-essential, so that it can be removed without sacrificing total coverage. We develop a post-optimization method to do this and present results on sequence covering arrays and other types of permutation covering problems, demonstrating that it is surprisingly effective.
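To make the notion of a "non-essential" permutation concrete, here is a minimal sketch for the sequence-covering case, where a set of permutations must cover every ordered t-subsequence; a permutation is non-essential when everything it covers is also covered by the rest of the set. The tiny example is illustrative only.

```python
from itertools import combinations

def covered(perm, t):
    """All length-t subsequences (in original order) covered by one permutation."""
    return set(combinations(perm, t))

def is_nonessential(perms, idx, t):
    """True if perms[idx] covers nothing that the rest don't also cover."""
    rest = set()
    for j, p in enumerate(perms):
        if j != idx:
            rest |= covered(p, t)
    return covered(perms[idx], t) <= rest

# Tiny example: 3-subsequence coverage over symbols 0..3.
perms = [(0, 1, 2, 3), (3, 2, 1, 0), (1, 3, 0, 2), (2, 0, 3, 1)]
removable = [i for i in range(len(perms)) if is_nonessential(perms, i, 3)]
print(removable)   # indices of permutations that could be dropped
```

Post-optimization would apply local changes (e.g., swaps within a permutation) that preserve total coverage while shrinking the set of subsequences only one permutation covers, until some permutation becomes removable.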
Contributors: Murray, Patrick Charles (Author) / Colbourn, Charles (Thesis director) / Czygrinow, Andrzej (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Physics (Contributor)
Created: 2014-12

Description
Deconvolution of noisy data is an ill-posed problem, and requires some form of regularization to stabilize its solution. Tikhonov regularization is the most common method used, but it depends on the choice of a regularization parameter λ which must generally be estimated using one of several common methods. These methods can be computationally intensive, so I consider their behavior when only a portion of the sampled data is used. I show that the results of these methods converge as the sampling resolution increases, and use this to suggest a method of downsampling to estimate λ. I then present numerical results showing that this method can be feasible, and propose future avenues of inquiry.
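Here is a minimal sketch of Tikhonov regularization in its standard SVD filter-factor form; λ is fixed by hand, where a parameter-choice method such as GCV or the L-curve would normally estimate it. The blur matrix and noise level are illustrative.

```python
import numpy as np

def tikhonov(A, b, lam):
    """Tikhonov solution x = argmin ||Ax - b||^2 + lam^2 ||x||^2 via the SVD:
    x = sum_i  s_i / (s_i^2 + lam^2) * (u_i . b) * v_i."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeffs = s / (s**2 + lam**2) * (U.T @ b)
    return Vt.T @ coeffs

# Illustrative ill-conditioned deconvolution: Gaussian blur matrix.
n = 100
t = np.arange(n)
A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 2.0) ** 2)
x_true = np.sin(2 * np.pi * t / n)
b = A @ x_true + 1e-2 * np.random.default_rng(0).standard_normal(n)

x_reg = tikhonov(A, b, lam=0.1)   # lambda chosen by hand in this sketch
```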
Contributors: Hansen, Jakob Kristian (Author) / Renaut, Rosemary (Thesis director) / Cochran, Douglas (Committee member) / Barrett, The Honors College (Contributor) / School of Music (Contributor) / Economics Program in CLAS (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2015-05

Description
This paper focuses on the Szemerédi regularity lemma, a result in the field of extremal graph theory. The lemma says that every graph can be partitioned into a bounded number of nearly equal parts such that the edges between most pairs of parts are distributed in a fairly uniform way. Definitions and notation are established, leading to explorations of three proofs of the regularity lemma: a version of the original proof, a "Pythagoras" proof using elementary geometry, and a proof using concepts of spectral graph theory. This paper is intended to supplement the proofs with background information about the concepts they use, and it is hoped that it will serve as another resource for students and others beginning study of the regularity lemma.
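For reference, the standard statement of ε-regularity and of the lemma, in the usual notation:

```latex
% Edge density between disjoint vertex sets X and Y:
\[
d(X,Y) = \frac{e(X,Y)}{|X|\,|Y|}.
\]
% A pair (A,B) is eps-regular if small subsets cannot skew the density:
\[
|d(X,Y) - d(A,B)| \le \varepsilon
\quad \text{for all } X \subseteq A,\ Y \subseteq B
\text{ with } |X| \ge \varepsilon|A|,\ |Y| \ge \varepsilon|B|.
\]
% Regularity lemma: for every eps > 0 and m >= 1 there is an M(eps, m)
% such that every graph G has a partition
% V(G) = V_0 \cup V_1 \cup \dots \cup V_k with m <= k <= M,
% |V_0| <= eps|V(G)|, |V_1| = \dots = |V_k|, in which all but at most
% eps * k^2 of the pairs (V_i, V_j) are eps-regular.
```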
Contributors: Byrne, Michael John (Author) / Czygrinow, Andrzej (Thesis director) / Kierstead, Hal (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Chemistry and Biochemistry (Contributor)
Created: 2015-05

Description
According to the Tax Policy Center, a joint project of the Brookings Institution and the Urban Institute, the Earned Income Tax Credit (EITC) will provide 26 million households with 60 billion dollars of reduced taxes and refunds in 2015, resources that serve to lift millions of families above the federal poverty line. Responding to the popularity of EITC programs and recent discussion of their expansion for childless adults, I select three comparative case studies of state-level EITC reform from 2005 to 2013. Each state represents a different kind of policy reform: the creation of a supplemental credit in Connecticut, credit reduction in New Jersey, and credit expansion for childless adults in Maryland. For each case study, I use Current Population Survey panel data from the March Supplement to complete a differences-in-differences (DD) analysis of EITC policy changes. Specifically, I analyze the effects of policy reform on total earned income, employment, and usual hours worked. For comparison groups, I construct counterfactual populations from northeastern U.S. states, using people of color with less than a college degree as my treatment group because of their increased sensitivity to EITC policy reform. I find no statistically significant effects of policy creation in Connecticut, significant decreases in employment and hours worked in New Jersey, and significant increases in earnings and hours worked in Maryland. My work supports the findings of other empirical studies, suggesting that awareness of new supplemental EITC programs is critical to their effectiveness, while demonstrating that these programs can affect the labor supply and outcomes of eligible groups.
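Schematically, the DD design estimates the policy effect as the coefficient on the treatment-by-post interaction; the variable names here are generic, not the thesis's exact specification.

```latex
\[
y_{ist} = \beta_0 + \beta_1 \,\mathrm{Treat}_i + \beta_2 \,\mathrm{Post}_t
        + \beta_3 \,(\mathrm{Treat}_i \times \mathrm{Post}_t)
        + X_{ist}'\gamma + \varepsilon_{ist}
\]
% y: earned income, employment, or usual hours for person i in state s at
% time t; Treat: member of the sensitive comparison group in the reforming
% state; Post: observed after the reform. The DD estimate of the reform's
% effect is beta_3.
```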
Contributors: Richard, Katherine Rose (Author) / Dillon, Eleanor Wiske (Thesis director) / Silverman, Daniel (Committee member) / Herbst, Chris (Committee member) / Barrett, The Honors College (Contributor) / School of International Letters and Cultures (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Economics Program in CLAS (Contributor)
Created: 2015-05

Description
Lights Out is a puzzle game in which the goal is to turn off all the lights on an n×n board, starting from a random configuration. Because each button press toggles a fixed set of lights, the game can be modeled with linear algebra over the field Z mod 2: the system Ap = s is the center of the investigation when determining solvability for any n×n board, since A is not always invertible, leading to some interesting cases. The goal of this thesis was to construct a model that allows the player to solve for the pushes that attain the zero state of an n×n system. Constructing the model gave a procedure for solving the puzzle. The procedure presented here first uses a simple clearing technique (valid for any board size) to turn off all the lights except those in the last row, which we call the standard clear. The heart of the technique is a way to use the information about which lights remain lit in the last row to determine which switches in the first row need to be pushed before the standard clear. This part of the solution algorithm we call the first-row adjustment, and it depends heavily on the specific board size n. Finally, after these first-row pushes are made, the standard clear turns off all the lights, including (seemingly magically) the last row. Thus the solution to the Lights Out puzzle of a given size is reduced to finding a first-row adjustment for that size. (Please refer to the actual thesis for the full abstract.)
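A minimal sketch of the linear-algebra formulation: build A over Z mod 2 and solve Ap = s by Gaussian elimination over GF(2). This solves the system directly, rather than via the thesis's standard-clear/first-row-adjustment procedure.

```python
import numpy as np

def lights_out_matrix(n):
    """Button matrix A over Z_2: pressing button j toggles light i iff
    cells i and j coincide or are grid neighbors."""
    A = np.zeros((n * n, n * n), dtype=np.uint8)
    for r in range(n):
        for c in range(n):
            i = r * n + c
            for dr, dc in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < n and 0 <= cc < n:
                    A[i, rr * n + cc] = 1
    return A

def solve_gf2(A, s):
    """Gaussian elimination over Z_2 for Ap = s; returns p or None."""
    M = np.concatenate([A.copy(), s.reshape(-1, 1)], axis=1) % 2
    rows, cols = A.shape
    pivots, r = [], 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i, c]), None)
        if pivot is None:
            continue                     # free column (A not invertible)
        M[[r, pivot]] = M[[pivot, r]]    # swap pivot row into place
        for i in range(rows):
            if i != r and M[i, c]:
                M[i] ^= M[r]             # XOR = addition mod 2
        pivots.append(c)
        r += 1
    if any(M[i, -1] for i in range(r, rows)):
        return None                      # inconsistent: unsolvable board
    p = np.zeros(cols, dtype=np.uint8)
    for i, c in enumerate(pivots):
        p[c] = M[i, -1]                  # free variables stay 0
    return p

n = 5
A = lights_out_matrix(n)
s = np.ones(n * n, dtype=np.uint8)       # all lights on
p = solve_gf2(A, s)
print(None if p is None else p.reshape(n, n))  # which buttons to press
```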
Created: 2015-05