Matching Items (56)

Description

The Experimental Data Processing (EDP) software is a C++ GUI-based application designed to streamline the process of creating a model for structural systems based on experimental data. EDP is designed to process raw data, filter the data for noise and outliers, create a fitted model to describe that data, complete a probabilistic analysis to describe the variation between replicates of the experimental process, and analyze the reliability of a structural system based on that model. To help design the EDP software to perform the full analysis, the probabilistic and regression modeling aspects of this analysis have been explored. The focus has been on creating and analyzing probabilistic models for the data, adding multivariate and nonparametric fits to raw data, and developing computational techniques that allow these methods to be properly implemented within EDP. For creating a probabilistic model of replicate data, the normal, lognormal, gamma, Weibull, and generalized exponential distributions have been explored. Goodness-of-fit tests, including the chi-squared, Anderson-Darling, and Kolmogorov-Smirnov tests, have been used to assess how well each of these probabilistic models describes the variation of parameters between replicates of an experimental test. An example using Young's modulus data from a Kevlar-49 swath stress-strain test demonstrates how this analysis is performed within EDP. To implement the distributions, numerical routines for the gamma, beta, and hypergeometric functions were implemented, along with an arbitrary-precision library to store numbers that exceed what double-precision floating-point arithmetic can represent. To create a multivariate fit, the multilinear solution was developed as the simplest solution to the multivariate regression problem. This solution was then extended to nonlinear problems that can be linearized into multiple separable terms. These problems were solved analytically with the closed-form solution for multilinear regression, and then numerically using a QR decomposition, which avoids the numerical instabilities associated with matrix inversion. For nonparametric regression, or smoothing, the loess method was developed as a robust technique for filtering noise while maintaining the general structure of the data points. The loess solution addresses the shortcomings of simpler smoothing methods, including the running mean, running line, and kernel smoothing techniques, by combining the strengths of each. The loess smoothing method weights each point in a partition of the data set and then fits either a line or a polynomial within that partition. Both linear and quadratic methods were applied to a carbon fiber compression test, showing that the quadratic model was more accurate but the linear model had a shape that was more effective for analyzing the experimental data. Finally, the EDP program itself was examined to consider its current functionality for processing data, as described by shear tests on carbon fiber data, and the functionality still to be developed. The probabilistic and raw data processing capabilities were demonstrated within EDP, and the multivariate and loess analyses were demonstrated using R.
Now that the functionality and relevant considerations for these methods have been developed, the immediate goal is to finish implementing and integrating these additional features into a version of EDP that performs a full, streamlined structural analysis on experimental data.
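As an illustration of the QR-based regression step mentioned above, a minimal sketch follows (written in Python/NumPy rather than the C++ of EDP, with synthetic data standing in for the experimental measurements):

import numpy as np

# Minimal sketch of multilinear regression solved via a QR decomposition rather
# than by inverting X^T X, which is the numerical-stability point noted above.
# The design matrix and response below are synthetic placeholders.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.random((50, 2))])    # intercept + two predictors
y = X @ np.array([1.0, 2.5, -0.7]) + 0.01 * rng.standard_normal(50)

Q, R = np.linalg.qr(X)              # thin QR factorization of the design matrix
b = np.linalg.solve(R, Q.T @ y)     # back-substitution: R b = Q^T y
print(b)                            # recovers the coefficients up to the noise level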
Contributors: Markov, Elan Richard (Author) / Rajan, Subramaniam (Thesis director) / Khaled, Bilal (Committee member) / Chemical Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Ira A. Fulton School of Engineering (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description

A semi-implicit, fourth-order time-filtered leapfrog numerical scheme is investigated for accuracy and stability, and applied to several test cases, including one-dimensional advection and diffusion, the anelastic equations to simulate the Kelvin-Helmholtz instability, and the global shallow water spectral model to simulate the nonlinear evolution of twin tropical cyclones. The leapfrog scheme leads to computational modes in the solutions to highly nonlinear systems, and time-filters are often used to damp these modes. The proposed filter damps the computational modes without appreciably degrading the physical mode. Its performance in these metrics is superior to the second-order time-filtered leapfrog scheme developed by Robert and Asselin.
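For context, below is a minimal Python sketch of the baseline the abstract compares against: leapfrog time stepping with the classical second-order Robert-Asselin filter, applied to one-dimensional linear advection on a periodic grid. The fourth-order filter proposed here is not reproduced, and all grid and filter parameters are illustrative.

import numpy as np

# Leapfrog time stepping for u_t + c u_x = 0 on a periodic grid, with the
# classical Robert-Asselin filter damping the leapfrog computational mode.
# Parameter values (grid size, wave speed, filter coefficient nu) are illustrative.
nx, c, nu = 200, 1.0, 0.1
dx = 1.0 / nx
dt = 0.5 * dx / c                                   # CFL number 0.5
x = np.arange(nx) * dx

u_old = np.exp(-200.0 * (x - 0.5) ** 2)             # initial Gaussian pulse
# First step: forward Euler with centered differences to start the three-level scheme.
u_now = u_old - 0.5 * c * dt / dx * (np.roll(u_old, -1) - np.roll(u_old, 1))

for _ in range(1000):
    # Leapfrog step: centered in both time and space.
    u_new = u_old - c * dt / dx * (np.roll(u_now, -1) - np.roll(u_now, 1))
    # Robert-Asselin filter applied to the middle time level before shifting.
    u_old = u_now + nu * (u_old - 2.0 * u_now + u_new)
    u_now = u_new

print(np.max(np.abs(u_now)))                        # pulse amplitude after 1000 steps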
Created: 2016-05
Description

Honey bees (Apis mellifera) are responsible for pollinating nearly 80% of all pollinated plants, meaning humans depend on honey bees to pollinate many staple crops. The success or failure of a colony is vital to global food production. There are various complex factors that can contribute to a colony's failure, including pesticides. Neonicotinoids are a popular class of pesticides that have seen widespread use in recent years. In this study we concern ourselves with pesticides and their impact on honey bee colonies. Previous investigations from which we draw significant inspiration include Khoury et al.'s "A Quantitative Model of Honey Bee Colony Population Dynamics," Henry et al.'s "A Common Pesticide Decreases Foraging Success and Survival in Honey Bees," and Brown's "Mathematical Models of Honey Bee Populations: Rapid Population Decline." In this project we extend a mathematical model to investigate the impact of pesticides on a honey bee colony, with birth and death rates that depend on pesticide exposure, and we examine how these rates influence the growth of the colony. Our analysis finds an equilibrium point that depends on the pesticide level. Trace amounts of pesticide are detrimental, as they affect not only death rates but birth rates as well.
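A minimal sketch of the kind of model being extended (a Khoury-style hive-bee/forager system in which a pesticide level lowers the birth term and raises the forager death rate) is given below; the functional forms, parameter names such as p, k_birth, and k_death, and all values are illustrative assumptions rather than the thesis's actual equations.

import numpy as np
from scipy.integrate import solve_ivp

# Hedged sketch: hive bees H and foragers F, with eclosion (births) reduced and
# forager mortality increased by a pesticide exposure level p. All functional
# forms and parameter values are placeholders for illustration only.
L, w = 2000.0, 27000.0                 # max eclosion rate, brood-rearing saturation constant
alpha, sigma = 0.25, 0.75              # baseline recruitment rate, social-inhibition strength
m0 = 0.24                              # baseline forager death rate
p, k_birth, k_death = 0.2, 0.3, 0.5    # pesticide level and its effects on births/deaths

def colony(t, y):
    H, F = y
    eclosion = L * (1.0 - k_birth * p) * H / (w + H)     # new adult bees reared
    recruitment = H * (alpha - sigma * F / (H + F))      # hive bees become foragers
    death = (m0 + k_death * p) * F                       # forager mortality
    return [eclosion - recruitment, recruitment - death]

sol = solve_ivp(colony, (0.0, 200.0), [15000.0, 5000.0])
print(sol.y[:, -1])                    # hive-bee and forager counts after 200 days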
Contributors: Salinas, Armando (Author) / Vaz, Paul (Thesis director) / Jones, Donald (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / School of International Letters and Cultures (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description

A Guide to Financial Mathematics is a comprehensive and easy-to-use study guide for students preparing for one of the first actuarial exams, Exam FM. While there are many resources available to students studying for these exams, this guide is free to students and approaches the material in a manner similar to how it is presented in class at ASU. The guide is available to students and professors in the new Actuarial Science degree program offered by ASU. There are twelve chapters, including financial calculator tips, detailed notes, examples, and practice exercises. A list of referenced material is included at the end of the guide.
Contributors: Dougher, Caroline Marie (Author) / Milovanovic, Jelena (Thesis director) / Boggess, May (Committee member) / Barrett, The Honors College (Contributor) / Department of Information Systems (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2015-05
Description

The NFL is one of the largest and most influential industries in the world. In America, few companies have a stronger hold on the culture or create such a phenomenon from year to year. This project aimed to develop a strategy that helps an NFL team be as successful as possible by determining which positions are most important to a team's success. Data from fifteen years of NFL games were collected, and information on every player in the league was analyzed. First, a benchmark describing an average team was established, and every player in the NFL was then compared to that average. Using the properties of linear regression with ordinary least squares, the project defines a model that shows each position's importance. Finally, once the model had been established, the focus turned to the NFL draft, where the goal was to find a strategy for where each position should be drafted so that it is most likely to give the best payoff, based on the results of the regression in part one.
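A minimal sketch of the ordinary-least-squares step described above might look like the following, with a synthetic position-rating matrix standing in for the fifteen years of player data; the position list, rating scale, and coefficients are illustrative assumptions.

import numpy as np

# Hedged sketch of an OLS regression linking per-position quality to team success.
# The data are synthetic placeholders, not the fifteen years of NFL data.
rng = np.random.default_rng(1)
positions = ["QB", "RB", "WR", "OL", "DL", "LB", "DB"]
n_teams = 32 * 15                                    # team-seasons
X = rng.standard_normal((n_teams, len(positions)))   # position rating relative to league average
true_beta = np.array([3.0, 0.8, 1.2, 1.5, 1.4, 1.0, 1.3])
wins = 8.0 + X @ true_beta + rng.standard_normal(n_teams)

A = np.column_stack([np.ones(n_teams), X])           # intercept = league-average win total
beta, *_ = np.linalg.lstsq(A, wins, rcond=None)      # ordinary least squares
for name, coef in zip(positions, beta[1:]):
    print(f"{name}: {coef:+.2f} wins per unit of rating")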
Contributors: Balzer, Kevin Ryan (Author) / Goegan, Brian (Thesis director) / Dassanayake, Maduranga (Committee member) / Barrett, The Honors College (Contributor) / Economics Program in CLAS (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2015-05
Description

Covering subsequences with sets of permutations arises in many applications, including event-sequence testing. Given a set of subsequences to cover, one is often interested in knowing the minimum number of permutations required to cover each subsequence, and in finding an explicit construction of such a set of permutations that has size close to or equal to the minimum possible. The construction of such permutation coverings has proven to be computationally difficult. While many examples for permutations of small length have been found, and strong asymptotic behavior is known, there are few explicit constructions for permutations of intermediate lengths. Most of these are generated from scratch using greedy algorithms. We explore a different approach here. Starting with a set of permutations with the desired coverage properties, we compute local changes to individual permutations that retain the total coverage of the set. By choosing these local changes so as to make one permutation less "essential" in maintaining the coverage of the set, our method attempts to make a permutation completely non-essential, so that it can be removed without sacrificing total coverage. We develop a post-optimization method to do this and present results on sequence covering arrays and other types of permutation covering problems, demonstrating that it is surprisingly effective.
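To make the notion of an "essential" permutation concrete, the sketch below implements the coverage bookkeeping under an assumed convention: a set of permutations covers an ordered t-tuple of distinct symbols if the tuple appears as a subsequence of some permutation, and a permutation is removable when everything it covers is covered elsewhere. The local-change search itself is not reproduced here.

from itertools import combinations

# Coverage bookkeeping for sequence covering: which ordered t-tuples does each
# permutation cover, and is a given permutation essential to the set's coverage?
def covered(perm, t):
    """All ordered t-tuples of distinct symbols appearing as subsequences of perm."""
    return {tuple(perm[i] for i in idx) for idx in combinations(range(len(perm)), t)}

def essential(perms, i, t):
    """True if perms[i] covers some t-tuple that no other permutation covers."""
    others = set().union(*(covered(p, t) for j, p in enumerate(perms) if j != i))
    return not covered(perms[i], t) <= others

# Toy example with three permutations of {0, 1, 2, 3} and strength t = 2:
perms = [(0, 1, 2, 3), (3, 2, 1, 0), (1, 3, 0, 2)]
print([essential(perms, i, 2) for i in range(len(perms))])   # the third one is removable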
Contributors: Murray, Patrick Charles (Author) / Colbourn, Charles (Thesis director) / Czygrinow, Andrzej (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Physics (Contributor)
Created: 2014-12
Description

Deconvolution of noisy data is an ill-posed problem, and requires some form of regularization to stabilize its solution. Tikhonov regularization is the most common method used, but it depends on the choice of a regularization parameter λ which must generally be estimated using one of several common methods. These methods can be computationally intensive, so I consider their behavior when only a portion of the sampled data is used. I show that the results of these methods converge as the sampling resolution increases, and use this to suggest a method of downsampling to estimate λ. I then present numerical results showing that this method can be feasible, and propose future avenues of inquiry.
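As one concrete instance of the parameter-choice problem discussed above, the sketch below builds a small synthetic deconvolution problem and selects λ by generalized cross-validation using the SVD; the blurring kernel, test signal, and noise level are illustrative, and the downsampling idea would amount to running the same selection on a subset of the sampled data.

import numpy as np

# Hedged sketch: Tikhonov-regularized deconvolution with lambda chosen by GCV.
# The Gaussian blurring kernel, boxcar signal, and noise level are illustrative.
def gcv(lmb, U, s, b):
    """Generalized cross-validation score for Tikhonov regularization via the SVD."""
    beta = U.T @ b
    filt = s**2 / (s**2 + lmb**2)               # Tikhonov filter factors
    resid = np.sum(((1.0 - filt) * beta) ** 2)  # squared residual norm
    return resid / (len(b) - np.sum(filt)) ** 2

n = 200
t = np.linspace(0.0, 1.0, n)
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / 0.001) / n    # Gaussian blur matrix
x_true = ((t > 0.3) & (t < 0.7)).astype(float)               # boxcar test signal
b = A @ x_true + 1e-3 * np.random.default_rng(0).standard_normal(n)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
lambdas = np.logspace(-6, 0, 200)
lam = lambdas[np.argmin([gcv(lmb, U, s, b) for lmb in lambdas])]
x_reg = Vt.T @ (s / (s**2 + lam**2) * (U.T @ b))             # regularized solution
print(lam, np.linalg.norm(x_reg - x_true))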
Contributors: Hansen, Jakob Kristian (Author) / Renaut, Rosemary (Thesis director) / Cochran, Douglas (Committee member) / Barrett, The Honors College (Contributor) / School of Music (Contributor) / Economics Program in CLAS (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2015-05
Description

The purpose of this thesis is to examine the events surrounding the creation of the oboe and its rapid spread throughout Europe during the mid to late seventeenth century. The first section describes similar instruments that existed for thousands of years before the invention of the oboe. The following sections examine reasons and methods for the oboe's invention, as well as possible causes of its migration from its starting place in France to other European countries and many other places around the world. I conclude that the oboe was invented to suit the needs of composers in the court of Louis XIV, and that it was brought to other countries by French performers who left France for many reasons, including escaping the authority of the composer Jean-Baptiste Lully and, in some cases, promoting French culture abroad.
Contributors: Cook, Mary Katherine (Author) / Schuring, Martin (Thesis director) / Micklich, Albie (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / School of Music (Contributor)
Created: 2015-05
Description

We model communication among social insects as an interacting particle system in which individuals perform one of two tasks and neighboring sites anti-mimic one another. Parameters of our model are a probability of defection in (0, 1) and a relative cost c_i > 0 to the individual performing task i. We examine this process on complete graphs, bipartite graphs, and the integers, answering questions about the relationship between communication, defection rates and the division of labor. Assuming the division of labor is ideal when exactly half of the colony is performing each task, we find that on some bipartite graphs and the integers it can eventually be made arbitrarily close to optimal if defection rates are sufficiently small. On complete graphs the fraction of individuals performing each task is also closest to one half when there is no defection, but is bounded by a constant dependent on the relative costs of each task.
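Purely to illustrate the flavor of such dynamics, and not the thesis's actual process, a toy Monte Carlo sketch on a complete graph might update individuals as follows; the defection rule, the parameter names eps and c, and all values are assumptions.

import numpy as np

# Toy illustration only: anti-mimicry dynamics on a complete graph with a small
# defection probability. The update rule and parameters (eps, c) are assumptions.
rng = np.random.default_rng(0)
N, eps, c = 200, 0.05, (1.0, 2.0)       # colony size, defection probability, task costs
tasks = rng.integers(0, 2, size=N)      # each individual currently performs task 0 or 1

for _ in range(50_000):
    i = rng.integers(N)
    if rng.random() < eps:
        tasks[i] = int(c[1] < c[0])     # defect: switch to the cheaper task
    else:
        j = rng.integers(N)             # observe a random colony member (complete graph)
        tasks[i] = 1 - tasks[j]         # anti-mimic: do the opposite of what j is doing

print(tasks.mean())                     # fraction performing task 1 (0.5 would be ideal)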
Contributors: Arcuri, Alesandro Antonio (Author) / Lanchier, Nicolas (Thesis director) / Kang, Yun (Committee member) / Fewell, Jennifer (Committee member) / Barrett, The Honors College (Contributor) / School of International Letters and Cultures (Contributor) / Economics Program in CLAS (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2015-05
Description

This paper focuses on the Szemerédi regularity lemma, a result in the field of extremal graph theory. The lemma says that the vertex set of every graph can be partitioned into a bounded number of parts of nearly equal size such that, between most pairs of parts, the edges are distributed in a fairly uniform way. Definitions and notation will be established, leading to explorations of three proofs of the regularity lemma: a version of the original proof, a "Pythagoras" proof using elementary geometry, and a proof using concepts of spectral graph theory. This paper is intended to supplement the proofs with background information about the concepts utilized. Furthermore, it is hoped that this paper will serve as another resource for students and others beginning their study of the regularity lemma.
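For reference, a standard statement of ε-regularity and of the lemma, in conventional notation that may differ in detail from the definitions established in the paper, reads:

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
For disjoint vertex sets $A,B$ in a graph, write $d(A,B)=e(A,B)/(|A||B|)$ for the edge
density between them. The pair $(A,B)$ is \emph{$\varepsilon$-regular} if for all
$X\subseteq A$ and $Y\subseteq B$ with $|X|\ge\varepsilon|A|$ and $|Y|\ge\varepsilon|B|$,
\[
  \left| d(X,Y)-d(A,B) \right| \le \varepsilon .
\]
\textbf{Szemer\'edi regularity lemma.} For every $\varepsilon>0$ and every $m\ge 1$ there
exists $M=M(\varepsilon,m)$ such that every graph admits a partition of its vertex set into
$k$ parts $V_1,\dots,V_k$ with $m\le k\le M$, the part sizes differing by at most one, in
which all but at most $\varepsilon k^2$ of the pairs $(V_i,V_j)$ are $\varepsilon$-regular.
\end{document}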
Contributors: Byrne, Michael John (Author) / Czygrinow, Andrzej (Thesis director) / Kierstead, Hal (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Chemistry and Biochemistry (Contributor)
Created: 2015-05