Matching Items (28)
Description

The Experimental Data Processing (EDP) software is a C++ GUI-based application that streamlines the process of creating a model for structural systems based on experimental data. EDP is designed to process raw data, filter the data for noise and outliers, create a fitted model to describe that data, complete a probabilistic analysis to describe the variation between replicates of the experimental process, and analyze the reliability of a structural system based on that model. To help design the EDP software to perform the full analysis, the probabilistic and regression modeling aspects of this analysis have been explored. The focus has been on creating and analyzing probabilistic models for the data, adding multivariate and nonparametric fits to raw data, and developing computational techniques that allow these methods to be properly implemented within EDP. For creating a probabilistic model of replicate data, the normal, lognormal, gamma, Weibull, and generalized exponential distributions have been explored. Goodness-of-fit tests, including the chi-squared, Anderson-Darling, and Kolmogorov-Smirnov tests, have been used to analyze the effectiveness of each of these probabilistic models in describing the variation of parameters between replicates of an experimental test. An example using Young's modulus data for a Kevlar-49 swath stress-strain test demonstrates how this analysis is performed within EDP. To implement the distributions, numerical solutions for the gamma, beta, and hypergeometric functions were implemented, along with an arbitrary-precision library to store numbers that exceed the maximum size of double-precision floating-point values. To create a multivariate fit, the multilinear solution was developed as the simplest solution to the multivariate regression problem. This solution was then extended to solve nonlinear problems that can be linearized into multiple separable terms. These problems were solved analytically with the closed-form solution for the multilinear regression, and then numerically by using a QR decomposition, which avoids the numerical instabilities associated with matrix inversion. For nonparametric regression, or smoothing, the loess method was developed as a robust technique for filtering noise while maintaining the general structure of the data points. The loess solution was created by addressing the concerns associated with simpler smoothing methods, including the running-mean, running-line, and kernel smoothing techniques, and combining the strengths of each of these methods. The loess smoothing method involves weighting each point in a partition of the data set and then fitting either a line or a polynomial within that partition. Both linear and quadratic methods were applied to a carbon fiber compression test, showing that the quadratic model was more accurate but the linear model had a shape that was more effective for analyzing the experimental data. Finally, the EDP program itself was explored to consider its current functionality for processing data, as demonstrated by shear tests on carbon fiber data, and the future functionality to be developed. The probabilistic and raw-data processing capabilities were demonstrated within EDP, and the multivariate and loess analyses were demonstrated using R.
As the functionality and relevant considerations for these methods have been developed, the immediate goal is to finish implementing and integrating these additional features into a version of EDP that performs a full streamlined structural analysis on experimental data.
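As an illustration of the methods this abstract describes, the following is a minimal Python sketch of the distribution-fitting, goodness-of-fit, and QR-based regression steps, with SciPy and NumPy standing in for EDP's C++ internals; the replicate data and design matrix are hypothetical stand-ins, not the thesis data.

```python
# Sketch of the probabilistic-fit workflow described in the abstract.
# The `youngs_modulus` values are hypothetical stand-ins for the
# Kevlar-49 replicate data used in the thesis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
youngs_modulus = rng.normal(loc=112.0, scale=4.5, size=30)  # hypothetical replicates

# Fit candidate distributions and compare them with a Kolmogorov-Smirnov test.
for dist in (stats.norm, stats.lognorm, stats.gamma, stats.weibull_min):
    params = dist.fit(youngs_modulus)
    ks_stat, p_value = stats.kstest(youngs_modulus, dist.name, args=params)
    print(f"{dist.name:>12}: KS statistic={ks_stat:.3f}, p={p_value:.3f}")

# Multilinear regression via QR decomposition, avoiding explicit inversion
# of X'X. X is a hypothetical design matrix with an intercept column.
X = np.column_stack([np.ones(30), rng.uniform(0, 1, 30), rng.uniform(0, 1, 30)])
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(0, 0.1, 30)
Q, R = np.linalg.qr(X)
beta = np.linalg.solve(R, Q.T @ y)  # solve R @ beta = Q'y by back-substitution
print("coefficients:", beta)
```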
Contributors: Markov, Elan Richard (Author) / Rajan, Subramaniam (Thesis director) / Khaled, Bilal (Committee member) / Chemical Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Ira A. Fulton School of Engineering (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description

Bioscience High School, a small magnet high school located in Downtown Phoenix with a STEAM (Science, Technology, Engineering, Arts, Math) focus, has been pushing to establish a computer science curriculum for all of its students from freshman to senior year. The school's Mision (Mission and Vision) is to: "...provide a rigorous, collaborative, and relevant academic program emphasizing an innovative, problem-based curriculum that develops literacy in the sciences, mathematics, and the arts, thus cultivating critical thinkers, creative problem-solvers, and compassionate citizens, who are able to thrive in our increasingly complex and technological communities." Computational thinking is an important part of developing the kind of future problem solver Bioscience High School aims to produce. Bioscience High School is also unique in that every student has a computer available to use, so adding computer science to the curriculum aligns with the school's goal of utilizing its resources to their full potential. However, the school's attempt at computer science integration has fallen short due to a lack of expertise among the math and science teachers. The lack of training and support has postponed the development of the program, and the school is in need of someone with expertise in the field to help reboot it. As a result, I have decided to create a course focused on teaching students the concepts of computational thinking and its application through Scratch and Arduino programming.
Contributors: Liu, Deming (Author) / Meuth, Ryan (Thesis director) / Nakamura, Mutsumi (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description

The NFL is one of the largest and most influential industries in the world. In America, few companies have a stronger hold on the culture or create such a phenomenon from year to year. This project aimed to develop a strategy that helps an NFL team be as successful as possible by defining which positions are most important to a team's success. Data from fifteen years of NFL games was collected, and information on every player in the league was analyzed. First, a benchmark describing an average team was established, and then every player in the NFL was compared to that average. Using the properties of linear regression with ordinary least squares, this project defines a model that shows each position's importance. Finally, once such a model had been established, the focus turned to the NFL draft, where the goal was to find a strategy for where each position should be drafted so that it is most likely to give the best payoff based on the results of the regression in part one.
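A minimal sketch of the kind of OLS regression the abstract describes, regressing team wins on per-position performance scores; the position list, weights, and data below are synthetic assumptions for illustration, not the study's actual data or model.

```python
# Hedged sketch: team wins regressed on position scores measured relative
# to a league-average benchmark, estimated by ordinary least squares.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_team_seasons = 480  # roughly fifteen seasons of NFL teams (assumption)
positions = ["QB", "RB", "WR", "OL", "DL", "LB", "DB"]

# Synthetic position scores, zero meaning league-average at that position.
X = rng.normal(0, 1, size=(n_team_seasons, len(positions)))
true_weights = np.array([3.0, 0.8, 1.2, 1.5, 1.4, 1.0, 1.3])  # hypothetical
wins = 8 + X @ true_weights + rng.normal(0, 1.5, n_team_seasons)

model = sm.OLS(wins, sm.add_constant(X)).fit()
for pos, coef in zip(positions, model.params[1:]):
    print(f"{pos}: estimated wins added per unit above average = {coef:.2f}")
```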
Contributors: Balzer, Kevin Ryan (Author) / Goegan, Brian (Thesis director) / Dassanayake, Maduranga (Committee member) / Barrett, The Honors College (Contributor) / Economics Program in CLAS (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2015-05
Description

A thorough understanding of the key concepts of logic is critical for student success. Logic is often not explicitly taught as its own subject in modern curriculums, which results in misconceptions among students as to what comprises logical reasoning. In addition, current standardized testing schemes often promote teaching styles which emphasize students' abilities to memorize set problem-solving methods over their capacities to reason abstractly and creatively. These phenomena, in tandem with halting progress in United States education compared to other developed nations, suggest that implementing logic courses into public schools and universities can better prepare students for professional careers and beyond. In particular, logic is essential for mathematics students as they transition from calculation-based courses to theoretical, proof-based classes. Many students find this adjustment difficult, and existing university-level courses which emphasize the technical aspects of symbolic logic do not fully bridge the gap between these two different approaches to mathematics. As a step towards resolving this problem, this project proposes a logic course which integrates historical, technical, and interdisciplinary investigations to present logic as a robust and meaningful subject warranting independent study. This course is designed with mathematics students in mind, with particular emphasis on different formulations of deductively valid proof schemes. Additionally, this class can either be taught before existing logic classes in an effort to gradually expose students to logic over an extended period of time, or it can replace current logic courses as a more holistic introduction to the subject. The first section of the course investigates historical developments in studies of argumentation and logic throughout different civilizations; specifically, the works of ancient China, ancient India, ancient Greece, medieval Europe, and modernity are investigated. Along the way, several important themes are highlighted within appropriate historical contexts; these are often presented in an ad hoc way in courses emphasizing technical features of symbolic logic. After the motivations for modern symbolic logic are established, the key technical features of symbolic logic are presented, including logical connectives, truth tables, logical equivalence, derivations, predicates, and quantifiers. Potential obstacles in students' understandings of these ideas are anticipated, and resolution methods are proposed. Finally, examples of how ideas of symbolic logic are manifested in many modern disciplines are presented. In particular, key concepts in game theory, computer science, biology, grammar, and mathematics are reformulated in the context of symbolic logic. By combining the three perspectives of historical context, technical aspects, and practical applications of symbolic logic, this course will ideally make logic a more meaningful and accessible subject for students.
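As a small illustration of the technical content listed above (connectives, truth tables, logical equivalence), here is a hedged Python sketch, not taken from the thesis, that tabulates a truth table for a compound proposition and can be used to check an equivalence such as (p -> q) versus (not p or q).

```python
# Illustrative truth-table generator for propositions over boolean variables.
from itertools import product

def truth_table(expr, variables):
    """Print a truth table for `expr`, a function of boolean arguments."""
    print(" | ".join(variables + [expr.__doc__ or "expr"]))
    for values in product([True, False], repeat=len(variables)):
        row = list(values) + [expr(*values)]
        print(" | ".join(str(v) for v in row))

def implies(p, q):
    """p -> q"""
    # Material implication: (p -> q) is logically equivalent to (not p or q).
    return (not p) or q

truth_table(implies, ["p", "q"])
```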
Contributors: Ryba, Austin (Author) / Vaz, Paul (Thesis director) / Jones, Donald (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / School of Historical, Philosophical and Religious Studies (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description

Exchange-traded funds (ETFs) are in many ways similar to more traditional closed-end mutual funds, although they differ in a crucial way. ETFs rely on a creation and redemption feature to achieve their functionality, and this mechanism is designed to minimize the deviations that occur between the ETF's listed price and the net asset value of the ETF's underlying assets. However, while this does cause ETF deviations to be generally lower than those of their mutual fund counterparts, as our paper explores, this process does not eliminate these deviations completely. This article builds off an earlier paper by Engle and Sarkar (2006) that investigates these properties of premiums (discounts) of ETFs from their fair market value, and looks to see whether these premia have changed in the last 10 years. Our paper then diverges from the original and takes a deeper look specifically into the standard deviations of these premia.

Our findings show that over 70% of an ETF's standard deviation of premia can be explained through a linear combination of two variables: a categorical variable (Domestic [US], Developed, Emerging) and a discrete variable (time difference from the US). This paper also finds that more traditional metrics such as market cap, ETF price volatility, and even third-party market indicators such as the economic freedom index and the investment freedom index are insignificant predictors of an ETF's standard deviation of premia. These findings differ somewhat from the existing literature, which indicates that these factors should be significant predictors of an ETF's standard deviation of premia.
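A minimal sketch of the final model described above, regressing an ETF's standard deviation of premia on a market category and the time-zone difference from the US; the data frame, values, and variable names below are hypothetical, not the paper's sample.

```python
# Hedged sketch: standard deviation of premia explained by a dummy-coded
# market category plus time difference from the US, fit by OLS.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "premia_sd": [0.12, 0.15, 0.45, 0.52, 0.95, 1.10],  # hypothetical
    "category":  ["Domestic", "Domestic", "Developed",
                  "Developed", "Emerging", "Emerging"],
    "time_diff": [0, 0, 6, 8, 10, 12],                   # hours from US
})

# R-squared here plays the role of the "over 70% explained" figure
# reported in the paper.
model = smf.ols("premia_sd ~ C(category) + time_diff", data=data).fit()
print(model.rsquared)
print(model.params)
```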
Contributors: Henning, Thomas Louis (Co-author) / Zhang, Jingbo (Co-author) / Simonson, Mark (Thesis director) / Wendell, Licon (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Department of Finance (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description

In the last decade, the population of honey bees across the globe has declined sharply, leaving scientists and beekeepers to wonder why. Among all nations, the United States has seen some of the greatest declines over the last 10-plus years. Without a definite explanation, the term Colony Collapse Disorder (CCD) was coined to describe the sudden and sharp decline of honey bee colonies that beekeepers were experiencing. Colony collapses have been rising above expected averages over the years, and during the winter season losses are even more severe than what is normally considered acceptable. Some possible explanations point toward meteorological variables, diseases, and even pesticide usage. Despite the cause of CCD being unknown, thousands of beekeepers have reported their losses, and in the most recent years even the numbers of infected colonies and colonies under certain stressors. Using the data reported to the United States Department of Agriculture (USDA), as well as weather data collected by the National Oceanic and Atmospheric Administration (NOAA) and the National Centers for Environmental Information (NCEI), regression analysis was used to find relationships between stressors in honey bee colonies, meteorological variables, and colony collapses during the winter months. The regression analysis focused on the winter season, or quarter 4 of the year, which includes the months of October, November, and December. In the model, the response variable was the percentage of colonies lost in quarter 4. Through the model, it was concluded that certain weather thresholds and the percentage increase of colonies under certain stressors were related to colony loss.
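A hedged sketch of the winter-loss regression described above, with synthetic observations standing in for the USDA and NOAA/NCEI records; the specific stressor variables named here (Varroa mites, pesticides) and the weather measure are assumptions for illustration.

```python
# Hedged sketch: percentage of colonies lost in Q4 regressed on stressor
# prevalence and a weather variable, fit by OLS on synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 120  # hypothetical state-quarter observations

pct_varroa = rng.uniform(5, 40, n)       # % colonies under Varroa mite stress
pct_pesticide = rng.uniform(0, 15, n)    # % colonies under pesticide stress
days_below_freezing = rng.integers(0, 60, n).astype(float)

q4_loss = (2.0 + 0.30 * pct_varroa + 0.20 * pct_pesticide
           + 0.05 * days_below_freezing + rng.normal(0, 2, n))

X = sm.add_constant(np.column_stack([pct_varroa, pct_pesticide,
                                     days_below_freezing]))
print(sm.OLS(q4_loss, X).fit().summary())
```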
Contributors: Vasquez, Henry Antony (Author) / Zheng, Yi (Thesis director) / Saffell, Erinanne (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description

Electromyography (EMG) and electroencephalography (EEG) are techniques used to detect electrical activity produced by the human body. EMG detects electrical activity in the skeletal muscles, while EEG detects electrical activity from the scalp. The purpose of this study is to capture different types of EMG and EEG signals and to determine whether the signals can be distinguished from one another and processed into output signals to trigger events in prosthetics. Results from the study suggest that power spectral density (PSD) estimates can be used to compare signals with significant differences, such as those from the wrist, scalp, and fingers, but cannot fully distinguish between closely related signals, such as those from two different fingers. The signals that were identified were translated into a physical output simulated on an Arduino circuit.
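A minimal sketch of a PSD comparison in the spirit of the study, using Welch's method from SciPy on synthetic signals; the sampling rate and signal shapes are assumptions, not the recorded EMG/EEG data.

```python
# Sketch: compare where two signals concentrate their spectral power.
# Widely separated spectra are distinguishable; closely related ones
# (e.g. two different fingers) may not be.
import numpy as np
from scipy import signal

fs = 1000  # sampling rate in Hz (assumption)
t = np.arange(0, 5, 1 / fs)
rng = np.random.default_rng(3)

# Synthetic stand-ins: an "EMG-like" broadband signal and an "EEG-like"
# signal dominated by a low-frequency component.
emg_like = rng.normal(0, 1, t.size)
eeg_like = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.normal(0, 1, t.size)

f_emg, psd_emg = signal.welch(emg_like, fs=fs, nperseg=1024)
f_eeg, psd_eeg = signal.welch(eeg_like, fs=fs, nperseg=1024)

print("EMG-like peak frequency:", f_emg[np.argmax(psd_emg)])
print("EEG-like peak frequency:", f_eeg[np.argmax(psd_eeg)])
```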
Contributors: Janis, William Edward (Author) / LaBelle, Jeffrey (Thesis director) / Santello, Marco (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2013-12
Description

Previous research discusses students' difficulties in grasping an operational understanding of covariational reasoning. In this study, I interviewed four undergraduate students in calculus and pre-calculus classes to determine their ways of thinking when working on an animated covariation problem. With previous studies in mind and with the use of technology, I devised an interview method structured around multiple phases of pre-planned support. From these interviews, I gathered information about two main aspects of students' thinking: how students think when attempting to reason covariationally, and which of the identified ways of thinking are most propitious for the development of an understanding of covariational reasoning. I will discuss how, based on interview data, one of the five identified ways of thinking about covariational reasoning is highly propitious, while the other four are somewhat less so.
Contributors: Whitmire, Benjamin James (Author) / Thompson, Patrick (Thesis director) / Musgrave, Stacy (Committee member) / Moore, Kevin C. (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / T. Denny Sanford School of Social and Family Dynamics (Contributor)
Created: 2014-05
Description

This paper begins by discussing the potential uses and challenges of efficient and accurate traffic forecasting. The data we used includes traffic volume from seven locations on a busy Athens street in April and May of 2000; this data was originally used as part of a traffic forecasting competition. Our initial observation was that, due to the volatile and oscillating nature of daily traffic volume, simple linear regression models will not perform well in predicting the time series. For this we present the harmonic time series model. Such a model (assuming all predictors are significant) includes a sinusoidal term for each time index within a period of data. Our assumption is that traffic volumes have a period of one week (which is evidenced by the graphs reproduced in our paper). This leads to a model with 6,720 sine and cosine terms. This is clearly too many coefficients, so in an effort to avoid over-fitting and to keep the model efficient, we apply the subset-selection algorithm known as the adaptive lasso.
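A minimal sketch of the harmonic model with lasso-based subset selection, assuming a weekly period of 15-minute counts; the series is synthetic, only the first 50 harmonics are built rather than the paper's full 6,720 terms, and scikit-learn's LassoCV stands in for the adaptive lasso actually used.

```python
# Sketch: weekly-period sine/cosine features for a synthetic volume series,
# with lasso selecting the harmonics that matter.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(4)
period = 7 * 24 * 4        # one week of 15-minute intervals (assumption)
t = np.arange(4 * period)  # four weeks of observations

# Harmonic features: sine/cosine pairs at the first k weekly harmonics.
k = 50
features = np.column_stack(
    [f(2 * np.pi * h * t / period) for h in range(1, k + 1)
     for f in (np.sin, np.cos)]
)

# Synthetic traffic volume: a few dominant harmonics plus noise.
volume = (100 + 30 * features[:, 0] + 15 * features[:, 2]
          + 5 * features[:, 10] + rng.normal(0, 3, t.size))

lasso = LassoCV(cv=5).fit(features, volume)
kept = np.flatnonzero(lasso.coef_)
print(f"{kept.size} of {features.shape[1]} harmonic terms retained:", kept)
```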
Contributors: Mora, Juan (Author) / Kamarianakis, Ioannis (Thesis director) / Yu, Wanchunzi (Committee member) / W. P. Carey School of Business (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05
Description

Grade inflation in modern universities across the United States has been documented since the 1960s and shows no signs of disappearing soon. Responses to this trend have ranged from mild worry to excessive panic. However, is the concern justified? How significant are the effects, if any, of grade inflation on students? Specifically, does grade inflation at the aggregate level have any effect on how much an individual will learn from their courses? This is precisely the question my project hoped to address. Grade inflation in U.S. colleges has played a central role in student-teacher relationships and the way university classrooms run. Through teacher interviews, student surveys, and a literature review, this paper investigates the nuanced effects grade inflation is having on student motivation and learning. The hypothesis is that the easier it is for students to obtain their desired grade, the less they will engage in and learn from a given course. Major findings of the literature include: grade inflation has robbed grades of their signaling power; grade inflation has helped create students who are too grade-oriented; student evaluations of teaching have prompted higher grades; higher expectations for high grades induce greater study times; and open dialogue can help reverse grade inflation trends. The student surveys and faculty interviews agreed with much of the literature, finding that professors believe grade inflation is real but do not believe its effects are significant, that students admit to being primarily motivated by grades, and that students find grades critically important to their future. The paper concludes that grade inflation is not as detrimental to student outcomes as ardent critics argue and offers practical ways to address it.
Contributors: Gregory, Austin Scott (Author) / Ruediger, Stefan (Thesis director) / Goegan, Brian (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Department of Economics (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05