Matching Items (10)
Description
This thesis evaluates the viability of an original design for a cost-effective wheel-mounted dynamometer for road vehicles. The goal is to show whether or not a device that generates torque and horsepower curves by processing accelerometer data collected at the edge of a wheel can yield results that are comparable to results obtained using a conventional chassis dynamometer. Torque curves were generated via the experimental method under a variety of circumstances and also obtained professionally by a precision engine testing company. Metrics were created to measure the precision of the experimental device's ability to consistently generate torque curves and also to compare the similarity of these curves to the professionally obtained torque curves. The results revealed that although the test device does not quite provide the same level of precision as the professional chassis dynamometer, it does create torque curves that closely resemble the chassis dynamometer torque curves and exhibit a consistency between trials comparable to the professional results, even on rough road surfaces. The results suggest that the test device provides enough accuracy and precision to satisfy the needs of most consumers interested in measuring their vehicle's engine performance but probably lacks the level of accuracy and precision needed to appeal to professionals.
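As a rough illustration of the kind of processing such a device performs, the sketch below estimates wheel torque and horsepower from wheel-edge acceleration and wheel speed; the vehicle mass, wheel radius, and the neglect of drag and drivetrain losses are illustrative assumptions, not the thesis's actual method.

```python
import numpy as np

def torque_and_power(tangential_accel_mps2, wheel_rpm,
                     vehicle_mass_kg=1500.0, wheel_radius_m=0.31):
    """Estimate wheel torque and power from wheel-edge accelerometer data.

    Assumptions (illustrative, not from the thesis): tractive force is
    approximated as vehicle mass times longitudinal acceleration, taken
    equal to the tangential acceleration measured at the wheel edge;
    drag and drivetrain losses are ignored.
    """
    force_n = vehicle_mass_kg * np.asarray(tangential_accel_mps2)
    torque_nm = force_n * wheel_radius_m                  # torque at the wheel
    omega_rad_s = np.asarray(wheel_rpm) * 2.0 * np.pi / 60.0
    power_w = torque_nm * omega_rad_s                     # P = tau * omega
    horsepower = power_w / 745.7                          # mechanical horsepower
    return torque_nm, horsepower

# Example: a short acceleration run sampled from the accelerometer
accel = [2.1, 2.4, 2.6, 2.5, 2.2]        # m/s^2 at the wheel edge
rpm   = [850, 920, 1010, 1090, 1160]     # wheel speed from the same sensor
torque, hp = torque_and_power(accel, rpm)
```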
Contributors: King, Michael (Author) / Ren, Yi (Thesis director) / Spanias, Andreas (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
A model has been developed to modify Euler-Bernoulli beam theory for wooden beams, using visible properties of wood knot-defects. Treating knots in a beam as a system of two ellipses that change the local bending stiffness has been shown to improve the fit of a theoretical beam displacement function to edge-line deflection data extracted from digital imagery of experimentally loaded beams. In addition, an Ellipse Logistic Model (ELM) has been proposed, using L1-regularized logistic regression, to predict the impact of a knot on the displacement of a beam. By classifying a knot as severely positive or negative versus mildly positive or negative, ELM can flag knots that lead to large changes in beam deflection while not over-emphasizing knots that may not be a problem. Using ELM with a regression-fit Young's modulus on three-point bending of Douglas fir, it is possible to estimate the effects a knot will have on the shape of the resulting displacement curve.
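A minimal sketch of the L1-regularized logistic regression step behind an ELM-style classifier, using scikit-learn; the knot features and labels here are hypothetical placeholders, not the thesis's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical knot features: ellipse semi-axes (mm), distance from the
# neutral axis (cm), and distance from the nearest support (mm).
X = np.array([[12.0, 7.0, 0.4, 150.0],
              [ 5.0, 3.0, 1.8, 620.0],
              [18.0, 9.0, 0.2,  90.0],
              [ 4.0, 2.5, 2.1, 700.0]])
# Labels: 1 = knot with a severe effect on deflection, 0 = mild effect.
y = np.array([1, 0, 1, 0])

# The L1 penalty drives coefficients of uninformative features to zero,
# so the classifier does not over-emphasize knots that are not a problem.
elm = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
elm.fit(X, y)

print(elm.coef_)                     # sparse feature weights
print(elm.predict_proba(X)[:, 1])    # probability each knot is severe
```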
Created: 2015-05
Description
We model communication among social insects as an interacting particle system in which individuals perform one of two tasks and neighboring sites anti-mimic one another. The parameters of our model are a probability of defection lying in (0, 1) and a relative cost c_i > 0 to the individual performing task i. We examine this process on complete graphs, bipartite graphs, and the integers, answering questions about the relationship between communication, defection rates, and the division of labor. Assuming the division of labor is ideal when exactly half of the colony is performing each task, we find that on some bipartite graphs and the integers it can eventually be made arbitrarily close to optimal if defection rates are sufficiently small. On complete graphs the fraction of individuals performing each task is also closest to one half when there is no defection, but is bounded by a constant dependent on the relative costs of each task.
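A toy discrete-time simulation of anti-mimicking dynamics on the complete graph, under assumed update rules (the thesis's continuous-time process and cost parameters are simplified away); it only illustrates why small defection rates keep the task split near one half.

```python
import random

def simulate(n=100, defection_prob=0.05, steps=10_000, seed=0):
    """Toy anti-mimicking dynamics on the complete graph.

    Assumed rules (illustrative): at each step a random individual observes
    a random other individual; with probability 1 - defection_prob it adopts
    the task opposite to the observed one, otherwise it defects and picks a
    task uniformly at random.
    """
    rng = random.Random(seed)
    tasks = [rng.randint(0, 1) for _ in range(n)]   # task 0 or task 1
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)              # focal individual i, neighbor j
        if rng.random() < defection_prob:
            tasks[i] = rng.randint(0, 1)            # defection: random task
        else:
            tasks[i] = 1 - tasks[j]                 # anti-mimic the neighbor
    return sum(tasks) / n                           # fraction performing task 1

# With small defection rates the fraction stays close to one half.
print(simulate(defection_prob=0.01))
```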
Contributors: Arcuri, Alesandro Antonio (Author) / Lanchier, Nicolas (Thesis director) / Kang, Yun (Committee member) / Fewell, Jennifer (Committee member) / Barrett, The Honors College (Contributor) / School of International Letters and Cultures (Contributor) / Economics Program in CLAS (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2015-05
Description
Serge Galam's voting systems and public debate models are used to model voting behaviors of two competing opinions in democratic societies. Galam assumes that individuals in the population are independently in favor of one opinion with a fixed probability p, making the initial number of that type of opinion a binomial random variable. This analysis revisits Galam's models from the point of view of the hypergeometric random variable by assuming the initial number of individuals in favor of an opinion is a fixed deterministic number. This assumption is more realistic, especially when analyzing small populations. Evolution of the models is based on majority rules, with a bias introduced when there is a tie. For the hierarchical voting system model, in order to derive the probability that opinion +1 would win, the analysis was done by reversing time and assuming that an individual in favor of opinion +1 wins. Then, working backwards, we counted the number of configurations at the next lowest level that could induce each possible configuration at the level above, and continued this process until reaching the bottom level, i.e., the initial population. Using this method, we were able to derive an explicit formula for the probability that an individual in favor of opinion +1 wins given any initial count of that opinion, for any group size greater than or equal to three. For the public debate model, we counted the total number of individuals in favor of opinion +1 at each time step and used this variable to define a random walk. Then, we used first-step analysis to derive an explicit formula for the probability that an individual in favor of opinion +1 wins given any initial count of that opinion for groups of size three. The spatial public debate model evolves based on the proportional rule. For the spatial model, the most natural graphical representation of the process results in a model that is not mathematically tractable. Thus, we defined a different graphical representation that is mathematically equivalent to the first, but in which it is possible to define a dual process that is mathematically tractable. Using this graphical representation, we prove clustering in 1D and 2D and coexistence in higher dimensions, following the same approach as for the voter model interacting particle system.
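A small sketch of the hierarchical majority-rule recursion in the classical binomial setting (the thesis instead fixes a deterministic initial count, i.e., the hypergeometric setting); the group size, number of levels, and tie bias below are illustrative.

```python
from math import comb

def win_probability(p, group_size=4, levels=5, tie_bias=1.0):
    """Probability that opinion +1 wins after several hierarchical levels.

    Binomial version of Galam's model (illustrative): each group of
    `group_size` sends up a +1 representative if +1 holds a strict
    majority, and with probability `tie_bias` in case of a tie.
    p is the initial fraction of individuals holding opinion +1.
    """
    for _ in range(levels):
        q = 0.0
        for k in range(group_size + 1):
            prob_k = comb(group_size, k) * p**k * (1 - p)**(group_size - k)
            if 2 * k > group_size:          # strict majority for +1
                q += prob_k
            elif 2 * k == group_size:       # tie: broken with the given bias
                q += tie_bias * prob_k
        p = q
    return p

# With a tie bias favoring +1, even a minority opinion can win at the top.
print(win_probability(p=0.45, group_size=4, levels=6, tie_bias=1.0))
```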
Contributors: Taylor, Nicole Robyn (Co-author) / Lanchier, Nicolas (Co-author, Thesis director) / Smith, Hal (Committee member) / Hurlbert, Glenn (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2013-05
Description
Cryptocurrencies have become one of the most fascinating forms of currency and economics due to their fluctuating values and lack of centralization. This project attempts to use machine learning methods to effectively model in-sample data for Bitcoin and Ethereum using rule induction methods. The dataset is cleaned by removing entries with missing data. A new column is created to measure the price difference, allowing a more accurate analysis of the change in price. Eight relevant variables are selected using cross validation: the total number of bitcoins, the total size of the blockchains, the hash rate, mining difficulty, revenue from mining, transaction fees, the cost of transactions, and the estimated transaction volume. The in-sample data is modeled using a simple tree fit, first with one variable and then with eight. Using all eight variables, the in-sample model and data have a correlation of 0.6822657. The in-sample model is improved by first applying bootstrap aggregation (also known as bagging) to fit 400 decision trees to the in-sample data using one variable. Then the random forests technique is applied to the data using all eight variables, resulting in a correlation between the model and data of 0.9443413. The random forests technique is then applied to an Ethereum dataset, resulting in a correlation of 0.6904798. Finally, an out-of-sample model is created for Bitcoin and Ethereum using random forests, with a benchmark correlation of 0.03 for financial data. The correlation between the training model and the testing data for Bitcoin was 0.06957639, while for Ethereum the correlation was -0.171125. In conclusion, it is confirmed that cryptocurrencies can have accurate in-sample models by applying the random forests method to a dataset. Out-of-sample modeling is more difficult, but in some cases performs better than for typical financial data. It should also be noted that cryptocurrency data has similar properties to other related financial datasets, suggesting future potential for modeling cryptocurrency systems within the financial world.
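A hedged sketch of the random-forests step described above using scikit-learn; the file name and column names are placeholders mirroring the eight variables listed, not the project's actual data pipeline.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Illustrative: assume a cleaned dataset with the eight selected variables
# and the engineered price-difference column described in the abstract.
features = ["total_bitcoins", "blockchain_size", "hash_rate", "difficulty",
            "miners_revenue", "transaction_fees", "cost_per_transaction",
            "estimated_transaction_volume"]

df = pd.read_csv("bitcoin_clean.csv")          # hypothetical cleaned file
X, y = df[features], df["price_difference"]

model = RandomForestRegressor(n_estimators=400, random_state=0)
model.fit(X, y)

# In-sample correlation between model output and observed price difference.
in_sample_corr = np.corrcoef(model.predict(X), y)[0, 1]
print(in_sample_corr)
```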
Contributors: Browning, Jacob Christian (Author) / Meuth, Ryan (Thesis director) / Jones, Donald (Committee member) / McCulloch, Robert (Committee member) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
We study two models of a competitive game in which players continuously receive points and wager them in one-on-one battles. In each model the loser of a battle has their points reset, while what the winner receives is what sets the two models apart. In the knockout model the winner receives no new points, while in the winner-takes-all model the points that the loser had are added to the winner's total. Recurrence properties are assessed for both models: the knockout model is recurrent except for the all-zero state, and the winner-takes-all model is transient but retains some aspect of recurrence. In addition, we study the population-level allocation of points; for the winner-takes-all model we show explicitly that the proportion of individuals holding any given number j of points, j = 0, 1, ..., approaches a stationary distribution that can be computed recursively. Graphs of numerical simulations are included to illustrate the results proved.
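A toy simulation of the winner-takes-all variant under assumed battle rules (uniformly random opponents, fair-coin winners); the thesis derives the stationary distribution recursively, so this sketch only illustrates estimating the point distribution empirically.

```python
import random
from collections import Counter

def winner_takes_all(n=1000, steps=200_000, seed=0):
    """Toy winner-takes-all dynamics (assumed rules, for illustration only):
    each step one random player gains a point, then two random players
    battle; a fair coin picks the winner, who absorbs the loser's points,
    while the loser's points are reset to zero."""
    rng = random.Random(seed)
    points = [0] * n
    for _ in range(steps):
        points[rng.randrange(n)] += 1          # points arrive continuously
        i, j = rng.sample(range(n), 2)         # one-on-one battle
        winner, loser = (i, j) if rng.random() < 0.5 else (j, i)
        points[winner] += points[loser]        # winner takes all
        points[loser] = 0                      # loser's points are reset
    return Counter(points)

# Empirical proportion of players holding j points, j = 0, 1, ...
dist = winner_takes_all()
print({j: dist[j] / 1000 for j in sorted(dist)[:5]})
```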
Contributors: VanKirk, Maxwell Joshua (Author) / Lanchier, Nicolas (Thesis director) / Foxall, Eric (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2016-12
Description
The Axelrod Model is an agent-based adaptive model that shows the effects of a mechanism of convergent social influence. Do local convergences generate global polarization? Will it be possible for all differences between individuals in a population comprised of neighbors to disappear? There are many mechanisms with which to approach this issue; the Axelrod Model is one of them.
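For readers unfamiliar with the mechanism, here is a minimal sketch of one standard Axelrod update rule on a ring; the number of features, traits, and sites are arbitrary choices, not the thesis's configuration.

```python
import random

def axelrod(width=10, features=5, traits=10, steps=100_000, seed=0):
    """One standard formulation of the Axelrod model on a ring:
    an agent interacts with a neighbor with probability equal to their
    cultural similarity, and on interaction copies one differing feature."""
    rng = random.Random(seed)
    culture = [[rng.randrange(traits) for _ in range(features)]
               for _ in range(width)]
    for _ in range(steps):
        i = rng.randrange(width)
        j = (i + rng.choice([-1, 1])) % width            # a ring neighbor
        shared = [f for f in range(features) if culture[i][f] == culture[j][f]]
        similarity = len(shared) / features
        if 0 < similarity < 1 and rng.random() < similarity:
            f = rng.choice([f for f in range(features)
                            if culture[i][f] != culture[j][f]])
            culture[i][f] = culture[j][f]                # convergent influence
    # Number of distinct cultures left: 1 means consensus, >1 means polarization.
    return len({tuple(c) for c in culture})

print(axelrod())
```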
Contributors: Yu, Yili (Author) / Lanchier, Nicolas (Thesis director) / Kang, Yun (Committee member) / Brooks, Dan (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Finance (Contributor)
Created: 2013-05
Description
In materials science, developing GeSn alloys is a major current research interest concerning the production of efficient Group-IV photonics. These alloys are particularly interesting because the development of next-generation semiconductors for ultrafast (terahertz) optoelectronic communication devices could be accomplished by integrating these novel alloys with industry-standard silicon technology. Unfortunately, incorporating a maximal amount of Sn into a Ge lattice has been difficult to achieve experimentally. At ambient conditions, pure Ge and Sn adopt cubic (α) and tetragonal (β) structures, respectively; however, to date the relative stability and structure of α- and β-phase GeSn alloys versus percent Sn composition have not been thoroughly studied. In this research project, computational tools were used to perform state-of-the-art predictive quantum simulations to study the structural, bonding, and energetic trends in GeSn alloys in detail over a range of experimentally accessible compositions. Since recent X-ray and vibrational studies have raised some controversy about the nanostructure of GeSn alloys, the investigation was conducted with ordered, random, and clustered alloy models.
By means of optimized geometry analysis, pure Ge and Sn were found to adopt the α and β structures, respectively, as observed experimentally. For all theoretical alloys, the corresponding α-phase structure was found to have the lowest energy for Sn compositions up to 90%. However, at 50% Sn, the corresponding β-alloy energies are predicted to be only ~70 meV higher. The formation energy of the α-phase alloys was found to be positive for all compositions, whereas only two β-phase formation energies were negative. Bond length distributions were analyzed, and their dependence on Sn incorporation was found, perhaps surprisingly, not to be directly correlated with cell volume. It is anticipated that the data collected in this project may help to elucidate the observed complex vibrational properties in these systems.
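A small sketch of the formation-energy bookkeeping implied above, assuming per-atom total energies from the relaxed structures are already available; the function and numbers are illustrative, not results from this project.

```python
def formation_energy_per_atom(e_alloy, x_sn, e_ge, e_sn):
    """Formation energy of a Ge(1-x)Sn(x) alloy relative to the pure
    elemental references, all energies given per atom (illustrative):

        E_f = E(alloy) - [(1 - x) * E(Ge) + x * E(Sn)]

    A positive E_f means the alloy is less stable than phase-separated
    Ge and Sn at that composition.
    """
    return e_alloy - ((1.0 - x_sn) * e_ge + x_sn * e_sn)

# Hypothetical per-atom total energies (eV/atom) from relaxed structures.
e_ge_alpha, e_sn_beta = -4.52, -3.85
print(formation_energy_per_atom(e_alloy=-4.17, x_sn=0.50,
                                e_ge=e_ge_alpha, e_sn=e_sn_beta))
```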
Contributors: Liberman-Martin, Zoe Elise (Author) / Chizmeshya, Andrew (Thesis director) / Sayres, Scott (Committee member) / Wolf, George (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / School of Molecular Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
This paper proposes that voter decision making is determined by more than just the policy positions adopted by the candidates in the election, as proposed by Anthony Downs (1957). Using a vector-valued voting model proposed by William Foster (2014), voter behavior can be described by a mathematical model. Voters assign scores to candidates based on both policy and non-policy considerations, then decide which candidate to support based on which has the higher candidate score. The traditional assumption that most of the population will vote is replaced by a function describing the probability of voting based on the candidate scores assigned by individual voters. If a voter's likelihood of voting is not certain, but rather modeled by a sigmoid curve, this has radical implications for party decisions and actions taken during an election cycle. The model also includes a significant interaction term between the candidate scores and the differential between the scores, which enhances the Downsian model. The thesis is presented in a manner similar to Downs' original presentation, including several allegorical and hypothetical examples of the model in action. The results of the model reveal that single-issue voters can have a significant impact on election outcomes, and that the weight of non-policy considerations is high enough that political parties would spend large sums of money on campaigning. Future research will include creating an experiment to verify the interaction terms, as well as adjusting the model for individual costs so that more empirical analysis may be completed.
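A hedged sketch of the kind of turnout rule described above: candidate scores feed a sigmoid probability of voting with an interaction between the scores and their differential; the functional form and weights are assumptions, not Foster's or the thesis's estimated specification.

```python
import math

def vote_probability(score_a, score_b, w_level=0.3, w_diff=1.2,
                     w_interact=0.05, intercept=-2.0):
    """Illustrative turnout model: the probability of voting is a sigmoid of
    the candidates' scores, their differential, and an interaction between
    the two (the weights are placeholders, not estimated values)."""
    level = score_a + score_b                   # overall enthusiasm
    diff = abs(score_a - score_b)               # how much the choice matters
    z = intercept + w_level * level + w_diff * diff + w_interact * level * diff
    return 1.0 / (1.0 + math.exp(-z))           # sigmoid turnout probability

def choice(score_a, score_b):
    """The voter supports whichever candidate received the higher score."""
    return "A" if score_a > score_b else "B"

# A voter who scores the candidates nearly equally is unlikely to turn out.
print(vote_probability(2.0, 1.9), choice(2.0, 1.9))
print(vote_probability(3.5, 0.5), choice(3.5, 0.5))
```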
Contributors: Coulter, Jarod Maxwell (Author) / Foster, William (Thesis director) / Goegan, Brian (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Department of Economics (Contributor) / Dean, W.P. Carey School of Business (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
Non-Destructive Testing (NDT) is integral to preserving the structural health of materials. Techniques that fall under the NDT category are able to evaluate the integrity and condition of a material without permanently altering any of its properties. Additionally, they can typically be used while the material is in active use, instead of requiring downtime for inspection.
The two general categories of structural health monitoring (SHM) systems are passive and active monitoring. Active SHM systems utilize an input of energy to monitor the health of a structure (such as sound waves in ultrasonics), while passive systems do not. As such, passive SHM tends to be more desirable. A system could be permanently fixed to a critical location, passively accepting signals until it records a damage event, and then localize and characterize the damage. This is the goal of acoustic emissions testing.
When certain types of damage occur, such as matrix cracking or delamination in composites, the corresponding release of energy creates sound waves, or acoustic emissions, that propagate through the material. Acoustic sensors fixed to the surface can pick up data from both the time and frequency domains of the wave. With proper data analysis, a time of arrival (TOA) can be calculated for each sensor, allowing for localization of the damage event. The frequency data can be used to characterize the damage.
In traditional acoustic emissions testing, the TOA combined with the wave velocity and information about signal attenuation in the material is used to localize events. However, in instances of complex geometries or anisotropic materials (such as carbon fibre composites), velocity and attenuation can vary wildly with the direction of interest. In these cases, localization can instead be based on the differences in time of arrival for each sensor pair. This technique is called Delta T mapping, and is the main focus of this study.
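A simplified sketch of the Delta T mapping idea: expected arrival-time differences per sensor pair are assumed to have been mapped over a grid from training events, and a new event is placed at the grid point whose stored differences best match the measured ones; all names and numbers are illustrative.

```python
def locate_event(measured_dt, delta_t_map):
    """Return the grid point whose trained arrival-time differences best
    match the measured ones (least-squares over sensor pairs).

    measured_dt: dict mapping sensor pair -> measured time-of-arrival
                 difference for the new acoustic emission event.
    delta_t_map: dict mapping grid point (x, y) -> dict of sensor pair ->
                 expected difference, assumed built from training events.
    """
    def misfit(point):
        expected = delta_t_map[point]
        return sum((measured_dt[pair] - expected[pair]) ** 2
                   for pair in measured_dt)
    return min(delta_t_map, key=misfit)

# Illustrative map with two grid points and three sensor pairs (microseconds).
delta_t_map = {
    (10, 20): {("S1", "S2"): 4.0, ("S1", "S3"): -2.0, ("S2", "S3"): -6.0},
    (30, 40): {("S1", "S2"): -1.5, ("S1", "S3"): 3.0, ("S2", "S3"): 4.5},
}
measured = {("S1", "S2"): 3.6, ("S1", "S3"): -1.8, ("S2", "S3"): -5.9}
print(locate_event(measured, delta_t_map))   # -> (10, 20)
```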
Contributors: Briggs, Nathaniel (Author) / Chattopadhyay, Aditi (Thesis director) / Papandreou-Suppappola, Antonia (Committee member) / Skinner, Travis (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05