Description
Covering subsequences with sets of permutations arises in many applications, including event-sequence testing. Given a set of subsequences to cover, one is often interested in knowing the minimum number of permutations required to cover every subsequence, and in finding an explicit construction of such a set of permutations whose size is close to or equal to the minimum possible. The construction of such permutation coverings has proven to be computationally difficult. While many examples for permutations of small length have been found, and strong asymptotic behavior is known, there are few explicit constructions for permutations of intermediate lengths. Most of these are generated from scratch using greedy algorithms. We explore a different approach here. Starting with a set of permutations with the desired coverage properties, we compute local changes to individual permutations that retain the total coverage of the set. By choosing these local changes so as to make one permutation less "essential" in maintaining the coverage of the set, our method attempts to make a permutation completely non-essential, so it can be removed without sacrificing total coverage. We develop a post-optimization method to do this and present results on sequence covering arrays and other types of permutation covering problems that demonstrate it is surprisingly effective.
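
As a concrete illustration of the coverage property involved, the following sketch (not from the thesis) checks by brute force whether a set of permutations covers every ordering of every t-subset of events; the event labels and example permutations are arbitrary assumptions.

```python
from itertools import combinations, permutations as all_orderings

def covers(perm, subseq):
    """Return True if the events of `subseq` appear in `perm` in the given order."""
    pos = {event: i for i, event in enumerate(perm)}
    indices = [pos[e] for e in subseq]
    return indices == sorted(indices)

def is_sequence_covering_array(perms, n_events, t):
    """Check that every ordering of every t-subset of events is covered by some permutation."""
    for subset in combinations(range(n_events), t):
        for subseq in all_orderings(subset):
            if not any(covers(p, subseq) for p in perms):
                return False
    return True

# A permutation together with its reverse covers every 2-event subsequence (t = 2).
p = (0, 1, 2, 3, 4)
print(is_sequence_covering_array([p, p[::-1]], n_events=5, t=2))  # True
print(is_sequence_covering_array([p], n_events=5, t=2))           # False
```

A post-optimization step of the kind described above would repeatedly apply local changes and re-run a check like this to confirm that total coverage is preserved.
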
Contributors: Murray, Patrick Charles (Author) / Colbourn, Charles (Thesis director) / Czygrinow, Andrzej (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Physics (Contributor)
Created: 2014-12

Description
Bots tamper with social media networks by artificially inflating the popularity of certain topics. In this paper, we define what a bot is, detail different motivations for bots, describe previous work in bot detection and observation, and then perform bot detection of our own. For our bot detection, we are interested in bots on Twitter that tweet Arabic extremist-like phrases. A testing dataset is collected using the honeypot method, and five different heuristics are measured for their effectiveness in detecting bots. The model underperformed, but we have laid the groundwork for a largely untapped focus in bot detection: the diffusion of extremist ideals through bots.
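
To make the heuristic approach concrete, here is a minimal sketch of heuristic-based bot scoring; the specific heuristics, thresholds, and account fields are illustrative assumptions, not the five heuristics evaluated in the thesis.

```python
def bot_score(account):
    """Count how many simple bot-like heuristics an account trips."""
    heuristics = [
        account["tweets_per_day"] > 50,           # unusually high posting rate
        account["followers"] < 10,                # very few followers
        account["duplicate_tweet_ratio"] > 0.5,   # mostly repeated content
        account["default_profile_image"],         # never customized the profile
        account["account_age_days"] < 30,         # recently created
    ]
    return sum(heuristics)

suspect = {
    "tweets_per_day": 120, "followers": 3, "duplicate_tweet_ratio": 0.8,
    "default_profile_image": True, "account_age_days": 12,
}
print("likely bot" if bot_score(suspect) >= 3 else "likely human")
```
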
Contributors: Karlsrud, Mark C. (Author) / Liu, Huan (Thesis director) / Morstatter, Fred (Committee member) / Barrett, The Honors College (Contributor) / Computing and Informatics Program (Contributor) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2015-05

Description
The OMFIT (One Modeling Framework for Integrated Tasks) modeling environment and the BRAINFUSE module have been deployed on the PPPL (Princeton Plasma Physics Laboratory) computing cluster, with modifications that make it possible to apply artificial neural networks (NNs) to the TRANSP databases for the JET (Joint European Torus), TFTR (Tokamak Fusion Test Reactor), and NSTX (National Spherical Torus Experiment) devices. This development has facilitated the investigation of NNs for predicting heat transport profiles in JET, TFTR, and NSTX, and has prompted additional investigations into how else NNs may be of use to scientists at PPPL. In applying NNs to these devices for heat transport prediction, the primary goal is to reproduce the success shown by Meneghini et al. in using NNs for heat transport prediction in DIII-D. Reproducing those results is important because it would give scientists at PPPL a quick and efficient toolset for reliably predicting heat transport profiles much faster than existing computational methods allow. The progress toward this goal is outlined in this report, and potential additional applications of the NN framework are presented.
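
For readers unfamiliar with the setup, the sketch below shows the general kind of regression involved, mapping a few scalar plasma parameters to a radial profile with a small neural network; the inputs, profile shape, and synthetic data are assumptions for illustration and do not reflect the TRANSP databases or the BRAINFUSE implementation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_samples, n_inputs, n_radial_points = 500, 4, 20

X = rng.uniform(size=(n_samples, n_inputs))        # e.g. density, heating power, current, field
radius = np.linspace(0, 1, n_radial_points)
# Synthetic "profiles" that depend smoothly on the inputs.
y = X[:, :1] * np.exp(-radius[None, :] / (0.2 + 0.5 * X[:, 1:2]))

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X[:400], y[:400])
print("held-out R^2:", model.score(X[400:], y[400:]))
```
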
Contributors: Luna, Christopher Joseph (Author) / Tang, Wenbo (Thesis director) / Treacy, Michael (Committee member) / Meneghini, Orso (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Physics (Contributor)
Created: 2015-05

Description
In this paper, I analyze representations of nature in popular film, using the feminist/deconstructionist concept of a dualism to structure my critique. Using Val Plumwood's analysis of the logical structure of dualism and the five 'features of a dualism' that she identifies, I critique five popular movies – Star Wars, Lord of the Rings, Brave, Grizzly Man, and Planet Earth – by locating within each of them one of the five features and explaining how the movie functions to reinforce the Nature/Culture dualism. By showing how the Nature/Culture dualism shapes and is shaped by popular cinema, I show how "Nature" is a social construct, created as part of this very dualism and reified through popular culture. I conclude by introducing a number of 'subversive' pieces of visual art that undermine and actively deconstruct the Nature/Culture dualism and show the viewer a more honest presentation of the non-human world.
Contributors: Barton, Christopher Joseph (Author) / Broglio, Ron (Thesis director) / Minteer, Ben (Committee member) / Barrett, The Honors College (Contributor) / School of Sustainability (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / School of Geographical Sciences and Urban Planning (Contributor)
Created: 2015-05

Description
Twitter, the microblogging platform, has grown in prominence to the point that the topics that trend on the network are often the subject of the news and other traditional media. By predicting trends on Twitter, it could be possible to predict the next major topic of interest to the public. With this motivation, this paper develops a model for trends that leverages previous work with k-nearest-neighbors and dynamic time warping. The development of this model provides insight into the length and features of trends, and the model successfully generalizes to identify 74.3% of trends in the time period of interest. It also helps explain why particular words trend on Twitter.
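
The following minimal sketch pairs dynamic time warping with a nearest-neighbor rule, the combination the model builds on; the toy activity series and their labels are illustrative assumptions, not the features used in the paper.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance between two series."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def knn_predict(series, labeled):
    """Label a new activity series by its nearest labeled neighbor under DTW."""
    return min(labeled, key=lambda item: dtw_distance(series, item[0]))[1]

labeled = [
    ([1, 1, 2, 5, 12, 30, 80], "trend"),       # rapid growth in mentions
    ([4, 5, 4, 6, 5, 4, 5], "non-trend"),      # flat background chatter
]
print(knn_predict([2, 2, 3, 7, 15, 40, 90], labeled))
```
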
Contributors: Marshall, Grant A. (Author) / Liu, Huan (Thesis director) / Morstatter, Fred (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2015-05

Description
A model has been developed to modify Euler-Bernoulli beam theory for wooden beams, using visible properties of wood knot-defects. Treating knots in a beam as a system of two ellipses that change the local bending stiffness has been shown to improve the fit of a theoretical beam displacement function to edge-line deflection data extracted from digital imagery of experimentally loaded beams. In addition, an Ellipse Logistic Model (ELM) has been proposed, using L1-regularized logistic regression, to predict the impact of a knot on the displacement of a beam. By classifying a knot as severely positive or negative versus mildly positive or negative, the ELM can flag knots that lead to large changes in beam deflection without over-emphasizing knots that may not be a problem. Using the ELM with a regression-fit Young's modulus on three-point bending of Douglas fir, it is possible to estimate the effects a knot will have on the shape of the resulting displacement curve.
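
The sketch below shows an L1-regularized logistic classifier in the spirit of the proposed ELM; the knot features (ellipse axes and position) and the synthetic labels are assumptions for illustration, not the thesis's data or feature set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 200
# Hypothetical knot descriptors: major axis, minor axis, distance from the loaded edge.
X = rng.uniform(size=(n, 3))
# Synthetic rule: large knots near the edge are labeled "severe" (1), others "mild" (0).
y = ((X[:, 0] > 0.5) & (X[:, 2] < 0.4)).astype(int)

clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
clf.fit(X, y)
print("coefficients (L1 drives uninformative ones toward zero):", clf.coef_)
print("predicted class for a small, centered knot:", clf.predict([[0.1, 0.1, 0.9]]))
```

The L1 penalty is what makes the approach attractive here: it zeroes out knot features that do not help separate severe from mild knots.
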
Created: 2015-05

Description
From 2007 to 2017, the state of California experienced two major droughts that required significant governmental action to decrease urban water demand. The purpose of this project is to isolate and explore the effects of these policy changes on water use during and after these droughts, and to see how these policies interact with hydroclimatic variability. As explanatory variables in multiple linear regression (MLR) models, water use policies were found to be significant at both the zip code and city levels. Policies that specifically target behavioral changes were significant mathematical drivers of water use in city-level models. Policy data was aggregated into a timeline and coded based on categories including user type, whether the policy was voluntary or mandatory, the targeted water use type, and whether the change concerns active or passive conservation. The analyzed policies include, but are not limited to, state drought declarations, regulatory municipal ordinances, and incentive programs for household appliances. Spatial averages of available hydroclimatic data were computed and validated using inverse distance weighting methods, then aggregated at the zip code level to be comparable to the available water use data for use in MLR models. Factors already known to affect water use, such as temperature, precipitation, income, and water stress (measured by the Water Supply Stress Index, WaSSI), were brought into the MLR models as explanatory variables. After controlling for these factors, the coded timeline policies were brought into the model to test their effect on water demand during the years 2000-2017. Clearly identifying which policy traits are effective will inform future policymaking in cities aiming to conserve water. The findings suggest that drought-related policies affect per capita urban water use. The results of the city-level MLR models indicate that implementing mandatory policies that target water use behaviors effectively reduces water use. Temperature, income, unemployment, and the WaSSI were also observed to be mathematical drivers of water use. Interaction effects between policies and the WaSSI were statistically significant at both model scales.
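
The following sketch shows the general form of such an MLR model, with a coded policy indicator alongside hydroclimatic and economic covariates; the variable names and synthetic data are assumptions for illustration and do not reflect the project's actual dataset or coefficients.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 216  # e.g. 18 years of monthly observations for one city
df = pd.DataFrame({
    "temperature": rng.normal(22, 6, n),
    "precipitation": rng.gamma(2.0, 10.0, n),
    "income": rng.normal(60, 10, n),
    "mandatory_behavioral_policy": rng.integers(0, 2, n),  # 0/1 coded from the policy timeline
})
# Synthetic per-capita water use that responds negatively to the policy indicator.
df["water_use"] = (120 + 1.5 * df["temperature"] - 0.2 * df["precipitation"]
                   - 8.0 * df["mandatory_behavioral_policy"] + rng.normal(0, 5, n))

X = sm.add_constant(df[["temperature", "precipitation", "income", "mandatory_behavioral_policy"]])
model = sm.OLS(df["water_use"], X).fit()
print(model.summary().tables[1])  # coefficient table with p-values
```
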
Contributors: Hjelmstad, Annika Margaret (Author) / Garcia, Margaret (Thesis director) / Larson, Kelli (Committee member) / Civil, Environmental and Sustainable Eng Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-12

Description
As society's energy crisis grows more imminent, many industries and niches are seeking new, sustainable, and renewable sources of electricity production. Like solar, wind, and tidal energy, kinetic energy has the potential to serve as a renewable source of electricity generation. While stationary bicycles can generate small amounts of electricity, the idea behind this project was to expand energy generation into the more common weight-lifting side of exercising. The approach was to find the average amount of power generated per user on a Smith machine and determine how much power was available from an accompanying energy generator. The generator consists of three stages: a copper coil and magnet generator, a full-wave bridge rectifier circuit, and a rheostat; working together, these three stages form a fully functioning, controllable generator. The resulting issue with the kinetic energy generator was that the system was too inefficient to be viable for electricity generation: its electrical production saved only about 2 cents per year based on current Arizona electricity rates. In the end it was determined that the project was not a sustainable energy generation system and did not warrant further experimentation.
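
A back-of-the-envelope calculation shows why the savings are so small; the per-lift energy, usage rate, conversion efficiency, and electricity price below are rough assumptions, not the measured values from the experiment.

```python
joules_per_lift = 150          # ~30 kg raised ~0.5 m per repetition, before conversion losses
lifts_per_day = 100
efficiency = 0.05              # low conversion efficiency of the coil-and-magnet generator
rate_per_kwh = 0.12            # rough Arizona residential rate, USD

joules_per_year = joules_per_lift * lifts_per_day * 365 * efficiency
kwh_per_year = joules_per_year / 3.6e6
print(f"~{kwh_per_year:.2f} kWh/yr, worth about ${kwh_per_year * rate_per_kwh:.2f}")
```

With these rough numbers the annual value lands around a cent, the same order of magnitude the experiment reports.
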
Contributors: O'Halloran, Ryan James (Author) / Middleton, James (Thesis director) / Hinrichs, Richard (Committee member) / Barrett, The Honors College (Contributor) / Mechanical and Aerospace Engineering Program (Contributor) / The Design School (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2014-05

Description
In many systems, it is difficult or impossible to measure the phase of a signal. Direct recovery from magnitude is an ill-posed problem. Nevertheless, with a sufficiently large set of magnitude measurements, it is often possible to reconstruct the original signal using algorithms that implicitly impose regularization conditions on this ill-posed problem. Two such algorithms were examined: alternating projections, which uses iterative Fourier transforms with manipulations performed in each domain on every iteration, and phase lifting, which converts the problem to one of trace minimization, allowing convex optimization algorithms to perform the signal recovery. These recovery algorithms were compared on the basis of robustness as a function of signal-to-noise ratio. A second problem examined was that of unimodular polyphase radar waveform design. Under a finite signal energy constraint, the maximal energy return of a scene operator is obtained by transmitting the eigenvector of the scene Gramian associated with the largest eigenvalue. It is shown that if the problem is instead considered under a power constraint, a unimodular signal can be constructed, starting from such an eigenvector, that will have a greater return.
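
The following sketch illustrates the alternating-projections idea on a toy 1-D signal: the measured Fourier magnitude is enforced in one domain and a known-support, non-negativity constraint in the other. The signal, constraints, and iteration count are illustrative assumptions, not the thesis's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(3)
n, support = 64, 16
x_true = np.zeros(n)
x_true[:support] = rng.uniform(size=support)        # nonzero only on a known support
measured_mag = np.abs(np.fft.fft(x_true))           # only magnitudes are observed

x = rng.uniform(size=n)                             # random initial guess
for _ in range(2000):
    X = np.fft.fft(x)
    X = measured_mag * np.exp(1j * np.angle(X))     # impose the measured magnitude
    x = np.real(np.fft.ifft(X))
    x[support:] = 0.0                               # impose the support constraint
    x = np.maximum(x, 0.0)                          # impose non-negativity

err = np.linalg.norm(np.abs(np.fft.fft(x)) - measured_mag) / np.linalg.norm(measured_mag)
print(f"relative magnitude mismatch after projections: {err:.3e}")
```
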
Contributors: Jones, Scott Robert (Author) / Cochran, Douglas (Thesis director) / Diaz, Rodolfo (Committee member) / Barrett, The Honors College (Contributor) / Electrical Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2014-05

Description
Cryptocurrencies have become one of the most fascinating forms of currency and economics due to their fluctuating values and lack of centralization. This project attempts to use machine learning methods to effectively model in-sample data for Bitcoin and Ethereum using rule induction methods. The dataset is cleaned by removing entries with missing data, and a new column is created to measure price difference, allowing a more accurate analysis of the change in price. Eight relevant variables are selected using cross validation: the total number of bitcoins, the total size of the blockchain, the hash rate, mining difficulty, revenue from mining, transaction fees, the cost of transactions, and the estimated transaction volume. The in-sample data is modeled using a simple tree fit, first with one variable and then with all eight. Using all eight variables, the in-sample model and data have a correlation of 0.6822657. The in-sample model is improved by applying bootstrap aggregation (bagging) to fit 400 decision trees to the in-sample data using one variable, and then applying the random forests technique using all eight variables. This results in a correlation between the model and data of 0.9443413. The random forests technique is then applied to an Ethereum dataset, resulting in a correlation of 0.6904798. Finally, an out-of-sample model is created for Bitcoin and Ethereum using random forests, against a benchmark correlation of 0.03 for financial data. The correlation between the training model and the testing data was 0.06957639 for Bitcoin and -0.171125 for Ethereum. In conclusion, cryptocurrencies can be modeled accurately in-sample by applying the random forests method to a dataset. Out-of-sample modeling is more difficult, but in some cases performs better than typical financial data. It should also be noted that cryptocurrency data has properties similar to other financial datasets, suggesting future potential for modeling cryptocurrencies within the broader financial world.
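
The sketch below shows the general shape of such a random-forest fit and the in-sample correlation computation; the feature names mirror those listed above, but the synthetic data and model settings are assumptions, not the project's actual pipeline or results.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
n = 1000
features = ["total_bitcoins", "blockchain_size", "hash_rate", "difficulty",
            "miner_revenue", "transaction_fees", "cost_per_transaction", "est_tx_volume"]
X = rng.uniform(size=(n, len(features)))
# Synthetic daily price-difference target with a linear signal plus noise.
price_change = X @ rng.normal(size=len(features)) + rng.normal(0, 0.5, n)

forest = RandomForestRegressor(n_estimators=400, random_state=0).fit(X, price_change)
in_sample_corr = np.corrcoef(forest.predict(X), price_change)[0, 1]
print(f"in-sample correlation between model and data: {in_sample_corr:.3f}")
```

An out-of-sample evaluation would instead hold back part of the data, fit on the remainder, and compute the same correlation on the held-back portion.
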
Contributors: Browning, Jacob Christian (Author) / Meuth, Ryan (Thesis director) / Jones, Donald (Committee member) / McCulloch, Robert (Committee member) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05