Description
For our collaborative thesis we explored the US electric utility market and how the Internet of Things (IoT) technology movement could drive an advancement of the existing grid. The objective of this project was to understand market trends in the utility space and identify where a semiconductor manufacturing company, with a focus on IoT technology, could penetrate the market using its products. Our research methodology was to conduct industry interviews to identify common trends in the utility and industrial hardware manufacturer industries. From there, we composed various strategies that The Company should explore. These strategies were supported by qualitative reasoning and by forecasted discounted cash flow and net present value analysis. We confirmed that The Company should use specific silicon microprocessors and microcontrollers suited to each of the four devices' analytics demands. Along with a silicon strategy, our group believes there is a strong argument for a data analytics software package, built through strategic partnerships in this space.
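The forecasted discounted cash flow and net present value analysis mentioned above reduces to a standard present-value calculation. The following is a minimal sketch; the cash flows, discount rate, and horizon are illustrative assumptions, not figures from the thesis.

```python
# A minimal sketch of a discounted cash flow / NPV screen for a candidate
# strategy. All numbers below are illustrative placeholders.

def npv(rate, cash_flows):
    """Net present value of cash_flows, where cash_flows[0] occurs at t=0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical strategy: $10M upfront investment, five years of growing inflows.
flows = [-10_000_000, 2_500_000, 3_000_000, 3_500_000, 4_000_000, 4_500_000]
print(f"NPV at 8% discount rate: ${npv(0.08, flows):,.0f}")
```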
ContributorsLlazani, Loris (Co-author) / Ruland, Matthew (Co-author) / Medl, Jordan (Co-author) / Crowe, David (Co-author) / Simonson, Mark (Thesis director) / Hertzel, Mike (Committee member) / Department of Economics (Contributor) / Department of Finance (Contributor) / Department of Supply Chain Management (Contributor) / Department of Information Systems (Contributor) / Hugh Downs School of Human Communication (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05
Description
Bots tamper with social media networks by artificially inflating the popularity of certain topics. In this paper, we define what a bot is, detail different motivations for bots, describe previous work in bot detection and observation, and then perform bot detection of our own. For our bot detection, we are interested in bots on Twitter that tweet Arabic extremist-like phrases. A testing dataset is collected using the honeypot method, and five different heuristics are measured for their effectiveness in detecting bots. The model underperformed, but we have laid the groundwork for a largely untapped focus in bot detection: the diffusion of extremist ideals through bots.
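The abstract does not name its five heuristics, so the sketch below uses four common account-level heuristics as stand-ins to illustrate the general scoring approach.

```python
# A minimal sketch of heuristic-based bot scoring in the spirit of the work
# above. These heuristics and thresholds are illustrative assumptions, not
# the five evaluated in the paper.

from dataclasses import dataclass

@dataclass
class Account:
    tweets_per_day: float
    duplicate_ratio: float   # fraction of near-duplicate tweets
    url_ratio: float         # fraction of tweets containing links
    followers: int
    friends: int

def bot_score(a: Account) -> float:
    """Fraction of heuristics an account trips; higher = more bot-like."""
    checks = [
        a.tweets_per_day > 50,                 # abnormally high activity
        a.duplicate_ratio > 0.5,               # mostly repeated content
        a.url_ratio > 0.8,                     # link spam
        a.friends > 10 * max(a.followers, 1),  # follows far more than followed
    ]
    return sum(checks) / len(checks)

suspect = Account(tweets_per_day=120, duplicate_ratio=0.7, url_ratio=0.9,
                  followers=12, friends=4000)
print(f"bot score: {bot_score(suspect):.2f}")  # 1.00 -> trips every heuristic
```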
ContributorsKarlsrud, Mark C. (Author) / Liu, Huan (Thesis director) / Morstatter, Fred (Committee member) / Barrett, The Honors College (Contributor) / Computing and Informatics Program (Contributor) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created2015-05
Description
The OMFIT (One Modeling Framework for Integrated Tasks) modeling environment and the BRAINFUSE module have been deployed on the PPPL (Princeton Plasma Physics Laboratory) computing cluster with modifications that make it possible to apply artificial neural networks (NNs) to the TRANSP databases for the JET (Joint European Torus), TFTR (Tokamak Fusion Test Reactor), and NSTX (National Spherical Torus Experiment) devices. This development has facilitated the investigation of NNs for predicting heat transport profiles in JET, TFTR, and NSTX, and has prompted additional investigations into how else NNs may be of use to scientists at PPPL. In applying NNs to the aforementioned devices for predicting heat transport, the primary goal is to reproduce the success shown in Meneghini et al. in using NNs for heat transport prediction in DIII-D. Being able to reproduce these results is important because it would provide scientists at PPPL with a quick and efficient toolset for reliably predicting heat transport profiles much faster than any existing computational methods allow; the progress toward this goal is outlined in this report, and potential additional applications of the NN framework are presented.
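At its core, the approach maps plasma profile inputs to a heat flux profile with a feed-forward network. A minimal sketch follows; the input/output dimensions and the synthetic data are illustrative assumptions standing in for TRANSP database records.

```python
# A minimal sketch of NN-based heat transport prediction: a multilayer
# perceptron mapping profile inputs to a multi-point heat flux profile.
# Shapes and data are illustrative, not from the TRANSP databases.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))   # e.g. density, temperature, q at radial points
Y = rng.normal(size=(1000, 5))   # heat flux at 5 radial grid points

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
model.fit(X, Y)                  # MLPRegressor supports multi-output targets
profile = model.predict(X[:1])   # predicted heat transport profile
print(profile.shape)             # (1, 5)
```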
ContributorsLuna, Christopher Joseph (Author) / Tang, Wenbo (Thesis director) / Treacy, Michael (Committee member) / Orso, Meneghini (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Physics (Contributor)
Created2015-05
Description
Twitter, the microblogging platform, has grown in prominence to the point that the topics that trend on the network are often the subject of the news and other traditional media. By predicting trends on Twitter, it could be possible to predict the next major topic of interest to the public. With this motivation, this paper develops a model for trends that leverages previous work with k-nearest-neighbors and dynamic time warping. The development of this model provides insight into the length and features of trends, and it successfully generalizes to identify 74.3% of trends in the time period of interest. The model developed in this work also offers insight into why particular words trend on Twitter.
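The k-nearest-neighbors plus dynamic time warping combination the paper builds on can be sketched compactly. The toy series below are illustrative assumptions; the thesis works with real tweet-volume time series.

```python
# A minimal sketch of nearest-neighbor trend classification with dynamic
# time warping (DTW) distance.

import numpy as np

def dtw(a, b):
    """Dynamic time warping distance between two 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def knn_predict(query, references, labels, k=3):
    """Label a series by majority vote of its k DTW-nearest references."""
    dists = [dtw(query, r) for r in references]
    nearest = np.argsort(dists)[:k]
    votes = [labels[i] for i in nearest]
    return max(set(votes), key=votes.count)

refs = [np.array([1, 2, 4, 8, 16]),   # sharp growth: trending
        np.array([3, 3, 2, 3, 3])]    # flat: not trending
print(knn_predict(np.array([1, 3, 5, 9, 15]), refs, ["trend", "no-trend"], k=1))
```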
ContributorsMarshall, Grant A (Author) / Liu, Huan (Thesis director) / Morstatter, Fred (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created2015-05
Description
A model has been developed to modify Euler-Bernoulli beam theory for wooden beams, using visible properties of wood knot-defects. Treating knots in a beam as a system of two ellipses that change the local bending stiffness has been shown to improve the fit of a theoretical beam displacement function to edge-line deflection data extracted from digital imagery of experimentally loaded beams. In addition, an Ellipse Logistic Model (ELM) has been proposed, using L1-regularized logistic regression, to predict the impact of a knot on the displacement of a beam. By classifying a knot as severely positive or negative, versus mildly positive or negative, ELM can flag knots that lead to large changes in beam deflection without over-emphasizing knots that may not be a problem. Using ELM with a regression-fit Young's Modulus on three-point bending of Douglas Fir, it is possible to estimate the effects a knot will have on the shape of the resulting displacement curve.
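A sketch of the L1-regularized logistic regression at the heart of ELM follows. The knot features and synthetic labels are illustrative assumptions about the model's inputs; the abstract does not enumerate them.

```python
# A minimal sketch of L1-regularized logistic regression in the spirit of
# the Ellipse Logistic Model (ELM). Features and labels are synthetic.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))  # e.g. major axis, minor axis, position, angle
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)  # 1 = severe impact (synthetic)

# penalty="l1" with the liblinear solver yields sparse coefficients, so
# uninformative knot features are driven to exactly zero.
elm = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
elm.fit(X, y)
print("coefficients:", elm.coef_)          # zeros indicate pruned features
print("P(severe):", elm.predict_proba(X[:1])[0, 1])
```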
Created2015-05
Description
The prevalence of bots, or automated accounts, on social media is a well-known problem. Some of the ways bots harm social media users include, but are not limited to, spreading misinformation, influencing topic discussions, and dispersing harmful links. Bots have affected the field of disaster relief on social media as well. These bots cause problems such as preventing rescuers from identifying credible calls for help, spreading fake news and other malicious content, and generating large volumes of content that burden rescuers attempting to provide aid in the aftermath of disasters. To address these problems, this research seeks to detect bots participating in disaster-related discussions and to increase the recall, or number of bots removed from the network, of Twitter bot detection methods. Removing these bots also prevents human users from accidentally interacting with these accounts and being manipulated by them. To accomplish this goal, an existing bot detection classification algorithm known as BoostOR was employed. BoostOR is an ensemble learning algorithm originally designed to increase bot detection recall in a dataset, and it has the potential to address the social media bot dilemma in which several different types of bots may be present in the same data. BoostOR was first introduced as an adjustment to existing ensemble classifiers to increase recall. However, after testing the BoostOR algorithm on unobserved datasets, the results showed that BoostOR does not perform as expected. This study attempts to improve the BoostOR algorithm by comparing it with a baseline classification algorithm, AdaBoost, and discussing the intentional differences between the two. Additionally, this study presents the main factors that contribute to the shortcomings of the BoostOR algorithm and proposes a solution to improve it. These recommendations should ensure that the BoostOR algorithm can be applied to new and unobserved datasets in the future.
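The AdaBoost baseline used for comparison can be set up briefly, with bot-class recall as the metric BoostOR is designed to raise. The features and labels below are synthetic stand-ins for the Twitter account features in the thesis, and this is the standard AdaBoost, not the BoostOR variant.

```python
# A minimal sketch of the AdaBoost baseline with recall on the bot class
# as the evaluation metric. Data is synthetic.

import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 6))  # account-level features
y = (X[:, 0] + rng.normal(scale=0.5, size=1000) > 1.0).astype(int)  # 1 = bot

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("bot recall:", recall_score(y_te, clf.predict(X_te)))
```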
ContributorsDavis, Matthew William (Author) / Liu, Huan (Thesis director) / Nazer, Tahora H. (Committee member) / Computer Science and Engineering Program (Contributor) / Department of Information Systems (Contributor) / Barrett, The Honors College (Contributor)
Created2018-12
Description
Predictive analytics have been used in a wide variety of settings, including healthcare, sports, banking, and other disciplines. We use predictive analytics and modeling to determine the impact of certain factors that increase the probability of a successful fourth down conversion in the Power 5 conferences. The logistic regression models predict the likelihood of going for it on fourth down with a probability of 64% or more, based on 2015-17 data obtained from ESPN’s college football API. Offense type, though important, is not directly measurable and was incorporated as a random effect. We found that distance to go, play type, field position, and week of the season were the key leading covariates for predictability. On average, our model performed as much as 14% better than coaches in 2018.
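A logistic regression with offense type as a random effect can be sketched with a Bayesian mixed GLM. The column names and toy data below are illustrative assumptions, and statsmodels is one possible tool; the thesis fits ESPN play-by-play data from 2015-17.

```python
# A minimal sketch of a logistic regression with a random effect for
# offense type. Data and column names are synthetic stand-ins.

import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(3)
n = 500
df = pd.DataFrame({
    "converted": rng.integers(0, 2, n),    # 1 = successful conversion
    "distance": rng.integers(1, 11, n),    # yards to go
    "field_pos": rng.integers(1, 100, n),  # yard line
    "week": rng.integers(1, 14, n),
    "offense": rng.choice(["spread", "pro", "option"], n),
})

# Fixed effects for measurable covariates; offense type enters as a
# variance component (random effect) rather than a fixed coefficient.
model = BinomialBayesMixedGLM.from_formula(
    "converted ~ distance + field_pos + week",
    {"offense": "0 + C(offense)"},
    df,
)
result = model.fit_vb()  # variational Bayes fit
print(result.summary())
```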
ContributorsBlinkoff, Joshua Ian (Co-author) / Voeller, Michael (Co-author) / Wilson, Jeffrey (Thesis director) / Graham, Scottie (Committee member) / Dean, W.P. Carey School of Business (Contributor) / Department of Information Systems (Contributor) / Department of Management and Entrepreneurship (Contributor) / Barrett, The Honors College (Contributor)
Created2019-05
Description
Predictive analytics have been used in a wide variety of settings, including healthcare, sports, banking, and other disciplines. We use predictive analytics and modeling to determine the impact of certain factors that increase the probability of a successful fourth down conversion in the Power 5 conferences. The logistic regression models predict the likelihood of going for it on fourth down with a probability of 64% or more, based on 2015-17 data obtained from ESPN’s college football API. Offense type, though important, is not directly measurable and was incorporated as a random effect. We found that distance to go, play type, field position, and week of the season were the key leading covariates for predictability. On average, our model performed as much as 14% better than coaches in 2018.
ContributorsVoeller, Michael Jeffrey (Co-author) / Blinkoff, Josh (Co-author) / Wilson, Jeffrey (Thesis director) / Graham, Scottie (Committee member) / Department of Information Systems (Contributor) / Department of Finance (Contributor) / Barrett, The Honors College (Contributor)
Created2019-05
Description
The objective of this project was the creation of a web app for undergraduate CIS/BDA students that allows them to search for jobs based on criteria not always directly available in the average job search engine, including technical skills, soft skills, location, and industry. This creates a more focused way for these students to search for jobs, using an application that also attempts to exclude positions seeking very experienced employees. The activities used for this project were chosen in an attempt to make the process as automated as possible.
This was achieved by first using Offline Explorer, an application that can download websites, to gather job postings from Dice.com that were searched by a pre-defined list of technical skills. Next came parsing the downloaded postings to extract and clean the required data and filling a database with that cleaned data. The companies were then matched with their corresponding industries using their NAICS (North American Industry Classification System) codes. The descriptions were then analyzed, and a group of soft skills was chosen based on the results of Word2Vec (a group of models that assists in creating word embeddings). A master table was created by combining all of the tables in the database, then filtered to exclude posts that required too much experience. Lastly, the web app was created using node.js as the back-end. This web app allows the user to choose their desired criteria and navigate through the postings that meet them.
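The Word2Vec step can be illustrated briefly. The abstract names only Word2Vec, so gensim and the tiny corpus below are assumptions standing in for the parsed Dice.com postings.

```python
# A minimal sketch of training Word2Vec on job-description text and
# querying for terms near a seed soft skill. Corpus is a toy stand-in.

from gensim.models import Word2Vec

corpus = [
    ["strong", "communication", "skills", "and", "teamwork"],
    ["excellent", "teamwork", "collaboration", "and", "leadership"],
    ["clear", "communication", "leadership", "and", "problem", "solving"],
]

model = Word2Vec(sentences=corpus, vector_size=50, window=3, min_count=1, sg=1)
print(model.wv.most_similar("communication", topn=3))
```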
ContributorsHenry, Alfred (Author) / Darcy, David (Thesis director) / Moser, Kathleen (Committee member) / Department of Information Systems (Contributor) / Barrett, The Honors College (Contributor)
Created2019-05
Description
Cryptocurrencies have become one of the most fascinating forms of currency and economics due to their fluctuating values and lack of centralization. This project attempts to use machine learning methods to effectively model in-sample data for Bitcoin and Ethereum using rule induction methods. The dataset is cleaned by removing entries with missing data, and a new column is created to measure the price difference, giving a more accurate view of the change in price. Eight relevant variables are selected using cross validation: the total number of bitcoins, the total size of the blockchain, the hash rate, mining difficulty, revenue from mining, transaction fees, the cost of transactions, and the estimated transaction volume. The in-sample data is modeled using a simple tree fit, first with one variable and then with all eight. Using all eight variables, the in-sample model and data have a correlation of 0.6822657. The in-sample model is improved by first applying bootstrap aggregation (also known as bagging) to fit 400 decision trees to the in-sample data using one variable, and then applying the random forests technique to the data using all eight variables. This results in a correlation between the model and data of 0.9443413. The random forests technique is then applied to an Ethereum dataset, resulting in a correlation of 0.6904798. Finally, an out-of-sample model is created for Bitcoin and Ethereum using random forests, with a benchmark correlation of 0.03 for financial data. The correlation between the training model and the testing data was 0.06957639 for Bitcoin and -0.171125 for Ethereum. In conclusion, cryptocurrencies can have accurate in-sample models when the random forests method is applied to a dataset. Out-of-sample modeling is more difficult, though in some cases the results are better than those typical of financial data. It should also be noted that cryptocurrency data shares properties with other financial datasets, suggesting future potential for modeling cryptocurrency within the financial world.
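The random forests step can be sketched compactly. The feature names follow the eight variables listed in the abstract, but the data below is synthetic; the thesis uses historical Bitcoin blockchain statistics.

```python
# A minimal sketch of the random forests workflow described above,
# measuring in-sample correlation between model predictions and targets.

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

features = ["total_bitcoins", "blockchain_size", "hash_rate", "difficulty",
            "miners_revenue", "transaction_fees", "cost_per_txn",
            "est_txn_volume"]

rng = np.random.default_rng(4)
X = pd.DataFrame(rng.normal(size=(1000, 8)), columns=features)
y = X["hash_rate"] * 0.5 + rng.normal(scale=0.3, size=1000)  # price diff proxy

forest = RandomForestRegressor(n_estimators=400, random_state=0).fit(X, y)
in_sample_corr = np.corrcoef(y, forest.predict(X))[0, 1]
print(f"in-sample correlation: {in_sample_corr:.4f}")
```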
ContributorsBrowning, Jacob Christian (Author) / Meuth, Ryan (Thesis director) / Jones, Donald (Committee member) / McCulloch, Robert (Committee member) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created2018-05