Matching Items (4)
Description
Autonomous vehicles (AVs) are capable of producing massive amounts of real-time, precise data. This data can open up new business possibilities across a vast range of markets, from simple applications to unprecedented use cases. With this in mind, the three main objectives we sought to accomplish in our thesis were to: 1. understand whether there is monetization potential in autonomous vehicle data; 2. create a financial model detailing the viability of AV data monetization; and 3. discover how a particular company (Company X) can take advantage of this opportunity, and outline how that company might access this autonomous vehicle data.
Contributors: Carlton, Corrine (Co-author) / Clark, Rachael (Co-author) / Quintana, Alex (Co-author) / Shapiro, Brandon (Co-author) / Sigrist, Austin (Co-author) / Simonson, Mark (Thesis director) / Reber, Kevin (Committee member) / School of Accountancy (Contributor) / Department of Finance (Contributor) / Department of Information Systems (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
Cryptocurrencies have become one of the most fascinating forms of currency and economics due to their fluctuating values and lack of centralization. This project attempts to use machine learning methods to effectively model in-sample data for Bitcoin and Ethereum using rule induction methods. The dataset is cleaned by removing entries with missing data, and a new column is created to measure price difference, allowing a more accurate analysis of the change in price. Eight relevant variables are selected using cross-validation: the total number of bitcoins, the total size of the blockchain, the hash rate, mining difficulty, revenue from mining, transaction fees, the cost of transactions, and the estimated transaction volume. The in-sample data is modeled using a simple tree fit, first with one variable and then with all eight. Using all eight variables, the in-sample model and data have a correlation of 0.6822657. The in-sample model is improved by first applying bootstrap aggregation (also known as bagging) to fit 400 decision trees to the in-sample data using one variable, and then applying the random forests technique using all eight variables. This results in a correlation between the model and data of 0.9443413. The random forests technique is then applied to an Ethereum dataset, resulting in a correlation of 0.6904798. Finally, an out-of-sample model is created for Bitcoin and Ethereum using random forests, against a benchmark correlation of 0.03 typical for financial data. The correlation between the training model and the testing data was 0.06957639 for Bitcoin and -0.171125 for Ethereum. In conclusion, it is confirmed that cryptocurrencies can be modeled accurately in-sample by applying the random forests method. Out-of-sample modeling is more difficult, but in some cases it still outperforms typical financial data. It should also be noted that cryptocurrency data has properties similar to other financial datasets, suggesting future potential for modeling cryptocurrencies within the financial world.
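A minimal sketch of this pipeline in Python, assuming a hypothetical input file (bitcoin.csv) and hypothetical column names for the market price and the eight predictors; scikit-learn's RandomForestRegressor stands in for the thesis's random forests implementation, so the exact correlations will differ:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical names for the eight predictors described in the abstract.
FEATURES = [
    "total_bitcoins", "blockchain_size", "hash_rate", "difficulty",
    "miners_revenue", "transaction_fees", "cost_per_transaction",
    "estimated_transaction_volume",
]

df = pd.read_csv("bitcoin.csv")               # assumed input file
df = df.dropna()                              # remove entries with missing data
df["price_diff"] = df["market_price"].diff()  # new column measuring price difference
df = df.dropna()                              # drop the row left undefined by diff()

X, y = df[FEATURES], df["price_diff"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, shuffle=False        # keep time order for the out-of-sample test
)

# Random forest: bagged decision trees with feature subsampling; 400 trees
# mirrors the 400-tree bagging step described above.
model = RandomForestRegressor(n_estimators=400, random_state=0)
model.fit(X_train, y_train)

# Correlation between model output and data, in-sample and out-of-sample.
in_corr = np.corrcoef(model.predict(X_train), y_train)[0, 1]
out_corr = np.corrcoef(model.predict(X_test), y_test)[0, 1]
print(f"in-sample r = {in_corr:.4f}, out-of-sample r = {out_corr:.4f}")
```

The same script applies to the Ethereum dataset by swapping in the corresponding file and columns; shuffle=False preserves the time ordering so the held-out split behaves like a forecasting test rather than a random one.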
Contributors: Browning, Jacob Christian (Author) / Meuth, Ryan (Thesis director) / Jones, Donald (Committee member) / McCulloch, Robert (Committee member) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description

Through findings from interviews, a survey, and hands-on learning of automation software, we believe automation will continue to grow in the accounting industry in the coming years. Accountants see software as something that makes them more efficient, and firms are doing a good job of training their employees to use these new software tools. The accountants we interviewed say that automation saves them time that can be spent on other work. By learning Alteryx, an automation tool, we experienced these time savings firsthand.

Contributors: DiNuto, Michael (Author) / Shillingburg, Alec (Co-author) / Dawson, Greg (Thesis director) / Garverick, Michael (Committee member) / Barrett, The Honors College (Contributor) / School of Accountancy (Contributor) / Department of Information Systems (Contributor)
Created: 2022-05
Description

Visualizations can be an incredibly powerful tool for communicating data. Data visualizations can summarize large data sets into one view, allow for easy comparisons between variables, and show trends or relationships that cannot be seen by looking at the raw data. Empirical information, and by extension data visualizations, are often seen as objective and honest. Unfortunately, data visualizations are susceptible to errors that can make them misleading. When visualizations are made for public audiences that lack the statistical training or subject matter expertise to identify misrepresented data, these errors can have very negative effects. There is a good deal of research on guidelines for creating data visualizations and on systems for evaluating them, but many existing guidelines take contradictory approaches to designing visuals or stress that best practices depend on context. The goal of this work is to define guidelines for making visualizations for a public audience and to show how context-specific guidelines can be used to effectively evaluate and critique visualizations. The guidelines created here are a starting point, showing the need for best practices specific to public media. Data visualization for the public lies at the intersection of statistics, graphic design, journalism, cognitive science, and rhetoric; future conversations to create guidelines should therefore include representatives of all these fields.
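As an illustration of how a visualization error can mislead a public audience, here is a minimal sketch in Python (using matplotlib and made-up data, not material from the thesis) contrasting a truncated y-axis, a commonly cited misleading practice, with a zero-based one:

```python
import matplotlib.pyplot as plt

# Made-up data: a roughly 1% change over four years.
years = [2019, 2020, 2021, 2022]
values = [98.2, 98.6, 99.1, 99.4]

fig, (ax_truncated, ax_zero) = plt.subplots(1, 2, figsize=(8, 3))

# Truncated y-axis exaggerates the small change into a steep climb.
ax_truncated.plot(years, values, marker="o")
ax_truncated.set_ylim(98, 99.5)
ax_truncated.set_title("Truncated axis (misleading)")

# Zero-based y-axis shows the change at its true scale.
ax_zero.plot(years, values, marker="o")
ax_zero.set_ylim(0, 110)
ax_zero.set_title("Zero-based axis (honest)")

for ax in (ax_truncated, ax_zero):
    ax.set_xlabel("Year")
    ax.set_ylabel("Index value")

plt.tight_layout()
plt.show()
```

The identical data reads as a dramatic surge in the left panel and as nearly flat in the right; which framing is appropriate depends on the claim being made, which is exactly why context-specific guidelines matter.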

Contributors: Steele, Kayleigh (Author) / Martin, Thomas (Thesis director) / Woodall, Gina (Committee member) / Barrett, The Honors College (Contributor) / School of Politics and Global Studies (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2023-05