Matching Items (6)
Description
This paper describes research conducted to quantify the relationships between external air temperature and energy consumption, and between internal air temperature and energy consumption. The study was conducted on a LEED Gold certified building, College Avenue Commons, located on Arizona State University's Tempe campus. It includes background from previous studies in the area, some of which support the research hypotheses and some of which take a different path. Energy consumption and external air temperature data were collected hourly in real time; internal air temperature was collected intermittently by undergraduate researcher Charles Banke. Regression analysis was used to test the two research hypotheses. The authors found no correlation between external air temperature and energy consumption, nor a relationship between internal air temperature and energy consumption. The paper also includes recommendations for future work to improve the study.
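The analysis described above amounts to fitting a linear regression of consumption on temperature and checking the strength of the correlation. The sketch below shows one minimal way to do this in Python; the file name and column names are assumptions for illustration, not the study's actual data.

```python
# Minimal sketch: testing for a linear relationship between hourly
# external air temperature and building energy consumption.
# The file and column names are hypothetical, not from the study.
import pandas as pd
from scipy import stats

df = pd.read_csv("cac_hourly.csv")  # hypothetical hourly dataset
df = df.dropna(subset=["ext_temp_f", "energy_kwh"])

# Ordinary least-squares fit of consumption against external temperature.
fit = stats.linregress(df["ext_temp_f"], df["energy_kwh"])
print(f"slope={fit.slope:.3f}, r={fit.rvalue:.3f}, p={fit.pvalue:.4f}")

# An r near zero with a large p-value would be consistent with the
# paper's finding of no relationship.
```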
Contributors: Banke, Charles Michael (Author) / Chong, Oswald (Thesis director) / Parrish, Kristen (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
While former New York Yankees pitcher Goose Gossage unleashed his tirade about the deterioration of the unwritten rules of baseball and nerds ruining the sport roughly halfway through my writing of this paper, sentiments like his were the inspiration for my topic: the evolution of statistics and data in baseball. By telling the story of how baseball data and statistics have evolved, my goal was to also demonstrate how they have been intertwined since the beginning—which would essentially mean that nerds have always been ruining the sport (if you subscribe to that kind of thought).

In the quest to showcase this, it was necessary to document how baseball prospers from numbers and numbers prosper from baseball; the relationship between the two is mutualistic. An all-encompassing historical look at how data and statistics in baseball have matured was also a critical portion of the paper. Batting average, for example, went from a radical new measure that threatened the status quo to a fiercely cherished statistic now being unseated by advanced analytics, showing that the creation of the new and destruction of the old has been incessant. Innovators like Pete Palmer, Dick Cramer and Bill James played a large role in this process in the 1980s. Computers aided their effort and, when paired with the Internet, brought the ability to crunch data to an even larger sector of the population. The unveiling of Statcast at the commencement of the 2015 season showed just how much potential there is for measuring previously unquantifiable baseball acts.

Essentially, there will always be people who lament the presence of data and statistics in baseball. Despite this, the evolution story indicates baseball and numbers will remain intertwined into the future, likely to an even greater extent than ever before, as technology and new philosophies become increasingly integrated into front offices and clubhouses.
Contributors: Garcia, Jacob Michael (Author) / Kurland, Brett (Thesis director) / Doig, Stephen (Committee member) / Jackson, Victoria (Committee member) / Walter Cronkite School of Journalism and Mass Communication (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
This thesis documentary film examines the dysfunctional but ongoing relationship between Twitter and sports journalism. The foundation of this relationship's dysfunction is what I have coined the Twitter Outrage Cycle. In this cycle, a sports broadcasting personality comments on a matter while on air. Next, the program's audience becomes offended by the statement. The offended audience members then express their outrage on social media, most notably Twitter. Finally, the cycle culminates with public outrage pressuring networks and their executives to either suspend or fire the individual who made the controversial statements. This cycle began to occur on a more consistent basis starting in 2012. It became such a regular occurrence that many on-air personalities have noticed and taken precautionary measures to either avoid or confront the Outrage Cycles. The documentary uses the voices of seven figures from sports media and online discourse, most notably three individuals who currently hold prominent positions in sports journalism, as well as a neutral social media curator who explains the psyche behind these outraged viewers' mindsets. Through these four main voices, ideals and opinions on the matter weave together, disagree with each other at times, but ultimately help the viewer come to an understanding of why these Outrage Cycles occur and what needs to be done for them to cease. We Should Talk: The Relationship Between Twitter and Sports Journalism is a documentary film that illustrates how a seemingly minimal part of many people's lives, when put into perspective, is viewed by many in a very serious light.
Contributors: Neely, Cammeron Allen Douglas (Author) / Kurland, Brett (Thesis director) / Fergus, Tom (Committee member) / Walter Cronkite School of Journalism and Mass Communication (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
Cryptocurrencies have become one of the most fascinating forms of currency and economics due to their fluctuating values and lack of centralization. This project attempts to use machine learning methods to effectively model in-sample data for Bitcoin and Ethereum using rule induction methods. The dataset is cleaned by removing entries with missing data. A new column is created to measure the price difference, allowing a more accurate analysis of the change in price. Eight relevant variables are selected using cross-validation: the total number of bitcoins, the total size of the blockchain, the hash rate, mining difficulty, revenue from mining, transaction fees, the cost of transactions, and the estimated transaction volume. The in-sample data is modeled using a simple tree fit, first with one variable and then with all eight. Using all eight variables, the in-sample model and data have a correlation of 0.6822657. The in-sample model is improved by first applying bootstrap aggregation (also known as bagging) to fit 400 decision trees to the in-sample data using one variable, and then applying the random forests technique to the data using all eight variables. This results in a correlation between the model and data of 0.9443413. The random forests technique is then applied to an Ethereum dataset, resulting in a correlation of 0.6904798. Finally, an out-of-sample model is created for Bitcoin and Ethereum using random forests, against a benchmark correlation of 0.03 for financial data. The correlation between the training model and the testing data was 0.06957639 for Bitcoin and -0.171125 for Ethereum. In conclusion, the project confirms that accurate in-sample models of cryptocurrencies can be built by applying the random forests method to a dataset. Out-of-sample modeling is more difficult, but in some cases performs better than is typical for financial data. It should also be noted that cryptocurrency data has properties similar to other financial datasets, suggesting future potential for modeling cryptocurrency systems within the financial world.
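As a concrete illustration of the modeling pipeline described above, here is a minimal sketch of the random-forest step in Python/scikit-learn; the thesis does not specify its tooling, and the file name, column names, and train/test split below are assumptions for illustration only.

```python
# Minimal sketch of the random-forest modeling step described above.
# Feature names mirror the eight variables listed in the abstract but
# are assumed column names, not the thesis's actual dataset.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

features = ["total_bitcoins", "blockchain_size", "hash_rate",
            "difficulty", "miners_revenue", "transaction_fees",
            "cost_per_transaction", "estimated_transaction_volume"]

df = pd.read_csv("bitcoin.csv").dropna()      # hypothetical cleaned dataset
df["price_diff"] = df["market_price"].diff()  # target: change in price
df = df.dropna()

# Chronological split so the test set is genuinely out-of-sample.
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["price_diff"], test_size=0.25, shuffle=False)

model = RandomForestRegressor(n_estimators=400, random_state=0)
model.fit(X_train, y_train)

# Correlation between model predictions and data, the metric used above.
r_in = np.corrcoef(model.predict(X_train), y_train)[0, 1]
r_out = np.corrcoef(model.predict(X_test), y_test)[0, 1]
print(f"in-sample r = {r_in:.3f}, out-of-sample r = {r_out:.3f}")
```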
Contributors: Browning, Jacob Christian (Author) / Meuth, Ryan (Thesis director) / Jones, Donald (Committee member) / McCulloch, Robert (Committee member) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
The solar energy sector has been growing rapidly over the past decade. Growth in renewable electricity generation using photovoltaic (PV) systems is accompanied by an increased awareness of the fault conditions that develop during the operational lifetime of these systems. While the annual energy losses caused by faults in PV systems can reach up to 18.9% of their total capacity, emerging technologies and models are driving greater efficiency to assure the reliability of a product under its actual application. The objectives of this dissertation are to (1) review the state of the art and practice of prognostics and health management for the Direct Current (DC) side of photovoltaic systems; (2) assess the corrosion of the driven posts supporting PV structures in utility-scale plants; and (3) assess the probabilistic risk associated with the failure of polymeric materials that are used in tracker and fixed-tilt systems.

As photovoltaic systems age under relatively harsh and changing environmental conditions, several potential fault conditions can develop during the operational lifetime, including corrosion of supporting structures and failures of polymeric materials. The ability to accurately predict the remaining useful life of photovoltaic systems is critical for plants' continuous operation. This research contributes to the body of knowledge of PV systems reliability by (1) developing a meta-model of the expected service life of mounting structures; (2) creating decision frameworks and tools to support practitioners in mitigating risks; and (3) supporting material selection for fielded and future photovoltaic systems. The newly developed frameworks were validated by a global solar company.
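To make the service-life idea concrete, the sketch below shows how a probabilistic estimate of mounting-post service life might be set up as a simple Monte Carlo simulation. Every distribution, rate, and threshold in it is an illustrative assumption, not a value or method taken from the dissertation.

```python
# Minimal sketch of a Monte Carlo service-life estimate for corroding
# driven posts. All distributions and thresholds are illustrative
# assumptions, not values from the dissertation.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

initial_thickness_um = 85.0  # assumed protective coating thickness (microns)

# Assumed lognormal corrosion rate (microns/year) capturing site-to-site
# variability in soil and climate conditions.
corrosion_rate = rng.lognormal(mean=np.log(3.0), sigma=0.5, size=n)

# Years until the coating is fully consumed, per simulated post.
service_life = initial_thickness_um / corrosion_rate

print(f"median service life: {np.median(service_life):.1f} years")
print(f"P(failure before 25 years): {(service_life < 25).mean():.2%}")
```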
Contributors: Chokor, Abbas (Author) / El Asmar, Mounir (Thesis advisor) / Chong, Oswald (Committee member) / Ernzen, James (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Data visualizations can be an incredibly powerful tool for communicating data. They can summarize large data sets into one view, allow easy comparisons between variables, and show trends or relationships in data that cannot be seen by looking at the raw data. Empirical information, and by extension data visualizations, are often seen as objective and honest. Unfortunately, data visualizations are susceptible to errors that may make them misleading. When visualizations are made for public audiences that lack the statistical training or subject matter expertise to identify misleading or misrepresented data, these errors can have very negative effects. There is a good deal of research on how best to create guidelines for making, or systems for evaluating, data visualizations. Many of the existing guidelines take contradictory approaches to designing visuals, or they stress that best practices depend on the context. The goal of this work is to define guidelines for making visualizations in the context of a public audience and to show how context-specific guidelines can be used to effectively evaluate and critique visualizations. The guidelines created here are a starting point to show that there is a need for best practices specific to public media. Data visualization for the public lies at the intersection of statistics, graphic design, journalism, cognitive science, and rhetoric. Because of this, future conversations to create guidelines should include representatives of all these fields.
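As one concrete example of the kind of misleading error discussed above, the sketch below contrasts a truncated-axis bar chart with a zero-baseline version of the same data. The data values are invented for illustration; the thesis's actual guidelines and examples are not reproduced here.

```python
# Minimal sketch of one common way a chart can mislead a public
# audience: truncating the y-axis exaggerates a small difference.
# The data values are invented for illustration.
import matplotlib.pyplot as plt

labels = ["Group A", "Group B"]
values = [50.0, 52.0]  # a difference of only 4%

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))

ax1.bar(labels, values)
ax1.set_ylim(49, 53)   # truncated axis: the gap looks enormous
ax1.set_title("Misleading (truncated axis)")

ax2.bar(labels, values)
ax2.set_ylim(0, 60)    # zero baseline: an honest comparison
ax2.set_title("Honest (zero baseline)")

plt.tight_layout()
plt.show()
```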

Contributors: Steele, Kayleigh (Author) / Martin, Thomas (Thesis director) / Woodall, Gina (Committee member) / Barrett, The Honors College (Contributor) / School of Politics and Global Studies (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2023-05