Matching Items (6)

Description
The foundations of legacy media, especially the news media, are not as strong as they once were. A digital revolution has changed the industry's operating models, and journalistic organizations are trying to find their place in the new market. This project analyzes the effects of new and emerging technologies on the journalism industry. Five categories of technology are explored: the semantic web, automation software, data analysis and aggregators, virtual reality, and drone journalism. The potential of these technologies is assessed against four guidelines: ethical implications, effects on the reportorial process, business impacts, and changes to the consumer experience. Upon examination, it is apparent that no single technology will offer the journalism industry the remedy it has been searching for. Some combination of emerging technologies, however, may form the basis for the next generation of news. Findings are presented on a website that features video, visuals, linked content, and original graphics: http://www.explorenewstech.com/
Created 2016-05
Description
The goal of this thesis is to conduct a descriptive analysis of the gross domestic product (GDP) sector composition of countries around the world and their respective levels of economic development, taking into account their geographic locations, economic growth over time, and economic sizes. The analysis centers on exploring differences in the GDP composition of countries at different levels of development, testing the consensus that developed countries tend to focus on the services sector while less developed countries trend toward the agricultural sector. These findings are attained primarily through data interpretation and regression analysis using the statistical software packages Stata and Excel, supported by data visualizations created in Tableau and careful examination of those visualizations.
Due to the sheer number of macroeconomic factors and case-specific circumstances involved in determining a country's level of economic development, this thesis focuses entirely on the descriptive analysis of the relationship between a country's GDP sector composition across the agricultural, industrial, and services sectors and its level of economic development as measured by GDP per capita. The study also explores the relationships between GDP per capita and geographic region, growth over time, and economic size; these relationships are used to determine whether such factors need to be controlled for when analyzing the relationship between a country's sector composition and its level of development. A better understanding of what countries look like at all levels of development helps build a complete picture of what makes a country successful and could be used in future studies that seek to predict economic success based on additional or different variables.
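The kind of descriptive regression described in this abstract could be sketched as follows. The snippet below uses Python with pandas and statsmodels rather than the Stata/Excel workflow the thesis names, and every file and column name (gdp_composition.csv, gdp_per_capita, services_share, industry_share, region) is a hypothetical placeholder, not the thesis's actual data.

```python
# Illustrative only: cross-country regression of GDP per capita on sector
# shares with region controls. All file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("gdp_composition.csv")  # hypothetical country-level dataset

# Sector shares sum to one, so the agricultural share is omitted as the
# baseline category; region dummies control for geographic location.
model = smf.ols(
    "gdp_per_capita ~ services_share + industry_share + C(region)",
    data=df,
).fit()
print(model.summary())
```

Including region as a categorical control mirrors the abstract's question of whether geography must be held fixed when relating sector composition to development.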
Contributors Stojsin, Rastko (Author) / Goegan, Brian (Thesis director) / Lopez, Andres Diaz (Committee member) / Department of Economics (Contributor) / Department of Information Systems (Contributor) / Barrett, The Honors College (Contributor)
Created 2019-05
Description
With growing levels of income inequality in the United States, it remains as important as ever to ensure that indispensable public services are readily available to all members of society. This paper investigates four forms of public services (schools, libraries, fire stations, and police stations), first by researching the background of these services and their relation to poverty, and then by conducting geospatial and regression analysis. The author uses Esri's ArcGIS Pro software to quantify the proximity of urban American neighborhoods (census tracts in the cities of Phoenix and Chicago) to public services. The resulting proximity measures are then compared with the socioeconomic statuses of neighborhoods using regression analysis. The results indicate that pure proximity to these four services is not necessarily correlated with socioeconomic status. While the paper does uncover some correlations, such as a relationship between school quality and socioeconomic status, the majority of the findings negate the author's hypothesis and show that, in Phoenix and Chicago, there is little discrepancy between neighborhoods in the extent to which they can access vital government-funded services.
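The proximity-plus-regression approach can be sketched outside ArcGIS Pro. The following is a minimal, hedged version using the open-source geopandas and statsmodels libraries; the shapefile names and the median_income column are assumptions for illustration, not the paper's actual data or method.

```python
# Sketch of the proximity-plus-regression approach using open-source tools
# instead of ArcGIS Pro. File and column names are hypothetical.
import geopandas as gpd
import statsmodels.api as sm

# Load census tracts and one service layer, projected to a CRS in meters.
tracts = gpd.read_file("tracts.shp").to_crs(epsg=26912)
libraries = gpd.read_file("libraries.shp").to_crs(epsg=26912)

# Distance from each tract centroid to the nearest library.
tracts["dist_library_m"] = tracts.geometry.centroid.apply(
    lambda pt: libraries.distance(pt).min()
)

# Compare proximity with a socioeconomic indicator via OLS.
X = sm.add_constant(tracts[["dist_library_m"]])
result = sm.OLS(tracts["median_income"], X, missing="drop").fit()
print(result.summary())
```

A statistically flat coefficient on the distance term would be consistent with the paper's finding that pure proximity is largely unrelated to socioeconomic status.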
Contributors Norbury, Adam Charles (Author) / Simon, Alan (Thesis director) / Simon, Phil (Committee member) / Department of Information Systems (Contributor) / Department of English (Contributor) / Department of Economics (Contributor) / Barrett, The Honors College (Contributor)
Created 2018-05
Description
Academic integrity policies coded specifically for journalism schools or departments are devised to foster a realistic, informative learning environment. Plagiarism and fabrication are two of the most egregious errors of judgment a journalist can commit, and journalism schools and departments address these errors through their academic integrity policies. Some schools take a zero-tolerance approach, often expelling a student after the first or second violation, while other schools take a tolerant approach, permitting a student at least three violations before suspension is considered. In a time when plagiarism and fabrication have never been easier to commit and never been easier to catch, students must be prepared to understand plagiarism and fabrication involving multimedia elements such as video, audio, and photos. In this project, journalism academic integrity codes were gathered from across the U.S. and assigned to zero-tolerance, semi-tolerant, or tolerant categories designed by the researcher, in order to determine which policies best prepare students for the real journalism world and to suggest how some policies could be improved.
Contributors Roney, Claire Marie (Author) / McGuire, Tim (Thesis director) / Russomanno, Joseph (Committee member) / W. P. Carey School of Business (Contributor) / Walter Cronkite School of Journalism and Mass Communication (Contributor) / Barrett, The Honors College (Contributor)
Created 2016-12
Description
Plagiarism is a major problem in a learning environment. In programming classes especially, plagiarism can be hard to detect because source code's appearance can easily be modified without changing its intent, through simple formatting changes or refactoring. A number of plagiarism detection tools attempt to encode knowledge about the programming languages they support in order to better detect obscured duplicates. Many such tools do not support a large number of languages because doing so requires too much code and therefore too much maintenance. It is also difficult to add support for new languages because each language differs vastly in syntax. Tools that are more extensible often achieve that extensibility by reducing the language features they encode, ending up closer to text-comparison tools than to structurally aware program analysis tools.

Kitsune attempts to remedy these issues by tying itself to Antlr, a pre-existing language recognition tool with over 200 currently supported languages. In addition, it provides an interface through which generic manipulations can be applied to the parse tree generated by Antlr. Because Kitsune relies on language-agnostic structure modifications, it can be adapted with minimal effort to provide plagiarism detection for new languages. Kitsune has been successfully evaluated on 10 of the languages in the Antlr grammar repository and could easily be extended to support all of the grammars currently developed for Antlr, or future grammars developed as new languages are written.
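The core idea of comparing programs by parse-tree structure rather than raw text can be sketched briefly. The Python below is a toy under stated assumptions, not Kitsune's implementation: it uses a hand-rolled Node type in place of Antlr's parse trees and a simple Jaccard similarity over structural fingerprints.

```python
# Toy illustration of structure-based comparison; not Kitsune's actual code.
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str                          # grammar rule name, e.g. "ifStatement"
    children: list["Node"] = field(default_factory=list)

def fingerprints(node: Node, acc: set[str] | None = None) -> set[str]:
    """Collect one structural fingerprint per subtree: the node's rule label
    plus the labels of its direct children. Renamed identifiers and
    reformatted whitespace never appear, so they cannot hide a duplicate."""
    if acc is None:
        acc = set()
    acc.add(f"{node.label}({','.join(c.label for c in node.children)})")
    for child in node.children:
        fingerprints(child, acc)
    return acc

def similarity(a: Node, b: Node) -> float:
    """Jaccard similarity of two trees' fingerprint sets."""
    fa, fb = fingerprints(a), fingerprints(b)
    return len(fa & fb) / len(fa | fb) if fa | fb else 1.0

# Two tiny trees that differ only in leaf identifiers compare as identical.
t1 = Node("assign", [Node("id"), Node("expr", [Node("id"), Node("id")])])
t2 = Node("assign", [Node("id"), Node("expr", [Node("id"), Node("id")])])
print(similarity(t1, t2))  # 1.0
```

Because the fingerprints are built from grammar rule labels rather than language-specific tokens, swapping in a different Antlr grammar would leave this comparison logic untouched, which is the extensibility property the abstract describes.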
Contributors Monroe, Zachary Lynn (Author) / Bansal, Ajay (Thesis advisor) / Lindquist, Timothy (Committee member) / Acuna, Ruben (Committee member) / Arizona State University (Publisher)
Created 2020
Description
There has been substantial development in the field of data transmission over the last two decades; one no longer has to wait long for a high-definition video to load. Data compression is one of the most important technologies behind this seamless data transmission experience, allowing more data to be stored or sent using less memory and fewer network resources. However, there appears to be a limit on the amount of compression that can be achieved with existing lossless data compression techniques, because they rely on the frequency of characters or sets of characters in the data. This thesis proposes a lossless data compression technique in which the data is compressed by representing it as a set of parameters that, when given to the corresponding mathematical equation, reproduce the original data without any loss. The equation used in the thesis is the sum of the first N terms of a geometric series, S_N = a(r^N - 1)/(r - 1). Various changes are made to this equation so that any given data can be compressed and decompressed. According to the proposed technique, the whole data is taken as a single decimal number and replaced with one of the terms of the equation; all the other terms are computed and stored as the compressed file. The performance of the developed technique is evaluated in terms of compression ratio, compression time, and decompression time, and these evaluation metrics are compared with other existing techniques in the same domain.
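To make the parameterization concrete, here is a hedged toy in Python: it treats the input as one large integer and describes it with a geometric-series sum (first term 1, ratio r) plus a remainder. This illustrates the idea of replacing data with equation parameters only; it is not the thesis's actual scheme and performs no real compression.

```python
# Toy illustration of representing data as geometric-series parameters.
# Not the thesis's actual scheme; it achieves no real compression.

def geo_sum(r: int, n: int) -> int:
    """Sum of the first n terms 1 + r + ... + r^(n-1) = (r^n - 1)/(r - 1)."""
    return (r ** n - 1) // (r - 1)

def encode(data: bytes, r: int = 3) -> tuple[int, int, int]:
    m = int.from_bytes(data, "big")     # the data as one large integer
    n = 0
    while geo_sum(r, n + 1) <= m:       # largest n with geo_sum(r, n) <= m
        n += 1
    return r, n, m - geo_sum(r, n)      # parameters that reproduce m exactly

def decode(r: int, n: int, remainder: int, length: int) -> bytes:
    m = geo_sum(r, n) + remainder
    return m.to_bytes(length, "big")

data = b"lossless"
r, n, rem = encode(data)
assert decode(r, n, rem, len(data)) == data
```

The round trip is exact, which is the lossless property the abstract requires; whether the stored parameters are smaller than the original data is precisely the question the thesis's modifications to the equation address.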
Contributors Grewal, Karandeep Singh (Author) / Gonzalez Sanchez, Javier (Thesis advisor) / Bansal, Ajay (Committee member) / Findler, Michael (Committee member) / Arizona State University (Publisher)
Created 2021