Matching Items (7)
Description
Twitter, the microblogging platform, has grown in prominence to the point that the topics that trend on the network are often the subject of the news and other traditional media. By predicting trends on Twitter, it could be possible to predict the next major topic of interest to the public. With this motivation, this paper develops a model for trends leveraging previous work with k-nearest-neighbors and dynamic time warping. The development of this model provides insight into the length and features of trends, and the model successfully generalizes to identify 74.3% of trends in the time period of interest. It also offers an understanding of why particular words trend on Twitter.
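The abstract names the model's two building blocks, k-nearest-neighbors and dynamic time warping, without giving details. A minimal sketch of that combination, with illustrative series and labels rather than the thesis's actual features, might look like this:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping steps.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def knn_predict(query, labeled_series, k=3):
    """Label a query series by majority vote of its k DTW-nearest neighbors."""
    ranked = sorted(labeled_series, key=lambda s: dtw_distance(query, s[0]))
    votes = [label for _, label in ranked[:k]]
    return max(set(votes), key=votes.count)
```

DTW tolerates shifts and stretches in time, which is why it pairs well with nearest-neighbor classification of trend time series.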
ContributorsMarshall, Grant A (Author) / Liu, Huan (Thesis director) / Morstatter, Fred (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created2015-05
Description
Bots tamper with social media networks by artificially inflating the popularity of certain topics. In this paper, we define what a bot is, we detail different motivations for bots, we describe previous work in bot detection and observation, and then we perform bot detection of our own. For our bot detection, we are interested in bots on Twitter that tweet Arabic extremist-like phrases. A testing dataset is collected using the honeypot method, and five different heuristics are measured for their effectiveness in detecting bots. The model underperformed, but we have laid the groundwork for a largely untapped focus in bot detection: the diffusion of extremist ideas through bots.
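The five heuristics themselves are not listed in this abstract. As an illustration only, the sketch below shows how per-account heuristics of this general kind can be combined into a single bot-likelihood score; the heuristic names and thresholds are invented, not the paper's:

```python
def bot_score(account, weights=None):
    """Combine simple per-account heuristics into a score in [0, 1].

    The heuristics here are hypothetical stand-ins, not the paper's five.
    """
    heuristics = {
        # Very high posting volume is suspicious; cap the signal at 1.0.
        "tweet_rate": min(account["tweets_per_day"] / 100.0, 1.0),
        # Following far more accounts than follow back is a common bot trait.
        "follower_ratio": 1.0 if account["followers"] < 0.1 * account["following"] else 0.0,
        # Never-customized profiles are weak evidence of automation.
        "default_profile": 1.0 if account["default_profile_image"] else 0.0,
    }
    weights = weights or {h: 1.0 for h in heuristics}
    total = sum(weights.values())
    return sum(weights[h] * v for h, v in heuristics.items()) / total
```

Measuring each heuristic's effectiveness, as the paper does, amounts to checking how well each component separates known bots from known humans on a labeled dataset.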
ContributorsKarlsrud, Mark C. (Author) / Liu, Huan (Thesis director) / Morstatter, Fred (Committee member) / Barrett, The Honors College (Contributor) / Computing and Informatics Program (Contributor) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created2015-05
Description
Social media users are inundated with information. This is especially true on Instagram, a social media service based on sharing photos, where missing important posts is a common issue for many users. By creating a recommendation system that learns each user's preferences and gives them a curated list of posts, the information overload can be mitigated to enhance the experience of Instagram users. This paper explores methods for creating such a recommendation system. The proposed method employs a learning model called "Factorization Machines," which combines the advantages of linear models and latent factor models. In this work I derived features from Instagram post data, including the image, social data about the post, and information about the user who created the post. I also collected user-post interaction data describing which users "liked" which posts, and this was used in models leveraging latent factors. The proposed model improves the rate of interesting content seen by the user by a factor of 2 to 12.
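Factorization Machines score a user-post pair with a bias, a linear term, and pairwise feature interactions factorized through latent vectors. A sketch of the standard second-order prediction rule (variable names are mine, not the paper's):

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Second-order factorization machine prediction.

    x:  feature vector, shape (n,)
    w0: global bias
    w:  linear weights, shape (n,)
    V:  latent factor matrix, shape (n, k)

    Uses the O(n*k) reformulation of the pairwise interaction term
    instead of the naive O(n^2) double sum.
    """
    linear = w0 + w @ x
    xv = x @ V                   # per-factor sums of v_{i,f} * x_i
    sq = (x ** 2) @ (V ** 2)     # per-factor sums of v_{i,f}^2 * x_i^2
    interactions = 0.5 * np.sum(xv ** 2 - sq)
    return linear + interactions
```

The reformulated interaction term is what makes FMs practical on the sparse user-post features described above: each latent factor is summed once over the nonzero features rather than over all feature pairs.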
ContributorsFakhri, Kian (Author) / Liu, Huan (Thesis director) / Morstatter, Fred (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2016-12
Description
Social media is used by people every day to discuss the nuances of their lives. Major League Baseball (MLB) is a popular sport in the United States, and as such has generated a great deal of activity on Twitter. As fantasy baseball continues to grow in popularity, so does the research into better algorithms for picking players. Most of the research in this area focuses on improving the prediction of a player's individual performance. However, the crowd-sourcing power afforded by social media may enable more informed predictions about players' performances. Most amateur gamblers choose players based on popularity and personal preference. While some of these trends (particularly the long-term ones) are captured by ranking systems, this research focused on predicting the daily spikes in popularity (and therefore price or draft order) by comparing the number of mentions a player received on Twitter against their previous mentions. In doing so, it demonstrated that improved fantasy baseball predictions can be made by leveraging social media data.
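A minimal version of the spike signal described above, comparing a player's current mention count against a trailing average, could be sketched like this; the window length and threshold are illustrative assumptions, not values from the thesis:

```python
def mention_spikes(daily_mentions, window=7, threshold=3.0):
    """Flag days where a player's mention count jumps above `threshold`
    times the trailing `window`-day average."""
    spikes = []
    for day in range(window, len(daily_mentions)):
        trailing = daily_mentions[day - window:day]
        baseline = sum(trailing) / window
        if baseline > 0 and daily_mentions[day] >= threshold * baseline:
            spikes.append(day)
    return spikes
```

A flagged day would then be treated as a likely popularity (and price or draft-order) spike for that player.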
ContributorsRuskin, Lewis John (Author) / Liu, Huan (Thesis director) / Montgomery, Douglas (Committee member) / Morstatter, Fred (Committee member) / Industrial, Systems (Contributor) / Barrett, The Honors College (Contributor)
Created2017-05
Description
Exabytes of data are created online every day. This deluge of data is no more apparent than it is on social media. Naturally, finding ways to leverage this unprecedented source of human information is an active area of research. Social media platforms have become laboratories for conducting experiments about people at scales thought unimaginable only a few years ago.

Researchers and practitioners use social media to extract actionable patterns such as where aid should be distributed in a crisis. However, the validity of these patterns relies on having a representative dataset. As this dissertation shows, the data collected from social media is seldom representative of the activity of the site itself, and less so of human activity. This means that the results of many studies are limited by the quality of data they collect.

The finding that social media data is biased motivates the main challenge addressed by this dissertation. I introduce three sets of methodologies to correct for bias. First, I design methods to deal with data collection bias: a methodology that detects bias within a social media dataset by comparing the collected data with other sources, and a crawling strategy that minimizes the amount of bias that will appear in the resulting dataset. Second, I introduce a methodology to identify bots and shills within a social media dataset. This directly addresses the concern that the users of a social media site are not representative. Applying these methodologies allows the population under study on a social media site to better match that of the real world. Finally, the dissertation discusses perceptual biases, explains how they affect analysis, and introduces computational approaches to mitigate them.
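The bias-finding methodology works by comparing a collected dataset against other sources, but the abstract does not name the statistic used. One common stand-in for quantifying how far a sample's topic distribution drifts from a reference stream is the Jensen-Shannon divergence; the hashtag distributions below are illustrative:

```python
import math

def js_divergence(p, q):
    """Jensen-Shannon divergence (base 2) between two discrete
    distributions given as {category: probability} dicts."""
    keys = set(p) | set(q)

    def kl(a, b):
        # Kullback-Leibler divergence, skipping zero-probability terms.
        return sum(a.get(k, 0) * math.log2(a.get(k, 0) / b[k])
                   for k in keys if a.get(k, 0) > 0)

    # Mixture distribution, guaranteed nonzero wherever p or q is nonzero.
    m = {k: 0.5 * (p.get(k, 0) + q.get(k, 0)) for k in keys}
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

A divergence of 0 means the sample matches the reference distribution exactly; larger values indicate collection bias worth investigating.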

The results of the dissertation allow for the discovery and removal of different levels of bias within a social media dataset. This has important implications for social media mining, namely that the behavioral patterns and insights extracted from social media will be more representative of the populations under study.
ContributorsMorstatter, Fred (Author) / Liu, Huan (Thesis advisor) / Kambhampati, Subbarao (Committee member) / Maciejewski, Ross (Committee member) / Carley, Kathleen M. (Committee member) / Arizona State University (Publisher)
Created2017
Description
Social media bot detection has been a signature challenge in online social networks in recent years. Many scholars agree that the bot detection problem has become an "arms race" between malicious actors, who seek to create bots that influence opinion on these networks, and the social media platforms, which seek to remove these accounts. Despite this acknowledged issue, bots remain present on social media networks, so it has become necessary to monitor different bots over time to identify changes in their activities or domain. Since monitoring individual accounts is not feasible, because the bots may get suspended or deleted, bots should be observed in smaller groups, based on their characteristics, as types. Yet most of the existing research on social media bot detection focuses on distinguishing bot accounts from human accounts and may ignore differences between individual bot accounts. Considering bot types may be the best path forward for researchers and social media companies alike, as it is in both of their interests to study these types separately. Up until this point, however, bot categorization has only been theorized or done manually. Thus, the goal of this research is to automate the process of grouping bots by their respective types. To accomplish this goal, the author creates an aggregated dataset, determines that the accounts within it are bots, and experimentally demonstrates that unsupervised machine learning can categorize those bots into types based on an existing typology. The ability to differentiate between types of bots automatically will allow social media experts to analyze bot activity from a new perspective and on a more granular level. This way, researchers can identify patterns in a given bot type's behavior over time and determine whether certain detection methods are more viable for that type.
ContributorsDavis, Matthew William (Author) / Liu, Huan (Thesis advisor) / Xue, Guoliang (Committee member) / Morstatter, Fred (Committee member) / Arizona State University (Publisher)
Created2019
Description

Twitter is a major social media platform in which users send and read messages (“tweets”) of up to 140 characters. In recent years this communication medium has been used by those affected by crises to organize demonstrations or find relief. Because traffic on this media platform is extremely heavy, with hundreds of millions of tweets sent every day, it is difficult to differentiate between times of turmoil and times of typical discussion. In this work we present a new approach to addressing this problem. We first assess several possible “thermostats” of activity on social media for their effectiveness in finding important time periods. We compare methods commonly found in the literature with a method from economics. By combining methods from computational social science with methods from economics, we introduce an approach that can effectively locate crisis events in the mountains of data generated on Twitter. We demonstrate the strength of this method by using it to locate the social events relating to the Occupy Wall Street movement protests at the end of 2011.
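The paper compares several "thermostats" of activity, including a method from economics, but does not reproduce them here. A simple stand-in in the same spirit is a rolling z-score over hourly tweet volume; the window and threshold below are illustrative assumptions, not the paper's parameters:

```python
import statistics

def zscore_events(counts, window=24, z=3.0):
    """Flag hours whose tweet volume exceeds the trailing-window mean
    by more than `z` standard deviations -- one simple 'thermostat'."""
    events = []
    for t in range(window, len(counts)):
        trailing = counts[t - window:t]
        mu = statistics.mean(trailing)
        sigma = statistics.pstdev(trailing)
        if sigma > 0 and (counts[t] - mu) / sigma > z:
            events.append(t)
    return events
```

Applied to a stream like the Occupy Wall Street tweets, flagged hours would mark candidate periods of turmoil for closer inspection.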

ContributorsKenett, Dror Y. (Author) / Morstatter, Fred (Author) / Stanley, H. Eugene (Author) / Liu, Huan (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created2014-07-30