Matching Items (3)
Description

The pervasive use of social media gives it a crucial role in helping the public receive reliable information. Meanwhile, the openness and timeliness of social networking sites also allow misinformation to be created and disseminated rapidly, making it increasingly difficult for online users to find accurate and trustworthy information. As recent incidents show, misinformation escalates quickly, can harm social media users, and can wreak havoc almost instantaneously. Unlike the settings studied in existing psychology and social science research on misinformation, social media platforms pose unprecedented challenges for misinformation detection. First, intentional spreaders of misinformation actively disguise themselves. Second, the content of misinformation may be manipulated to evade detection, while abundant contextual information may play a vital role in detecting it. Third, a detection method must be not only accurate but also early if misinformation is to be contained before it goes viral. Fourth, social media platforms serve as a fundamental data source for various disciplines, and research based on them may have been conducted in the presence of misinformation. To tackle these challenges, we focus on developing machine learning algorithms that are robust to adversarial manipulation and data scarcity.

The main objective of this dissertation is to provide a systematic study of misinformation detection in social media. To tackle the challenge of adversarial attacks, I propose adaptive detection algorithms that cope with misinformation spreaders' active manipulation of content and networks. To facilitate content-based approaches, I analyze the contextual data of misinformation and propose incorporating its specific contextual patterns into a principled detection framework. Given how rapidly misinformation grows, I study how it can be detected at an early stage; in particular, I focus on the challenge of data scarcity and propose a novel framework that allows historical data to be utilized for emerging incidents that are seemingly irrelevant. Because misinformation can go viral, applications that rely on social media data face the challenge of corrupted data. To this end, I present robust statistical relational learning and personalization algorithms that minimize the negative effects of misinformation.
Contributors: Wu, Liang (Author) / Liu, Huan (Thesis advisor) / Tong, Hanghang (Committee member) / Doupe, Adam (Committee member) / Davison, Brian D. (Committee member) / Arizona State University (Publisher)
Created: 2019
Description


In the past year, considerable misinformation about the COVID-19 pandemic has circulated on social media platforms. Faced with this pervasive issue, it is important to identify both the extent to which people are able to spot misinformation on social media and ways to improve their accuracy in doing so. The current study therefore investigates people's accuracy in spotting misinformation, the effectiveness of a game-based intervention, and the role of political affiliation in spotting misinformation. In this study, 235 participants played a misinformation game in which they evaluated COVID-19-related tweets and indicated whether or not each tweet contained misinformation. Misinformation accuracy was measured using game scores, which were based on correct identification of misinformation. Findings revealed that participants' beliefs about how accurately they spot misinformation about COVID-19 did not predict their actual accuracy. Participants' accuracy improved after playing the game, but Democrats were more likely to improve than Republicans.

Contributors: Kang, Rachael (Author) / Kwan, Virginia (Thesis director) / Corbin, William (Committee member) / Cohen, Adam (Committee member) / Bunker, Cameron (Committee member) / Department of Psychology (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

Misinformation in social media channels has become increasingly prevalent since the beginning of the COVID-19 pandemic, as countless myths and rumors have circulated over the internet. This misinformation has potentially lethal consequences, since many people make important health decisions based on what they read online, creating an urgent need to combat it. Although many Natural Language Processing (NLP) techniques have been used to identify misinformation in text, prompt-based methods remain under-studied for this task. This work explores prompt learning to classify COVID-19-related misinformation. To this end, I analyze the effectiveness of the proposed approach on four datasets. Experimental results show that prompt-based classification achieves on average ~13% and ~6% improvement compared to a single-task and a multi-task model, respectively. Moreover, analysis shows that prompt-based models can achieve competitive results compared to baselines in a few-shot learning scenario.
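To make the idea of prompt-based classification concrete, the sketch below shows the general pattern: a tweet is wrapped in a cloze-style template, and a verbalizer maps candidate fill-in tokens to class labels. The template, verbalizer words, and the stubbed mask scores are illustrative assumptions, not the thesis's actual configuration; in practice the scores would come from a masked language model scoring the `[MASK]` position.

```python
# Illustrative sketch of prompt-based misinformation classification.
# The template and verbalizer below are hypothetical examples, not the
# configuration used in the thesis.

TEMPLATE = "{tweet} This claim is [MASK]."

# Verbalizer: maps candidate tokens at the [MASK] position to class labels.
VERBALIZER = {"true": "real", "false": "misinformation"}


def build_prompt(tweet: str) -> str:
    """Wrap a tweet in the cloze template that a masked LM would complete."""
    return TEMPLATE.format(tweet=tweet)


def classify(mask_scores: dict) -> str:
    """Pick the label whose verbalizer token scored highest at [MASK].

    mask_scores: token -> probability at the [MASK] position, as a masked
    language model (e.g. a BERT-style model) would return. Stubbed here.
    """
    best_token = max(VERBALIZER, key=lambda tok: mask_scores.get(tok, 0.0))
    return VERBALIZER[best_token]


# Usage with stubbed model scores standing in for a real masked LM:
prompt = build_prompt("Garlic cures COVID-19.")
label = classify({"true": 0.1, "false": 0.9})
```

In a few-shot setting, this framing lets the pretrained language model's existing knowledge of words like "true" and "false" do much of the work, which is one common explanation for why prompt-based methods can remain competitive with little labeled data.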
Contributors: Brown, Clinton (Author) / Baral, Chitta (Thesis director) / Walker, Shawn (Committee member) / Barrett, The Honors College (Contributor) / School of International Letters and Cultures (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2022-05