Matching Items (4)
Description
The pervasive use of social media gives it a crucial role in helping the public access reliable information. At the same time, the openness and timeliness of social networking sites allow misinformation to be created and disseminated rapidly, making it increasingly difficult for online users to find accurate and trustworthy information. As recent incidents have shown, misinformation escalates quickly, can harm social media users, and can wreak havoc almost instantaneously. Unlike much existing research on misinformation in psychology and the social sciences, detection on social media platforms faces unprecedented challenges. First, intentional spreaders of misinformation actively disguise themselves. Second, the content of misinformation may be manipulated to evade detection, while abundant contextual information may play a vital role in detecting it. Third, a detection method must be not only accurate but also early in order to keep misinformation from going viral. Fourth, social media platforms serve as a fundamental data source for various disciplines, and such research may have been conducted in the presence of misinformation. To tackle these challenges, I focus on developing machine learning algorithms that are robust to adversarial manipulation and data scarcity.

The main objective of this dissertation is to provide a systematic study of misinformation detection in social media. To address adversarial attacks, I propose adaptive detection algorithms that cope with misinformation spreaders' active manipulation of content and networks. To facilitate content-based approaches, I analyze the contextual data surrounding misinformation and propose to incorporate its specific contextual patterns into a principled detection framework. Given how rapidly misinformation spreads, I study how it can be detected at an early stage; in particular, I focus on the challenge of data scarcity and propose a novel framework that enables historical data to be utilized for emerging incidents that are seemingly irrelevant. Because applications that rely on social media data face corrupted data once misinformation goes viral, I also present robust statistical relational learning and personalization algorithms that minimize its negative effects.
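The abstract describes the detection frameworks only at a high level. As an illustration of the general idea of combining content with contextual signals, the following is a minimal sketch of a classifier built on lexical (TF-IDF) features plus assumed contextual features such as account age and retweet count, trained on invented toy data. It is a hypothetical example, not the dissertation's actual method.

```python
# Illustrative sketch only: combining content and context for misinformation
# detection. Toy data and contextual features are hypothetical assumptions.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy posts with labels (1 = misinformation, 0 = reliable).
posts = [
    "miracle cure eliminates virus overnight, doctors shocked",
    "health agency releases updated vaccination guidance",
    "secret document proves the election results were fabricated",
    "county officials certify final vote tallies after audit",
]
labels = np.array([1, 0, 1, 0])

# Assumed contextual signals per post: [account age in days, retweet count].
context = np.array([
    [12, 5400],
    [2100, 300],
    [30, 8200],
    [1500, 150],
], dtype=float)

# Lexical features from the post text.
vectorizer = TfidfVectorizer()
text_features = vectorizer.fit_transform(posts)

# Stack lexical and contextual features into one design matrix.
# (Feature scaling is omitted here for brevity.)
features = hstack([text_features, csr_matrix(context)])

# A simple linear classifier stands in for a principled detection framework.
clf = LogisticRegression(max_iter=1000).fit(features, labels)
print(clf.predict(features))
```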
ContributorsWu, Liang (Author) / Liu, Huan (Thesis advisor) / Tong, Hanghang (Committee member) / Doupe, Adam (Committee member) / Davison, Brian D. (Committee member) / Arizona State University (Publisher)
Created2019
Description
Misinformation, defined as incorrect or misleading information, has existed throughout human history. However, the rise of technology and the widespread use of social media have allowed misinformation to evolve and gain more traction. This study examines health and political misinformation within the contexts of the COVID-19 pandemic and the 2020 U.S. Presidential Election. Using samples of misinformation from the 45th president of the United States, I analyzed the engagement that this misinformation received on the social media platform X, formerly known as Twitter, and examined how related Google search query trends changed over time in response. I then categorized the data into misleading statistics, misrepresentations of opinions as facts, and completely false content, and finally examined the real-world consequences that followed the spread of such misinformation. The findings of this case study showed that misinformation received significantly more attention than other social media posts, as evidenced by increased Google searches related to the topics and higher numbers of likes and retweets on misinformative tweets during the specified periods. Furthermore, the former president employed all three types of misinformation, with misleading statistics most prevalent in the health misinformation sample and misrepresentations of opinions as facts most prevalent in the political misinformation sample. The repercussions of this misinformation included individuals ingesting unsafe products, decreased trust in the electoral process, and a violent insurrection at the U.S. Capitol. Despite existing research in this field, much remains to be uncovered about the vast amount of misinformation circulating on the Internet.
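The engagement comparison in this abstract is reported only qualitatively. The sketch below shows one way such a comparison could be run, using a one-sided Mann-Whitney U test on retweet counts; the numbers are invented for illustration and are not the study's data or its actual analysis.

```python
# Illustrative sketch only: testing whether misinformation posts tend to
# receive more retweets than a baseline sample. Counts below are hypothetical.
from scipy.stats import mannwhitneyu

misinfo_retweets = [18200, 25400, 9700, 31000, 14600]
baseline_retweets = [2100, 3400, 1800, 2900, 4100]

# One-sided test: do misinformation posts receive greater engagement?
stat, p_value = mannwhitneyu(misinfo_retweets, baseline_retweets,
                             alternative="greater")
print(f"U = {stat:.1f}, p = {p_value:.4f}")
```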
ContributorsShah, Sona (Author) / Boghrati, Reihane (Thesis director) / Simeone, Michael (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Information Systems (Contributor)
Created2023-12