Matching Items (4)
Description
The pervasive use of social media gives it a crucial role in helping the public access reliable information. Meanwhile, the openness and timeliness of social networking sites also allow for the rapid creation and dissemination of misinformation. It becomes increasingly difficult for online users to find accurate and trustworthy information. As witnessed in recent incidents, misinformation escalates quickly, can impact social media users with undesirable consequences, and can wreak havoc instantaneously. Unlike the settings studied in existing research on misinformation in psychology and the social sciences, social media platforms pose unprecedented challenges for misinformation detection. First, intentional spreaders of misinformation actively disguise themselves. Second, the content of misinformation may be manipulated to avoid detection, while abundant contextual information may play a vital role in detecting it. Third, not only accuracy but also earliness of a detection method is important in preventing misinformation from going viral. Fourth, social media platforms serve as a fundamental data source for various disciplines, and research in those disciplines may have been conducted in the presence of misinformation. To tackle these challenges, we focus on developing machine learning algorithms that are robust to adversarial manipulation and data scarcity.

The main objective of this dissertation is to provide a systematic study of misinformation detection in social media. To tackle the challenge of adversarial attacks, I propose adaptive detection algorithms that deal with the active manipulations of misinformation spreaders via content and networks. To facilitate content-based approaches, I analyze the contextual data of misinformation and propose to incorporate the specific contextual patterns of misinformation into a principled detection framework. Considering its rapidly growing nature, I study how misinformation can be detected at an early stage. In particular, I focus on the challenge of data scarcity and propose a novel framework that enables historical data to be utilized for emerging incidents that are seemingly irrelevant. With misinformation going viral, applications that rely on social media data face the challenge of corrupted data. To this end, I present robust statistical relational learning and personalization algorithms to minimize the negative effects of misinformation.
Contributors: Wu, Liang (Author) / Liu, Huan (Thesis advisor) / Tong, Hanghang (Committee member) / Doupe, Adam (Committee member) / Davison, Brian D. (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Media influences the way people understand the world around them, and today's digital media environment is saturated with information. Online media consumers are experiencing an information overload, and many find it difficult to determine which messages to trust. Media consumers between the ages of 18 and 34 are increasingly turning to social media, especially Facebook, for news and information. However, the nature of information exchange on these networks makes these users prone to seeing and sharing misleading, inaccurate or unverified information. This project is an examination of how misinformation spreads on social media platforms, and how users can utilize media literacy techniques to surround themselves with trustworthy information on social media, as well as develop skills to determine whether information is credible. By examining the motivations behind sharing information on social media, and the ways in which Millennials interact with misinformation on these platforms, this study aims to help users combat the spread of misleading information. This project determines techniques and resources that media consumers can use to turn their social media networks into healthy, trustworthy information environments. View the online component of this project at http://lindsaytaylorrobin.wix.com/info-overload
Contributors: Robinson, Lindsay T. (Author) / Gillmor, Dan (Thesis director) / Roschke, Kristy (Committee member) / Walter Cronkite School of Journalism and Mass Communication (Contributor) / Barrett, The Honors College (Contributor)
Created: 2015-12
Description
The unprecedented amount and variety of sources of information during the COVID-19 pandemic resulted in an indiscriminate level of misinformation that was confusing and compromised healthcare access and delivery. The World Health Organization (WHO) called this an ‘infodemic’, and conspiracy theories and fake news about COVID-19 plagued public health efforts to contain the pandemic. National and international public health priorities expanded to counter misinformation. As a multi-disciplinary study encompassing expertise from public health, informatics, and communication, this research focused on eliciting strategies to better understand and combat misinformation on COVID-19. The study hypotheses are that 1) factors influencing vaccine acceptance, such as socio-demographic factors, COVID-19 knowledge, trust in institutions, and media-related factors, could be leveraged for public health education and intervention; and 2) individuals with a high level of knowledge regarding COVID-19 prevention and control have unique behaviors and practices, such as nuanced media literacy and validation skills, that could be promoted to improve vaccine acceptance and preventative health behaviors. In this biphasic study, an initial survey of 1,498 individuals sampled from Amazon Mechanical Turk (MTurk) assessed socio-demographic factors, an 18-item test of COVID-19 knowledge, trust in healthcare stakeholders, and measures of media literacy and consumption. Subsequently, using the Positive Deviance Framework, a diverse subset of 25 individuals with high COVID-19 knowledge scores were interviewed to identify the information and media practices that helped these positive deviants avoid COVID-19 misinformation. Access to primary care, higher educational attainment, and living in urban communities were positive socio-demographic predictors of COVID-19 vaccine acceptance, emphasizing the need to invest in education and rural health.
High COVID-19 knowledge and trust in government and health providers were also critical factors, associated with a higher level of trust in science and in credible information sources such as the Centers for Disease Control and Prevention (CDC) and health experts. Positive deviants practiced media literacy skills that emphasized checking sources for scientific basis as well as hidden bias, cross-checking information across multiple sources, and verifying health information with scientific experts. These identified information validation and confirmation practices may be useful in educating the public and designing strategies to better protect communities against harmful health misinformation.
Contributors: Sivanandam, Shalini (Author) / Doebbeling, Bradley (Thesis advisor) / Koskan, Alexis (Committee member) / Roschke, Kristy (Committee member) / Chung, Yunro (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Why, how, and to what effect do states use disinformation in their foreign policies? Inductive accounts variously address those questions, but International Relations has yet to offer a theoretical account. I propose Putnam’s two-level game (1988) as a candidate theory. A rationalist approach that jettisons the unitary actor assumption, the model accounts for previous accounts’ observations and suggests their interrelation and four overarching objectives. The model also generates novel implications about disinformation in foreign policy, two of which I test via separate survey experiments. The primary implication is that states can use disinformation to encourage polarization, which in turn can reverberate into commitment problems. A survey experiment tests the first link in that chain, arguing that disinformation’s effects could be underestimated due to a focus on belief outcomes, potential selection bias in active-exposure studies, and probable pre-treatment effects. It hypothesizes that passive exposure to novel political dis/misinformation has ripple effects on trust, affective polarization, and participation-linked emotions even among those who disbelieve it. It thus tests both the implication that disinformation can encourage polarization and the implication that disinformation can be used to impact multiple potential outcomes at once. The second empirical paper tests the latter links in the disinformation-commitment problem chain. Building on a study that found U.S. polarization decreases U.K. ally confidence (Myrick 2022), it argues that polarization uniquely increases the chances of voluntary defection, not only because of government changeover risk but also because of weakened leader accountability. It employs a causal mediation analysis on survey experiment data to test whether a potential partner’s polarization increases their perceived unreliability and in turn decreases public cooperation preference. The commitment problem implication receives mixed support.
The first experiment finds no impact of partisan mis/disinformation on affective polarization, though that may be due to floor effects. The second experiment finds that polarization modestly increases perceived defection risk, but this increase is not necessarily strong enough to change public cooperation preference. Beyond those findings, the first experiment also uncovers that partisan mis/disinformation may indeed have sociopolitical impacts even on those who disbelieve it, consistent with the multiple-outcomes implication.
Contributors: Cantrell, Michal (Author) / Peterson, Timothy (Thesis advisor) / Neuner, Fabian (Thesis advisor) / Kubiak, Jeffrey (Committee member) / Arizona State University (Publisher)
Created: 2024