Among the bases of social power, information control, and the way that control is used, has expanded most dangerously over the last few decades. Misinformation, and its intentional spread, known as disinformation, has become a commonplace tool among various bodies of power seeking either to expand their own influence or to diminish that of their opponents. This work explores in depth the methods of disinformation employed today in politics, the commercial marketplace, and the media, in order to better contextualize and describe the problems that disinformation and its use pose in the world today.
Disinformation has long been a tactic used by the Russian government to achieve its goals. Today, Vladimir Putin aims to achieve several things: weaken the United States' standing on the world stage, relieve Western sanctions on himself and his inner circle, and reassert dominant influence over Russia's near abroad (the Baltics, Ukraine, etc.). This research analyzed disinformation in English, Spanish, and Russian, noting the dominant narratives and the geopolitical goals Russia hoped to achieve by destabilizing democracy in each country or region.
This paper examines the issue of Russian disinformation in Estonia and how the country has built resilience against this threat. Drawing upon existing literature and a series of interviews conducted with Estonians from a variety of professional backgrounds, this work explores Estonia's whole-of-society approach to resilience and examines how it incorporates national security strategy, inter-institutional cooperation, and media literacy education. Ultimately, this paper argues that Estonia's efforts have been largely successful in strengthening its society against Russian disinformation, and it offers key takeaways for other countries such as the United States.
Social media data opens the door to interdisciplinary research and makes it possible to study large-scale human behaviors that could not otherwise be observed. For example, user engagement with information such as news articles (posting about, commenting on, or recommending the news on social media) contains abundant, rich information. Because social media data is big, incomplete, noisy, and unstructured, yet rich in social relations, relying solely on user engagements is sensitive to noisy user feedback. To alleviate the problem of limited labeled data, it is important to combine content with this new but weak type of information as a supervision signal, i.e., weak social supervision, to advance fake news detection.
The goal of this dissertation is to understand disinformation by proposing and exploiting weak social supervision for learning with little labeled data, and to detect disinformation effectively through innovative research and novel computational methods. In particular, I investigate learning with weak social supervision for understanding disinformation through the following computational tasks: bringing in heterogeneous social context as auxiliary information for effective fake news detection; discovering explanations of fake news from social media for explainable fake news detection; modeling multiple sources of weak social supervision for early fake news detection; and transferring knowledge across domains with adversarial machine learning for cross-domain fake news detection. The findings of the dissertation significantly expand the boundaries of disinformation research and establish a novel paradigm of learning with weak social supervision that has important implications for broad applications in social media.
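The core idea described above, deriving noisy "weak" labels for unlabeled news items from social engagement signals and combining them with a small gold-labeled set, can be sketched as follows. This is a minimal illustration, not the dissertation's actual method: the credibility scores, threshold, and helper names are all hypothetical.

```python
# Hypothetical sketch of weak social supervision: news items shared
# mostly by low-credibility users receive a noisy "fake" label, which
# supplements a small set of gold labels for training. All thresholds
# and field names are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class NewsItem:
    text: str
    # Credibility scores (0.0-1.0) of users who shared this item.
    sharer_credibility: List[float] = field(default_factory=list)
    label: Optional[int] = None  # 1 = fake, 0 = real, None = unlabeled

def weak_label(item: NewsItem, threshold: float = 0.5) -> int:
    """Weakly label an item: if the average credibility of its sharers
    falls below the threshold, mark it fake (1); otherwise real (0)."""
    avg = sum(item.sharer_credibility) / len(item.sharer_credibility)
    return 1 if avg < threshold else 0

def build_training_set(items: List[NewsItem]) -> List[Tuple[str, int]]:
    """Prefer gold labels where available; fall back to weak labels."""
    return [(it.text, it.label if it.label is not None else weak_label(it))
            for it in items]

items = [
    NewsItem("Verified report", [0.9, 0.8], label=0),   # gold-labeled
    NewsItem("Unverified claim", [0.2, 0.3, 0.1]),      # weakly labeled
]
print(build_training_set(items))
```

In practice the weak labels would feed a downstream classifier and be treated as noisy supervision (e.g., down-weighted relative to gold labels) rather than trusted outright.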
This thesis examines recent and historical examples of mis/disinformation and finds that many psychological factors contribute to why people have been fooled by deceptive media throughout history; in modern times, that deception is amplified by social media, platforms designed to prioritize profit and user engagement over content moderation. The thesis then proposes a process flow for an app that teaches any user how to evaluate news sources.