Matching Items (3)
Description
The pervasive use of social media gives it a crucial role in helping the public find reliable information. Meanwhile, the openness and timeliness of social networking sites also allow for the rapid creation and dissemination of misinformation, making it increasingly difficult for online users to find accurate and trustworthy information. As recent incidents demonstrate, misinformation escalates quickly, can harm social media users with undesirable consequences, and can wreak havoc almost instantaneously. Unlike existing research on misinformation in psychology and the social sciences, social media platforms pose unprecedented challenges for misinformation detection. First, intentional spreaders of misinformation actively disguise themselves. Second, the content of misinformation may be manipulated to avoid detection, while abundant contextual information may play a vital role in detecting it. Third, not only the accuracy but also the earliness of a detection method matters in containing misinformation before it goes viral. Fourth, social media platforms serve as a fundamental data source for various disciplines, and that research may have been conducted in the presence of misinformation. To tackle these challenges, we focus on developing machine learning algorithms that are robust to adversarial manipulation and data scarcity.

The main objective of this dissertation is to provide a systematic study of misinformation detection in social media. To tackle the challenge of adversarial attacks, I propose adaptive detection algorithms that cope with misinformation spreaders' active manipulation of content and networks. To facilitate content-based approaches, I analyze the contextual data of misinformation and propose to incorporate its specific contextual patterns into a principled detection framework. Considering the rapid growth of misinformation, I study how it can be detected at an early stage. In particular, I focus on the challenge of data scarcity and propose a novel framework that enables historical data to be utilized for emerging incidents that are seemingly irrelevant. When misinformation goes viral, applications that rely on social media data face the challenge of corrupted data. To this end, I present robust statistical relational learning and personalization algorithms that minimize the negative effects of misinformation.
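To make the content-based detection idea concrete, the following is a highly simplified sketch of a bag-of-words text classifier (Naive Bayes with add-one smoothing). It is purely illustrative and is not the dissertation's actual method; the training examples, labels, and function names are all invented for demonstration.

```python
import math
from collections import Counter, defaultdict

def train(docs):
    """docs: list of (text, label) pairs. Returns per-label word counts and label counts."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in docs:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def predict(text, word_counts, label_counts):
    """Pick the label maximizing log prior + log likelihood with add-one smoothing."""
    vocab = {w for counts in word_counts.values() for w in counts}
    scores = {}
    for label, n in label_counts.items():
        total = sum(word_counts[label].values())
        score = math.log(n / sum(label_counts.values()))  # log prior
        for w in text.lower().split():
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

# Toy training data (invented for illustration only).
docs = [
    ("miracle cure doctors hate this trick", "misinformation"),
    ("shocking secret they refuse to reveal", "misinformation"),
    ("city council approves new budget plan", "reliable"),
    ("study published in peer reviewed journal", "reliable"),
]
wc, lc = train(docs)
print(predict("doctors reveal shocking miracle trick", wc, lc))  # → misinformation
```

Real detection systems face exactly the adversarial issues described above: a classifier keyed to surface wording is easy to evade by rephrasing, which is why the dissertation emphasizes contextual signals and robustness to manipulation rather than content features alone.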
ContributorsWu, Liang (Author) / Liu, Huan (Thesis advisor) / Tong, Hanghang (Committee member) / Doupe, Adam (Committee member) / Davison, Brian D. (Committee member) / Arizona State University (Publisher)
Created2019
Description

Americans today face an age of information overload. With the evolution of Media 3.0 (the internet) and the rise of Media 3.5 (social media), relatively new communication technologies present pressing challenges for the First Amendment in American society. Twentieth-century law defined freedom of expression, but in an information-limited world. By contrast, the twenty-first century is seeing the emergence of a world overloaded with information, largely shaped by an "unintentional press": social media. Americans today rely on a small concentration of private technology powerhouses that exercise both economic and social influence over American society, raising questions about censorship, access, and misinformation. While the First Amendment protects speech from government censorship only, First Amendment ideology is largely ingrained across American culture, including on social media. Technological advances arguably have made entry into the marketplace of ideas, a fundamental First Amendment doctrine, more accessible, but also more problematic for the average American, increasing his or her potential exposure to misinformation.

This thesis uses political and judicial frameworks to evaluate modern misinformation trends, social media platforms, and current efforts against misinformation, set against the background of two misinformation accelerants in 2020: the COVID-19 pandemic and the U.S. presidential election. Throughout history, times of hardship and intense fear have helped shape First Amendment jurisprudence. Thus, this thesis examines how fear can intensify the spread of misinformation and influence free-speech values. Extensive research was conducted to provide the historical context behind relevant modern literature. The thesis concludes with three solutions to misinformation that are supported by critical American free-speech theory.

ContributorsCochrane, Kylie Marie (Author) / Russomanno, Joseph (Thesis director) / Roschke, Kristy (Committee member) / School of Public Affairs (Contributor) / Walter Cronkite School of Journalism and Mass Comm (Contributor) / Watts College of Public Service & Community Solut (Contributor) / Barrett, The Honors College (Contributor)
Created2021-05
Description
America has been widely considered a great democratic experiment, a characterization attributed to Thomas Jefferson. An experiment can be designed to use trial-and-error methods to reach a certain outcome. While not a conscious effort, the United States has undergone a trial-and-error process in developing legislation that restricts dangerous misinformation without violating the speech and press clauses of the First Amendment. In several of his personal writings and official speeches, Jefferson advised against additional government intervention in filtering true and false information published by the press or distributed by citizens. His argument is a guiding theme throughout this thesis, which explores that experimental process and its relation to contemporary efforts to address and prevent future phenomena like the fake news outbreak of 2016.
This thesis examines examples of laws designed to control misinformation, past and present, and then uses those examples to provide context for arguments both in favor of and opposed to new misinformation laws. Extensive archival research was conducted to ensure accurate historical reflection in presenting those examples, supplemented by the application of relevant literature. The possible effects of past laws, and of potential new legislation, on the electorate and the practices of the press are also discussed to provide further context for, and support of, the conclusions reached. Those conclusions include that additional regulation is necessary to discourage the creation and distribution of fake news and misinformation in order to protect the public from the violence or imminent unlawful action they may cause.
Created2019-05