Matching Items (3)
Description
Media influences the way people understand the world around them, and today's digital media environment is saturated with information. Online media consumers are experiencing information overload, and many find it difficult to determine which messages to trust. Media consumers between the ages of 18 and 34 are increasingly turning to social media, especially Facebook, for news and information. However, the nature of information exchange on these networks makes these users prone to seeing and sharing misleading, inaccurate, or unverified information. This project examines how misinformation spreads on social media platforms and how users can apply media literacy techniques to surround themselves with trustworthy information on social media, as well as develop skills to determine whether information is credible. By examining the motivations behind sharing information on social media, and the ways in which Millennials interact with misinformation on these platforms, this study aims to help users combat the spread of misleading information. This project identifies techniques and resources that media consumers can use to turn their social media networks into healthy, trustworthy information environments. View the online component of this project at http://lindsaytaylorrobin.wix.com/info-overload
Contributors: Robinson, Lindsay T (Author) / Gillmor, Dan (Thesis director) / Roschke, Kristy (Committee member) / Walter Cronkite School of Journalism and Mass Communication (Contributor) / Barrett, The Honors College (Contributor)
Created: 2015-12
Description
In the past year, considerable misinformation about the COVID-19 pandemic has circulated on social media platforms. Faced with this pervasive issue, it is important to identify the extent to which people are able to spot misinformation on social media and ways to improve their accuracy in doing so. Therefore, the current study investigates people's accuracy in spotting misinformation, the effectiveness of a game-based intervention, and the role of political affiliation in spotting misinformation. In this study, 235 participants played a misinformation game in which they evaluated COVID-19-related tweets and indicated whether they thought each tweet contained misinformation. Misinformation accuracy was measured using game scores, which were based on the correct identification of misinformation. Findings revealed that participants' beliefs about how accurate they are at spotting misinformation about COVID-19 did not predict their actual accuracy. Participants' accuracy improved after playing the game, but Democrats were more likely to improve than Republicans.

Contributors: Kang, Rachael (Author) / Kwan, Virginia (Thesis director) / Corbin, William (Committee member) / Cohen, Adam (Committee member) / Bunker, Cameron (Committee member) / Department of Psychology (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description
Misinformation in social media channels has become more prevalent since the beginning of the COVID-19 pandemic, as countless myths and rumors have circulated over the internet. This misinformation has potentially lethal consequences, as many people make important health decisions based on what they read online, creating an urgent need to combat it. Although many Natural Language Processing (NLP) techniques have been used to identify misinformation in text, prompt-based methods are under-studied for this task. This work explores prompt learning to classify COVID-19-related misinformation. To this end, I analyze the effectiveness of the proposed approach on four datasets. Experimental results show that prompt-based classification achieves on average ~13% and ~6% improvements over single-task and multi-task models, respectively. Moreover, analysis shows that prompt-based models can achieve competitive results compared to baselines in a few-shot learning scenario.
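The abstract above does not detail its implementation, but the general idea of prompt-based classification can be illustrated with a minimal sketch: a cloze-style template turns classification into a fill-in-the-blank task, and a "verbalizer" maps the language model's predicted label words back to class labels. All names below (the template, the verbalizer words, the hand-supplied scores standing in for a masked language model's output) are hypothetical, not taken from the thesis.

```python
# Hypothetical sketch of prompt-based (cloze-style) classification.
# A real system would query a masked language model for the [MASK] token;
# here the word scores are supplied by hand for illustration.

TEMPLATE = "{tweet} This claim is [MASK]."
VERBALIZER = {"reliable": "not_misinformation", "false": "misinformation"}

def build_prompt(tweet: str) -> str:
    """Wrap a tweet in the cloze template the model would complete."""
    return TEMPLATE.format(tweet=tweet)

def classify(mask_word_scores: dict) -> str:
    """Pick the class whose verbalizer word the model scored highest.

    `mask_word_scores` stands in for the masked LM's probabilities
    over the candidate label words.
    """
    best_word = max(mask_word_scores, key=mask_word_scores.get)
    return VERBALIZER[best_word]

prompt = build_prompt("Drinking bleach cures COVID-19.")
label = classify({"reliable": 0.08, "false": 0.92})
```

The appeal of this framing, as the abstract suggests, is that it reuses the language model's pretraining objective directly, which is why prompt-based methods tend to hold up well in few-shot settings.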
Contributors: Brown, Clinton (Author) / Baral, Chitta (Thesis director) / Walker, Shawn (Committee member) / Barrett, The Honors College (Contributor) / School of International Letters and Cultures (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2022-05