- Member of: Barrett, The Honors College Thesis/Creative Project Collection
- Member of: ASU Scholarship Showcase
I studied how hostile and benevolent language influence readers' ability to revise their misconceptions. Participants who read tweets containing hostile language were less likely to revise their misconceptions than those exposed to benevolent language, a finding that underscores the value of adopting a neutral or benevolent tone to increase the likelihood of successful revision. This may be due to a shift of memory resources away from the less engaging tweet content toward the more engaging, evocative hostile language.
Misconceptions about mental health can have negative effects on therapy, education, and social interactions, and they can form when misinformation spreads online from a variety of sources. The current study manipulated social media users' justification for knowing and examined its effects on participants' perceived credibility and knowledge revision. Justification for evidence was manipulated within subjects across three types: personal experience, professional experience, or no justification. Two dependent variables captured these effects: perceived credibility and knowledge revision. MTurk participants (n = 111) completed pretest assessments of mental health and general science knowledge. They then read 11 experimenter-derived Twitter threads, each containing a misconception, two tweets with a refutation, and a statement of justification for the refutation. After each thread, participants rated the perceived credibility of the refutation texts. Participants later completed a posttest measuring knowledge revision, as well as a series of questions measuring epistemic belief systems. We hypothesized that participants would be more likely to revise their misconceptions when the justification was personal experience than when it was professional experience or no justification was given. The findings did not support this hypothesis: the highest perceived credibility ratings came from professional experience, while knowledge revision occurred in all conditions.
Health systems are heavily promoting patient portals. However, limited health literacy (HL) can restrict online communication via secure messaging (SM): patients' literacy skills must be sufficient to convey and comprehend content, while clinicians must encourage and elicit communication from patients and match patients' literacy level. This paper describes the Employing Computational Linguistics to Improve Patient-Provider Secure Email (ECLIPPSE) study, an interdisciplinary effort bringing together scientists in communication, computational linguistics, and health services to employ computational linguistic methods to (1) create a novel Linguistic Complexity Profile (LCP) to characterize communications of patients and clinicians and demonstrate its validity and (2) examine whether providers accommodate the communication needs of patients with limited HL by tailoring their SM responses. We will study >5 million SMs generated by >150,000 ethnically diverse type 2 diabetes patients and >9000 clinicians from two settings: an integrated delivery system and a public (safety net) system. Finally, we will create an LCP-based automated aid that delivers real-time feedback to clinicians to reduce the linguistic complexity of their SMs. This research will support health systems' journeys to become health literate healthcare organizations and reduce HL-related disparities in diabetes care.
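The abstract does not detail how the Linguistic Complexity Profile is constructed, so the following is only a rough illustration of the general idea: scoring a secure message on surface features commonly used in readability research, such as average sentence length and average word length. The function name and feature set here are illustrative assumptions, not the ECLIPPSE study's actual LCP.

```python
import re

def complexity_features(message: str) -> dict:
    """Illustrative surface-level complexity features for one message.

    Computes average sentence length (in words) and average word length
    (in characters) -- simple proxies for linguistic complexity, not the
    validated LCP described in the ECLIPPSE study.
    """
    sentences = [s for s in re.split(r"[.!?]+", message) if s.strip()]
    words = re.findall(r"[A-Za-z']+", message)
    if not words:
        return {"avg_sentence_len": 0.0, "avg_word_len": 0.0}
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "avg_word_len": sum(len(w) for w in words) / len(words),
    }

# A plainly worded clinician message vs. a more complex phrasing of similar advice.
simple = complexity_features("Take your pill at night. Call me if you feel sick.")
complex_msg = complexity_features(
    "Discontinue the medication immediately should you experience adverse symptomatology."
)
```

In a real-time feedback aid like the one the study proposes, scores of this kind could be compared against a threshold to flag messages that may exceed a patient's literacy level; the threshold and features would come from the validated LCP rather than these toy proxies.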