The current study investigated the role of implicit and explicit social-cognitive biases in jurors’ conceptualizations of insanity, and the influence of those biases on juror verdict decisions. It was hypothesized that jurors’ attitudes toward people with mental illness and toward the insanity defense would influence their final verdict decisions. Two hundred and two participants completed an online survey that included a trial vignette incorporating an insanity defense (adapted from Maeder et al., 2016), the Insanity Defense Attitude Scale (Skeem, Louden, & Evans, 2004), the Community Attitudes Towards the Mentally Ill Scale (Taylor & Dear, 1981), and an Implicit Association Test (Greenwald et al., 1998). While implicit associations between mental illness and dangerousness were significantly related to mock jurors’ verdicts, they were no longer significant once explicit insanity defense attitudes were added to a more complex model including all measured attitudes and biases. Insanity defense attitudes were significantly related to jurors’ verdicts over and above attitudes about the mentally ill and implicit biases concerning the mentally ill. The potentially biasing impact of jurors’ insanity defense attitudes, and of implicit associations about the mentally ill, on legal judgments is discussed.
Psychological assessments contain important diagnostic information and are central to therapeutic service delivery. Therapists' personal biases, invalid cognitive schemas, and emotional reactions can be expressed in the language of the assessments they compose, casting clients in an unfavorable light. Logically, the opinions of subsequent therapists may then be influenced by reading these assessments, resulting in negative attitudes toward clients, inaccurate diagnoses, adverse experiences for clients, and poor therapeutic outcomes. However, little current research addresses this issue. This study analyzed the degree to which strength-based, deficit-based, and neutral language used in psychological assessments influenced the opinions of counselor trainees (N = 116). It was hypothesized that participants assigned to each type of assessment would describe the client using adjectives that closely conformed to the language used in the assessment they received. The hypothesis was confirmed (p < .001), indicating significant mean differences between all three groups. Limitations and implications of the study were identified and suggestions for further research were discussed.
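A comparison of this kind, mean differences across three assessment-language conditions, is typically tested with a one-way ANOVA. The sketch below illustrates the mechanics of that test with invented placeholder scores; the group labels mirror the study's conditions, but the numbers are not the study's data.

```python
# Hypothetical one-way ANOVA across three groups, illustrating how mean
# differences between assessment-language conditions would be tested.
# All scores below are invented placeholders, not the study's data.
from statistics import mean

groups = {
    "strength_based": [4.2, 4.5, 4.1, 4.4, 4.3],
    "neutral":        [3.1, 3.0, 3.3, 2.9, 3.2],
    "deficit_based":  [1.9, 2.1, 2.0, 1.8, 2.2],
}

all_scores = [x for g in groups.values() for x in g]
grand_mean = mean(all_scores)

# Between-group sum of squares: how far each group mean sits from the grand mean
ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups.values())
# Within-group sum of squares: spread of scores around their own group mean
ss_within = sum((x - mean(g)) ** 2 for g in groups.values() for x in g)

df_between = len(groups) - 1               # k - 1
df_within = len(all_scores) - len(groups)  # N - k

f_stat = (ss_between / df_between) / (ss_within / df_within)
print(f"F({df_between}, {df_within}) = {f_stat:.1f}")
```

With well-separated group means like these, the F statistic is large and the corresponding p value falls well below .001 (a conventional reporting floor; p is never exactly zero).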
Researchers and practitioners use social media to extract actionable patterns such as where aid should be distributed in a crisis. However, the validity of these patterns relies on having a representative dataset. As this dissertation shows, the data collected from social media is seldom representative of the activity of the site itself, and less so of human activity. This means that the results of many studies are limited by the quality of data they collect.
The finding that social media data is biased motivates the main challenge addressed by this thesis. I introduce three sets of methodologies to correct for bias. First, I design methods to deal with data collection bias. I offer a methodology that can find bias within a social media dataset by comparing the collected data with other sources to detect bias in a stream. The dissertation also introduces a crawling strategy that minimizes the amount of bias appearing in the resulting dataset. Second, I introduce a methodology to identify bots and shills within a social media dataset, directly addressing the concern that the users of a social media site are not representative. Applying these methodologies allows the population under study on a social media site to better match that of the real world. Finally, the dissertation discusses perceptual biases, explains how they affect analysis, and introduces computational approaches to mitigate them.
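The "compare the collected data with other sources" idea can be made concrete as a distribution comparison: estimate a category distribution (e.g., topics) from the collected sample and from a fuller reference feed, then measure how far they diverge. The sketch below uses total variation distance for this; the function names and data are illustrative assumptions, not the dissertation's actual method or datasets.

```python
# Hypothetical sketch of detecting collection bias by comparing a sampled
# stream's topic distribution against a reference source. All names and
# data are illustrative, not the dissertation's actual method.
from collections import Counter

def topic_distribution(posts):
    """Normalize topic counts into a probability distribution."""
    counts = Counter(post["topic"] for post in posts)
    total = sum(counts.values())
    return {topic: n / total for topic, n in counts.items()}

def total_variation(p, q):
    """Total variation distance between two distributions (0 = identical, 1 = disjoint)."""
    topics = set(p) | set(q)
    return 0.5 * sum(abs(p.get(t, 0.0) - q.get(t, 0.0)) for t in topics)

# A filtered crawl that over-represents one topic, vs. a broader reference feed
sample    = [{"topic": "sports"}] * 70 + [{"topic": "news"}] * 20 + [{"topic": "music"}] * 10
reference = [{"topic": "sports"}] * 40 + [{"topic": "news"}] * 40 + [{"topic": "music"}] * 20

drift = total_variation(topic_distribution(sample), topic_distribution(reference))
print(f"distribution drift: {drift:.2f}")  # large values flag a biased collection
```

A drift near zero suggests the sample mirrors the reference; larger values flag the kind of collection bias the dissertation's methods are designed to surface and correct.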
The results of the dissertation allow for the discovery and removal of different levels of bias within a social media dataset. This has important implications for social media mining, namely that the behavioral patterns and insights extracted from social media will be more representative of the populations under study.
This chapter integrates the basic science of bias in human judgment from cognitive neuroscience, cognitive psychology, and social psychology, as it relates to judgments and decisions by forensic mental health professionals. Forensic mental health professionals help courts make decisions in cases where some question of psychology pertains to the legal issue, such as insanity cases, child custody hearings, and psychological injuries in civil suits. The legal system itself, and many people involved in it, such as jurors, assume mental health experts are “objective” and untainted by bias. However, basic psychological science from several branches of the discipline suggests the law’s assumption about experts’ protection from bias is wrong. Indeed, several empirical studies now show clear evidence of (unintentional) bias in forensic mental health experts’ judgments and decisions. In this chapter, we explain the science of how and why human judgments are susceptible to various kinds of bias. We describe dual-process theories from cognitive neuroscience, cognitive psychology, and social psychology that can help explain these biases. We review the empirical evidence to date specifically about cognitive and social psychological biases in forensic mental health judgments, weaving in related literature about biases in other types of expert judgment, with hypotheses about how forensic experts are likely affected by these biases. We close with a discussion of directions for future research and practice.