Filtering by:
- Subjects: psychology, law, ethics
- Creator: Neal, Tess M.S.
The current study investigated the role of implicit and explicit social-cognitive biases in jurors’ conceptualizations of insanity, and the influence of those biases on juror verdict decisions. It was hypothesized that jurors’ attitudes towards people with mental illnesses and attitudes towards the insanity defense would influence jurors’ final verdict decisions. Two hundred and two participants completed an online survey that included a trial vignette incorporating an insanity defense (adapted from Maeder et al., 2016), the Insanity Defense Attitude Scale (Skeem, Louden, & Evans, 2004), the Community Attitudes Towards the Mentally Ill Scale (Taylor & Dear, 1981), and an Implicit Association Test (Greenwald et al., 1998). While implicit associations between mental illness and dangerousness were significantly related to mock jurors’ verdicts, this relation disappeared when explicit insanity defense attitudes were added to a more complex model including all measured attitudes and biases. Insanity defense attitudes were significantly related to jurors’ verdicts over and above attitudes about the mentally ill and implicit biases concerning the mentally ill. The potentially biasing impact of jurors’ insanity defense attitudes, and the impact of implicit associations about the mentally ill on legal judgments, are discussed.
The Sixth Amendment guarantees defendants the right to trial by an impartial jury. Attorneys are expected to obtain information about potential juror biases and then deselect biased jurors. Social networking sites may offer useful information about potential jurors. Although some attorneys and trial consultants have begun searching online sources for information about jurors, potential jurors’ privacy rights in their online content have yet to be defined by case law. Two studies explored the issue of possible intrusion into juror privacy. First, an active jury venire was searched for online content. Information was found for 36% of the jurors; however, 94% of that information was found through simple Google searches, and only 6% was unique to sites other than Google. We concluded that searching for potential jurors online is feasible, but that systematically searching sites other than Google is generally not an effective search strategy. In our second study, we surveyed attorneys, trial consultants, law students, and undergraduate students about ethical and privacy issues in the use of public-domain information for jury selection. Participants evidenced concern about the rights of jurors, the rights of the defendant and accuser, and the role of tradition in court processes.
There is substantial controversy over the extent to which social science should be used in jury selection. Underlying the debate are two competing interests in the make-up of a jury: a privilege to strike prospective jurors on subjective grounds, which supports scientific jury selection, and a collective interest of citizens to be free from exclusion from jury service, which does not. While the incommensurability of the interests precludes resolution of the controversy in the abstract, specific solutions are possible. Using the example of selection of jurors based upon their respective levels of extraversion, we describe how the competing interests frequently do not apply to concrete cases. In the subsequent analysis, we show that, rhetoric notwithstanding, a normative preference for adhering to tradition and institutional inertia are the primary instrumental considerations for determining whether peremptory challenges based upon personality traits like extraversion ought to be allowed. Consistent with this analysis, we conclude that the practice of striking jurors based upon estimates of such personality traits is appropriate.
Prompted by the involvement of psychologists in torturous interrogations at Guantanamo and Abu Ghraib, the American Psychological Association (APA) revised its Ethics Code Standard 1.02 to prohibit psychologists from engaging in activities that would “justify or defend violating human rights.” The revision to Standard 1.02 followed APA policy statements condemning torture and prohibiting psychologists’ involvement in such activities that constitute a violation of human rights (APA, 2010). Cogent questions have subsequently been raised about the involvement of psychologists in other activities that could arguably lead to human rights violations, even if the activity in question is legal. While this language was designed to be expansive in defining psychologists’ ethical responsibilities, it remains difficult to determine whether and how Standard 1.02 might apply to a particular situation.
In the present analysis, we focus on the question of whether psychologists should be involved in death penalty cases. We assert that the APA should not take an ethical stand against psychologists’ participation in death penalty cases. Our position is not necessarily intended to reflect approval or disapproval of the death penalty, although we recognize that there are serious flaws in the American legal system with regard to capital punishment. Our perspective is that psychologists have an important role in the administration of due process in capital cases. We oppose a bright-line rule prohibiting psychologists’ involvement in death penalty cases for several reasons. We begin by considering whether the death penalty per se constitutes a human rights violation, move on to describe the basic functioning of the legal system, analyze how the involvement of psychologists actually affects the capital trial process, and end by providing practical advice for psychologists’ provision of ethical services in capital trials.
The ethics of forensic professionalism is often couched in terms of competing individual and societal values. Indeed, the welfare of individuals is often secondary to the requirements of society, especially given the public nature of courts of law, forensic hospitals, jails, and prisons. We explore the weaknesses of this dichotomous approach to forensic ethics, offering an analysis of Psychology's historical narrative especially relevant to the national security and correctional settings. We contend that a richer, more robust ethical analysis is available if practitioners consider the multiple perspectives in the forensic encounter, and acknowledge the multiple influences of personal, professional, and social values. The setting, context, or role is not sufficient to determine the ethics of forensic practice.
This survey of 206 forensic psychologists tested the “filtering” effects of preexisting expert attitudes in adversarial proceedings. Results confirmed the hypothesis that evaluator attitudes toward capital punishment influence willingness to accept capital case referrals from particular adversarial parties. Stronger death penalty opposition was associated with higher willingness to conduct evaluations for the defense and higher likelihood of rejecting referrals from all sources. Conversely, stronger support was associated with higher willingness to be involved in capital cases generally, regardless of referral source. The findings raise the specter of skewed evaluator involvement in capital evaluations, where evaluators willing to do capital casework may have stronger capital punishment support than evaluators who opt out, and evaluators with strong opposition may work selectively for the defense. The results may provide a partial explanation for the “allegiance effect” in adversarial legal settings, such that preexisting attitudes may contribute to partisan participation through a self-selection process.
Bias, or systematic influences that create errors in judgment, can affect psychological evaluations in ways that lead to erroneous diagnoses and opinions. Although these errors can have especially serious consequences in the criminal justice system, little research has addressed forensic psychologists’ awareness of well-known cognitive biases and debiasing strategies. We conducted a national survey with a sample of 120 randomly selected licensed psychologists with forensic interests to examine (a) their familiarity with and understanding of cognitive biases, (b) their self-reported strategies to mitigate bias, and (c) the relation of (a) and (b) to psychologists’ cognitive reflection abilities. Most psychologists reported familiarity with well-known biases and distinguished these from sham biases, and reported using research-identified strategies but not fictional/sham strategies. However, some psychologists reported little familiarity with actual biases, endorsed sham biases as real, failed to recognize effective bias mitigation strategies, and endorsed ineffective bias mitigation strategies. Furthermore, nearly everyone endorsed introspection (a strategy known to be ineffective) as an effective bias mitigation strategy. Cognitive reflection abilities were systematically related to error, such that stronger cognitive reflection was associated with less endorsement of sham biases.
This project began as an attempt to develop systematic, measurable indicators of bias in written forensic mental health evaluations focused on the issue of insanity. Although the forensic clinicians observed in this study did vary systematically in their report-writing behaviors on several of the indicators of interest, the data are most useful in demonstrating how and why bias is hard to ferret out. Naturalistic data (i.e., 122 real forensic insanity reports) were used in this project, which in some ways is a strength. However, given the nature of bias and the problem of inferring whether a particular judgment is biased, naturalistic data also made it difficult to arrive at conclusions about bias. This paper describes the nature of bias, including why it is a special problem in insanity evaluations and why it is hard to study and document. It details our efforts to find systematic indicators of potential bias, and how this effort was successful in part but also how and why it failed. The lessons these efforts yield for future research are described. We close with a discussion of the limitations of this study and future directions for work in this area.