Tess Neal is an Assistant Professor of Psychology in the ASU New College of Interdisciplinary Arts and Sciences and is a founding faculty member of the [Program on Law and Behavioral Science](http://lawpsych.asu.edu/). Dr. Neal has published one edited book and more than three dozen peer-reviewed publications in such journals as PLOS ONE; American Psychologist; Psychology, Public Policy, and Law; and Criminal Justice and Behavior. Neal is the recipient of the 2016 Saleem Shah Award for Early Career Excellence in Psychology and Law, co-awarded by the American Psychology-Law Society and the American Academy of Forensic Psychology. She was named a 2016 "Rising Star" by the Association for Psychological Science, a designation that recognizes outstanding psychological scientists in the earliest stages of their research career post-PhD "whose innovative work has already advanced the field and signals great potential for their continued contributions." She directs the ASU [Clinical and Legal Judgment Lab](http://psych-law.lab.asu.edu).
Bias, or systematic influences that create errors in judgment, can affect psychological evaluations in ways that lead to erroneous diagnoses and opinions. Although these errors can have especially serious consequences in the criminal justice system, little research has addressed forensic psychologists’ awareness of well-known cognitive biases and debiasing strategies. We conducted a national survey with a sample of 120 randomly selected licensed psychologists with forensic interests to examine (a) their familiarity with and understanding of cognitive biases, (b) their self-reported strategies to mitigate bias, and (c) the relation of (a) and (b) to psychologists’ cognitive reflection abilities. Most psychologists reported familiarity with well-known biases and distinguished these from sham biases, and reported using research-identified strategies but not fictional/sham strategies. However, some psychologists reported little familiarity with actual biases, endorsed sham biases as real, failed to recognize effective bias mitigation strategies, and endorsed ineffective bias mitigation strategies. Furthermore, nearly everyone endorsed introspection (a strategy known to be ineffective) as an effective bias mitigation strategy. Cognitive reflection abilities were systematically related to error, such that stronger cognitive reflection was associated with less endorsement of sham biases.
Forensic Psychology and Correctional Psychology: Distinct but Related Subfields of Psychological Science and Practice
This paper delineates two separate but related subfields of psychological science and practice applicable across all major areas of the field (e.g., clinical, counseling, developmental, social, cognitive, community). Forensic and correctional psychology are related by their historical roots, involvement in the justice system, and the shared population of people they study and serve. The practical and ethical contexts of these subfields are distinct from other areas of psychology – and from one another – with important implications for ecologically valid research and ethically sound practice. Forensic psychology is a subfield of psychology in which basic and applied psychological science or scientifically-oriented professional practice is applied to the law to help resolve legal, contractual, or administrative matters. Correctional psychology is a subfield of psychology in which basic and applied psychological science or scientifically-oriented professional practice is applied to the justice system to inform the classification, treatment, and management of offenders to reduce risk and improve public safety. There has been and continues to be great interest in both subfields – especially the potential for forensic and correctional psychological science to help resolve practical issues and questions in legal and justice settings. This paper traces the shared and separate developmental histories of these subfields, outlines their important distinctions and implications, and provides a common understanding and shared language for psychologists interested in applying their knowledge in forensic or correctional contexts.
This project began as an attempt to develop systematic, measurable indicators of bias in written forensic mental health evaluations focused on the issue of insanity. Although forensic clinicians observed in this study did vary systematically in their report-writing behaviors on several of the indicators of interest, the data are most useful in demonstrating how and why bias is hard to ferret out. Naturalistic data were used in this project (i.e., 122 real forensic insanity reports), which in some ways is a strength. However, given the nature of bias and the problem of inferring whether a particular judgment is biased, naturalistic data also made arriving at conclusions about bias difficult. This paper describes the nature of bias – including why it is a special problem in insanity evaluations – and why it is hard to study and document. It details the efforts made to find systematic indicators of potential bias, explaining how those efforts succeeded in part as well as how and why they failed. The lessons these efforts yield for future research are described. We close with a discussion of the limitations of this study and future directions for work in this area.
Validity, Inter-Rater Reliability, and Measures of Adaptive Behavior: Concerns Regarding the Probative Versus Prejudicial Value
This paper addresses whether assessments of adaptive behavior (AB) for evaluations of intellectual disability (ID) in the community meet the level of rigor necessary for admissibility in legal cases. Adaptive behavior measures have made their way into the forensic domain where scientific evidence is put under great scrutiny. Assessment of ID in capital murder proceedings has garnered considerable attention, but assessments of ID in adult populations also occur with some frequency in the context of other criminal proceedings (e.g., competence to stand trial; competence to waive Miranda rights), as well as eligibility for social security disability, social security insurance, Medicaid/Medicare, government housing, and post-secondary transition services. As will be demonstrated, markedly disparate findings between raters can occur on measures of AB even when the assessment is conducted in accordance with standard procedures (i.e., the person was assessed in a community setting, in real time, with multiple appropriate raters, when the person was younger than 18 years of age), and similar disparities can be found in the context of the unorthodox and untested retrospective assessment used in capital proceedings. With full recognition that some level of disparity is to be expected, the level of disparity that can arise when these measures are administered retrospectively calls into question the validity of the results and, consequently, their probative value.
People who testify as expert witnesses in court are often fearful of blundering, feeling inept, and being “caught out” during cross-examinations. There are several reasons for lapses in professional demeanor and responses while testifying. We offer seven baits or temptations that can draw an expert into behaviors that are unbecoming, with examples of responses that are inappropriate and harmful. These seven baits and lures are accompanied by descriptions of how to handle them.
We investigated the role of moral disengagement in a legally-relevant judgment in this theoretically-driven empirical analysis. Moral disengagement is a social-cognitive phenomenon through which people reason their way toward harming others, presenting a useful framework for investigating legal judgments that often result in harming individuals for the good of society. We tested the role of moral disengagement in forensic psychologists’ willingness to conduct the most ethically questionable clinical task in the criminal justice system: competence for execution evaluations. Our hypothesis that moral disengagement would function as a mediator of participants’ existing attitudes and their judgments—a theoretical “bridge” between attitudes and judgments—was robustly supported. Moral disengagement was key to understanding how psychologists decide to engage in competence for execution evaluations. We describe in detail the moral disengagement measure we used, including exploratory and confirmatory factor analyses across two separate samples. The four-factor measure accounted for a total of 52.18 percent of the variance in the sample of forensic psychologists, and the model adequately fit the data in the entirely different sample of jurors in a confirmatory factor analysis. Despite the psychometric strengths of this moral disengagement measure, we describe the pros and cons of existing measures of moral disengagement. We outline future directions for moral disengagement research, especially in legal contexts.
The essential tasks for an expert witness are to be prepared, to be effective and credible on the stand, and to manage well the demands of cross-examinations. Most novice experts are excessively anxious about their testimony. Effective experts are well-oriented to the legal and scientific context of court testimony. This chapter reviews research-backed tips for preparing for expert testimony.
The majority of trust research has focused on the benefits trust can have for individual actors, institutions, and organizations. This “optimistic bias” is particularly evident in work focused on institutional trust, where concepts such as procedural justice, shared values, and moral responsibility have gained prominence. But trust in institutions may not be exclusively good. We reveal implications for the “dark side” of institutional trust by reviewing relevant theories and empirical research that can contribute to a more holistic understanding. We frame our discussion by suggesting there may be a “Goldilocks principle” of institutional trust, where trust that is too low (typically the focus) or too high (not usually considered by trust researchers) may be problematic. The chapter focuses on the issue of too-high trust and processes through which such too-high trust might emerge. Specifically, excessive trust might result from external, internal, and intersecting external-internal processes. External processes refer to the actions institutions take that affect public trust, while internal processes refer to intrapersonal factors affecting a trustor’s level of trust. We describe how the beneficial psychological and behavioral outcomes of trust can be mitigated or circumvented through these processes and highlight the implications of a “darkest” side of trust when they intersect. We draw upon research on organizations and legal, governmental, and political systems to demonstrate the dark side of trust in different contexts. The conclusion outlines directions for future research and encourages researchers to consider the ethical nuances of studying how to increase institutional trust.
The purpose of this volume is to consider how trust research, particularly trust in institutions, might benefit from increased inter- or transdisciplinarity. In this introductory chapter, we first give some background on prior disciplinary, multidisciplinary, and interdisciplinary work relating to trust. Next, we describe how this many-disciplined volume on institutional trust emerged from the joint activities of the Nebraska Symposium on Motivation and a National Science Foundation-funded Workshop on institutional trust. This chapter describes some of the themes that emerged, while also providing an overview of the rest of the volume, which includes chapters that discuss conceptualizations, definitions, and measurement of trust; institutional trust across domains and contexts; and theoretical advances regarding the “dark” and “light” sides of institutional trust. Finally, we conclude with some thoughts about the future of and potential promises and pitfalls of trust as a focus of interdisciplinary study.
Examinations of trust have advanced steadily over the past several decades, yielding important insights within criminal justice, economics, environmental studies, management and industrial organization, psychology, political science, and sociology. Cross-disciplinary approaches to the study of trust, however, have been limited by differences in defining and measuring trust and in methodological approaches. In this chapter, we take the position that: 1) cross-disciplinary studies can be improved by recognizing trust as a multilevel phenomenon, and 2) context impacts the nature of trusting relations. We present an organizing framework for conceptualizing trust between trustees and trustors at person, group, and institution levels. The differences between these levels have theoretical implications for the study of trust and might be used to justify distinctions in definitions and methodological approaches across settings. We highlight where the levels overlap and describe how this overlap has created confusion in the trust literature to date. Part of the overlap – and confusion – is the role of interpersonal trust at each level. We delineate when and how interpersonal trust is theoretically relevant to conceptualizing and measuring trust at each level and suggest that other trust-related constructs, such as perceived legitimacy, competence, and integrity, may be more important than interpersonal trust at some levels and in some contexts. Translating findings from trust research in one discipline to another and collaborating across disciplines may be facilitated if researchers ensure that their levels of conceptualization and measurement are aligned, and that models developed for a particular context are relevant in other, distinct contexts.