Tess Neal is an Assistant Professor of Psychology in the ASU New College of Interdisciplinary Arts and Sciences and is a founding faculty member of the [Program on Law and Behavioral Science](http://lawpsych.asu.edu/). Dr. Neal has published one edited book and more than three dozen peer-reviewed publications in such journals as PLOS ONE; American Psychologist; Psychology, Public Policy, and Law; and Criminal Justice and Behavior. Neal is the recipient of the 2016 Saleem Shah Award for Early Career Excellence in Psychology and Law, co-awarded by the American Psychology-Law Society and the American Academy of Forensic Psychology. She was named a 2016 "Rising Star" by the Association for Psychological Science, a designation that recognizes outstanding psychological scientists in the earliest stages of their research career post-PhD "whose innovative work has already advanced the field and signals great potential for their continued contributions." She directs the ASU [Clinical and Legal Judgment Lab](http://psych-law.lab.asu.edu).
The 64-item Hare Self-Report Psychopathy Scale (Hare SRP; Paulhus, Neumann, & Hare, in press) is the most recent revision of the SRP, which has undergone numerous iterations. Little research has been conducted with this new edition; therefore, the goal of the current study was to elucidate the factor structure as well as the criterion-related, convergent, and discriminant validity of the measure in a large sample of college students (N = 602). Confirmatory factor analyses revealed that the original four-factor model proposed by the authors of the Hare SRP fit the data better than the alternative models tested (a one-factor, a two-factor, and a four-factor random model). In addition, we elaborated on the psychometric properties of this four-factor model in this sample. The Hare SRP total and factor scores evidenced good internal reliability as well as promising criterion-related, convergent, and discriminant validity in predicting scores on conceptually relevant external criteria. Implications for theory and future research are discussed.
Two experiments examined how mock jurors’ beliefs about three factors known to influence eyewitness memory accuracy relate to decision-making (age of eyewitness and presence of a weapon in Study 1; length of eyewitness identification decision time in Study 2). Psychology undergraduates rendered verdicts and evaluated trial participants after reading a robbery-murder trial summary that varied eyewitness age (6, 11, 42, or 74 years) and weapon presence (visible or not) in Study 1 and eyewitness decision time (2-3 or 30 seconds) in Study 2 (n = 200 each). The interactions between participants’ beliefs about these variables and the manipulated variables themselves were the central focus of this study. Participants’ beliefs about eyewitness age and weapon presence interacted with these manipulations, but only for some judgments: verdict for eyewitness age and eyewitness credibility for weapon focus. Exploratory mediational analyses found only one relation: juror belief about eyewitness age mediated the relation between eyewitness age and credibility ratings. These results highlight a need for juror education and specialized voir dire in cases where legitimate concerns exist regarding the reliability of eyewitness memory (e.g., a child eyewitness, weapon presence during the event, a long eyewitness identification time). If erroneous juror beliefs can be corrected, their impact may be reduced.
This study examined how manipulations of likeability and knowledge affected mock jurors’ perceptions of female and male expert witness credibility (N=290). Our findings extend the person perception literature by demonstrating how warmth and competence overlap with existing conceptions of likeability and credibility in the psycholegal domain. We found experts high in likeability and/or knowledge were perceived equally positively regardless of gender in a death penalty sentencing context. Gender differences emerged when the expert was low in likeability and/or knowledge; in these conditions the male expert was perceived more positively than the comparable female expert. Although intermediate judgments (e.g., perceptions of credibility) were affected by our manipulations, ultimate decisions (e.g., sentencing) were not. Implications for theory and practice are discussed.
Aside from an article by Gutheil, Bursztajn, Hilliard, and Brodsky (2004), scant literature exists regarding why forensic mental health professionals refuse or withdraw from cases. The current study collected descriptive information about the reasons mental health experts decline or withdraw from forensic assessments, both early and late in the legal process. In response to an online survey, 29 practicing forensic psychologists and psychiatrists presented examples of case withdrawal from their professional experiences. Their major reasons included ethical issues or conflicts, payment difficulties, and interpersonal or procedural problems with retaining counsel or evaluees. The results indicate that there are compelling personal and professional reasons that prompt forensic mental health experts to withdraw from or turn down cases.
The current study used the Trauma Symptom Checklist-40 (TSC-40) to index both childhood sexual abuse (CSA) and childhood physical abuse (CPA) in a college student sample of both men and women (N = 441). Although the TSC-40 was designed as a measure of CSA trauma, this study concludes that the measure is appropriately reliable for indexing the traumatic sequelae of CPA as well as CSA in nonclinical samples. The current study also explored the effects of gender and abuse severity on resulting symptomatology, finding that women and severely abused individuals report the most negative sequelae. Both CSA and CPA emerged as significant explanatory variables in TSC-40 scale scores beyond gender, supporting the measure's validity for indexing traumatic sequelae in nonclinical samples.
The Sixth Amendment guarantees defendants the right to trial by an impartial jury. Attorneys are expected to obtain information about potential juror biases and then deselect biased jurors. Social networking sites may offer useful information about potential jurors. Although some attorneys and trial consultants have begun searching online sources for information about jurors, the privacy status of potential jurors’ online content has yet to be defined by case law. Two studies explored the issue of possible intrusion into juror privacy. First, an active jury venire was searched for online content. Information was found for 36% of the jurors; however, 94% of that information was found through simple Google searches, and only 6% was found exclusively on other sites. We concluded that searching for potential jurors online is feasible, but that systematically searching sites other than Google is generally not an effective search strategy. In our second study we surveyed attorneys, trial consultants, law students, and undergraduate students about ethical and privacy issues in the use of public domain information for jury selection. Participants evidenced concern about the rights of jurors, the rights of the defendant and accuser, and the role of tradition in court processes.
There is substantial controversy over the extent to which social science should be used in jury selection. Underlying the debate are two competing interests in the make-up of a jury: a privilege to strike prospective jurors on subjective grounds, which supports scientific jury selection, and a collective interest of citizens to be free from exclusion from jury service, which does not. While the incommensurability of the interests precludes resolution of the controversy in the abstract, specific solutions are possible. Using the example of selection of jurors based upon their respective levels of extraversion, we describe how the competing interests frequently do not apply to concrete cases. In the subsequent analysis, we show that, rhetoric notwithstanding, a normative preference for adhering to tradition and institutional inertia are the primary instrumental considerations for determining whether peremptory challenges based upon personality traits like extraversion ought to be allowed. Consistent with this analysis, we conclude that the practice of striking jurors based upon estimates of such personality traits is appropriate.
Despite advances in the scientific methodology of witness testimony research, no sound measure currently exists to evaluate perceptions of testimony skills. Drawing on self-efficacy and witness preparation research, the present study describes development of the Observed Witness Efficacy Scale (OWES). Factor analyses of a mock jury sample yielded a two-factor structure (Poise and Communication Style) consistent with previous research on witness self-ratings of testimony delivery skills. OWES subscales showed differential patterns of association with witness credibility, witness believability, agreement with the witness, and verdict decision. Juror gender moderated the impact of Communication Style, but not Poise, on belief of and agreement with the witness. Results are discussed with attention to application of the OWES to witness research and preparation training.
Prompted by the involvement of psychologists in torturous interrogations at Guantanamo and Abu Ghraib, the American Psychological Association (APA) revised its Ethics Code Standard 1.02 to prohibit psychologists from engaging in activities that would “justify or defend violating human rights.” The revision to Standard 1.02 followed APA policy statements condemning torture and prohibiting psychologists’ involvement in such activities that constitute a violation of human rights (APA, 2010). Cogent questions have subsequently been raised about the involvement of psychologists in other activities that could arguably lead to human rights violations, even if the activity in question is legal. While this language was designed to be expansive in defining psychologists’ ethical responsibilities, it remains difficult to determine whether and how Standard 1.02 might apply to a particular situation.
In the present analysis, we focus on the question of whether psychologists should be involved in death penalty cases. We assert that the APA should not take an ethical stand against psychologists’ participation in death penalty cases. Our position is not necessarily intended to reflect approval or disapproval of the death penalty, although we recognize that there are serious flaws in the American legal system with regard to capital punishment. Our perspective is that psychologists have an important role in the administration of due process in capital cases. We oppose a bright-line rule prohibiting psychologists’ involvement in death penalty cases for several reasons. We begin by considering whether the death penalty per se constitutes a human rights violation, move on to describe the basic functioning of the legal system, analyze how the involvement of psychologists actually affects the capital trial process, and end by providing practical advice for psychologists’ provision of ethical services in capital trials.
This report integrated quantitative and qualitative methods across two studies to compile descriptive information about forensic psychologists’ occupational socialization processes. We also explored the relation between occupational socialization and forensic psychologists’ objectivity. After interviewing 20 board-certified forensic psychologists, we surveyed 334 forensic psychologists about their socialization into the field. Results indicated that the occupational socialization processes of forensic psychologists, including socialization about objectivity, varied widely across time and situation as the field has developed. Moreover, three hypotheses regarding occupational socialization were supported. Occupational socialization was positively and significantly associated with years of experience, t(284) = 3.63, p < .001, 95% CI = 0.05 – 0.16; belief in one’s ability to be objective, t(296) = 9.90, p < .001, 95% CI = 0.69 – 1.03; and endorsement of the usefulness of various bias correction strategies, r = .38, p < .001, one-tailed. The implications of these results and directions for future research are discussed.