Tess Neal is an Assistant Professor of Psychology in the ASU New College of Interdisciplinary Arts and Sciences and is a founding faculty member of the [Program on Law and Behavioral Science](http://lawpsych.asu.edu/). Dr. Neal has published one edited book and more than three dozen peer-reviewed publications in such journals as PLOS ONE; American Psychologist; Psychology, Public Policy, and Law; and Criminal Justice and Behavior. Neal is the recipient of the 2016 Saleem Shah Award for Early Career Excellence in Psychology and Law, co-awarded by the American Psychology-Law Society and the American Academy of Forensic Psychology. She was named a 2016 "Rising Star" by the Association for Psychological Science, a designation that recognizes outstanding psychological scientists in the earliest stages of their research career post-PhD "whose innovative work has already advanced the field and signals great potential for their continued contributions." She directs the ASU [Clinical and Legal Judgment Lab](http://psych-law.lab.asu.edu).

Description

The knowledge of experts presumably affects their credibility and the degree to which the trier of fact will agree with them. However, specific effects of demonstrated knowledge are largely unknown. This experiment manipulated a forensic expert’s level of knowledge in a mock trial paradigm. We tested the effects of low versus high expert knowledge on mock jurors’ perceptions of expert credibility, on agreement with the expert, and on sentencing. We also tested expert gender as a potential moderator. Knowledge effects were statistically significant; however, these differences carried little practical utility in predicting mock jurors’ ultimate decisions. Contrary to hypotheses that high knowledge would yield increased credibility and agreement, the knowledge manipulation influenced only perceived expert likeability. The low-knowledge expert was perceived as more likeable than his or her high-knowledge counterpart, a paradoxical finding. No significant differences across expert gender were found. Implications for conceptualizing expert witness knowledge, credibility, and their potential effects on juror decision-making are discussed.

Contributors: Parrott, Caroline Titcomb (Author) / Neal, Tess M.S. (Author) / Wilson, Jennifer K. (Author) / Brodsky, Stanley L. (Author)
Created: 2015-03
Description

We used archival data to examine the predictive validity of a pre-release violence risk assessment battery over six years at a forensic hospital (N = 230, 100% male, 63.0% African-American, 34.3% Caucasian). Examining “real world” forensic decision-making is important for illuminating potential areas for improvement. The battery included the Historical-Clinical-Risk Management-20, Psychopathy Checklist-Revised, Schedule of Imagined Violence, and Novaco Anger Scale and Provocation Inventory. Three outcome “recidivism” variables were examined: contact violence, contact and threatened violence, and any reason for hospital return. Results indicated that measures of general violence risk and psychopathy were highly correlated with one another but only weakly associated with reports of imagined violence and a measure of anger. Measures of imagined violence and anger were correlated with one another. Receiver Operating Characteristic (ROC) curve analyses revealed, unexpectedly, that none of the scales or subscales predicted recidivism better than chance. Multiple regression indicated the battery failed to account for recidivism outcomes. We conclude by discussing three possible explanations, including the timing of assessments, controlled versus field studies, and recidivism base rates.
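The abstract's central analytic claim is that the scales' ROC areas under the curve (AUCs) did not exceed chance (AUC ≈ .50). The sketch below is a hypothetical illustration, not the authors' analysis code or data: it uses simulated scores and outcomes, and scikit-learn's `roc_auc_score`, to show how a single risk-scale score could be compared against chance-level prediction with a permutation test.

```python
# Minimal sketch (simulated data; not from the study) of testing whether a
# risk-scale score predicts a binary "recidivism" outcome better than chance.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

n = 230                                    # sample size mirroring the abstract's N
outcome = rng.binomial(1, 0.3, size=n)     # hypothetical binary recidivism outcome
scale_score = rng.normal(20, 5, size=n)    # hypothetical risk-scale total score

# Observed AUC for this scale against the outcome (0.5 = chance discrimination).
auc = roc_auc_score(outcome, scale_score)

# Permutation test of the null "AUC = 0.5": shuffle the outcome labels and count
# how often a chance AUC deviates from 0.5 at least as much as the observed one.
perm_aucs = np.array([
    roc_auc_score(rng.permutation(outcome), scale_score) for _ in range(2000)
])
p_value = np.mean(np.abs(perm_aucs - 0.5) >= abs(auc - 0.5))

print(f"AUC = {auc:.3f}, permutation p = {p_value:.3f}")
```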

Contributors: Neal, Tess M.S. (Author) / Miller, Sarah L. (Author) / Shealy, R. Clayton (Author)
Created: 2015-03-13