Tess Neal is an Assistant Professor of Psychology in the ASU New College of Interdisciplinary Arts and Sciences and is a founding faculty member of the [Program on Law and Behavioral Science](http://lawpsych.asu.edu/). Dr. Neal has published one edited book and more than three dozen peer-reviewed publications in such journals as PLOS ONE; American Psychologist; Psychology, Public Policy, and Law; and Criminal Justice and Behavior. Neal is the recipient of the 2016 Saleem Shah Award for Early Career Excellence in Psychology and Law, co-awarded by the American Psychology-Law Society and the American Academy of Forensic Psychology. She was named a 2016 "Rising Star" by the Association for Psychological Science, a designation that recognizes outstanding psychological scientists in the earliest stages of their research career post-PhD "whose innovative work has already advanced the field and signals great potential for their continued contributions." She directs the ASU [Clinical and Legal Judgment Lab](http://psych-law.lab.asu.edu).

Description

This project began as an attempt to develop systematic, measurable indicators of bias in written forensic mental health evaluations focused on the issue of insanity. Although the forensic clinicians observed in this study did vary systematically in their report-writing behaviors on several of the indicators of interest, the data are most useful in demonstrating how and why bias is hard to ferret out. This project used naturalistic data (i.e., 122 real forensic insanity reports), which is in some ways a strength; however, given the nature of bias and the problem of inferring whether a particular judgment is biased, naturalistic data also made arriving at conclusions about bias difficult. This paper describes the nature of bias – including why it is a special problem in insanity evaluations – and why it is hard to study and document. It details the efforts made to find systematic indicators of potential bias, how this effort succeeded in part, and how and why it failed. The lessons these efforts yield for future research are described. We close with a discussion of the limitations of this study and future directions for work in this area.

Contributors: Neal, Tess M.S. (Author)
Created: 2018-04-19
Description

This paper addresses the question of whether the assessment of adaptive behavior (AB) for evaluations of intellectual disability (ID) in the community meets the level of rigor necessary for admissibility in legal cases. Adaptive behavior measures have made their way into the forensic domain, where scientific evidence is put under great scrutiny. Assessment of ID in capital murder proceedings has garnered considerable attention, but assessments of ID in adult populations also occur with some frequency in the context of other criminal proceedings (e.g., competence to stand trial; competence to waive Miranda rights), as well as eligibility for social security disability, social security insurance, Medicaid/Medicare, government housing, and post-secondary transition services. As will be demonstrated, markedly disparate findings between raters can occur on measures of AB even when the assessment is conducted in accordance with standard procedures (i.e., the person was assessed in a community setting, in real time, with multiple appropriate raters, when the person was younger than 18 years of age), and similar disparities can be found in the context of the unorthodox and untested retrospective assessment used in capital proceedings. With full recognition that some level of disparity is to be expected, the level of disparity that can arise when these measures are administered retrospectively calls into question the validity of the results and, consequently, their probative value.

Contributors: Salekin, Karen L. (Author) / Neal, Tess M.S. (Author) / Hedge, Krystal A. (Author)
Created: 2018-02-01