Filtering by:
- All Subjects: Bias
- Creators: Liu, Huan
- Creators: Grisso, Thomas
- Resource Type: Text
Researchers and practitioners use social media to extract actionable patterns, such as where aid should be distributed in a crisis. However, the validity of these patterns relies on having a representative dataset. As this dissertation shows, the data collected from social media is seldom representative of the activity of the site itself, and even less so of human activity. This means that the results of many studies are limited by the quality of the data they collect.
The finding that social media data is biased motivates the central challenge addressed by this thesis. I introduce three sets of methodologies to correct for bias. First, I address data collection bias: I present a methodology that detects bias within a social media dataset by comparing the collected data against other sources, and I introduce a crawling strategy that minimizes the bias in the resulting dataset. Second, I introduce a methodology to identify bots and shills within a social media dataset. This directly addresses the concern that the users of a social media site are not representative; applying these methodologies allows the population under study on a social media site to better match that of the real world. Finally, the dissertation discusses perceptual biases, explains how they affect analysis, and introduces computational approaches to mitigate them.
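The comparison-based idea of detecting collection bias can be sketched in code. This is an illustrative example, not the dissertation's actual method: it quantifies how far the topic distribution of a collected sample drifts from a reference distribution (e.g., a fuller or independently gathered feed) using KL divergence. All topic names and counts below are hypothetical.

```python
# Illustrative sketch (not the dissertation's method): measure collection
# bias as KL divergence between a collected sample's topic distribution
# and a reference distribution from another source. Higher = more biased.
import math

def kl_divergence(sample_counts, reference_counts):
    """KL(sample || reference) over the sample's topic keys."""
    s_total = sum(sample_counts.values())
    r_total = sum(reference_counts.values())
    kl = 0.0
    for topic, count in sample_counts.items():
        p = count / s_total                            # sample probability
        q = reference_counts.get(topic, 1) / r_total   # smoothed reference
        kl += p * math.log(p / q)
    return kl

# Hypothetical counts: a filtered stream vs. a fuller reference feed.
sample = {"crisis": 120, "sports": 10, "politics": 70}
reference = {"crisis": 300, "sports": 250, "politics": 450}
print(f"KL divergence: {kl_divergence(sample, reference):.3f}")
```

A divergence near zero suggests the sample tracks the reference; larger values flag streams whose composition has drifted and may warrant a different collection strategy.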
The results of the dissertation allow for the discovery and removal of different levels of bias within a social media dataset. This has important implications for social media mining, namely that the behavioral patterns and insights extracted from social media will be more representative of the populations under study.
We integrate multiple domains of psychological science to identify, better understand, and manage the effects of subtle but powerful biases in forensic mental health assessment. This topic is ripe for discussion, as research evidence that challenges our objectivity and credibility garners increased attention both within and outside of psychology. We begin by defining bias and providing rich examples from the judgment and decision-making literature as they might apply to forensic assessment tasks. The cognitive biases we review can help explain common problems of interpretation and judgment that confront forensic examiners. This leads us to ask (and attempt to answer) how we might use what we know about bias in forensic clinicians' judgment to reduce its negative effects.
We conducted an international survey in which forensic examiners who were members of professional associations described their two most recent forensic evaluations (N=434 experts, 868 cases), focusing on the use of structured assessment tools to aid expert judgment. This study describes:
1. The relative frequency of various forensic referrals.
2. What tools are used globally.
3. Frequency and type of structured tools used.
4. Practitioners’ rationales for using/not using tools.
We provide general descriptive information for various referrals. We found that most evaluations (74.2%) used tools, and typically used several (four on average). We also noted the wide variety of tools in use (286 different tools). We discuss the implications of these findings and offer suggestions for improving the reliability and validity of forensic expert judgment methods. We conclude with a call for an assessment approach built on structured decision methods, to advance greater efficiency in the use and integration of case-relevant information.