This project explores the limits and legitimacy of neuroimaging as a means of understanding behavior and culpability in determining appropriate criminal sentencing. It highlights key philosophical issues surrounding the use of neuroimaging to support this process and proposes a method of ensuring its proper use. By engaging case studies and a thought experiment, this project illustrates the circumstances in which neuroimaging may assist in identifying particular characteristics relevant for criminal sentencing.
I argue that the question is not whether neuroimaging itself is valid for determining a criminal's guilt or motives; rather, the crucial issue is how information about these images is communicated from the `expert' scientists to the `non-experts' making sentencing decisions. Those who weigh this information's relevance, a judge or jury, are typically not well versed in criminal neuroscience or in interpreting the significance of different images. I argue that the way this information is communicated from the scientist-informer to the decision-maker matters as much as its actual meaning.
As a solution, I engage Roger Pielke's model of honest brokering to ensure the appropriate use of neuroimaging in determining criminal responsibility and sentencing. A thought experiment follows to highlight the limits of science, engage its philosophical repercussions, and illustrate honest brokering as a means of resolution. To achieve this, a hypothetical dialogue reminiscent of Kenneth Schaffner's `tools for talking' between behavioral geneticists and courtroom professionals exemplifies these ideas.
This research analyzes and develops MMA software while considering its interactions with human physiology to assure trustworthiness. A novel app development methodology objectively evaluates the trustworthiness of an MMA by generating evidence using automatic techniques. It involves developing the Health-Dev β tool to generate a) evidence of the trustworthiness of MMAs and b) requirements-assured code for vulnerable components of the MMA, without hindering the app development process. In this method, all requests from MMAs pass through a trustworthy entity, the Trustworthy Data Manager, which checks whether an app request satisfies the MMA requirements. This method is intended to expedite the design-to-market process of MMAs. The objectives of this research are to develop models, tools, and theory for evidence generation, and can be divided into the following themes:
• Sustainable design configuration estimation of MMAs: Developing an optimization framework that can generate sustainable and safe sensor configurations while considering interactions of the MMA with the environment.
• Evidence generation using simulation and formal methods: Developing models and tools to verify safety properties of the MMA design to ensure no harm to human physiology.
• Automatic code generation for MMAs: Investigating methods for automatically generating trustworthy software and evidence for vulnerable components of an MMA.
• Performance analysis of the trustworthy data manager: Evaluating the response-time performance of the trustworthy data manager under interactions from non-MMA smartphone apps.
The majority of trust research has focused on the benefits trust can have for individual actors, institutions, and organizations. This “optimistic bias” is particularly evident in work focused on institutional trust, where concepts such as procedural justice, shared values, and moral responsibility have gained prominence. But trust in institutions may not be exclusively good. We reveal implications for the “dark side” of institutional trust by reviewing relevant theories and empirical research that can contribute to a more holistic understanding. We frame our discussion by suggesting there may be a “Goldilocks principle” of institutional trust, where trust that is too low (typically the focus) or too high (not usually considered by trust researchers) may be problematic. The chapter focuses on the issue of too-high trust and processes through which such too-high trust might emerge. Specifically, excessive trust might result from external, internal, and intersecting external-internal processes. External processes refer to the actions institutions take that affect public trust, while internal processes refer to intrapersonal factors affecting a trustor’s level of trust. We describe how the beneficial psychological and behavioral outcomes of trust can be mitigated or circumvented through these processes and highlight the implications of a “darkest” side of trust when they intersect. We draw upon research on organizations and legal, governmental, and political systems to demonstrate the dark side of trust in different contexts. The conclusion outlines directions for future research and encourages researchers to consider the ethical nuances of studying how to increase institutional trust.