Humans use emotions to communicate social cues to their peers on a daily basis. But can we identify the context behind a facial expression and match it to a specific scenario? This experiment found that people can effectively distinguish negative from positive emotions given a short description. However, further research is needed to determine whether humans can learn to perceive emotions from contextual explanations alone.
This study analyzed existing statutes at the state, federal, and international levels to build a set of recommendations for policymakers to consider when regulating the use of facial recognition technology by law enforcement agencies within the United States.
Artificial intelligence facial recognition programs are inherently racially biased. These programs are not necessarily created with the intent to disproportionately impact marginalized communities, but through the data-mining process by which they learn, they can become biased when their training data teaches them to think in a biased manner. Biased data is difficult to spot because the programming field is homogeneous, and the issue reflects underlying societal biases. Facial recognition programs do not identify minorities at the same rate as their Caucasian counterparts, leading to false positive identifications and more run-ins with the law. AI lacks the ability to judge through role reversal as a human can, and its use should therefore be limited until a more equitable program is developed and thoroughly tested.
Affective video games are still a relatively new field of research and entertainment. Even
so, as in other entertainment media, emotion plays a large role in video games as a whole.
This project seeks to gain an understanding of which emotions are most prominent during
gameplay. From there, a system will be created wherein the game records the player's facial
expressions and interprets those expressions as emotions, allowing the game to adjust its
difficulty and create a more tailored experience.
The first portion of this project, understanding the relationship between emotions and
games, was carried out by recording myself as I played three games of different genres for
thirty minutes each. These recordings were evaluated with the same emotion-recognition
system later used in the game I created.
After interpreting the data, I created three versions of the same game, based on a
template created by Stan's Assets that implements a version of the arcade game Stacker. In
the first version, no changes were made to the gameplay experience; the game simply recorded
the player's face and extrapolated emotions from that recording. In the second, the game's
speed increased in an attempt to maintain a certain level of positive emotion. In the third,
the game increased its speed and also decreased it in an attempt to minimize negative
emotion.
Together, these tests show that a player's emotional experience depends heavily on how
well the game is tailored to a particular emotion. They also show that, when creating a
system meant to interact with these emotions, it is easier to build a one-dimensional system
that focuses on one emotion (or range of emotions) than a more complex system, as the
complex system tends to become unstable and can lead to undesirable gameplay effects.
This thesis explores the ethical implications of using facial recognition artificial intelligence (AI) technologies in medicine, with a focus on both the opportunities and challenges presented by the use of this technology in the diagnosis and treatment of rare genetic disorders. We highlight the positive outcomes of using AI in medicine, such as accuracy and efficiency in diagnosing rare genetic disorders, while also examining ethical concerns including bias, misdiagnosis, strain on patient-clinician relationships, misuse outside of medicine, and privacy. This paper draws on the opinions of medical providers and other professionals outside of medicine, finding that while many are excited about the potential of AI to improve medicine, concerns remain about the ethical implications of these technologies. We discuss current legislation controlling the use of AI in healthcare and its ambiguity. Overall, this thesis highlights the need for further research and public discourse to address the ethical implications of using facial recognition and AI technologies in medicine, while also providing recommendations for its future use.