Matching Items (2)
Description
Culture is a living, dynamic concept that influences the lives of all human beings, making it one of the cornerstone building blocks of the human experience. However, there is a widespread assumption that culture matters more for some people than others. Recent studies have found evidence of a cultural (mis)attribution bias among psychologists: the tendency to exaggerate the role of cultural factors in the behavior of racial/ethnic minorities while simultaneously exaggerating the role of personal psychological factors in the behavior of the racial/ethnic majority (Causadias, Vitriol, & Atkins, 2018a, 2018b). This study explores the cultural (mis)attribution bias and how it manifests in the beliefs and attitudes of undergraduate students at ASU. It also examines the implications of those results and how to apply that knowledge to our daily interactions with the people around us.
Contributors: Kwon, Woochan (Author) / Causadias, José (Thesis director) / Pedram, Christina (Committee member) / Korous, Kevin (Committee member) / Sanford School of Social and Family Dynamics (Contributor) / Department of Psychology (Contributor) / College of Integrative Sciences and Arts (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
Machine learning models can pick up biases and spurious correlations from training data and amplify these biases during inference, posing significant challenges in real-world settings. One approach to mitigating this is a class of methods that identify and filter out bias-inducing samples from the training dataset, so that models are never exposed to the biases. However, filtering wastes considerable resources, since much of the dataset that was created is discarded as biased. This work avoids that waste by identifying and quantifying the biases. I further elaborate on the implications of dataset filtering for robustness (to adversarial attacks) and generalization (to out-of-distribution samples). The findings suggest that while dataset filtering does help to improve out-of-distribution (OOD) generalization, it has a significant negative impact on robustness to adversarial attacks. They also show that transforming bias-inducing samples into adversarial samples, instead of eliminating them from the dataset, can significantly boost robustness without sacrificing generalization.
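The abstract does not give implementation details, but the contrast it draws can be sketched in a few lines. The sketch below is a minimal illustration only, assuming a PyTorch classifier, a precomputed per-sample bias score from some identification method, and one-step FGSM as the adversarial transform; all function and parameter names here are hypothetical, not the thesis's actual procedure.

import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    # One-step FGSM: move x along the sign of the loss gradient so that
    # shortcut features in the sample stop trivially predicting the label.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + epsilon * x.grad.sign()).detach()

def build_training_set(model, samples, bias_scores, threshold, strategy="transform"):
    # samples: list of (x, y) pairs, x an unbatched tensor, y an int label.
    # bias_scores: per-sample bias estimates from some identification method
    # (the abstract does not specify how these scores are computed).
    kept = []
    for (x, y), score in zip(samples, bias_scores):
        if score <= threshold:              # low-bias sample: keep as-is
            kept.append((x, y))
        elif strategy == "transform":       # biased: adversarialize instead of discarding
            x_adv = fgsm_perturb(model, x.unsqueeze(0), torch.tensor([y]))
            kept.append((x_adv.squeeze(0), y))
        # strategy == "filter": biased samples are simply dropped (the wasteful baseline)
    return kept

Training on the output of strategy="transform" keeps every sample, trading the discarded biased examples for adversarially perturbed ones; that exchange is the trade-off the abstract reports as boosting robustness without sacrificing generalization.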
Contributors: Sachdeva, Bhavdeep Singh (Author) / Baral, Chitta (Thesis advisor) / Liu, Huan (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2021