This collection includes both ASU Theses and Dissertations, submitted by graduate students, and the Barrett, Honors College theses submitted by undergraduate students. 

Description
Machine learning models can pick up biases and spurious correlations from training data, then project and amplify these biases during inference, posing significant challenges in real-world settings. One approach to mitigating this is a class of methods that identify and filter out bias-inducing samples from the training dataset so that models are never exposed to the biases. However, filtering wastes considerable resources, since most of the dataset that was created is discarded as biased. This work avoids that waste by identifying and quantifying the biases. I further elaborate on the implications of dataset filtering for robustness (to adversarial attacks) and generalization (to out-of-distribution samples). The findings suggest that while dataset filtering does help improve out-of-distribution (OOD) generalization, it has a significant negative impact on robustness to adversarial attacks. They also show that transforming bias-inducing samples into adversarial samples (instead of eliminating them from the dataset) can significantly boost robustness without sacrificing generalization.
ContributorsSachdeva, Bhavdeep Singh (Author) / Baral, Chitta (Thesis advisor) / Liu, Huan (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created2021
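The closing point of the abstract above lends itself to a short illustration. The following is a minimal sketch, not code from the thesis, of keeping flagged bias-inducing samples in training by replacing them with adversarially perturbed versions instead of discarding them. `model`, `criterion`, `optimizer`, and the `bias_mask` marking flagged samples are hypothetical placeholders, and single-step FGSM stands in for whatever attack the thesis actually uses.

```python
import torch

def fgsm_perturb(model, criterion, x, y, eps=0.03):
    """One-step FGSM: nudge inputs in the direction that increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = criterion(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

def train_step(model, criterion, optimizer, x, y, bias_mask, eps=0.03):
    """Keep flagged samples in the batch, but in adversarially perturbed form."""
    x = x.clone()
    if bias_mask.any():
        # bias_mask (hypothetical): True for samples a bias-detection method flagged.
        x[bias_mask] = fgsm_perturb(model, criterion, x[bias_mask], y[bias_mask], eps)
    optimizer.zero_grad()  # clears any gradients accumulated while perturbing
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The design choice being illustrated is simply that flagged samples still contribute a training signal, rather than being filtered out of the dataset.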
Description
In the age of artificial intelligence, Machine Learning (ML) has become a pervasive force, impacting countless aspects of our lives. As ML’s influence expands, concerns about its reliability and trustworthiness have intensified, with security and robustness emerging as significant challenges. For instance, it has been demonstrated that slight perturbations to a stop sign can cause ML classifiers to misidentify it as a speed limit sign, raising concerns about whether ML algorithms are suitable for real-world deployment. To tackle these issues, Responsible Machine Learning (Responsible ML) has emerged with a clear mission: to develop secure and robust ML algorithms. This dissertation aims to develop Responsible ML algorithms under real-world constraints. Specifically, recognizing the role of adversarial attacks in exposing security vulnerabilities and robustifying ML methods, it lays down the foundation of Responsible ML by outlining a novel taxonomy of adversarial attacks within real-world settings, categorizing them into black-box target-specific and target-agnostic attacks. Subsequently, it proposes potent adversarial attacks in each category, aiming for both effectiveness and efficiency. Transcending conventional boundaries, it then introduces the notion of causality into Responsible ML (a.k.a. Causal Responsible ML), presenting the causal adversarial attack. This represents the first principled framework to explain the transferability of adversarial attacks to unknown models by identifying their common source of vulnerabilities, thereby exposing the pinnacle of threat and vulnerability: conducting successful attacks on any model with no prior knowledge. Finally, acknowledging the surge of Generative AI, this dissertation explores Responsible ML for Generative AI: it introduces a novel adversarial attack that unveils the adversarial vulnerabilities of generative models and devises a strong defense mechanism to bolster their robustness against potential attacks.
ContributorsMoraffah, Raha (Author) / Liu, Huan (Thesis advisor) / Yang, Yezhou (Committee member) / Xiao, Chaowei (Committee member) / Turaga, Pavan (Committee member) / Carley, Kathleen (Committee member) / Arizona State University (Publisher)
Created2024
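To make the black-box, target-agnostic setting mentioned above concrete, here is a toy sketch, not taken from the dissertation, of a query-based attack that only observes the unknown model's predicted label and searches for a small perturbation that flips it. `model`, the perturbation budget `eps`, and the query budget are hypothetical placeholders.

```python
import torch

@torch.no_grad()
def random_search_attack(model, x, y_true, eps=0.05, max_queries=500):
    """Toy target-agnostic black-box attack: try random bounded perturbations
    until the predicted label changes or the query budget runs out.
    Assumes x is a single image with a leading batch dimension of 1."""
    for _ in range(max_queries):
        delta = torch.empty_like(x).uniform_(-eps, eps)        # candidate perturbation
        x_adv = (x + delta).clamp(0.0, 1.0)                    # stay in valid image range
        if model(x_adv).argmax(dim=-1).item() != int(y_true):  # label-only query
            return x_adv
    return None  # attack failed within the budget
```

Stronger query-efficient black-box attacks exist; this sketch only illustrates that the attacker never touches gradients or model internals.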
Description
Anderies (2015) and Anderies et al. (2016), informed by Ostrom (2005), aim to employ robust feedback control models of social-ecological systems (SESs) to inform policy and the design of institutions guiding resilient resource use. Cote and Nightingale (2012) note that the main assumptions of resilience research downplay culture and social power. Addressing the epistemic gap between positivism and interpretation (Rosenberg 2016), this dissertation argues that power and culture indeed are of primary interest in SES research.

Human use of symbols is seen as an evolved semiotic capacity. First, representation is argued to arise as matter achieves semiotic closure (Pattee 1969; Rocha 2001) at the onset of natural selection. Guided by models by Kauffman (1993), the evolution of a symbolic code in genes is examined, and thereon the origin of representations other than genetic in evolutionary transitions (Maynard Smith and Szathmáry 1995; Beach 2003). Human symbolic interaction is proposed as one that can support its own evolutionary dynamics.

The model offered for wider dynamics in society is that of “flywheels,” mutually reinforcing networks of relations. They arise as interactions in a domain of social activity intensify, e.g. due to the interplay of infrastructures mediating built, social, and ecological affordances (Anderies et al. 2016). Flywheels manifest as entities facilitated by the simplified interactions (e.g. organizations) and as cycles maintaining the infrastructures (e.g. supply chains). They exhibit internal specialization as well as distributed intention, and so can favor certain groups’ interests and reinforce cultural blind spots to social exclusion (Mills 2007).

The perspective is applied to research on resilience in SESs, considering flywheels a semiotic extension of feedback control. Closer attention to representations of potentially excluded groups is justified on epistemic in addition to ethical grounds, as patterns in cultural text and social relations reflect the functioning of wider social processes. Participatory methods are suggested to aid in building capacity for institutional learning.
ContributorsBožičević, Miran (Author) / Anderies, John M (Thesis advisor) / Bolin, Robert (Committee member) / BurnSilver, Shauna (Committee member) / Arizona State University (Publisher)
Created2017
Description
Autonomous Driving (AD) systems are being actively researched and developed to solve the task of controlling vehicles safely without human intervention. One method to solve such a task is through a deep Reinforcement Learning (RL) approach. In deep RL, the main objective is to find an optimal control behavior, often called a policy, performed by an agent, which is the AD system in this case. This policy is usually learned through Deep Neural Networks (DNNs) based on the observations that the agent perceives, along with reward feedback received from the environment. However, recent studies demonstrated the vulnerability of control policies learned through deep RL against adversarial attacks. This raises concerns about the application of such policies to risk-sensitive tasks like AD. Previous adversarial attacks assume that the threats can be broadly realized in two ways: the first is targeted attacks through manipulation of the agent's complete observation in real time, and the other is untargeted attacks through manipulation of objects in the environment. The former assumes full access to the agent's observations at almost all times, while the latter has no control over the outcome of the attack. This research investigates the feasibility of targeted attacks through physical adversarial objects in the environment, a threat that combines effectiveness and practicality. Through simulations on one of the popular AD systems, it is demonstrated that an attacker can cause a fixed optimal policy to malfunction over time, e.g., performing an unintended self-parking, when an adversarial object is present. The proposed approach is formulated so that the attacker can learn the dynamics of the environment and also utilize common knowledge of the agent's dynamics to realize the attack. Further, several experiments are conducted to show the effectiveness of the proposed attack on different driving scenarios empirically. Lastly, this work also studies robustness with respect to object location, and the trade-off between attack strength and attack length based on proposed evaluation metrics.
ContributorsBuddareddygari, Prasanth (Author) / Yang, Yezhou (Thesis advisor) / Ren, Yi (Committee member) / Fainekos, Georgios (Committee member) / Arizona State University (Publisher)
Created2021
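As a rough illustration of the threat model in the abstract above, the sketch below (not the thesis implementation) wraps a gym-style driving environment so that a static image patch, standing in for a physical adversarial object, is pasted into the agent's camera observations, then runs a crude random search over patch textures to find one that pushes a fixed policy toward an attacker-chosen target state. `make_env`, `policy`, `target_state`, and the `info["agent_state"]` field are hypothetical placeholders, and random search stands in for the dynamics-aware optimization the thesis proposes.

```python
import numpy as np
import gymnasium as gym

class AdversarialObjectWrapper(gym.ObservationWrapper):
    """Overlay a fixed patch (the 'physical object') onto image observations."""
    def __init__(self, env, patch, top_left=(10, 10)):
        super().__init__(env)
        self.patch, self.top_left = patch, top_left

    def observation(self, obs):
        r, c = self.top_left
        h, w = self.patch.shape[:2]
        obs = obs.copy()
        obs[r:r + h, c:c + w] = self.patch  # paste the object into the camera view
        return obs

def score_patch(make_env, policy, patch, target_state, episodes=3):
    """Average closeness of the episode's final state to the attacker's target."""
    total = 0.0
    for _ in range(episodes):
        env = AdversarialObjectWrapper(make_env(), patch)
        obs, _ = env.reset()
        done = False
        while not done:
            obs, _, terminated, truncated, info = env.step(policy(obs))
            done = terminated or truncated
        # info["agent_state"] is a hypothetical field exposing e.g. vehicle pose.
        total += -np.linalg.norm(info["agent_state"] - target_state)
    return total / episodes

def random_search(make_env, policy, target_state, shape=(16, 16, 3), trials=50):
    """Crude stand-in for the attack optimization: keep the best-scoring patch."""
    best_patch, best_score = None, -np.inf
    for _ in range(trials):
        patch = np.random.randint(0, 256, size=shape, dtype=np.uint8)
        s = score_patch(make_env, policy, patch, target_state)
        if s > best_score:
            best_patch, best_score = patch, s
    return best_patch
```

The point of the sketch is the attack surface: the policy is never modified, only the environment the fixed policy observes.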