Full metadata
Title
Responsible Machine Learning: Security, Robustness, and Causality
Description
In the age of artificial intelligence, Machine Learning (ML) has become a pervasive force, impacting countless aspects of our lives. As ML’s influence expands, concerns about its reliability and trustworthiness have intensified, with security and robustness emerging as significant challenges. For instance, it has been demonstrated that slight perturbations to a stop sign can cause ML classifiers to misidentify it as a speed-limit sign, raising concerns about whether ML algorithms are suitable for real-world deployment. To tackle these issues, Responsible Machine Learning (Responsible ML) has emerged with a clear mission: to develop secure and robust ML algorithms. This dissertation develops Responsible ML algorithms under real-world constraints. Recognizing the role of adversarial attacks in exposing security vulnerabilities and in robustifying ML methods, it lays the foundation of Responsible ML by outlining a novel taxonomy of adversarial attacks in real-world settings, categorizing them into black-box target-specific and target-agnostic attacks. It then proposes potent adversarial attacks in each category, designed to be both effective and efficient. Transcending conventional boundaries, it introduces the notion of causality into Responsible ML (a.k.a. Causal Responsible ML), presenting the causal adversarial attack. This is the first principled framework to explain the transferability of adversarial attacks to unknown models by identifying their common source of vulnerability, thereby exposing the pinnacle of threat: conducting successful attacks on any model with no prior knowledge.
Finally, acknowledging the surge of Generative AI, the dissertation explores Responsible ML for Generative AI: it introduces a novel adversarial attack that unveils the adversarial vulnerabilities of generative models and devises a strong defense mechanism to bolster their robustness against such attacks.
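To make the notion of an adversarial perturbation above concrete, the following is a minimal, generic sketch of the classic fast gradient sign method (FGSM) in PyTorch. It is a textbook illustration under assumed placeholder inputs (model, x, y), not the attack proposed in this dissertation, which operates under black-box and causal settings.

# Minimal FGSM sketch: nudge an input within an L-infinity budget so a
# classifier's loss increases, often flipping its prediction.
# Generic illustration only; model, x, y, and epsilon are placeholders.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return an adversarially perturbed copy of x for label y."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to
    # the valid pixel range [0, 1].
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()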
Date Created
2024
Contributors
- Moraffah, Raha (Author)
- Liu, Huan (Thesis advisor)
- Yang, Yezhou (Committee member)
- Xiao, Chaowei (Committee member)
- Turaga, Pavan (Committee member)
- Carley, Kathleen (Committee member)
- Arizona State University (Publisher)
Extent
205 pages
Language
eng
Copyright Statement
In Copyright
Peer-reviewed
No
Open Access
No
Handle
https://hdl.handle.net/2286/R.2.N.193546
Level of coding
minimal
Note
Partial requirement for: Ph.D., Arizona State University, 2024
Field of study: Computer Science
System Created
- 2024-05-02 02:03:03
System Modified
- 2024-05-02 02:03:11