Matching Items (1,471)
Description
Because the internet is still in its infancy, there is no consensus on the policy approaches that various countries have taken. These policies range from strict government control to liberal access to the internet, which makes protecting individuals' private data difficult: there are too many loopholes and too many competing forms of data-protection policy. Protecting oneself properly online requires effort from individuals, governments, and private entities alike, using a theoretical mixed-methods approach.
Contributors: Peralta, Christina A (Author) / Scheall, Scott (Thesis advisor) / Hollinger, Keith (Thesis advisor) / Alozie, Nicholas (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
The rise in popularity of applications and services that charge for access to proprietary trained models has led to increased interest in the robustness of these models and the security of the environments in which inference is conducted. State-of-the-art attacks extract models and generate adversarial examples by inferring relationships between a model’s input and output. Popular variants of these attacks have been shown to be deterred by countermeasures that poison predicted class distributions and mask class boundary gradients. Neural networks are also vulnerable to timing side-channel attacks. This work builds on top of Subneural, an attack framework that uses floating point timing side channels to extract neural structures. Novel applications of addition timing side channels are introduced, allowing the signs and arrangements of leaked parameters to be discerned more efficiently. Addition timing is also used to leak network biases, making the framework applicable to a wider range of targets. The enhanced framework is shown to be effective against models protected by prediction poisoning and gradient masking adversarial countermeasures and to be competitive with adaptive black box adversarial attacks against stateful defenses. Mitigations necessary to protect against floating-point timing side-channel attacks are also presented.
Contributors: Vipat, Gaurav (Author) / Shoshitaishvili, Yan (Thesis advisor) / Doupe, Adam (Committee member) / Srivastava, Siddharth (Committee member) / Arizona State University (Publisher)
Created: 2023
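
The floating-point timing channel described in the abstract above rests on a widely documented hardware effect: on many CPUs, arithmetic involving subnormal (denormal) operands falls back to a slow microcode-assisted path, so the latency of an operation depends on the values it processes. The C sketch below is not from the thesis; it is only a minimal illustration, assuming a POSIX system with clock_gettime, of the kind of data-dependent timing difference such attacks build on.

    /* Minimal sketch (illustrative only): compare the cost of repeated
     * double additions with normal operands versus subnormal operands.
     * On many x86 CPUs the subnormal case is dramatically slower; the gap
     * can shrink or vanish on other hardware or when flush-to-zero is enabled. */
    #include <stdio.h>
    #include <time.h>

    static double time_adds(volatile double x, volatile double y, long n) {
        struct timespec t0, t1;
        volatile double acc = 0.0;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long i = 0; i < n; i++)
            acc = x + y;                 /* the addition whose latency we probe */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        (void)acc;
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }

    int main(void) {
        long n = 50 * 1000 * 1000;
        double normal    = time_adds(1.0, 1.5, n);       /* normal doubles    */
        double subnormal = time_adds(1e-310, 1e-310, n); /* subnormal doubles */
        printf("normal operands:    %.3f s\n", normal);
        printf("subnormal operands: %.3f s\n", subnormal);
        return 0;
    }

Compiled with something like cc -O2 timing.c, the two timings typically diverge on Intel hardware; building with -ffast-math (which usually enables flush-to-zero at startup) tends to close the gap, since flush-to-zero removes the slow subnormal path entirely.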
Contributors: Nickels, Derek (Performer) / ASU Library. Music Library (Publisher)
Created: 1994-03-25
Contributors: Burkhardt, Michael (Performer) / ASU Library. Music Library (Publisher)
Created: 1998-04-17
Contributors: Quamme, Gary (Performer) / ASU Library. Music Library (Publisher)
Created: 2004-12-07
Contributors: Clark, Robert, 1931- (Performer) / ASU Library. Music Library (Publisher)
Created: 2003-02-23
Contributors: Bates, Robert (Performer) / ASU Library. Music Library (Publisher)
Created: 1999-02-21
Contributors: Reas, Keith S (Performer) / ASU Library. Music Library (Publisher)
Created: 1994-02-06
Contributors: Brewer, Dwight W. (Performer) / ASU Library. Music Library (Publisher)
Created: 1986-09-04
Contributors: Ramsey, Mark D (Performer) / ASU Library. Music Library (Publisher)
Created: 1990-03-22