Description
A defense-by-randomization framework is proposed as an effective defense mechanism against different types of adversarial attacks on neural networks. Experiments were conducted by selecting combinations of differently constructed image classification neural networks to observe which combinations, when applied to this framework, were most effective in maximizing classification accuracy. Furthermore, the reasons why particular combinations were more effective than others are explored.
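
The following is a minimal sketch of the randomized-selection idea described in the abstract, assuming a pool of PyTorch classifiers; the architectures (SmallCNN, SmallMLP) and the MovingTargetDefense class are illustrative assumptions, not the networks actually evaluated in the thesis.

```python
import random
import torch
import torch.nn as nn

# Illustrative stand-ins for "differently constructed" image classifiers.
class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

class SmallMLP(nn.Module):
    def __init__(self, num_classes=10, in_dim=3 * 32 * 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        return self.net(x)

class MovingTargetDefense:
    """Randomly select one member of a model pool for each query, so an
    attacker cannot reliably craft adversarial examples against a single
    fixed network."""

    def __init__(self, models):
        self.models = [m.eval() for m in models]

    @torch.no_grad()
    def predict(self, x):
        model = random.choice(self.models)  # randomized target selection
        return model(x).argmax(dim=1)

# Usage: classify a batch of 32x32 RGB images with a randomized model pool.
defense = MovingTargetDefense([SmallCNN(), SmallMLP()])
batch = torch.randn(8, 3, 32, 32)
print(defense.predict(batch))
```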
Restrictions Statement

Barrett Honors College theses and creative projects are restricted to ASU community members.

Details

Title
  • Moving Target Defense: Defending against Adversarial Defense
Contributors
Date Created
2019-05
Resource Type
  • Text