Conference Proceedings

The vulnerability of learning to adversarial perturbation increases with intrinsic dimensionality

L Amsaleg, J Bailey, D Barbe, S Erfani, ME Houle, V Nguyen, M Radovanovic

2017 IEEE Workshop on Information Forensics and Security (WIFS) | IEEE Xplore | Published: 2018

Abstract

© 2017 IEEE. Recent research has shown that machine learning systems, including state-of-the-art deep neural networks, are vulnerable to adversarial attacks. By adding an imperceptible amount of adversarial noise to the input object, an attacker can very likely trick the classifier into assigning the modified object to any desired class. It has also been observed that these adversarial samples generalize well across models. A complete understanding of the nature of adversarial samples has not yet emerged. Towards this goal, we present a novel theoretical result formally linking the adversarial vulnerability of learning to the intrinsic dimensionality of the data. In particular, our inv..
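The "imperceptible adversarial noise" mechanism the abstract describes can be illustrated with a minimal fast-gradient-sign sketch. This is not the authors' construction: it assumes a hypothetical linear scorer `f(x) = w.x + b` standing in for a network's local linearisation, and the function name `fgsm_step` is illustrative.

```python
import numpy as np

def fgsm_step(w, b, x, eps):
    """One fast-gradient-sign perturbation for a linear scorer
    f(x) = w @ x + b. Under an L-infinity budget eps, the
    perturbation that most quickly pushes the score toward the
    opposite class is eps * sign(gradient), with the sign chosen
    to oppose the current prediction."""
    direction = -np.sign(w) if (w @ x + b) > 0 else np.sign(w)
    return x + eps * direction
```

Even a small per-coordinate budget `eps` can flip a linear decision, because the perturbation accumulates across all input dimensions, which is one intuition behind linking vulnerability to dimensionality.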
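The intrinsic dimensionality the abstract refers to is commonly estimated locally from nearest-neighbour distances. A minimal sketch of the standard maximum-likelihood (Hill-type) local intrinsic dimensionality (LID) estimator follows; the function name `lid_mle` is illustrative, and this is an estimator from the surrounding literature, not code from the paper itself.

```python
import numpy as np

def lid_mle(neighbour_distances):
    """Maximum-likelihood estimate of local intrinsic dimensionality
    (LID) from a point's distances to its k nearest neighbours:
        LID_hat = -( (1/k) * sum_i log(r_i / r_k) )^{-1},
    where r_k is the distance to the k-th (farthest) neighbour.
    Higher LID means the data looks locally higher-dimensional."""
    r = np.sort(np.asarray(neighbour_distances, dtype=float))
    return -1.0 / np.mean(np.log(r / r[-1]))
```

For points drawn uniformly from a d-dimensional ball, neighbour distances concentrate so that the estimate recovers roughly d, which is what makes LID a usable proxy for the dimensionality the theoretical result is stated in terms of.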

