Journal article

The Vulnerability of Learning to Adversarial Perturbation Increases with Intrinsic Dimensionality

Laurent Amsaleg, James Bailey, Dominique Barbe, Sarah Erfani, Michael E. Houle, Nguyen Vinh, Milos Radovanovic

NII Technical Reports | IEEE | Published: 2017

Abstract

Recent research has shown that machine learning systems, including state-of-the-art deep neural networks, are vulnerable to adversarial attacks. By adding to the input object an imperceptible amount of adversarial noise, it is highly likely that the classifier can be tricked into assigning the modified object to any desired class. Furthermore, these adversarial samples generalize well across models: samples generated using one network can often succeed in fooling other networks or machine learning models. These alarming properties of adversarial samples have drawn increasing interest recently, with several researchers having attributed the adversarial effect to different factors, such as the…
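The perturbation mechanism the abstract describes can be illustrated with a minimal sketch: for a toy linear classifier, the gradient of the score with respect to the input is the weight vector itself, so an imperceptibly small gradient-sign step (in the style of fast-gradient-sign attacks) flips the predicted label. All names and values below are invented for demonstration and are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)   # weights of a toy linear classifier f(x) = sign(w . x)
x = rng.normal(size=100)   # a clean input point

def predict(v):
    # Binary decision based on the sign of the linear score.
    return 1 if w @ v > 0 else -1

y = predict(x)             # the model's label for the clean input

# The gradient of the score w . x with respect to x is w itself, so stepping
# every coordinate against the current label by eps just over margin / ||w||_1
# is guaranteed to flip the decision, while the change to any single
# coordinate has magnitude only eps.
eps = 1.01 * abs(w @ x) / np.abs(w).sum()
x_adv = x - eps * y * np.sign(w)

print(y, predict(x_adv), round(eps, 4))
```

The per-coordinate budget `eps` shrinks as the dimensionality grows (the denominator `||w||_1` scales with the number of features), which is one intuition behind the paper's link between intrinsic dimensionality and vulnerability.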


Grants

Awarded by Australian Research Council


Awarded by JSPS Kakenhi


Awarded by Serbian Ministry of Education, Science and Technological Development


Funding Acknowledgements

Laurent Amsaleg is in part supported by the European CHIST-ERA ID_IOT project. James Bailey, Sarah Erfani and Vinh Nguyen are in part supported by the Australian Research Council via grant number DP140101969. Vinh Nguyen is in part supported by a University of Melbourne ECR grant. Michael E. Houle is in part supported by JSPS Kakenhi Kiban (A) Research Grant 25240036 and Kiban (B) Research Grant 15H02753. Milos Radovanovic is in part supported by the Serbian Ministry of Education, Science and Technological Development through project number OI174023.