Black-box adversarial attacks on video recognition models
L Jiang, X Ma, S Chen, J Bailey, YG Jiang
Proceedings of the 27th ACM International Conference on Multimedia | Association for Computing Machinery | Published: 2019
Deep neural networks (DNNs) are known to be vulnerable to adversarial examples: inputs modified by small, carefully crafted perturbations that can easily fool a DNN into misclassifying at test time. Thus far, adversarial research has focused mainly on image models, under either a white-box setting, where the adversary has full access to model parameters, or a black-box setting, where the adversary can only query the target model for probabilities or labels. While several white-box attacks have been proposed for video models, black-box video attacks remain unexplored. To close this gap, we propose the first black-box video attack framework.
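The black-box setting described above can be illustrated with a minimal sketch (this is not the paper's method): the adversary never sees gradients or parameters, only the probabilities returned by queries, and searches for a perturbation within a small L-infinity budget. The linear "target model" and the greedy random search here are stand-in assumptions for illustration only.

```python
# Minimal sketch of a query-based black-box attack (illustrative only,
# not the attack proposed in the paper). The adversary can only call
# query(), which returns class probabilities -- no gradient access.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "target model": a fixed random linear classifier, 2 classes.
W = rng.normal(size=(2, 16))

def query(x):
    """Black-box access: returns output probabilities only."""
    logits = W @ x
    e = np.exp(logits - logits.max())
    return e / e.sum()

def random_search_attack(x, true_label, eps=0.5, steps=200):
    """Greedy random search inside an L-inf ball of radius eps:
    keep any candidate that lowers the true-class probability."""
    best = x.copy()
    best_p = query(best)[true_label]
    for _ in range(steps):
        cand = np.clip(best + rng.uniform(-0.1, 0.1, size=x.shape),
                       x - eps, x + eps)
        p = query(cand)[true_label]
        if p < best_p:
            best, best_p = cand, p
    return best

x = rng.normal(size=16)
y = int(np.argmax(query(x)))          # model's original prediction
x_adv = random_search_attack(x, y)
print(query(x)[y], query(x_adv)[y])   # true-class probability drops
```

Because each accepted candidate strictly lowers the true-class probability, the returned example is never worse than the original input under this query-only access model; the paper's contribution is making such query-limited search tractable for video inputs.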
This work was supported in part by National Key Research and Development Program of China under Grant 2018YFB1004300 and National Natural Science Foundation of China under Grant 61622204.