Conference Proceedings

Clean-Label Backdoor Attacks on Video Recognition Models

S Zhao, X Ma, X Zheng, J Bailey, J Chen, YG Jiang

Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition | Published: 2020


Deep neural networks (DNNs) are vulnerable to backdoor attacks, which implant hidden triggers in a model by poisoning its training data. A backdoored model behaves normally on clean test images, yet consistently predicts a particular target class for any test example that contains the trigger pattern. As such, backdoor attacks are hard to detect and have raised severe security concerns in real-world applications. Thus far, backdoor research has mostly been conducted in the image domain with image classification models. In this paper, we show that existing image backdoor attacks are far less effective on videos, and outline 4 strict conditions under which existing attacks are likely to fail: 1) scenari..
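The trigger-based poisoning the abstract describes can be illustrated with a minimal sketch. This is a generic BadNets-style patch stamp, not the paper's clean-label method; the array shapes, the `apply_trigger` helper, and the solid-white trigger values are all illustrative assumptions.

```python
import numpy as np

def apply_trigger(frames, trigger, x=0, y=0):
    """Stamp a small trigger patch onto every frame of a video clip.

    frames:  (T, H, W, C) float array of video frames
    trigger: (h, w, C) patch; a real attack would tune its pattern
    x, y:    top-left corner where the patch is placed
    """
    poisoned = frames.copy()
    h, w, _ = trigger.shape
    # Overwrite the same spatial region in all T frames with the patch.
    poisoned[:, y:y + h, x:x + w, :] = trigger
    return poisoned

# Toy example: a 16-frame 32x32 RGB clip, poisoned with a 4x4 white patch
# in the top-right corner (hypothetical sizes and location).
clip = np.zeros((16, 32, 32, 3), dtype=np.float32)
trigger = np.ones((4, 4, 3), dtype=np.float32)
poisoned_clip = apply_trigger(clip, trigger, x=28, y=0)
```

At training time the attacker would pair such poisoned clips with the target class label; at test time, any clip carrying the patch is then misclassified into that class while clean clips are unaffected.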