Journal article

Support vector machines resilient against training data integrity attacks

S Weerasinghe, SM Erfani, T Alpcan, C Leckie

Pattern Recognition | Elsevier BV | Published: 2019

Abstract

Support Vector Machines (SVMs) are vulnerable to integrity attacks, where malicious attackers distort the training data in order to compromise the decision boundary of the learned model. With increasing real-world applications of SVMs, malicious data that is classified as innocuous may have harmful consequences. This paper presents a novel framework that utilizes adversarial learning, nonlinear data projections, and game theory to improve the resilience of SVMs against such training-data-integrity attacks. The proposed approach introduces a layer of uncertainty through the use of random projections on top of the learners, making it challenging for the adversary to guess the specific configuration …
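As an illustration of the general idea only, and not the authors' framework, the following minimal sketch (assuming NumPy and scikit-learn) places a random Gaussian projection in front of an SVM, so that the feature space actually seen by the learner is hidden from an adversary crafting poisoned training points. The dataset, projection dimension, and SVM hyperparameters are hypothetical.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.random_projection import GaussianRandomProjection
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X = rng.randn(200, 50)                      # hypothetical (possibly poisoned) training data
y = (X[:, :5].sum(axis=1) > 0).astype(int)  # hypothetical labels

# Each learner draws its own random projection; the induced feature space,
# and hence the learned decision boundary, is not known to the adversary.
model = make_pipeline(
    GaussianRandomProjection(n_components=20, random_state=rng),
    SVC(kernel="rbf", C=1.0),
)
model.fit(X, y)
print(model.score(X, y))  # training accuracy of the projected-feature SVM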


Grants

Awarded by Australian Research Council


Funding Acknowledgements

This work was supported in part by the Australian Research Council Discovery Project under Grant DP140100819, and by Northrop Grumman Mission Systems' University Research Program. The authors thank Prof. Margreta Kuijper for her helpful comments and discussions.