Conference Proceedings
An efficient adversarial learning strategy for constructing robust classification boundaries
W Liu, S Chawla, J Bailey, C Leckie, K Ramamohanarao
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Published : 2012
Abstract
Traditional classification methods assume that the training and the test data arise from the same underlying distribution. However, in some adversarial settings, the test set can be deliberately constructed to increase the error rate of a classifier. A prominent example is email spam, where words are transformed to avoid word-based features embedded in a spam filter. Recent research has modeled interactions between a data miner and an adversary as a sequential Stackelberg game, and solved for its Nash equilibrium to build classifiers that are more robust to subsequent manipulations of training data sets. However, in this paper we argue that the iterative algorithm used in the Stackelberg ...
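The spam-filter evasion scenario the abstract describes can be illustrated with a toy sketch. This is a hypothetical example, not the paper's Stackelberg model: a naive word-feature filter and an adversary that rewrites trigger words so the same filter no longer fires. All words, messages, and functions here are invented for illustration.

```python
# Hypothetical toy: a word-feature spam filter and an evading adversary.
# Not the paper's method; purely illustrative.

SPAM_WORDS = {"free", "winner", "cash"}

def is_spam(message: str) -> bool:
    """Flag a message if it contains any known spam trigger word."""
    return any(word in SPAM_WORDS for word in message.lower().split())

def adversarial_rewrite(message: str) -> str:
    """Adversary's transformation: obfuscate trigger words with digits."""
    subs = {"free": "fr3e", "winner": "w1nner", "cash": "c4sh"}
    return " ".join(subs.get(w, w) for w in message.split())

msg = "free cash for every winner"
print(is_spam(msg))                        # the original message is caught
print(is_spam(adversarial_rewrite(msg)))   # the rewritten message evades
```

The game-theoretic formulation in the paper treats such transformations as the adversary's best response, and seeks classification boundaries that remain robust after the adversary has moved.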
Grants
Awarded by Australian Research Council