Conference Proceedings

Attacking Data Transforming Learners at Training Time

Scott Alfeld, Ara Vartanian, Lucas Newman-Johnson, Benjamin I. P. Rubinstein

Thirty-Third AAAI Conference on Artificial Intelligence | Association for the Advancement of Artificial Intelligence | Published: 2019


While machine learning systems are known to be vulnerable to data-manipulation attacks at both training and deployment time, little is known about how to adapt attacks when the defender transforms data prior to model estimation. We consider the setting where the defender Bob first transforms the data and then learns a model from the result; Alice, the attacker, perturbs Bob’s input data before he transforms it. We develop a general-purpose “plug and play” framework for gradient-based attacks based on matrix differentials, focusing on ordinary least-squares linear regression. This allows learning algorithms and data transformations to be paired and composed arbitrarily: attacks can be adapted…
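To make the setting concrete, the following is a minimal sketch (not the authors' code) of a gradient-based training-time attack against OLS where the defender first transforms the data. The transform (feature standardization), the attacker's target weights, and the use of a finite-difference gradient in place of the paper's matrix-differential derivation are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def transform(X):
    # Bob's data transform (assumed here: feature standardization).
    return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)

def fit_ols(X, y):
    # Bob fits ordinary least squares on the transformed data.
    Z = transform(X)
    return np.linalg.lstsq(Z, y, rcond=None)[0]

def attacker_loss(X, y, w_target):
    # Alice wants Bob's learned weights to approach w_target.
    w = fit_ols(X, y)
    return float(np.sum((w - w_target) ** 2))

def numeric_grad(X, y, w_target, eps=1e-5):
    # Finite-difference gradient of the attacker loss w.r.t. the raw data X.
    # (The paper derives this analytically via matrix differentials;
    # the numeric gradient is a stand-in for illustration.)
    G = np.zeros_like(X)
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            Xp = X.copy(); Xp[i, j] += eps
            Xm = X.copy(); Xm[i, j] -= eps
            G[i, j] = (attacker_loss(Xp, y, w_target)
                       - attacker_loss(Xm, y, w_target)) / (2 * eps)
    return G

# Toy data: without interference Bob learns weights near [1, -2].
X = rng.normal(size=(50, 2))
y = transform(X) @ np.array([1.0, -2.0]) + 0.01 * rng.normal(size=50)
w_target = np.zeros(2)  # Alice's (arbitrary) goal: a useless model

loss_before = attacker_loss(X, y, w_target)
for _ in range(100):  # gradient descent on the raw data X
    X = X - 0.05 * numeric_grad(X, y, w_target)
loss_after = attacker_loss(X, y, w_target)
```

Because the gradient is taken with respect to the data Alice controls, the same loop applies unchanged if `transform` or `fit_ols` is swapped out, which is the "plug and play" composability the abstract describes.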




Funding Acknowledgements

This work was supported by The Gregory S. Call Undergraduate Research Program at Amherst College and the Australian Research Council (DE160100584).