Conference Proceedings
Fast and Reliable Evaluation of Adversarial Robustness with Minimum-Margin Attack
R Gao, J Wang, K Zhou, F Liu, B Xie, G Niu, B Han, J Cheng
Proceedings of Machine Learning Research | Published: 2022
Abstract
AutoAttack (AA) has been the most reliable method for evaluating adversarial robustness when considerable computational resources are available. However, its high computational cost (e.g., about 100 times that of the projected gradient descent (PGD-20) attack) makes AA infeasible for practitioners with limited computational resources and also hinders its application in adversarial training (AT). In this paper, we propose a novel method, the minimum-margin (MM) attack, for fast and reliable evaluation of adversarial robustness. Compared with AA, our method achieves comparable performance while costing only 3% of the computational time in extensive experiments. The reliability of our method lies ..
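The abstract is truncated, but the margin quantity that gives the MM attack its name can be sketched. The following is a minimal illustration (not the authors' implementation): for each example, the margin is the true-class logit minus the largest other-class logit, so a small positive margin flags examples closest to the decision boundary, which a margin-guided attack would prioritize. All function and variable names here are hypothetical.

```python
import numpy as np

def minimum_margin(logits, labels):
    """Per-example margin: true-class logit minus the largest
    other-class logit. A negative margin means the example is
    already misclassified; a small positive margin means it sits
    close to the decision boundary."""
    logits = np.asarray(logits, dtype=float)
    n = logits.shape[0]
    true_logit = logits[np.arange(n), labels]
    masked = logits.copy()
    masked[np.arange(n), labels] = -np.inf  # exclude the true class
    runner_up = masked.max(axis=1)
    return true_logit - runner_up

# Toy batch of three examples: the second has the smallest margin,
# so an attack budget focused on minimum-margin examples would
# target it first.
logits = np.array([[3.0, 0.5, 0.1],
                   [1.2, 1.1, 0.3],
                   [0.2, 2.5, 0.1]])
labels = np.array([0, 0, 1])
margins = minimum_margin(logits, labels)
```

In this toy batch the margins are roughly 2.5, 0.1, and 2.3, so the second example is the minimum-margin one.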
Grants
Awarded by National Natural Science Foundation of China
Funding Acknowledgements
RZG, JXW, KWZ, BHX, and JC were supported by GRF 14208318 from the RGC of HKSAR. BH was supported by the RGC Early Career Scheme No. 22200720, NSFC Young Scientists Fund No. 62006202, and Guangdong Basic and Applied Basic Research Foundation No. 2022A1515011652. GN was supported by JST AIP Acceleration Research Grant Number JPMJCR20U3, Japan.