Journal article

Understanding adversarial attacks on deep learning based medical image analysis systems

X Ma, Y Niu, L Gu, Y Wang, Y Zhao, J Bailey, F Lu

Pattern Recognition | Elsevier | Published: 2021

Abstract

Deep neural networks (DNNs) have become popular for medical image analysis tasks like cancer diagnosis and lesion detection. However, a recent study demonstrates that medical deep learning systems can be compromised by carefully engineered adversarial examples/attacks with small, imperceptible perturbations. This raises safety concerns about the deployment of these systems in clinical settings. In this paper, we provide a deeper understanding of adversarial examples in the context of medical images. We find that medical DNN models can be more vulnerable to adversarial attacks than models for natural images, from two different viewpoints. Surprisingly, we also find that medical ..
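As a quick illustration of what "carefully engineered adversarial examples with small, imperceptible perturbations" means in practice, the sketch below crafts one with the classic Fast Gradient Sign Method (FGSM). This is a generic PyTorch example, not code or a method taken from this article; the model, image, label, and epsilon values are placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=8 / 255):
    """Generate an adversarial example with the Fast Gradient Sign Method.

    A single gradient step, bounded by `epsilon` per pixel (L-infinity norm),
    keeps the perturbation visually imperceptible while increasing the model's
    loss. Placeholders: `model` is any image classifier, `image` a batch of
    pixels in [0, 1], `label` the ground-truth class indices.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Move every pixel by at most epsilon in the direction that raises the loss.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```

Feeding the returned image back to the classifier typically flips its prediction even though the change is invisible to a human observer, which is the vulnerability the paper analyzes for medical DNN models.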

Grants

Awarded by National Natural Science Foundation of China (NSFC)
Awarded by JST, ACT-X Grant, Japan
Awarded by Zhejiang Provincial Natural Science Foundation of China

Funding Acknowledgements

This work was supported by the National Natural Science Foundation of China (NSFC) under Grant 61972012; by JST ACT-X, Japan, under Grant Number JPMJAX190D; and by the Zhejiang Provincial Natural Science Foundation of China under Grant LZ19F010001.