Visual Prompting for Adversarial Robustness
Aochuan Chen, Peter Lorenz, et al.
ICASSP 2023
Several successful adversarial attacks have demonstrated the vulnerability of deep learning algorithms. These attacks are detrimental to building dependable, deep-learning-based AI applications. It is therefore imperative to build defense mechanisms that protect the integrity of deep learning models. In this paper, we present a novel "defense layer" that can be inserted into a network to block the generation of adversarial noise and prevent adversarial attacks in black-box and gray-box settings. The parameter-free defense layer, when applied to any convolutional network, helps achieve protection against attacks such as FGSM, L2, Elastic-Net, and DeepFool. Experiments are performed with different CNN architectures, including VGG, ResNet, and DenseNet, on three databases: MNIST, CIFAR-10, and PaSC. The results showcase the efficacy of the proposed defense layer without adding any computational overhead. For example, on the CIFAR-10 database, while the attack can reduce the accuracy of a ResNet-50 model to as low as 6.3%, the proposed defense layer retains the original accuracy of 81.32%.
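The abstract does not describe the internal operation of the defense layer, so the sketch below only illustrates the integration pattern it implies: a parameter-free module prepended to an off-the-shelf CNN such as ResNet-50. The `DefenseLayer` class and its median-filter transform are illustrative assumptions, not the authors' design.

```python
# Minimal sketch (not the paper's actual method): prepend a parameter-free
# "defense layer" to a torchvision ResNet-50. A fixed 3x3 median filter is
# used purely as a placeholder for whatever operation the paper's layer
# performs; DefenseLayer and add_defense_layer are hypothetical names.
import torch
import torch.nn as nn
from torchvision.models import resnet50


class DefenseLayer(nn.Module):
    """Parameter-free input transform standing in for the paper's defense layer."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Gather 3x3 neighborhoods and take the per-pixel median per channel.
        b, c, h, w = x.shape
        patches = nn.functional.unfold(x, kernel_size=3, padding=1)  # (B, C*9, H*W)
        patches = patches.view(b, c, 9, h * w)
        filtered = patches.median(dim=2).values                      # (B, C, H*W)
        return filtered.view(b, c, h, w)


def add_defense_layer(model: nn.Module) -> nn.Module:
    """Wrap any CNN so inputs pass through the parameter-free layer first."""
    return nn.Sequential(DefenseLayer(), model)


if __name__ == "__main__":
    model = add_defense_layer(resnet50(weights=None)).eval()
    x = torch.rand(2, 3, 224, 224)  # dummy image batch
    with torch.no_grad():
        logits = model(x)
    print(logits.shape)  # torch.Size([2, 1000])
```

Because the added layer has no trainable parameters, it can be attached to an already-trained network without retraining, which is consistent with the abstract's claim of adding no computational overhead beyond the fixed transform itself.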