Pritish Parida, Timothy Chainer, et al.
ARPA-E Summit 2023
A key challenge for Deep Neural Network (DNN) algorithms is their vulnerability to adversarial attacks. Inherently non-deterministic compute substrates, such as those based on Analog In-Memory Computing (AIMC), have been speculated to provide significant adversarial robustness when performing DNN inference. In this paper, we experimentally validate this conjecture for the first time on an AIMC chip based on Phase Change Memory (PCM) devices. We demonstrate higher adversarial robustness against different types of adversarial attacks when implementing an image classification network. Additional robustness is also observed when performing hardware-in-the-loop attacks, for which the attacker is assumed to have full access to the hardware. A careful study of the various noise sources indicates that a combination of stochastic noise sources (both recurrent and non-recurrent) is responsible for the adversarial robustness, and that their type and magnitude disproportionately affect this property. Finally, it is demonstrated, via simulations, that when a much larger transformer network is used to implement a Natural Language Processing (NLP) task, additional robustness is still observed.
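The abstract's central claim, that a mix of recurrent (drawn once, e.g. at programming time) and non-recurrent (redrawn on every read) stochastic noise can blunt adversarial perturbations, can be illustrated with a toy simulation. The sketch below is a minimal illustration only: the linear classifier, noise magnitudes, and attack step are assumptions for demonstration, not the paper's experimental setup or a model of the PCM chip.

```python
# Hypothetical sketch: stochastic weight noise (mimicking an AIMC/PCM substrate)
# applied to a toy linear classifier attacked with an FGSM-style perturbation.
# All sizes and noise scales are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy "trained" linear classifier: 2 classes, 16-dimensional input.
W = rng.normal(size=(2, 16))
x = rng.normal(size=16)

def predict(weights, inp):
    return int(np.argmax(weights @ inp))

# Recurrent noise: drawn once (e.g. programming error), reused for every inference.
W_programmed = W + rng.normal(scale=0.02, size=W.shape)

def noisy_predict(inp):
    # Non-recurrent noise: redrawn on every inference (e.g. read noise).
    W_read = W_programmed + rng.normal(scale=0.02, size=W.shape)
    return int(np.argmax(W_read @ inp))

# Crude FGSM-style step computed on the noise-free weights, pushing the
# logit margin toward the wrong class.
true_class = predict(W, x)
margin_grad = W[true_class] - W[1 - true_class]
x_adv = x - 0.05 * np.sign(margin_grad)

print("clean, deterministic    :", predict(W, x))
print("adversarial, determin.  :", predict(W, x_adv))
# Repeated noisy inferences may disagree with the deterministic adversarial
# label, illustrating (not proving) the robustness effect discussed above.
print("adversarial, noisy x10  :", [noisy_predict(x_adv) for _ in range(10)])
```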
Dionysios Diamantopoulos, Burkhard Ringlein, et al.
CLOUD 2023
Haoran Qiu, Weichao Mao, et al.
MLSys 2024
Kaoutar El Maghraoui, Kim Tran, et al.
SSE 2024