Publication
CIMTEC 2024
Invited talk

Deep neural network inference with a 64-core in-memory compute chip based on phase-change memory

Abstract

The need to repeatedly shuttle synaptic weight values between memory and processing units has been a key source of energy inefficiency in hardware implementations of artificial neural networks. Analog in-memory computing (AIMC) with spatially instantiated synaptic weights holds great promise to overcome this challenge by performing matrix-vector multiplications directly within the network weights stored on a chip while executing an inference workload. We designed and fabricated a multi-core AIMC chip in 14-nm complementary metal–oxide–semiconductor (CMOS) technology with backend-integrated phase-change memory (PCM). The fully integrated chip features 64 AIMC cores of 256 × 256 unit cells each, interconnected via an on-chip communication network. In this talk, I will present our latest efforts in employing this chip for deep neural network inference. First, I will describe the PCM technology and the computational unit cell we use. Next, I will present experimental inference results on ResNet and LSTM networks, with all computations associated with the weight layers and the activation functions implemented on-chip. Finally, I will present our open-source toolkit (https://aihw-composer.draco.res.ibm.com/) for simulating inference and training of neural networks with AIMC.
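
For readers unfamiliar with the AIMC principle mentioned above, the short NumPy sketch below illustrates how a matrix-vector multiplication can be carried out within a 256 × 256 crossbar of differential PCM-like conductance pairs, with Gaussian programming noise standing in for device imprecision. The weight mapping, the g_max = 25 µS scale, and the noise level are illustrative assumptions for this toy example; they are not parameters of the 64-core chip or of the simulator behind the linked toolkit.

    # Conceptual sketch of an analog in-memory matrix-vector multiplication (MVM).
    # Weights are mapped onto differential pairs of PCM-like conductances, and
    # Gaussian programming noise models the imprecision of analog weight storage.
    # Illustrative toy only; values are assumptions, not chip parameters.
    import numpy as np

    rng = np.random.default_rng(0)

    def program_conductances(weights, g_max=25.0, noise_std=0.7):
        """Map weights in [-1, 1] to a differential pair (G_plus, G_minus), in microsiemens."""
        g_target = np.abs(weights) * g_max
        g_plus = np.where(weights > 0, g_target, 0.0)
        g_minus = np.where(weights < 0, g_target, 0.0)
        # Programming noise: each device deviates from its target conductance.
        g_plus = np.clip(g_plus + rng.normal(0.0, noise_std, g_plus.shape), 0.0, g_max)
        g_minus = np.clip(g_minus + rng.normal(0.0, noise_std, g_minus.shape), 0.0, g_max)
        return g_plus, g_minus

    def analog_mvm(g_plus, g_minus, x):
        """Ohm's law plus Kirchhoff's current law: column currents give the weighted sums."""
        return (g_plus - g_minus) @ x

    # One 256 x 256 crossbar, matching the size of a single AIMC core.
    w = rng.uniform(-1.0, 1.0, size=(256, 256))
    x = rng.uniform(0.0, 1.0, size=256)

    g_plus, g_minus = program_conductances(w)
    y_analog = analog_mvm(g_plus, g_minus, x) / 25.0   # rescale currents back to weight units
    y_ideal = w @ x
    print("mean absolute MVM error:", np.mean(np.abs(y_analog - y_ideal)))

The point of the sketch is that the multiply-accumulate happens where the weights physically reside, so no weight data is moved during inference; only inputs and outputs cross the memory boundary.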
