Technologies for magnetic tape recording at 100 Gb/in² and beyond
Mark A. Lantz, Simeon Furrer, et al.
INTERMAG 2015
Performing computations on conventional von Neumann computing systems results in a significant amount of data being moved back and forth between the physically separated memory and processing units. This movement costs time and energy and constitutes an inherent performance bottleneck. In-memory computing is a novel non-von Neumann approach in which certain computational tasks are performed in the memory itself. This is enabled by the physical attributes and state dynamics of memory devices, in particular resistance-based nonvolatile memory technology. Several computational tasks, such as logical operations, arithmetic operations, and even certain machine learning tasks, can be implemented in such a computational memory unit. In this article, we first introduce the general notion of in-memory computing and then focus on mixed-precision deep learning training with in-memory computing. The efficacy of this approach is demonstrated by training a multilayer perceptron on the MNIST dataset, achieving high classification accuracy. Moreover, we show how the precision of in-memory computing can be further improved through architectural and device-level innovations. Finally, we present system aspects, such as the high-level system architecture, including core-to-core interconnect technologies, and high-level ideas and concepts for the software stack.
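As a rough illustration of the mixed-precision idea sketched in the abstract, the following NumPy snippet simulates training with a noisy analog crossbar: the matrix-vector multiply is performed "in memory" on low-precision device weights, while weight updates are accumulated in a high-precision digital variable and transferred to the devices only in multiples of a programming granularity. This is a minimal sketch under assumed parameters; the granularity EPS, the noise level, and all function names are illustrative assumptions, not details from this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative constants (assumptions, not values from the paper).
EPS = 0.01        # assumed programming granularity of an analog device
NOISE_STD = 0.02  # assumed read noise of the crossbar array

def analog_matvec(W_dev, x):
    """Simulate an in-memory matrix-vector multiply on a noisy crossbar."""
    W_read = W_dev + rng.normal(0.0, NOISE_STD, W_dev.shape)
    return W_read @ x

def mixed_precision_step(W_dev, chi, grad, lr=0.05):
    """Accumulate the update in high precision (chi); transfer only
    whole multiples of EPS to the low-precision device array."""
    chi -= lr * grad
    pulses = np.trunc(chi / EPS)   # integer number of programming pulses
    W_dev += EPS * pulses          # coarse, EPS-granular device update
    chi -= EPS * pulses            # residual stays in high precision
    return W_dev, chi

# Toy usage: learn y = W_true @ x with the simulated computational memory.
W_true = rng.normal(size=(4, 8))
W_dev = np.zeros((4, 8))           # low-precision analog weights
chi = np.zeros((4, 8))             # high-precision update accumulator
for _ in range(2000):
    x = rng.normal(size=8)
    err = analog_matvec(W_dev, x) - W_true @ x
    grad = np.outer(err, x)        # gradient of 0.5*||err||^2 w.r.t. W
    W_dev, chi = mixed_precision_step(W_dev, chi, grad)
print("max weight error:", np.abs(W_dev - W_true).max())
```

The design point this sketch tries to capture is that the expensive data movement (the matrix-vector product) stays in the memory array, while only the cheap, small residual bookkeeping is done at high precision, so the coarse device updates do not stall learning.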