IBM NorthPole Neural Inference Machine
Dharmendra S. Modha, Filipp Akopyan, et al.
HCS 2023
We design and implement a key building block of a scalable neuromorphic architecture capable of running spiking neural networks in compact and low-power hardware. Our innovation is a configurable neurosynaptic core that combines 256 integrate-and-fire neurons, 1024 input axons, and 1024×256 synapses in 4.2 mm² of silicon using a 45 nm SOI process. We achieve ultra-low energy consumption 1) at the circuit level, by using an asynchronous design in which circuits switch only while performing neural updates; 2) at the core level, by implementing a 256-neuron fan-out in a single operation using a crossbar memory; and 3) at the architecture level, by restricting core-to-core communication to spike events, which occur relatively sparsely in time. Our implementation is purely digital, resulting in reliable and deterministic operation that achieves, for the first time, one-to-one correspondence with a software simulator. At 45 pJ per spike, our core is readily scalable and provides a platform for implementing a wide array of real-time computations. As an example, we demonstrate a sound localization system using coincidence-detecting neurons.
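The scheme described in this abstract maps naturally onto a small event-driven simulation: binary spikes arriving on 1024 input axons fan out through a 1024×256 crossbar to 256 integrate-and-fire neurons, and only neurons that cross threshold emit spike events. The Python sketch below is a minimal, illustrative model of that idea under stated assumptions; it is not the authors' simulator, and the threshold, leak, connection density, and per-neuron weights are placeholder values chosen only so the example runs.

```python
import numpy as np

rng = np.random.default_rng(0)

N_AXONS, N_NEURONS = 1024, 256           # crossbar dimensions from the abstract
THRESHOLD, LEAK = 20, 1                  # illustrative values (not from the paper)

# Binary crossbar: entry (i, j) = True means axon i connects to neuron j.
crossbar = rng.random((N_AXONS, N_NEURONS)) < 0.05
weights = rng.integers(1, 4, size=N_NEURONS)    # assumed per-neuron synaptic weight

potential = np.zeros(N_NEURONS, dtype=np.int64)  # membrane potentials

def tick(axon_spikes: np.ndarray) -> np.ndarray:
    """Advance the core by one time step.

    axon_spikes: boolean vector of length 1024 marking which axons fired.
    Returns a boolean vector of length 256 marking which neurons spiked.
    Integration work is done only for axons that actually fired, mirroring
    the event-driven behaviour the abstract describes.
    """
    global potential
    active = np.flatnonzero(axon_spikes)
    if active.size:
        # One crossbar row read fans each input spike out to up to 256 neurons.
        potential += crossbar[active].sum(axis=0) * weights
    potential = np.maximum(potential - LEAK, 0)   # simple linear leak
    fired = potential >= THRESHOLD
    potential[fired] = 0                          # reset neurons that spiked
    return fired

# Drive the core with sparse random input spikes for a few ticks.
for t in range(10):
    spikes_in = rng.random(N_AXONS) < 0.02
    spikes_out = tick(spikes_in)
    print(f"tick {t}: {spikes_in.sum():3d} axon spikes -> {spikes_out.sum():3d} neuron spikes")
```

Because a model like this is purely digital and deterministic for a given input spike stream, a hardware implementation can in principle be checked spike-for-spike against it, which is the one-to-one hardware/software correspondence the abstract highlights.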
Dharmendra S. Modha, Filipp Akopyan, et al.
Science 2023
John V. Arthur, Paul A. Merolla, et al.
IJCNN 2012
Paul Merolla, John Arthur, et al.
CICC 2011