Technical note

Large memory storage through astrocyte computation

What a long-overlooked cell in the brain can tell us about cognition and AI architectures.


The emergence of neurons nearly half a billion years ago marked a pivotal moment in the evolution of complex life. But neurons were not alone in this journey. Around the same time, another kind of cell evolved: the glial cell.

Ever since, neurons and glia have been inseparable partners in the architecture of the brain. In fact, neurons cannot survive in isolation without the support of glial cells, which, among many other things, maintain homeostasis. Despite this co-dependence, neuroscience has historically centered on neurons, and for good reason, given their strong electrical activity and evident role in animals’ behavior. Electrodes placed near a neuron capture a steady “pop, pop, pop” — the signature of neuronal firing — while interfering with this firing pattern typically disrupts behavior itself. In contrast, glial cells remain electrically silent under such measurements, which scientists took for decades to mean they were inert.

However, recent advances in imaging techniques using calcium-sensitive dyes have revealed a more nuanced picture. When glial cells are observed through these dyes, they light up like brilliant constellations, indicating that they too harbor their own forms of activity and computation. Decades of careful experimentation have demonstrated that glial cells — particularly the subset called astrocytes — are not mere bystanders. Changes in how neurons and astrocytes communicate can directly reshape behavior, pointing to a deeper, previously overlooked role for astrocytes in cognition.

Astrocytes’ intimate connection with neurons is visible in their anatomy. A single astrocyte extends fine processes that wrap around millions of nearby synapses; each such junction of presynaptic terminal, postsynaptic terminal, and astrocyte process is known as a tripartite synapse. At these tripartite junctions, astrocytes sense neural activity by absorbing neurotransmitters released during synaptic transmission. This sensing raises calcium levels inside the astrocyte processes, setting off a cascade of biochemical reactions. In turn, astrocytes release their own signals, called gliotransmitters, back into the synaptic cleft.

Because each astrocyte interfaces with millions of synapses, these signals can mediate interactions across vast networks of neurons, suggesting a form of heterosynaptic communication. This mechanism hints at an expanded computational repertoire in the brain, linking the activities of many neurons through the astrocyte intermediary.
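To make this loop concrete, here is a minimal toy simulation written for this post, not the model from our study: an astrocyte whose processes each sense one synapse, accumulate calcium, and pool into a single gliotransmitter signal that feeds back onto every synapse at once. All names and constants (`n_synapses`, `decay`, `gain`) are illustrative assumptions, not measured parameters.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

n_synapses = 1_000   # synapses wrapped by one astrocyte (toy scale)
decay = 0.9          # leak on process-local calcium (assumed)
gain = 0.05          # strength of gliotransmitter feedback (assumed)

calcium = np.zeros(n_synapses)  # calcium level in each astrocyte process

for t in range(100):
    # Presynaptic activity: independent random release at each synapse.
    release = (rng.random(n_synapses) < 0.1).astype(float)

    # Each process absorbs local neurotransmitter, raising its internal
    # calcium; between events, calcium leaks back toward baseline.
    calcium = decay * calcium + release

    # Gliotransmitter output depends on calcium pooled across the whole
    # cell, so it reflects activity at many synapses simultaneously.
    gliotransmitter = gain * np.tanh(calcium.mean())

    # The same cell-wide signal is fed back into every synaptic cleft,
    # a crude stand-in for heterosynaptic modulation.
    effective_drive = release + gliotransmitter
```

The structural point the sketch captures is the fan-in and fan-out: synapses that share no direct wiring still influence one another through the astrocyte's pooled state.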

Such interactions bear a striking mathematical resemblance to Dense Associative Memory (DenseAM), a class of machine learning models proposed by Dmitry Krotov and John Hopfield in 2016 as an extension of the classic Hopfield network. Unlike traditional Hopfield networks, where synapses link only pairs of neurons, DenseAMs require many neurons to converge at shared sites of interaction, a property that had previously seemed absent from real neural circuits. The astrocyte’s ability to integrate signals from multiple synapses fills this gap. By formalizing these neuron-astrocyte interactions within the DenseAM framework, our study demonstrates that networks containing both neurons and astrocytes can store and retrieve far more memories than neuron-only systems.
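To see what "many neurons converging at a shared site" means mathematically, here is a rough sketch of the asynchronous update rule from the original DenseAM paper (Krotov and Hopfield, 2016), using the polynomial separation function F(x) = xⁿ. The code is illustrative rather than the implementation from our study; with n = 2 it reduces to a classic Hopfield network.

```python
import numpy as np

def dense_am_sweep(patterns, state, n=3):
    """One asynchronous update sweep of a Dense Associative Memory.

    patterns : (K, N) array of stored +/-1 patterns (the memories)
    state    : (N,)   current +/-1 network state
    n        : interaction order; n=2 recovers the classic Hopfield net
    """
    F = lambda x: x ** n  # separation function from Krotov & Hopfield (2016)
    for i in np.random.permutation(len(state)):
        # Overlap of every memory with the state, excluding neuron i.
        partial = patterns @ state - patterns[:, i] * state[i]
        # Energy difference between setting neuron i to +1 versus -1:
        # each term couples neuron i to ALL other neurons through a
        # shared nonlinearity -- the role an astrocyte process can play.
        drive = np.sum(F(patterns[:, i] + partial) - F(-patterns[:, i] + partial))
        state[i] = 1 if drive >= 0 else -1
    return state
```

Starting from a corrupted pattern and repeating the sweep until the state stops changing retrieves the nearest stored memory; raising n lets the same N neurons hold many more patterns.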

This formulation not only bridges biology and computation but also suggests a radical rethinking of memory itself. Rather than being encoded solely at neuronal synapses, memories in this model are distributed across the dense web of astrocyte processes. Astrocytes thus emerge not as passive supporters of neuronal function, but as active participants in the storage and retrieval of information.

Remarkably, our neuron-astrocyte networks achieve the best-known scaling for memory capacity of any biological implementation of Dense Associative Memory, with capacity per compute unit growing proportionally to the number of neurons involved. Such scaling has potential implications for emerging technologies like neuromorphic computing, where memory efficiency is paramount.
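For orientation, here are the classical numbers behind that claim, drawn from the DenseAM literature rather than restated from our paper: a standard Hopfield network with N neurons stores on the order of 0.14N patterns, while a DenseAM with separation function F(x) = xⁿ stores polynomially many.

```latex
% Pairwise (classical) Hopfield network with N neurons:
K_{\max}^{\text{Hopfield}} \approx 0.14\, N
% Dense Associative Memory with F(x) = x^n (Krotov & Hopfield, 2016):
K_{\max}^{\text{DenseAM}} \sim \alpha_n\, N^{\,n-1}
```

In this picture, the neuron-astrocyte network inherits the superlinear regime while assigning the higher-order terms to a physical substrate: astrocytic processes rather than biologically implausible many-body synapses.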

Moreover, these findings resonate with modern developments in artificial intelligence. Variations in astrocyte connectivity mirror the flexible attention mechanisms used in transformer architectures, suggesting a biological substrate that unifies principles from both Dense Associative Memory and attention-based networks. In practical terms, simulations on visual datasets such as CIFAR-10 and Tiny ImageNet have confirmed that neuron-astrocyte networks can accurately store and recall complex visual patterns, even under conditions of noise and partial occlusion — properties crucial for robust biological and artificial memory systems alike.
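The resemblance to attention can be stated precisely for the continuous, exponential-energy variant of DenseAM (Ramsauer et al., 2020): one retrieval step is a single head of dot-product attention in which the stored patterns serve as both keys and values. A minimal sketch, with `beta` standing in for the usual 1/sqrt(d) scaling:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def hopfield_attention_step(patterns, query, beta=8.0):
    """One retrieval step of a continuous modern Hopfield network.

    patterns : (K, D) stored memories, acting as both keys and values
    query    : (D,)   probe, e.g. a noisy or partially occluded pattern
    beta     : inverse temperature; higher beta = sharper retrieval
    """
    weights = softmax(beta * (patterns @ query))  # attention over memories
    return patterns.T @ weights                   # convex blend of memories
```

Feeding in a corrupted image as the query returns a convex combination dominated by the closest stored pattern, which is the kind of noise- and occlusion-robust recall the CIFAR-10 and Tiny ImageNet experiments probe.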

Looking to the future, direct experimental tests of astrocyte activity, through targeted manipulations of calcium signaling or process connectivity, could provide decisive evidence for their computational roles. Further computational studies refining the interplay between astrocytic and neuronal dynamics promise to deepen our understanding of cognition itself.

Long overshadowed by their more electrically active counterparts, astrocytes are now stepping into the scientific spotlight. As active participants in memory and learning, they offer a new vantage point on the brain’s inner workings — one that could inform not only our understanding of biology, but also how we design future computing systems.
