New Brain Map Charts Every Component in the Biological Universe


Neurons make up less than half of the brain. Yet when it comes to brain mapping, they get all the limelight.

It’s easy to see why: as shockingly powerful mini-processors, neurons and their connections—together dubbed the connectome—hold the secret to highly efficient and flexible computation. Nestled inside the brain’s wiring diagrams are the keys to consciousness, memories, and emotion. To connectomics researchers, mapping the brain isn’t just an academic exercise to better understand ourselves—it could lead to more efficient AI that thinks like us.

But often ignored are the brain’s supporting characters: astrocytes—brain cells shaped like stars—and microglia, specialized immune cells. Previously considered “wallflowers,” these cells nurture neurons and fine-tune their connections, ultimately shaping the connectome. Without this long-forgotten half, the brain wouldn’t be the computing wizard we strive to imitate with machines.

In a stunning new brain map published in Cell, these cells are finally having their time in the spotlight. Led by Dr. H. Sebastian Seung at Princeton University, the original prophet of the connectome, the team charted a tiny chunk of a mouse’s visual cortex roughly a thousandth the size of a pea. Yet the map isn’t jam-packed with neurons alone: in a technical tour de force, the team captured all brain cells, their connections, blood vessels, and even the compartments inside cells that house DNA and produce energy.

All the data is freely available here, for neuroscientists (and you!) to explore.
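For readers who want to poke around programmatically, the sketch below shows the general shape of a query using the open-source caveclient library that backs MICrONS-style public releases. It is only a minimal illustration: the datastack and table names are placeholders, not the real identifiers for this dataset, so check the project’s data portal before running it.

```python
# Minimal sketch of programmatic access to a public connectomics release,
# assuming the open-source caveclient library. The datastack and table names
# below are placeholders, not the real ones for this dataset -- check the
# project's data portal for the actual identifiers.
from caveclient import CAVEclient

DATASTACK = "example_public_datastack"  # placeholder name

client = CAVEclient(DATASTACK)

# List the annotation tables published with this release (cell types, synapses, ...)
print(client.materialize.get_tables())

# Pull one annotation table into a pandas DataFrame for analysis
cells = client.materialize.query_table("example_cell_table")  # placeholder table name
print(cells.head())
```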

“A groundbreaking large-scale dataset, and a shining star of open-science, that will support breakthroughs on understanding the neocortex for years to come,” remarked Dr. Eilif Muller at the University of Montreal, who was not involved in the study.

“It’s impossible to overstate the impact functional connectomics will have on neuroscience,” said Dr. David Markowitz, a program manager at IARPA (Intelligence Advanced Research Projects Activity), which funded the study.

MICrONS on Steroids

The new map is part of MICrONS, a highly ambitious project under IARPA and the BRAIN Initiative that’s pursuing the high-risk, high-reward prospect of our era: machines that think like humans.

Launched in 2016 with an initial $100 million budget, the project—Machine Intelligence from Cortical Networks—is betting on reverse engineering the algorithms in our brains to power the next generation of AI. Its first goal may sound rather mundane: distill sensory computations, that is, how the brain handles visual data, to amp up a machine’s ability to process, parse, and label images and videos. In a world of ever-increasing digital content (and deepfakes), more efficient visual processing could mean billions.

There have already been significant wins. In 2021, the project released the largest-ever map of neuronal activity and synapses in the mammalian brain. The map covered 75,000 neurons and over 500 million synaptic connections. It’s a jaw-dropping scale, and the treasure trove of data is still being mined today, as scientists sleuth how form—that is, the location of neurons and synapses—impacts function in the visual cortex.

A Triumph of Automation

With the new map, Seung made the effort broader and far more detailed.

It starts with a healthy young male mouse. Using a dye that lights up as neurons activate, the team recorded hours of his visual cortex activity, getting a glimpse into activated neural networks in real time.

The brain was then carefully sliced into wafer-thin sections under freezing temperatures to preserve all the biological components. The team then imaged each slice with an electron microscope, a highly powerful tool with nanometer-scale resolution, roughly 4,000,000 times more powerful than our eyes. At that resolution, the team was able to clearly see organelles, the little hubs inside cells, such as the nucleus that houses DNA or the mitochondria that generate energy.

Altogether, the images yielded about eight million objects, including neurons, other brain cells, and fragments.

Then came the hard part: piecing everything back together. The team used a semi-automated method, tapping into previous algorithms to identify different cell types. They further tweaked existing programs to better capture cells and their components, working for over a thousand hours per person.
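To give a flavor of what “algorithms to identify different cell types” can look like in practice, here is a toy sketch. It is not the authors’ pipeline: it labels cells from a few invented per-cell features with an off-the-shelf classifier, the kind of automated step that precedes human proofreading.

```python
# Toy sketch, not the authors' pipeline: automated cell-type labelling from a few
# invented per-cell features, the kind of step a semi-automated reconstruction
# might run before human proofreading. All numbers here are fabricated.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_cells = 2000

# Made-up features: e.g., nucleus volume, soma surface area, branch-point count
X = rng.normal(size=(n_cells, 3))
# Made-up labels (0 = neuron, 1 = astrocyte, 2 = microglia) loosely tied to the features
y = np.argmax(X + rng.normal(scale=0.7, size=X.shape), axis=1)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", round(clf.score(X_test, y_test), 2))
```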

It paid off. Overall, the team reconstructed over 350 neurons—each with their intricately entwined branches—and over 3.5 million synapses. They further mapped dozens of supporting brain cells, including cells that protect the brain’s blood supply, immune cells, and cells that wrap neurons in a protective, non-conductive sheath for faster signal transmission. Add in over 2.4 million mapped mitochondria, the cell’s powerhouses, and the team has built a brain map unlike any seen before.

After proofreading, the team said, they had a “highly accurate map of connectivity” between neurons, with nearly 2,000 synapses that “can be used to analyze properties of cortical circuits.”

Why Care?

A map is just boring data if it’s not being used. In several short proof-of-concept experiments, the team dug into the ultimate question: why is the brain so energy efficient?

The first vignette went deep. A neuron, often dubbed the brain’s unit of computation, is far more complex than that label suggests. Each section of a neuron runs its own algorithms, supported by local energy producers: the mitochondria. Looking at the new map, the team realized that not all mitochondria are the same. Those in a neuron’s input cables, which act as nano-computers in their own right, are much longer than those in its output cables.

Analyzing the map, the team found that the number of mitochondria rose with the number of synapses (where neurons connect with each other), suggesting that neurons traffic more energy factories to their connectivity “hubs.” In other words, neurons have a built-in supply chain to shuttle energy to top consumer regions.
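As a rough illustration of that kind of analysis, the toy sketch below uses entirely made-up per-segment counts (not the study’s data) to check whether segments with more synapses also host more mitochondria:

```python
# Illustrative sketch (fabricated counts, not the published analysis): do
# segments with more synapses also contain more mitochondria?
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

n_segments = 500
synapse_counts = rng.poisson(lam=20, size=n_segments)
# Fabricated relationship: mitochondria scale with synapse count, plus noise
mito_counts = rng.poisson(lam=0.3 * synapse_counts + 2)

r, p = pearsonr(synapse_counts, mito_counts)
print(f"Pearson r = {r:.2f}, p = {p:.1e}")
```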

In another analysis, the team peeked into the connectome of the new map. To their surprise, in a subset of over 100 nearby neurons, the cells connected with only a little over eight percent of their potential partners, far sparser wiring than previously guessed. And when neurons did hook up, the connections tended to run both ways, with each cell sending signals to and receiving signals from the other. This type of circuit wiring had a massive impact on computation. When the team overlaid neural activity onto the map, they found that cells with more connections to their neighbors also tended to respond more strongly to visual cues.

They found both “chorister” cells, which fire along with the local crowd, and “soloist” cells, which respond more independently. But from the map, it’s clear that a higher density of connections for a cell—a “chorister”—bolsters its response to visual stimuli.
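A back-of-the-envelope version of this kind of connectome analysis might look like the sketch below, which uses a random synthetic circuit (not the published data) to compute connection density, count reciprocal pairs, and correlate each cell’s number of partners with a fabricated visual response:

```python
# Illustrative sketch on synthetic data (not the published analysis): measure how
# sparse a local circuit is, how often connections are reciprocal, and whether
# better-connected cells ("choristers") respond more strongly.
import numpy as np

rng = np.random.default_rng(1)
n = 120                  # number of nearby neurons
p_connect = 0.08         # ~8% pairwise connection probability, as in the text

# Random directed adjacency matrix, no self-connections
adj = rng.random((n, n)) < p_connect
np.fill_diagonal(adj, False)

pairs = n * (n - 1)
density = adj.sum() / pairs
reciprocal = np.logical_and(adj, adj.T).sum() // 2   # count each reciprocal pair once

# Fake visual responses that grow with total degree, mimicking the reported trend
degree = adj.sum(axis=0) + adj.sum(axis=1)
responses = degree + rng.normal(scale=2.0, size=n)

corr = np.corrcoef(degree, responses)[0, 1]
print(f"density={density:.3f}, reciprocal pairs={reciprocal}, degree-response r={corr:.2f}")
```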

It’s just a first sneak peek at how the new map can lead to insights into our neural computations. Parsing these connection “motifs” can help us better understand why and how the visual cortex shows strong, reliable responses in an ever-changing world—something we can then program into machines.

For now, the team is happy with their massive data dump and the tools they have to help analyze the dataset. Their detailed reconstruction of a part of the brain, including all the nitty-gritty, will no doubt intrigue experts focused on non-neuron brain cells and their function. The map comes at a time when increasing evidence points to the brain’s immune system being involved in normal neural processing and dementia.

Overall, the study’s an intro to a new era of brain mapping. “This paper by the IARPA MICrONS consortium lays the foundation for making such studies routine at the network scale,” said Markowitz. “Truly awesome stuff.”

Can the brain map ‘non-conventional’ geometries (and abstract spaces)?


Grid cells, the space-mapping neurons of the rodent entorhinal cortex, could also map hyperbolic surfaces. A SISSA study just published in Interface, a journal of the Royal Society, tests a model (a computer simulation) based on mathematical principles that explains how maps emerge in the brain and shows how these maps adapt to the environment in which the individual develops.

“It took human culture millennia to arrive at a mathematical formulation of non-Euclidean spaces”, comments SISSA neuroscientist Alessandro Treves, “but it’s very likely that our brains could get there long before. In fact, it’s likely that the brain of rodents gets there very naturally every day”.

Treves coordinated the study, just published in the journal Interface. Euclidean geometry is the kind of geometry we normally study at school, whereas non-Euclidean geometries are all those that reject one or more of Euclid’s five postulates; a geometry that unfolds on a curved surface is one example. Recent research has investigated how the brain encodes flat spaces. In 2005, Edvard and May-Britt Moser discovered grid cells, neurons of the entorhinal cortex of rodents that fire in a characteristic way when the animal moves in an arena. The discovery was recently awarded the Nobel Prize, but all experiments conducted to date have involved flat (Euclidean) surfaces. So what happens on other types of surface?

The starting point is the formation of these brain “maps”. “There are two main classes of theoretical models that attempt to explain it, but both of them assume that our brain contains some kind of ‘engineer’ that has prepared things appropriately,” says Treves. “These models take for granted that the system originates with substantial prior knowledge, and they closely reproduce the behaviour of the biological system under known conditions, since they are constructed precisely on its observation. But what happens in conditions that have yet to be explored experimentally? Are these models able to ‘generalize’, that is, to make a genuine prediction to be then confirmed by other experiments? A correct theory should tell us more than what we already know.”

Treves and colleagues have been developing a new, radically different model since 2005, and in their recent paper they have indeed attempted a broad generalization. “Ours is a self-organizing model, which simulates the behaviour of ‘artificial’ grid cells capable of learning by exploring the environment”.

In more detail

The model is based on mathematical rules and its final characteristics are determined by the environment in which it “learns from experience”. In previous studies, the model was tested on flat surfaces: “in these settings our artificial grid cell shows the same hexagonal symmetrical firing pattern seen in biological cells”.

“To apply it to a new situation, we thought of having our model move in a non-Euclidean space, and we chose the simplest setting: a space with constant curvature, in other words a sphere or a pseudosphere.” The recently published study shows the results achieved on the pseudospherical surface, which demonstrate that in this case the firing pattern has a heptagonal, seven-fold symmetry. This finding can now easily be compared with the firing of real grid cells in rodents raised on a pseudospherical surface. “We’re waiting for the experimental results of our Nobel Prize-winning colleagues from Trondheim,” explains Treves. “If our results are confirmed, new theoretical considerations will ensue that will open up new lines of research.”
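One way to quantify such symmetries, sketched below on a synthetic flat-space map, is to correlate a firing map with a rotated copy of itself and compare six-fold against seven-fold scores. This is only an illustration of the scoring idea on a flat, hexagonal toy pattern; it is not the SISSA model, and it does not handle curved-space geometry.

```python
# Illustrative sketch (synthetic data, not the SISSA model): score the n-fold
# rotational symmetry of a grid-like firing map by correlating the map with a
# rotated copy of itself. A flat-space hexagonal grid scores high at 6-fold;
# a heptagonal pattern would instead favour 7-fold.
import numpy as np
from scipy.ndimage import rotate

# Build a hexagonal firing map as a sum of three plane waves 60 degrees apart
size = 201
coords = np.linspace(-10, 10, size)
x, y = np.meshgrid(coords, coords)
k = 2.0  # spatial frequency
angles = [0, np.pi / 3, 2 * np.pi / 3]
rate = sum(np.cos(k * (np.cos(a) * x + np.sin(a) * y)) for a in angles)

def symmetry_score(field, n_fold):
    """Correlation between the map and itself rotated by 360/n_fold degrees."""
    rotated = rotate(field, 360.0 / n_fold, reshape=False, order=1)
    # Restrict to a central disc so rotation padding does not bias the score
    mask = np.hypot(x, y) < 8
    return np.corrcoef(field[mask], rotated[mask])[0, 1]

print("6-fold score:", round(symmetry_score(rate, 6), 3))
print("7-fold score:", round(symmetry_score(rate, 7), 3))
```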

In addition to demonstrating that maps adapt to the environment in which the individual develops (and so are not genetically predetermined), the observation of a heptagonal symmetry in new experimental conditions – which would show that the brain is able to encode a non-Euclidean space – would also suggest that grid cells might play a role in mapping many other types of space, “including abstract spaces”, adds Treves. “Try to imagine what we might define as the space of movements, or the space of the different expressions of the human face, or the shapes of a specific object, like a car: these are continuous spaces that could be mapped by cells that are not the same as, but are similar to, grid cells; cells that could somehow represent the graph paper on which to measure these spaces.”