Seeing the kind of utter bullshit that ANNs tolerate routinely makes me kind of bullish on "imperfect uploads".
If ANNs can degrade gracefully under perturbations like pruning, quantization, noise injection, model surgery and more, then what does that tell us about the robustness of biological NNs - networks that are, by their very nature, optimized to run in noisier, less reliable environments than deterministic silicon?
From what I've seen on the biological end, there are also hints that brains have their own scaling patterns - and the more "complex" you go, the more of the overall behavior is driven by topology - local and global connectivity - rather than hardwired specialized behaviors of singular neurons.
When you run 100 neurons total, each neuron is a specialized unit doing a specific thing, and it's absolutely vital to get the specific neurons right to recreate behavior. When you run 100,000 neurons, neurons themselves become far more generic and interchangeable, and the behavior becomes far more connectome-driven. "Identify every single neuron type and characterize the behavior of each type extensively" is vital on one end of the spectrum, but may be "extra credit" on the other. Unprincipled "take 120 pre-made neuron models and brute force through them to find the combinations that seem to fit a few recorded patterns best" might get most of the way there, and much faster.
It seems likely that this trend would continue onwards, into millions and billions. Which bodes very well for those more connectome-centric "assume simplified neurons" approaches. More so when paired with the likely perturbation resistance.
I look favorably at the "don't chase perfection, chase integration and scale" approach in this demo because of it. I get why it's controversial - I just think the tradeoffs they made are quite sensible. Demoing obviously imperfect and incomplete but "good enough that it looks biologically plausible" behavior in a sim beats going for perfection a decade down the line, in my eyes. And the field does deserve more attention than it's getting.
It does seem likely that bio brains are pretty robust to perturbation, but quantization produces mostly-independent noise; a structural difference across the entire model can produce potentially large systemic behavior differences. It only takes maybe 1 µg of LSD in the brain (out of a 100 µg oral dose) to amplify into a huge difference. I asked Claude to estimate and was told that's an average of ~10 molecules per synapse! A small out-of-distribution signaling difference and the entire thing is in a different kind of attractor. If you can be sure your lossiness is independent, you're more likely to make use of the incredible redundancy. If you don't know about a critical signaling pathway, then everything works within some regime and then breaks as soon as that signaling pathway is hit. And you have to be able to detect when that signaling pathway was important to know if you succeeded. Which means you need some sort of high-bandwidth inspection to see if the dynamics are different under the conditions you care about.
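For what it's worth, that "~10 molecules per synapse" figure survives a back-of-envelope check. The assumptions below (≈1 µg reaching the brain, ≈1.5×10^14 human synapses, LSD molar mass ≈323 g/mol) are rough order-of-magnitude figures, not measurements:

```python
# Sanity check of the "~10 molecules per synapse" estimate.
# All inputs are rough order-of-magnitude assumptions.

AVOGADRO = 6.022e23        # molecules per mole
LSD_MOLAR_MASS = 323.44    # g/mol (LSD, C20H25N3O)
brain_dose_g = 1e-6        # assume ~1 ug of a 100 ug oral dose reaches the brain
synapses = 1.5e14          # rough human synapse count

molecules = brain_dose_g / LSD_MOLAR_MASS * AVOGADRO
per_synapse = molecules / synapses
print(f"{molecules:.2e} molecules total, ~{per_synapse:.0f} per synapse")
```

So a microgram works out to roughly 10^15 molecules, or on the order of ten per synapse - consistent with the estimate above.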
also, there are known to be some systems in large brains that depend on relatively few neurons
This seems to me like a soft constraint problem at its core.
"How many little pins do you need to pin down the behavior of this system and force it into some subspace that's reasonably close to the solution?"
I.e. for the LSD example: it's not a "large scale" phenomenon. LSD's effects are already prominent in tiny neuron populations "in vitro". It's not a "tiny almost imperceptible error that compounds to become catastrophic at scale" - it's "already a big error that simply scales upwards well".
Which means: you can have a "tiny neuron population in vitro" reference, and that alone will expose the "we made every single neuron in the brain sim act like it's on LSD" failure mode. Many others like it too.
How many little pins? How many imposed constraints does it take to bleed off the bulk of major systematic failures? How many constraints does it take to walk away from "catastrophic instability" and enter the "mostly convergent" subspace, where innate perturbation resistance begins to work in our favor? How many of those constraints can be imposed with relatively straightforward means, long before you need to collect large bodies of new data and build specialized tooling like high bandwidth inspection?
The only real way to figure out is to go for it, and see what works and what fails. It's an empirical problem, there's no way around it. You can only test your assumptions if you make them first.
Thanks for writing this up! This is what I thought was most likely, but hadn't had time to look into it.
The parallels to 12 years ago, when people made the same claims about nematodes, are depressing.
Why does everybody just ignore that a few weeks earlier, Chinese researchers actually turned the connectome into a graph neural network?
They used imitation learning to train the neural network - which is literally the fly's connectome - to control the fly's body in MuJoCo.
Here is the Reddit post in Russian, and here is the Telegram post in English.
Isn't that a much bigger deal than the Eon hype?
Thank you for this post, it makes things clearer. When I first saw Eon's post on X, I was uncertain whether a complete connectomic map was enough to generate faithful physical simulations of any living organism. At first it seemed to me that it would essentially require a list of the basic computational rules (like ring attractor dynamics) in addition to the connectomics to generate any meaningful behaviour that has not been programmatically hardcoded. But in my opinion, even when those basic computational rules are known, a significant gap to 'upload' the fly to a machine might remain, as these rules and meta-rules are themselves governed by physical-world interactions between millions of components. To bridge this gap, simulations at different levels of biophysics would be required. For instance, even a relatively simple computation in vision, "opponency", requires knowledge of biophysical phenomena ranging from synaptic dynamics to receptor-level activity.
Ultimately, I think this comes down to the debate over biological reductionism: how deep do we have to go to write the equations of life?
If it's really this difficult, the first humans won't even bother uploading themselves through neuronal scans, but through extremely indirect (but still unimaginably complex) text and behavior based reverse engineering simulations.
The question is: when does an imperfect human upload become imperfect enough to cause misalignment and/or critical capability deterioration? Which of the two happens first? Can we know in advance, perhaps through animal upload experiments?
In the last two weeks, social media was set abuzz by claims that scientists had succeeded in uploading a fruit fly. It started with a video released by the startup Eon Systems, a company that wants to create “Brain emulation so humans can flourish in a world with superintelligence.”
On the left of the video, a virtual fly walks around in a sandpit looking for pieces of banana to eat, occasionally pausing to groom itself along the way. On the right is a dancing constellation of dots resembling the fruit fly brain, set above the caption ‘simultaneous brain emulation’.
At first glance, this appears astounding - a digitally recreated animal living its life inside a computer. And indeed, this impression was seemingly confirmed when, a couple of days after the video’s initial release on X by cofounder Alex Wissner-Gross, Eon’s CEO Michael Andregg explicitly posted “We’ve uploaded a fruit fly”.
Yet “extraordinary claims require extraordinary evidence, not just cool visuals”, as one neuroscientist put it in response to Andregg’s post. If Eon had indeed succeeded in uploading a fly - a goal previously thought to be likely decades away according to much of the fly neuroscience community - they’d need more than a video to prove it.
Did the upload show evidence of known neurophysiological markers of working memory, such as the head-direction ring attractor bump? How did their brain model actually control the virtual fly body, given that it seemed to lack a modeled ventral nerve cord? Where were the data and the write-up?
Because if Eon couldn’t back up what their video seemed to show, at least some neuroscientists were going to be markedly less than impressed:
Eon did follow up with a blog post - How the Eon Team Produced A Virtual Embodied Fly - detailing how they combined pre-existing models of the fly brain and body into a system that could respond to virtual environmental cues. But for the neuroscientists scrutinising the uploading claim, these details only sharpened their objections - so much so that some are accusing Eon of misleading conduct and gross misrepresentation.
To understand just why these scientists are so upset, you need a bit of context.
A brief history of fruit fly connectomics
The fruit fly Drosophila melanogaster has been a workhorse of neuroscience for decades; its brain is small enough to be tractable yet complex enough to produce genuinely interesting behaviour such as learning, navigation, decision-making, and courtship. A long-running ambition within the community has been to map the complete wiring diagram - a ‘connectome’ - of that brain, and in October 2024, after years of incremental progress, the FlyWire Consortium achieved it: a complete connectome of the adult fly brain, documenting all 139,255 neurons and over 50 million synaptic connections.
These increasingly complete connectomes have enabled the creation of increasingly elaborate computational models. In 2024, Shiu et al. published a model of the entire adult fly brain in which every neuron and neural connection was represented, albeit in highly simplified form (ignoring differences in cell shape, neurotransmitter dynamics, and much else). Despite these simplifications, the model could predict which neurons activate in response to sensory stimuli and identify pathways underlying behaviors like feeding and grooming, a striking demonstration that wiring alone carries substantial information about function. Separately, Lappalainen et al. built a ‘connectome-constrained’ model of the fly’s visual system, whose predictions matched real neural recordings across dozens of experiments.
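The core idea behind such connectome-constrained models can be caricatured in a few lines: the connectome fixes a sparse synaptic weight matrix, while the neurons themselves are deliberately simple rate units. The toy sketch below illustrates this structure only - the network size, sparsity, dynamics, and constants are all illustrative, not taken from Shiu et al.:

```python
import numpy as np

# Toy sketch of a connectome-constrained rate model: fixed wiring,
# deliberately simple neuron dynamics. All sizes and constants are
# illustrative (the real Shiu et al. model has ~140,000 neurons).

rng = np.random.default_rng(0)
n = 500                                    # toy network size
# Sparse random "connectome": ~2% of possible connections exist.
W = rng.normal(0.0, 1.0, (n, n)) * (rng.random((n, n)) < 0.02)
tau, dt = 20.0, 1.0                        # membrane time constant, step (ms)

def step(r, external_input):
    """One Euler step of leaky rate dynamics driven by the fixed wiring."""
    drive = W @ r + external_input
    return r + (dt / tau) * (-r + np.tanh(drive))

r = np.zeros(n)
stim = np.zeros(n)
stim[:10] = 1.0                            # "sensory" input to a few neurons
for _ in range(100):
    r = step(r, stim)
print("units with appreciable activity:", int((np.abs(r) > 0.05).sum()))
```

Even in this caricature, stimulating a handful of "sensory" units produces activity that spreads through the network along the wiring alone - the property the real models exploit at scale.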
Meanwhile, other researchers had built NeuroMechFly, a biomechanical simulation of the adult fly body based on micro-CT scans of real anatomy. Updated to a second version in late 2024, the new virtual fly body could walk, groom, or be trained via reinforcement learning to navigate through virtual environments. Crucially, it could also be reprogrammed to be driven by any other kind of external controller.
One of the videos in the NeuroMechFly v2 publication, demonstrating a ‘hierarchical sensorimotor task in [a] closed loop’. There’s no connectome involved here, yet the behavior is still remarkably similar to that in the Eon demo.
By early 2025, the pieces Eon needed for their demo were largely in place: a complete brain connectome, computational models of both the central brain and the visual system, and a detailed biomechanical body model. All that remained was to wire them together.
So, what did Eon actually do?
Eon took the pre-existing components we just described - the Shiu et al. brain model and the NeuroMechFly v2 body - and connected them together into a closed loop: sensory events in a virtual world feed into the brain model, and selected outputs from the brain model direct the virtual body.
The loop has four steps. First, something happens in the virtual environment - the fly’s leg contacts a sugar source, or dust accumulates on its antennae - and these events activate specific sensory neurons in the brain model. Second, the brain model runs for a 15-millisecond time step, propagating activity through the connectome’s ~140,000 simplified digital neurons. Third, Eon reads out the activity of a small, hand-picked set of descending neurons and translates it into high-level commands - turn left, walk forward, groom, feed - that are passed to pre-trained motor controllers in the body model. Fourth, the body moves, changing what the fly senses, and the loop repeats.
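The four steps above can be sketched as a control loop. This is a hypothetical reconstruction from Eon's description, not their code; apart from the neuron names mentioned in this article (oDN1, DNa01, DNa02), every function and condition name below is made up for illustration:

```python
# Hypothetical sketch of the four-step closed loop described above.
# All object and function names are illustrative; only the neuron
# names and the 15 ms step come from Eon's own description.

DT_MS = 15  # brain model time step, per Eon's blog post

def run_closed_loop(env, brain, body, n_steps=1000):
    for _ in range(n_steps):
        # 1. Events in the virtual world activate specific sensory neurons.
        sensory = env.sensory_events()        # e.g. {"sugar_contact": True}
        brain.set_sensory_input(sensory)

        # 2. Advance the ~140k-neuron brain model by one 15 ms step.
        brain.step(DT_MS)

        # 3. Read a small, hand-picked set of descending neurons and
        #    translate their activity into a high-level command.
        dn = brain.read_descending(["oDN1", "DNa01", "DNa02"])
        command = decode_command(dn)

        # 4. Pre-trained controllers in the body model execute the command;
        #    the body moves, changing what the fly senses next.
        env.advance(body.apply(command))

def decode_command(dn):
    # Illustrative mapping: steering from the DNa01/DNa02 asymmetry,
    # forward velocity from oDN1 (the real mapping is hand-tuned).
    if dn["DNa01"] - dn["DNa02"] > 0.1:
        return "turn_left"
    if dn["oDN1"] > 0.5:
        return "walk_forward"
    return "idle"
```

The key structural point survives the simplification: the brain model's entire contribution to the loop is the handful of scalars read out in step 3.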
The result is the video that went viral. But the behaviors on screen are less impressive than they appear, because the brain model is doing far less of the work than a viewer would naturally assume.
Take the walking. The brain model does not orchestrate the fly’s legs. It doesn’t compute the gait cycle, coordinate the six limbs, or position the joints. It activates a few descending neurons - oDN1 for forward velocity, DNa01/DNa02 for steering - and hands that signal off to a locomotion controller within NeuroMechFly that already knows how to walk. The brain is issuing something like a “go forward” or “turn left” instruction; the body model handles everything else. In a biological fly, the detailed work of translating such commands into coordinated leg movements is performed by ~15,000 neurons in the ventral nerve cord (the fly’s equivalent of a spinal cord), none of which are simulated here. The same applies to grooming: the connectome selects the behavior, but NeuroMechFly’s controllers execute it.
In their blog post, Eon are open about this. They compare the descending neurons to a car’s steering wheel, accelerator, and brake - you can predict what the car will do from these controls “without explicitly simulating every combustion event inside the engine.” They also acknowledge that the visual system activity displayed so prominently in the video - derived from the Lappalainen model - is “somewhat decorative” and does not substantially drive behavior. They do note that the brain-body mappings are in some cases “somewhat arbitrarily chosen by hand.” And they explicitly state the work “should not yet be interpreted as a proof that structure alone is sufficient to recover the entire behavioral repertoire of the fly.”
This is fair enough, and their efforts to connect brain and body models are genuinely useful engineering. If Eon had described this as “the first integration of connectome-constrained brain and body models into a closed sensorimotor loop”, nobody in the fly neuroscience community would have objected.
But they didn’t say that. They said “We’ve uploaded a fruit fly.” Transparency in a blog post that few will read doesn’t undo a headline that millions saw. The typical person who encounters a claim on X, watches the video, and sees a fly walking, grooming, and feeding while a digital brain flickers alongside it is probably not going to think “a simplified brain model is selecting from a small menu of pre-programmed behaviors via a hand-tuned interface.” They’re likely to think the fly has been faithfully recreated inside a computer.
It hasn’t. Eon’s virtual fly implements only a handful of behaviors, and those rely heavily on NeuroMechFly’s pre-trained controllers rather than on the connectome. This is the most fundamental problem with the demo as evidence of an upload: because the body model already knows how to walk, groom, and feed, almost any signal that triggers the right controller at the right time will produce fly-like behavior on screen. You could replace the connectome with a simple rule-based script - if dust, groom; if sugar, feed; otherwise, walk forward - and the resulting video would look much the same. The fly-like behavior the viewer sees is a product of the body model, not the brain. The digitized connectome may be producing meaningful internal dynamics, but this demo cannot tell us whether it is.
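The rule-based replacement really is that trivial. A controller like the sketch below, wired to the same pre-trained NeuroMechFly controllers, would plausibly produce much the same video (hypothetical; the condition and command names are made up):

```python
# A trivial rule-based controller that could stand in for the ~140k-neuron
# brain model in this demo, because the body model does the real work.
# Condition and command names are illustrative.

def rule_based_brain(senses):
    """Pick a behavior from the same small menu the demo uses."""
    if senses.get("dust_on_antennae"):
        return "groom"
    if senses.get("sugar_contact"):
        return "feed"
    return "walk_forward"
```

That a three-line `if` chain could substitute for the connectome is precisely why the demo cannot serve as evidence that the connectome is doing meaningful work.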
What would actually count as uploading a fly?
So if what Eon built isn’t an upload, what would be?
The word ‘upload’ carries a claim that ‘model’ and ‘simulation’ do not. When one says they’ve modeled or simulated a fly, they’re saying they’ve captured some elements of the original insect’s behaviour, but with significant simplifications and assumptions. If instead they say they’ve uploaded a fly, they’re making a claim about the fly itself: that its identity has been faithfully transferred into a new medium, that the thing in the computer in some sense is the fly, just running on a different substrate. When you upload a photo, the file on your computer is the photo. Nobody says “I’ve partially uploaded this photo” to mean “I’ve made a rough sketch inspired by it.”
An uploaded fly, then, should be able to do everything the original fly could do. It should be playable forward in time indefinitely, responding to novel situations as the original would have. It should serve as a faithful proxy for the real thing; so much so that a neuroscientist could peer inside, observe realistic equivalents of neurophysiology, and run experiments that would be impractical or impossible on a biological fly, with confidence that the results would generalise back.
The leading proposal for how to actually achieve this is whole brain emulation: faithfully recreating the brain’s causal mechanisms at whatever level of detail turns out to be necessary so that the digital system behaves identically to the original. This is what distinguishes emulation from simulation. A weather simulation is useful - it can predict next week’s temperature with reasonable accuracy - but it breaks down when pushed further out, because its approximations are coarser than the actual atmospheric processes of real weather. In contrast, one can run an emulation of the Nintendo 64 game Banjo-Kazooie on a laptop, and because the emulator faithfully recreates the logic of the N64’s hardware - the processor, the memory, the graphics pipeline - the game will never fail to behave as it would have on the original console.
It’s currently an open scientific question what level of biological detail an emulation needs to capture. It’s unlikely we’d need to simulate every ion channel, and perhaps much of the brain’s physiology could be simplified with no consequence. But the key feature of the emulation approach is the guarantee: if you’ve faithfully recreated the causal mechanisms down to the necessary level, the resulting behaviour is trustworthy by construction. Low-fidelity approaches might produce correct-looking behavior in some cases, but it’s hard to tell to what degree this will generalise to novel situations.
In response to this line of criticism, Michael Andregg has argued that uploading shouldn’t be considered so binary. “I don’t think of uploading as a binary concept,” he told The Verge, outlining “different levels” of upload. By this logic, Eon’s system - containing connectome-derived elements driving behavior in a virtual body - might qualify as a ‘partial upload’.
But if a connectome-constrained model can count as a ‘partial upload’, then the Shiu et al. brain model was already a partial upload before Eon touched it. So was the Lappalainen visual model. So, for that matter, is any computational neuroscience model that incorporates anatomical connectivity data. The word ‘upload’ loses its distinctive meaning, and the field loses its ability to communicate what it is actually trying to achieve and how far away a true fly upload still is.
Still loading
When the vocabulary of breakthroughs is spent on incremental demos, the actual breakthroughs are cheapened when they arrive. Funders and the public lose the ability to distinguish genuine milestones from slick demos, and investment flows towards groups making the boldest claims rather than those doing the most foundational work. Worse, for a field that is struggling to graduate from science fiction to serious research, premature claims risk triggering the cycle of hype and disillusionment that has set back other ambitious programs before.
To be fair, we’re not unsympathetic to why Eon used the language they did. Their careful blog post on ‘How the Eon Team Produced a Virtual Embodied Fly’ would likely have only been read by a few hundred neuroscientists, while “We’ve uploaded a fruit fly” reached millions. Startup survival requires investment, funding follows excitement, and excitement follows headlines - not careful caveats. This bold approach may even feel obligatory when an organisation’s stated mission is “solving brain emulation as an engineering sprint, not a decades-long research program.”
But the history of science - and the gap between what Eon demonstrated and what uploading actually requires - suggests that there is likely no shortcut through the long slog ahead.
Because in all probability, before anyone can truthfully claim to have uploaded a fly, there will still need to be years more of tedious work. Countless painstaking patch-clamp experiments, each carefully guiding a glass electrode into a single neuron while keeping it alive, just to learn how that one cell type, out of the fly brain’s thousands, transforms its inputs into outputs. Endless sessions of pinning flies under two-photon microscopes, collecting calcium imaging data while the animals walk or groom or navigate an odor plume, slowly building up ground-truth measurements of what real brain activity actually looks like during real behavior. Thousands of hours still to come of building computational models, testing them against that data, failing, and refining them again.
Then, and very likely only then, will there come a day when someone will hit ‘run’, and a fly - disoriented in whatever way a fly can be, having been sitting in a vial a moment ago - will find itself somewhere unfamiliar. It won’t know that in the intervening time, it had been anesthetised, embedded in resin, and its brain sliced into thousands of thin sections. It won’t know that those sections were painstakingly imaged, or that its neural architecture was reconstructed from those images, or that thousands of its fellow flies were studied and sacrificed to fill in what images alone couldn’t tell us. It won’t know of the billions of dollars and thousands of careers that it took to reach this point, or the millions of hours spent staring down microscopes, handling vials, and debugging code. It will certainly never know that it was once made of proteins and cells, and is now made of silicon and mathematics.
It will just beat its wings, lift off, and search for fruit.