The connectome of the 302 neurons of the nematode C. elegans was put in charge of a Lego robot. Without any additional programming, the simulated brain started using the robot's parts much as the original worm uses its organs.

"When you think about it, the brain is really nothing more than a collection of electrical signals."

When you think about it, the brain is really nothing more than a collection of electrical signals.

Statements like this make me want to bang my head against a wall. No, it is not. The brain is a collection of neural and glial cells whose roles we only partially understand. Most of the neurons are connected through various types of chemical synapses, and ignoring their chemical nature would fail to explain the effects of most psychoactive drugs and even hormones. Some of the neurons are linked directly. Some of them are myelinated, while others are not, and that is kind of a big deal, since there's no clocking in the nervous system: the entire outcome of the processing depends on how long it takes for the action potential to propagate through the axon. And how long it takes for the synapse to react. And how long the depolarization persists in the receiving neuron. And all of that is regulated by gene expression patterns. And we're not even talking about learning and forming long-term memories, which are due to neuroplasticity and entirely controlled by gene expression patterns. Suppressing RNA synthesis is enough to cause anterograde amnesia - though it causes some retrograde amnesia too, since apparently merely using neurons causes them to change.

Also, C. elegans doesn't even have a brain; it has ganglia.

Look, I understand that this is some interesting research, but calling it "brain uploading" is like comparing the launch of a firework to interstellar travel: essentially they're the same, but there are a couple of nuances.

Agreed, this is not brain uploading. Actually, this research is not that different from what has previously been done in computer simulations; the advance is embedding it in a physical robot rather than keeping it in a computer.

However, are you implying that C. elegans uploading wouldn't count as uploading because it's so much simpler than a human brain? If so, I disagree with you there. A lot of people think that it would be basically impossible to encode preferences from a C. elegans organism (e.g. learned patterns) into a computer. It certainly hasn't been done yet AFAIK. Doing it would be a conceptual advance and would allow us to tweak our models of how certain types of neurons, electrical synapses, and chemical synapses work, inter alia.

Also, whether you call the C. elegans nervous system a "brain" or "ganglia" is a question of semantics. Many, perhaps most, researchers do call it a brain; see e.g. here.

My primary concern is that the model is very simplified. Although even at this level it may be interesting to invent a metric for how accurately a model encodes the organism's behavior - from completely random to a complete copy.

I doubt that the experimenters themselves wrote the article. Someone has to popularize science for mere humans.

[anonymous]

The experimenters never write the popular articles. I've learned not to attack the messenger (too much).

This is not uploading. We've known the connectome of C. elegans for a while: that means we know what the neurons are, what the connections are, and whether they are inhibitory or excitatory. What we don't know, and have no way to read off an existing worm, are the weights for these connections. For this project they assumed every connection has a weight of 1, which does turn out to be enough to do interesting things. But when a C. elegans learns that a given temperature is associated with food, it doesn't grow new neurons or connections; it changes its weights. Their model can't do this, so it can't learn.
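To make the limitation concrete, here's a minimal sketch (my own illustration, not the project's actual code) of a fixed-weight sum-and-threshold network of the kind described. The connectome matrix here is a random stand-in; the point is that it's frozen, so nothing in the model can learn:

```python
# Minimal sketch of a fixed-weight sum-and-threshold network.
# W stands in for the 302x302 connectome: +1 for excitatory,
# -1 for inhibitory, 0 for no connection. Nothing here can learn.
import numpy as np

N = 302
rng = np.random.default_rng(0)
W = rng.choice([-1, 0, 1], size=(N, N))  # random stand-in for the connectome
threshold = 1.0                          # illustrative firing threshold

def step(fired):
    """One update: each neuron sums its weighted inputs and fires on threshold."""
    inputs = W @ fired                   # weighted sum of incoming spikes
    return (inputs >= threshold).astype(float)

state = np.zeros(N)
state[0] = 1.0                           # poke one "sensory" neuron
for _ in range(10):
    state = step(state)
```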

When we can teach a worm to do X, scan it, run it, and observe that the running worm does X while ones not taught to do X don't, then we've uploaded the worm.

EDIT: expanded this into a post.

It's cool, but I doubt it's as impressive as it looks. If you connect the inputs and outputs the right way, I bet you could make a car out of a toaster oven controller.

Do you know how much processing power is required to run it in real-time?

In the original article (PDF, free to download after you register) I find:

"The artificial connectome has been extended to a single application written in Python and run on a Raspberry Pi computer."

The original article also links this YouTube video, for those who are interested.

For those of you not familiar with the technology: Python is a programming language not known for speed, and the Raspberry Pi is a cheap, low-powered computer smaller than your palm.

For those of you familiar with the technology, this is just another reason why Python is amazing.

Basic Python is very slow, but numerical computing libraries such as NumPy are almost as fast as C, and Cython can compile Python into C if you add type declarations. (More reasons why Python is awesome!)

I would imagine that neural simulations would use numerical computing libraries, so their program might have been running at close to C speeds.
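As an illustration of the gap (an assumed workload, not their actual program), here's the same sum-and-threshold update written both ways; the NumPy version dispatches the whole loop to compiled code:

```python
# Illustrative comparison: one network update in pure Python vs. NumPy.
import numpy as np

N = 302
W = np.ones((N, N))            # dummy all-ones weights, as in the project
x = np.zeros(N)
x[0] = 1.0

def step_pure_python(W, x):
    """Nested Python loops: interpreted, slow."""
    out = [0.0] * N
    for i in range(N):
        s = 0.0
        for j in range(N):
            s += W[i][j] * x[j]
        out[i] = 1.0 if s >= 1.0 else 0.0
    return out

def step_numpy(W, x):
    """One vectorized call: the inner loop runs in C."""
    return (W @ x >= 1.0).astype(float)
```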

This gives us an upper bound of about 1.5 MB RAM / 100 kFLOPS / 10 cents per neuron. Possibly a lot lower.
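For what it's worth, the arithmetic behind that bound, assuming a $35 Raspberry Pi Model B with 512 MB RAM and roughly 40 MFLOPS on the CPU (my figures, not from the article):

```python
# Rough per-neuron arithmetic, assuming a $35 Model B with 512 MB RAM
# and ~40 MFLOPS of CPU throughput (my assumptions).
neurons = 302
print(512 / neurons)       # ~1.7 MB of RAM per neuron
print(40_000 / neurons)    # ~132 kFLOPS per neuron (40 MFLOPS = 40,000 kFLOPS)
print(3500 / neurons)      # ~11.6 cents per neuron ($35 = 3500 cents)
```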

If you look at the description, you find that the model used is very simple and boils down to probably fewer than N*M*2 machine instructions per update step (N = number of neurons, here 302; M = average fan-in), because the operation is really only sum-and-threshold. I can only guess at M, but even if we approximate it by N, a Raspberry Pi with a 700 MHz ARM core should be able to run a nematode connectome at about 4000x its natural speed.
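A quick back-of-envelope check of that 4000x figure, assuming roughly one simple instruction per cycle and about one network update per second of worm time (both my assumptions):

```python
# Back-of-envelope check of the ~4000x claim.
N = 302                      # neurons
M = N                        # fan-in approximated by N, as above
ops_per_step = 2 * N * M     # ~182,000 instructions per update
clock = 700e6                # 700 MHz, ~1 simple instruction/cycle assumed
steps_per_second = clock / ops_per_step   # ~3,800 updates per second
natural_rate = 1             # assume ~1 update per second of worm time
print(steps_per_second / natural_rate)    # ~3,800, i.e. roughly 4000x
```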

The point here is not the necessary speed but the ease of simulation and visualization of effects.

I don't know about the power needed to simulate the neurons, but my guess is that most of the resources are spent not on the calculations but on interprocess communication: running 302 processes on a Raspberry Pi and keeping hundreds of UDP sockets open probably takes a lot of its resources.

The technical solution is neither innovative nor fast. The benefits are its distributed nature (every neuron could be simulated on a different computer) and its simplicity of implementation, at least while 100% faithfulness to the underlying mathematical model is not required. I have no idea how the author plans to avoid unintended data loss in the not-unusual case where some UDP packets are dropped. Retransmission (TCP) is not really an option either, as the system has to run in real time.
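For concreteness, a hypothetical sketch of the one-process-per-neuron UDP design; the ports, message format, and helper names are my invention, not the project's protocol:

```python
# Hypothetical sketch of one neuron as its own UDP process.
# Ports, message format, and thresholds are illustrative inventions.
# Dropped datagrams are simply lost, as noted above.
import socket

BASE_PORT = 20000

def run_neuron(neuron_id, downstream_ids, threshold=1.0):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", BASE_PORT + neuron_id))
    total = 0.0
    while True:
        data, _ = sock.recvfrom(64)        # blocks until a spike arrives
        total += float(data.decode())      # accumulate weighted input
        if total >= threshold:
            for target in downstream_ids:  # fan out one spike per edge
                sock.sendto(b"1.0", ("127.0.0.1", BASE_PORT + target))
            total = 0.0                    # reset after firing
```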

[V_V]

If each simulated "neuron" is just a linear threshold unit, as described in the paper, using a whole process to run it and exchanging messages over UDP looks like a terribly wasteful architecture.
Maybe the author wants to eventually implement a computationally expensive, biologically accurate neuron model, but I still don't see the point of this architecture: even if the individual neurons were biologically accurate, the overall simulation wouldn't be, due to the non-deterministic delays and packet losses introduced by UDP messaging.

I'm unimpressed.

[anonymous]

I discussed this with a professor of neuroscience on Facebook.

[This comment is no longer endorsed by its author]
[ike]

Good news: immortality has been achieved. Bad news: only works on worms for now.

I wonder if this has been open-sourced? Wouldn't it be cool to play a game with worms where the worms are controlled by their actual brain patterns? (Forgetting about the problem with torturing computer uploads for now, of course.)