Epistemic status: I probably don't know what I am talking about 50% of the time. Specifically, I expect that I am missing some bottlenecks that would be obvious to a neuroscientist.

Song Version

Neuralink has a surgical robot to insert electrodes in your brain. They seem to try to target single neurons already.

Their interface unit has 1024 electrodes.

How about we just insert 100,000,000,000 electrodes instead? One for each neuron.

We want to evaluate: "Could you upload yourself by recording the activations of all your neurons for a year, and then train a giant Neural Network to predict the neuronal spikes, therefore simulating you?"

There are probably issues about "the brain just dies because it has 100,000,000,000 wires inside". Also, the problem of precisely hitting a single neuron without hitting a blood vessel doesn't become easier when there are literally billions of wires that you could entangle yourself with. And as I am not a brain surgeon I probably can't even see the real bottlenecks.

But let's ignore all that for now.

Today let's just focus on one problem: SPEED. Specifically, how long it takes to wire you up. The current surgical robot is terribly slow. In this video you can follow along using these timestamps:

  • 01:45.272 (start to move to insertion point)
  • 01:45.306 (insertion point reached)
  • 01:45.339 (starting insertion)
  • 01:45.439 (insertion complete)
  • 01:59.252 (starting the next insertion)

The important thing here is that it takes 33ms to insert an electrode once we are already in place. Obviously doing this with a 30fps video is very inaccurate. So as a lower bound let's say it takes 5ms.

So that's 15 years to wire you up! At 200 incisions per second.

5 ms × 100,000,000,000 = 500,000,000,000 ms

500,000,000,000 ms / 1000 / 60 / 60 / 24 / 365 ≈ 15.85 years

And this is with best-case estimates. The actual time of the robot from incision to incision is over 13 seconds. So instead of 200 incisions per second, we have 0.075 incisions per second (the video is from 01.12.2022). However, I'd guess that most of the auxiliary robot activity, like fetching an electrode, can be sped up significantly. The incision itself seems harder to speed up.
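The serial-insertion arithmetic above as a quick sketch. The 13.3 s cycle time is my reading of the video (it matches the 0.075 incisions per second figure); treat both inputs as rough:

```python
# Napkin math for serial insertion: one electrode per neuron,
# inserted one at a time at a given per-insertion time.

NEURONS = 100_000_000_000  # ~number of neurons in a human brain

def total_years(seconds_per_insertion: float) -> float:
    """Total wiring time in years if insertions happen strictly serially."""
    total_seconds = seconds_per_insertion * NEURONS
    return total_seconds / (60 * 60 * 24 * 365)

best_case = total_years(0.005)  # 5 ms per insertion (200 per second)
measured = total_years(13.3)    # ~13 s incision-to-incision from the video

print(f"best case: {best_case:.2f} years")   # ≈ 15.85 years
print(f"measured:  {measured:,.0f} years")   # roughly 42,000 years
```

So even the measured rate "only" costs four orders of magnitude over the best case, all of it in the auxiliary robot activity between incisions.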

Let's continue with the best-guess estimate. How many needles can be inserted at the same time without blowing the brain up? Well, that's actually not the issue. Deformation is. As you can see from the video, the insertion is quite forceful. So by default we can't make a second incision at the same time in the same area, because the tissue there will be heavily deformed by the first incision.

But if we stab the brain far enough away, then the deformation would be manageable. I am close to just making numbers up now, as I used the following whiteboard drawing to compute that we can make 122 incisions at the same time, without having to worry about deformation.

That's 47 days to wire you up. At 24400 incisions per second.
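The parallel version of the same napkin math, taking the 122 simultaneous incisions from the whiteboard estimate at face value:

```python
# Parallel insertion: 122 simultaneous needles, each at the 5 ms
# best-case insertion time, with no deformation interference.

NEURONS = 100_000_000_000
PARALLEL = 122         # simultaneous incisions (whiteboard estimate)
SECONDS_PER = 0.005    # 5 ms best-case per insertion

rate = PARALLEL / SECONDS_PER           # insertions per second
days = NEURONS / rate / (60 * 60 * 24)  # total wiring time in days

print(f"{rate:,.0f} insertions/s")  # 24,400
print(f"{days:.1f} days")           # ≈ 47.4
```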

We could push further. We could try to make multiple incisions close to one another. We could model all the deformational forces, and then use that model to calculate how to make the incision. I'd guess that this problem is easy conceptually, but hard to get right in practice.

I suspect that Neuralink has already solved a simple version of this problem. A single incision already causes deformation. So maybe you already need a deformation model for a single incision.

Though once you worry about deformation, you probably just want to build a new robot, one that is optimized to cause minimal deformation. I don't expect that they have optimized for minimizing deformation so far. Well, they haven't optimized for inserting 100,000,000,000 electrodes in general.

So I'd tentatively guess that there is some big room for improvement there.

47 days is still pretty terrible, especially considering that these are best-case estimates. But it's not something like 100,000 years in the best case, which wasn't a priori clear to me.

So while there are gaping technical challenges, it seems not-definitely-impossible that you could hook yourself up before you die of old age.

Let's imagine a fictional world where somebody really competent tries to develop a fast enough robot surgeon. Even if they had "infinite money", I'd guess that it would take multiple years at a minimum.

But still, let's imagine that you managed to hook yourself up. Can you now upload yourself?


https://www.quantamagazine.org/how-computationally-complex-is-a-single-neuron-20210902/

The most basic analogy between artificial and real neurons involves how they handle incoming information. Both kinds of neurons receive incoming signals and, based on that information, decide whether to send their own signal to other neurons. While artificial neurons rely on a simple calculation to make this decision, decades of research have shown that the process is far more complicated in biological neurons. Computational neuroscientists use an input-output function to model the relationship between the inputs received by a biological neuron’s long treelike branches, called dendrites, and the neuron’s decision to send out a signal.

This function is what the authors of the new work taught an artificial deep neural network to imitate in order to determine its complexity. They started by creating a massive simulation of the input-output function of a type of neuron with distinct trees of dendritic branches at its top and bottom, known as a pyramidal neuron, from a rat’s cortex. Then they fed the simulation into a deep neural network that had up to 256 artificial neurons in each layer. They continued increasing the number of layers until they achieved 99% accuracy at the millisecond level between the input and output of the simulated neuron. The deep neural network successfully predicted the behavior of the neuron’s input-output function with at least five — but no more than eight — artificial layers. In most of the networks, that equated to about 1,000 artificial neurons for just one biological neuron.

Absolute napkin math while I'm sleep deprived at the hospital, but you're looking at something around 86 trillion ML neurons, or about 516 quadrillion parameters, to emulate the human brain. That's... a lot.
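A sketch of that napkin math. The ~6,000 parameters per artificial neuron is my assumption, reverse-engineered from the comment's 516-quadrillion figure; the article itself only gives the network shape (5 to 8 layers, up to 256 neurons each):

```python
# Reproducing the comment's napkin math (order-of-magnitude only).

BIO_NEURONS = 86_000_000_000  # ~86 billion neurons in a human brain
ML_PER_BIO = 1_000            # ~1,000 artificial neurons per biological one (Quanta result)
PARAMS_PER_ML = 6_000         # implied by the 516-quadrillion figure; my assumption

ml_neurons = BIO_NEURONS * ML_PER_BIO    # 8.6e13 -> "86 trillion"
params = ml_neurons * PARAMS_PER_ML      # 5.16e17 -> "516 quadrillion"

print(f"{ml_neurons:.2e} ML neurons, {params:.2e} parameters")
```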

Now, I am a doctor, but I'm certainly no neurosurgeon. That being said, I'm not sure it's particularly conducive to the functioning of a human brain to stuff it full of metallic wires. Leaving aside that Neuralink and co are very superficial and don't penetrate particularly deep into the cortex (do they even have to? Idk, the grey matter is on the outside anyway), it strikes me as an electrical engineer's nightmare to even remotely get this wired up and working. The crosstalk. The sheer disruption to homeostasis...

If I had to bet on mind uploading, the first step would be creating an AGI. To make that no longer my headache, of course.

Not an option? Eh, I'd look for significantly more lossy options than hooking up every neuron. I think it would be far easier to feed behavioral and observational data alongside tamer BCIs to train a far more tractable (in terms of size) model to mimic me, to a degree indistinguishable for a (blinded) outside observer. It certainly beats being the world's Literal Worst MRI Candidate, and probably won't kill you outright. I'm not sure the brain will be remotely close to functional by the time you're done skewering it like that, which makes me assume the data you end up collecting any significant way into the process will be garbage from dying neuronal tissue.

People have been trying to do this with a tiny worm. That project has not yet been successful.

No, they just got the connectome afaik. This is completely different: a connectome gives you no information about the relations between the different neurons in terms of their firing.

(epistemic status: same as the post, I don't know neuroscience)

I mis-heard the song version and had a different interpretation of something, but it actually seems like a good idea to consider, so here it is :).

Remove the assumption that the biological version of oneself needs to survive the process. Then, perform a very quick scan of all the neurons on a very short timescale, without worrying about the brain being destroyed afterwards, only about accurately scanning all the neurons before that happens. Would this data of the whole brain for a very short time (maybe some fraction of a second) be enough to digitally reconstruct the mind and run it?

Maybe it wouldn't be because of neuroscience reasons I wouldn't know, or 'neural spikes over time' not being predictable from the very-short-timespan data (again for some reason I wouldn't know). Also, maybe such a fast scan is not physically possible with current technology (I'd guess that something more efficient than inserting 100 billion wires would be needed).

But if it were possible and feasible, I think it would be worth it, the world's at stake after all. I'd volunteer.

I don't know if this is possible conditional on you having some brain scan data. 

I think I could build a much better model of it. The backstory of this post is that I wanted to think about exactly this problem. But then realized that maybe it does not make any sense because it's just not technically feasible to get the data. 

After writing the post I updated: I am now a bit more pessimistic than the post might suggest. So I probably won't think about this particular way to upload yourself for a while.

This post happens to be an example of limiting-case analysis, and I think it's one of the most generally usefwl Manual Cognitive Algorithms I know of. I'm not sure about its optimal scope, but TAP:

  • WHEN: I ask a question like "what happens to a complex system if I tweak this variable?" and I'm confused about how to even think about it (maybe because working-memory is overtaxed)…
  • THEN: Consider applying limiting-case analysis on it.
    • That is, set the variable in question to its maximum or minimum value, and gain clarity over either or both of those cases manually. If that succeeds, it's usually easier to extrapolate from those examples to understand what's going on wrt the full range of the variable.

I think it's a usefwl heuristic tool, and it's helped me with more than one paradox.[1] I also often use "multiplex-case analysis" (or maybe call it "entropic-case"), which I gave a better explanation of in this comment.

  1. ^

    A simple example where I explicitly used it was when I was trying to grok the (badly named) Friendship paradox, but there are many more such cases.