Based on this and your other comment, you seem to be pro-GEVI instead of patch clamp, am I correct? Assuming GEVIs were used (or some other, better technology) to find all the electrophysiology, why would that be a waste of time? Even if we can get by with a few thousand template neurons and individual tuning is not necessary (which seems to be the view of Steven Byrnes and maybe you), how should we go about getting those template neurons without a lot of research into correlating morphology, genetic expression, and electrophysiology? And if we don't need them, why not? My primary goal is not to defend my plan; I just care about making progress on WBE generally, and I would like to hear specific plans if others have them. Studying single-cell function just seemed the most natural starting point to me. Without that, studying how multiple neurons signal each other, or change over time, or encode information in spike trains seems like putting the cart before the horse, as it were. Again, I'd be very glad to be wrong; it just still seems to me that some version of this research has to be done eventually, and we haven't done it yet AFAIK, so I should start on what little part I can.
Yes, I am familiar with the sleep = death argument. I really don't have any counter; at some point, though, I think we all just kind of arbitrarily draw a line. I could be a solipsist, I could believe in Last Thursdayism, I could believe some people are p-zombies, I could believe in the multiverse. I don't believe in any of these, but I don't have any real arguments against them, and I don't think anyone has any knockdown arguments one way or the other. All I know is that I fear SOMA-style brain upload, I fear Star Trek-style teleportation, but I don't fear gradual replacement, nor do I fear falling asleep.
As for wrapping up our more scientific disagreement, I don't have much to say other than that it was very thought-provoking, and I'm still going to try what I said in my post. Even if it doesn't come to complete fruition, I hope it will be relevant experience for when I apply to grad school.
I think this is admirable in general, at least in broad strokes, but way, way harder than you anticipate. In the last project I worked on alone, I was trying to copy C. elegans chemotaxis with a biological neuron model and then have it remember where food was from a previous run and steer in that direction even if there was no food anywhere in its virtual arena, something real C. elegans has been observed doing. Even the first part was not a huge success, and because of that I put an indefinite pause on the second part. I would love to see you carry on the project or something similar; maybe you will have more success, especially if you abstract more. I'm happy to share code and talk more if you're interested. But at this time, it is my impression that we just don't understand individual neurons, synaptic weights, or learning rules well enough to take a good pass at it.
First of all, I hate analogies in general, but that's a pet peeve; they are useful. Going with your shaken-up circuit as an analogy to brain organoids, and assuming it is apt, I think it is more useful than you give it credit for. If you have a good theory of what all those components are individually, you would still be able to predict something like the voltage between two arbitrary points. If you model resistors as some weird non-ohmic entity, you'll probably get the wrong answer, because you missed the fact that they behave ohmically in many situations. If you never explicitly write down Ohm's law but you empirically measure current at a whole bunch of different voltages (analogous to patch clamping, but far, far from a perfect analogy), you can probably get the right answer. So yeah, an organoid would not be perfect, but I would be surprised if being able to fully emulate one were useless. Personally I think it would be quite useful, but I am actively tempering my expectations.
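To make that last point concrete, here is a toy sketch: fabricated I-V measurements from a hypothetical component, where a plain linear fit recovers the resistance without Ohm's law ever being written down. All the numbers are made up.

```python
import numpy as np

# Hypothetical "measurements": current through an unknown component at a
# sweep of voltages, with a little measurement noise thrown in.
rng = np.random.default_rng(0)
true_resistance = 220.0                      # ohms, hidden from the "experimenter"
voltages = np.linspace(0.0, 5.0, 50)         # volts
currents = voltages / true_resistance + rng.normal(0.0, 1e-4, voltages.size)

# Without ever writing down Ohm's law, a straight-line fit to the
# empirical I-V data recovers the component's behavior anyway.
slope, intercept = np.polyfit(voltages, currents, 1)
print(f"estimated resistance: {1.0 / slope:.1f} ohms")  # ~220
```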
But my meta point stands even if organoids in particular are useless. The theory developed with this kind of research loop might be useless for your very abstract representation of the brain's algorithm, but I think it would be just fine, in principle, for the traditional, bottom-up approach.
As for the philosophical objections, it is more that whatever wakes up won't be me if we do it your way. It might act like me and know everything I know, but it seems like I would be dead and something else would exist. Gallons of ink have been spilled over this, so suffice it to say, I think the only thing with any hope of preserving my consciousness (or at least a conscious mind that still holds the belief that it was at one point the person writing this) is gradual replacement of my neurons while my current neurons are still firing. I know that is far and away the least likely path to WBE, because it requires solving everything else plus nanotechnology, but hey, I dream big.
To be clear, I think your proposed WBE plan has a lot of merit, but it would still result in me experiencing death and then nothing else, so I'm not especially interested. Yes, that probably makes me quite selfish.
This isn't my claim, so I'm not defending it too hard, but from my lab experience, relatively few genes seem to control bulk properties, and then there are a whole bunch of higher-order corrections. Literally one or two genes being on/off can determine whether a neuron is excitatory or inhibitory. If you subscribe to Izhikevich's classification of bistable/monostable and integrator/resonator, you would only need three genes with binary expression. After that, you get a few more to determine time constants and such. I still think whole-transcriptome data would be helpful, especially as we don't know what each gene does yet, but I am not 100% against the idea that only ~20 genes really matter, you end up with a few thousand template neurons, and after that you run into a practical limit of noise being present.
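As a toy illustration of the combinatorics (the gene names and the mapping are entirely hypothetical), three binary switch genes already pin down 2^3 = 8 coarse template classes:

```python
from itertools import product

# Hypothetical binary "switch" genes: excitatory vs. inhibitory, plus
# Izhikevich's bistable/monostable and integrator/resonator axes.
switch_genes = ["excitatory", "bistable", "resonator"]

# 2^3 = 8 coarse template classes; continuous genes would then tune
# things like time constants within each class.
templates = {bits: f"template_{i}" for i, bits in enumerate(product([0, 1], repeat=3))}

def classify(expression):
    """Map a binary expression profile to its template class."""
    return templates[tuple(expression[g] for g in switch_genes)]

print(classify({"excitatory": 1, "bistable": 0, "resonator": 1}))
```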
I apologize for my sloppy language, "computationally simple" was not well defined. You are quite right when you say there is no P% accuracy. I think my offhand remark about spiking neural networks was not helpful to this discussion.
In a practical sense, here is what I mean. Imagine someone makes a brain organoid with ~10 cells. They can directly measure membrane voltage and any other relevant variable they want, because this is hypothetical. Then they try to emulate whatever algorithm this organoid has going on: its direct input-to-output mapping and whatever learning rule changes it might have. But to test this, all they have are crappy point-neuron models implementing LIF, synapses that are just a constant conductance or something, plus rules on top of that that can adjust parameters (membrane capacitance, resting potential, synaptic conductance, etc.), and it fails to replicate the observables. Obviously this is an extreme example, but I just want better neuron models so nothing like this ever has the chance to happen.
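For concreteness, here is a minimal sketch of the kind of crappy model I mean: homogeneous LIF point neurons with fixed synaptic kicks and no plasticity. Every parameter value here is made up for illustration.

```python
import numpy as np

# A deliberately minimal model of a ~10-cell "organoid": homogeneous LIF
# point neurons, fixed synaptic weights, no plasticity of any kind.
rng = np.random.default_rng(1)
n, dt, steps = 10, 0.1, 5000                              # 10 cells, 0.1 ms step, 500 ms
tau_m, v_rest, v_th, v_reset = 20.0, -70.0, -50.0, -70.0  # ms, mV
w = rng.normal(0.0, 1.5, (n, n))                          # mV kick per presynaptic spike
np.fill_diagonal(w, 0.0)                                  # no autapses

v = np.full(n, v_rest)
drive = np.zeros(n)
drive[:3] = 25.0                                          # constant input to three cells (mV)
spikes = []
for t in range(steps):
    fired = v >= v_th
    spikes.append(fired.copy())
    v[fired] = v_reset
    # leaky integration toward rest plus instantaneous synaptic kicks
    v += dt / tau_m * (v_rest - v + drive) + w.T @ fired
print("spike counts per neuron:", np.sum(spikes, axis=0))
```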
Basically, if we can't model an organoid, we could
Three is obviously a bad plan. Two is really, really hard. One should be relatively easy, provided we have a reasonable threshold for what we consider to be accurate electrophysiology. We could have good biophysical models that recreate it, or we could have recurrent neural nets modeling the input current -> membrane voltage relation of each neuron. It just seems like an easy way to cross off a potential cause of failure (famous last words, I'm sure).
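Here is a rough sketch of the recurrent-net version for a single neuron, assuming patch-clamp-style training data; the GRU architecture and the synthetic traces are placeholders, not a recommendation.

```python
import torch
import torch.nn as nn

# Sketch of the "one RNN per neuron" idea: learn the mapping from an
# injected-current trace to the recorded membrane-voltage trace.
class CurrentToVoltage(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, current):           # current: (batch, time, 1)
        h, _ = self.rnn(current)
        return self.out(h)                # predicted voltage, same shape

model = CurrentToVoltage()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Placeholder data; in reality these would be paired current injections
# and voltage recordings from one real neuron.
i_inj = torch.randn(8, 200, 1)
v_recorded = torch.randn(8, 200, 1)
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(model(i_inj), v_recorded)
    loss.backward()
    opt.step()
print(f"final training loss: {loss.item():.3f}")
```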
As for your business-logic point, it is valid, but I am worried that black-boxing too much would lead to collateral damage. I am not sure if that's what you meant when you said spiking neural networks are the wrong starting point. In any case, I would like higher-order thinking to stay a function of spiking neurons, even if things like reflexes and basal behavior can be replaced without loss.
I think I have identified our core disagreement: you believe a neuron or a small group of neurons is fundamentally computationally simple, and I don't. I guess technically I'm agnostic about it, but my intuition is that a real neuron cannot be abstracted to a LIF neuron the way a transistor can be abstracted to a cartoon switch (not that you were suggesting LIF is sufficient, just an example). One of the big questions I have is how error scales from single neurons to overall activity. If a neuron model is 90% accurate wrt electrophysiology, and the synapses connecting it are 90% accurate to real synapses, does that recover 90% of brain function? Is the last 10% something that is computationally irrelevant and can just be abstracted away, giving you effectively 100% functionality? Or is 90% accuracy for single neurons magnified until the real accuracy is something like 0.9^(80 billion)? I think it is unlikely to be that bad, but I really don't know, because of the abject failure to upload anything, as you point out. I am bracing myself for a world where we need a lot of data.
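One crude way to poke at that scaling question: push the same input through a "true" layered random network and a copy whose weights carry 10% error, and watch the divergence grow with depth. This is a toy stand-in, not a claim about real tissue.

```python
import numpy as np

# Feed one input through a "true" random layered network and through a
# copy whose weights are perturbed by 10%, and track how fast the two
# activity patterns diverge with depth.
rng = np.random.default_rng(2)
width, depth = 100, 20
x_true = rng.normal(size=width)
x_approx = x_true.copy()
for layer in range(depth):
    w = rng.normal(size=(width, width)) / np.sqrt(width)
    w_noisy = w * (1.0 + 0.1 * rng.normal(size=w.shape))  # 10% weight error
    x_true = np.tanh(w @ x_true)
    x_approx = np.tanh(w_noisy @ x_approx)
    err = np.linalg.norm(x_true - x_approx) / np.linalg.norm(x_true)
    print(f"layer {layer + 1:2d}: relative error {err:.2f}")
```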
Let's assume for the moment, though, that the HH model with suitable electrical and chemical synapses would be sufficient to capture WBE. What I still really want to see is a paper saying "we look at x, y, z properties of neurons that can be measured post mortem and predict a, b, c properties of those neurons by tuning capacitance, conductance, and resting potential in the HH model; our model is P% accurate when compared against patch clamp experiments." In parallel, there should be a project trying to characterize how error-tolerant real neurons and neural networks can be, so we can find the lower bound on P. I actually tried something like that for synaptic weight (how does performance degrade when adding noise to the weights of a spiking neural network?), but I was so disillusioned with the learning rules that I am not confident in my results. I'm not sure anyone has the ability to answer these kinds of questions, because we are still just so bad at emulating anything.
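For reference, here is the single-compartment HH model I have in mind, with the classic squid-axon parameters; the knobs such a paper would tune per neuron (capacitance, maximal conductances, reversal potentials) are the constants at the top.

```python
import numpy as np

# Single-compartment Hodgkin-Huxley neuron with the textbook squid-axon
# parameters; the per-neuron "knobs" are the constants below.
C_m = 1.0                                  # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3          # maximal conductances, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.4        # reversal potentials, mV

# standard voltage-dependent rate functions (v in mV, rates in 1/ms)
def alpha_n(v): return 0.01 * (v + 55) / (1 - np.exp(-(v + 55) / 10))
def beta_n(v):  return 0.125 * np.exp(-(v + 65) / 80)
def alpha_m(v): return 0.1 * (v + 40) / (1 - np.exp(-(v + 40) / 10))
def beta_m(v):  return 4.0 * np.exp(-(v + 65) / 18)
def alpha_h(v): return 0.07 * np.exp(-(v + 65) / 20)
def beta_h(v):  return 1.0 / (1 + np.exp(-(v + 35) / 10))

dt, steps = 0.01, 50000                    # 0.01 ms step, 500 ms total
v, n, m, h = -65.0, 0.317, 0.053, 0.596    # resting state
spike_count, above = 0, False
for step in range(steps):
    i_inj = 10.0 if 100.0 <= step * dt <= 400.0 else 0.0  # uA/cm^2 pulse
    i_ion = (g_Na * m**3 * h * (v - E_Na)
             + g_K * n**4 * (v - E_K)
             + g_L * (v - E_L))
    v += dt * (i_inj - i_ion) / C_m
    n += dt * (alpha_n(v) * (1 - n) - beta_n(v) * n)
    m += dt * (alpha_m(v) * (1 - m) - beta_m(v) * m)
    h += dt * (alpha_h(v) * (1 - h) - beta_h(v) * h)
    if v > 0 and not above: spike_count += 1   # count upward zero crossings
    above = v > 0
print(f"spikes during the current pulse: {spike_count}")
```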
Edit:
Also, I am not sure if you're proposing we compress multiple neurons down into a simpler computational block, the way a real arrangement of transistors can be abstracted into logic gates or adders or whatever. I am not a fan of that for WBE, both for philosophical reasons and because I think it is less likely to capture everything we care about, especially for individual people.
Just to be abundantly clear, my main argument in the post is not "single-cell transcriptomics leading to perfect electrophysiology is essential for whole brain emulation, and anything less than that is doomed to fail." It is closer to "I have not seen a well-developed theory that can predict even a single cell's electrophysiology given things we can measure post mortem, so we should really research that if we care about whole brain emulation. If it already exists, please tell me about it."
I think you make good points when you point out the failures of C. elegans uploading and other computational neuroscience failures. To me, it makes a lot of sense to copy single cells as closely as possible and then start modeling learning rules and synaptic conductances and whatnot. If we find out later that a certain feature of a neuron model can be abstracted away, that's great. But a lot of what I see right now is people rushing to study learning rules using homogeneous leaky integrate-and-fire neurons. In my mind they are doing machine learning on spiking neural networks, not computational neuroscience. I don't know how relevant that particular critique is, but it has been a frustration of mine for a while.
I am still very new to this whole field, I hope that cleared things up. If it did not, I apologize.
Honestly, I'm not sure. I read about the Biosphere 2 experiments a while ago, and they pretty much failed to make a self-sustaining colony with only a few people and way more mass than we could practically get into space. I really want us as a species to keep working on that so we can solve any potential problems in parallel with our development of rockets or other launch systems. I could see a space-race-style push getting us there in under a decade, but there currently isn't any political or commercial motivation to do that.

I don't know if it would necessarily need a military. I could easily be very wrong, but there's so much space in space, and so much stuff on Earth, that trying to conquer a habitat with a few thousand people on it seems a little unnecessary. Italy won't take over Vatican City, not because it can't but because there really isn't a good reason to.

As for political freedom, that's the most speculative of all, as I understand it less than the technology. My intuition is that they could have it, simply because a self-sustaining colony doesn't need to produce any surplus a government would be interested in taxing. If you set up an asteroid-mining operation, I can see all the governments wanting to take a cut of the profits, but if all you wanted was to get away from an implicit surveillance state, it would have to be truly dystopian to keep you from leaving. As long as you don't launch anything dangerous toward Earth, you aren't growing exponentially to the point where you might rival the power of a country, and you aren't engaging in incredibly lucrative trade, the only motivation left to govern you would be control for control's sake, and I guess I'm just optimistic enough to think that there will always be at least one place on Earth with high tech that isn't that dystopian.
To my knowledge, the most recent C. elegans model was all the way back in 2009: it is this PhD thesis, which I admit I have not read in its entirety. I found it on the OpenWorm history page, which is looking rather sparse, unfortunately.
I was trying to go through everything they have, but again, I was very disillusioned after trying to fully replicate and expand this paper on chemotaxis. You can read more about what I did here on my personal site. It's pretty lengthy, so the TL;DR is that I tried to convert his highly idealized model back into explicit neuron models, and it just didn't really work. Explicitly modeling C. elegans in any capacity would be a great project, because there is so much published; you can copy others and fill in details or abstract as you wish. There is even an OpenWorm Slack, but I don't remember how to join, and it's relatively inactive.
That is more than enough stuff to keep you busy, but if you want to hear me complain about learning rules, read on.
I am really frustrated with learning rules for a couple of reasons. The biggest one is that researchers just don't seem to have very much follow-through on the obvious next steps. Either that, or I'm really bad at finding/reading papers. In any case, what I would love to work on, or read about, is a learning algorithm that
From what I can tell, many papers address one or two of these but fail to capture everything. Maybe I'm being too greedy, but this list seems like a pretty sensible minimum for whatever learning algorithms are at play in the brain.
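For concreteness, the standard building block most of this literature starts from is pair-based STDP; here is a minimal sketch with illustrative constants, nothing more.

```python
import numpy as np

# Pair-based STDP: pre-before-post potentiates, post-before-pre
# depresses, with exponential windows. All constants are illustrative.
a_plus, a_minus = 0.01, 0.012       # learning-rate amplitudes
tau_plus, tau_minus = 20.0, 20.0    # window time constants, ms

def stdp_dw(t_pre, t_post):
    """Weight change for a single pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:                      # pre fired first -> potentiate
        return a_plus * np.exp(-dt / tau_plus)
    return -a_minus * np.exp(dt / tau_minus)  # post first -> depress

for dt in (-40, -10, -1, 1, 10, 40):
    print(f"dt = {dt:+3d} ms -> dw = {stdp_dw(0.0, float(dt)):+.4f}")
```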
I am going to work on the project I outline here, but I would genuinely love to help you, even if it's just you bouncing ideas off me. Be warned: I'm also not formally trained in a lot of neuroscience, so take everything I say with a heap of salt.