All of Dom Polsinelli's Comments + Replies

To my knowledge, the most recent C. elegans model dates all the way back to 2009.

It is this PhD thesis, which I admit I have not read in its entirety.

I found it on the OpenWorm history page, which is looking rather sparse, unfortunately.

I was trying to go through everything they have, but again, I was very disillusioned after trying to fully replicate + expand this paper on chemotaxis. You can read more about what I did here on my personal site. It's pretty lengthy, so the TL;DR is that I tried to convert his highly idealized model back into explicit... (read more)

Based on this and your other comment, you seem to be pro-GEVI instead of patch clamp, am I correct? Assuming GEVIs were used (or some other, better technology) to find all electrophysiology, why would that be a waste of time? Even if we can get by with a few thousand template neurons and individual tuning is not necessary (which seems to be the view of Steven Byrnes and maybe you), how should we go about getting those template neurons without a lot of research into correlating morphology, genetic expression, and electrophysiology? If we don't need them, why wo... (read more)

Yes, I am familiar with the sleep = death argument. I really don't have any counter; at some point, though, I think we all just kind of arbitrarily draw a line. I could be a solipsist, I could believe in Last Thursdayism, I could believe some people are p-zombies, I could believe in the multiverse. I don't believe in any of these, but I don't have any real arguments for them, and I don't think anyone has any knockdown arguments one way or the other. All I know is that I fear SOMA-style brain upload, I fear Star Trek-style teleportation, but I don't fear gradua... (read more)

I think this is generally admirable in theory, at least in broad strokes, but way, way harder than you anticipate. In the last project I worked on alone, I was trying to copy C. elegans chemotaxis with a biological neuron model and then have it remember where food was from a previous run and steer in that direction even if there was no food anywhere in its virtual arena, something real C. elegans has been observed doing. Even the first part was not a huge success, and because of that I put an indefinite pause on the second part. I would love to see you carry on the proj... (read more)

hpcfung
Very interesting, thank you for letting me know. I kind of expected that this is where we are right now; I am still catching up with the literature. So even though we have the complete C. elegans connectome, this is not enough? (As you said, we don't understand individual neurons, synaptic weights, or learning rules well enough.) A quick search seems to show that the relevant sensorimotor circuits have been studied before. Is it not possible to model these directly? https://pmc.ncbi.nlm.nih.gov/articles/PMC4082684/ If not, perhaps starting with an organism that is even simpler than C. elegans would help.

First of all, I hate analogies in general, but that's a pet peeve; they are useful. Going with your shaken-up circuit as an analogy to brain organoids, and assuming it is true, I think it is more useful than you give it credit for. If you have a good theory of what all those components are individually, you would still be able to predict something like the voltage between two arbitrary points. If you model resistors as some weird non-ohmic entity, you'll probably get the wrong answer, because you missed the fact that they behave ohmically in many situations. If you nev... (read more)

Steven Byrnes
Ah, but how do you know that the person that went to bed last night wasn’t a different person, who died, and you are the “something else” that woke up with all of that person’s memories? And then you’ll die tonight, and tomorrow morning there will be a new person who acts like you and knows everything you know but “you would be dead and something else would exist”? …It’s fine if you don’t want to keep talking about this. I just couldn’t resist.  :-P

I agree that, if you have a full SPICE transistor model, you’ll be able to model any arbitrary crazy configuration of transistors. If you treat a transistor as a cartoon switch, you’ll be able to model integrated circuits perfectly, but not to model transistors in very different weird contexts. By the same token, if you have a perfect model of every aspect of a neuron, then you’ll be able to model it in any possible context, including the unholy mess that constitutes an organoid. I just think that getting a perfect model of every aspect of a neuron is unnecessary, and unrealistic. And in that framework, successfully simulating an organoid is neither necessary nor sufficient to know that your neuron model is OK.

Not my claim, so I'm not defending this too hard, but from my lab experience relatively few genes seem to control bulk properties, and then there are a whole bunch of higher-order corrections. Literally one or two genes being on/off can determine whether a neuron is excitatory or inhibitory. If you subscribe to Izhikevich's classification of bistable/monostable and integrator/resonator, you would only need 3 genes with binary expression. After that you get a few more to determine time constants and stuff. I still think whole transcriptome would be helpful, especial... (read more)
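For concreteness on that classification: Izhikevich's point is that a deliberately minimal two-variable model reproduces most known firing classes, with a handful of constants selecting the class. A sketch using two parameter sets from his 2003 paper; the drive current and simulation settings here are illustrative choices, not taken from the comment:

```python
def izhikevich(a, b, c, d, I, T=200.0, dt=0.25):
    """Izhikevich (2003) two-variable neuron model, forward-Euler.
    v is membrane potential (mV); u is a recovery variable.
    Returns the list of spike times (ms) under constant input current I."""
    v, u = -65.0, b * -65.0
    spikes = []
    for step in range(int(T / dt)):
        # dv/dt = 0.04*v^2 + 5*v + 140 - u + I
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                # spike peak reached: record and reset
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes

# Same equations; only (c, d) differ, yet the firing pattern changes class.
regular_spiking = izhikevich(a=0.02, b=0.2, c=-65, d=8, I=10)
chattering      = izhikevich(a=0.02, b=0.2, c=-50, d=2, I=10)
```

Four scalars per neuron suffice here to flip between regular spiking and burst firing, which is the sense in which a few binary gene switches could in principle pick a class; whether real neurons are captured this cheaply is exactly what is in dispute.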

I apologize for my sloppy language, "computationally simple" was not well defined. You are quite right when you say there is no P% accuracy. I think my offhand remark about spiking neural networks was not helpful to this discussion. 

In a practical sense, here is what I mean. Imagine someone makes a brain organoid with ~10 cells. They can directly measure membrane voltage and any other relevant variable they want, because this is hypothetical. Then they try to emulate whatever algorithm this organoid has going on, its direct input to output and whateve... (read more)

Steven Byrnes
Yeah I think “brain organoids” are a bit like throwing 1000 transistors and batteries and capacitors into a bowl, and shaking the bowl around, and then soldering every point where two leads are touching each other, and then doing electrical characterization on the resulting monstrosity.  :) Would you learn anything whatsoever from this activity? Umm, maybe? Or maybe not. Regardless, even if it’s not completely useless, it’s definitely not a central part of understanding or emulating integrated circuits. (There was a famous paper where it’s claimed that brain organoids can learn to play Pong, but I think it’s p-hacked / cherry-picked.) There’s just so much structure in which neurons are connected to which in the brain—e.g. the cortex has 6 layers, with specific cell types connected to each other in specific ways, and then there’s cortex-thalamus-cortex connections and on and on. A big ball of randomly-connected neurons is just a totally different thing.

Yes and no. My WBE proposal would be to understand the brain algorithm in general, notice that the algorithm has various adjustable parameters (both because of inter-individual variation and within-lifetime learning of memories, desires, etc.), do a brain-scan that records those parameters for a certain individual, and now you can run that algorithm, and it’s a WBE of that individual. When you run the algorithm, there is no particular reason to expect that the data structures you want to use for that will superficially resemble neurons, like with a 1-to-1 correspondence. Yes you want to run the same algorithm, producing the same output (within tolerance, such that “it’s the same person”), but presumably you’ll be changing the low-level implementation to mesh better with the affordances of the GPU instruction set rather than the affordances of biological neurons.

The “philosophical reasons” are presumably that you think it might not be conscious? If so, I disagree, for reasons briefly summarized in §1.6 here.

I think I have identified our core disagreement: you believe a neuron or a small group of neurons is fundamentally computationally simple, and I don't. I guess technically I'm agnostic about it, but my intuition is that a real neuron cannot be abstracted to a LIF neuron the way a transistor can be abstracted to a cartoon switch (not that you were suggesting LIF is sufficient, just an example). One of the big questions I have is how error scales from neuron to overall activity. If a neuron model is 90% accurate wrt electrophysiology and the synapses connecti... (read more)
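For reference, the LIF abstraction in question reduces the neuron to a single leaky voltage variable plus a fixed threshold; channel kinetics, dendritic structure, and adaptation are all thrown away. A minimal sketch, with parameter values that are illustrative rather than fit to any real cell:

```python
def lif(I, T=100.0, dt=0.1, tau=10.0, v_rest=-65.0,
        v_thresh=-50.0, v_reset=-70.0, R=1.0):
    """Leaky integrate-and-fire: tau * dv/dt = -(v - v_rest) + R*I.
    Returns spike times (ms) under a constant input current I."""
    v = v_rest
    spikes = []
    for step in range(int(T / dt)):
        v += (dt / tau) * (-(v - v_rest) + R * I)
        if v >= v_thresh:
            spikes.append(step * dt)
            v = v_reset        # instantaneous reset; no refractory period
    return spikes

suprathreshold = lif(I=20.0)   # steady state -45 mV > threshold: fires
subthreshold   = lif(I=10.0)   # steady state -55 mV < threshold: silent
```

The disagreement above is over whether any table of such low-dimensional models, however large, captures what a real neuron contributes.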

Steven Byrnes
I guess I would phrase it as “there’s a useful thing that neurons are doing to contribute to the brain algorithm, and that thing constitutes a tiny fraction of the full complexity of a real-world neuron”. (I would say the same thing about MOSFETs. Again, here’s how to model a MOSFET, it’s a horrific mess. Is a MOSFET “fundamentally computationally simple”? Maybe?—I’m not sure exactly what that means. I’d say it does a useful thing in the context of an integrated circuit, and that useful thing is pretty simple.)

The trick is, “the useful thing that a neuron is doing to contribute to the brain algorithm” is not something you can figure out by studying the neuron, just as “the useful thing that a MOSFET is doing to contribute to IC function” is not something you can figure out by studying the MOSFET. There’s no such thing as “Our model is P% accurate” if you don’t know what phenomenon you’re trying to capture. If you model the MOSFET as a cartoon switch, that model will be extremely inaccurate along all kinds of axes—for example, its thermal coefficients will be wrong by 100%. But that doesn’t matter, because the cartoon switch model is accurate along the one axis that matters for IC functioning.

The brain is generally pretty noise-tolerant. Indeed, if one of your neurons dies altogether, “you are still you” in the ways that matter. But a dead neuron is a 0% accurate model of a live neuron. ¯\_(ツ)_/¯

Just because every part of the brain has neurons and synapses doesn’t mean every part of the brain is a “spiking neural network” with the connotation that that term has in ML, i.e. a learning algorithm. The brain also needs (what I call) “business logic”—just as every ML github repository has tons of code that is not the learning algorithm itself. I think that the low-thousands of different neuron types are playing quite different roles in quite different parts of the brain algorithm, and that studying “spiking neural networks” is the wrong starting point.
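To make the cartoon-switch model literal: treat each transistor as a boolean switch and the logic of a CMOS gate falls out, while thermal coefficients, delay, and leakage are modeled as nothing at all. A toy sketch, purely illustrative and not from the comment:

```python
def cmos_nand(a: bool, b: bool) -> bool:
    """CMOS NAND with every transistor as a cartoon switch.
    Pull-up network: two pMOS in parallel (each conducts on a low gate).
    Pull-down network: two nMOS in series (each conducts on a high gate)."""
    pull_up = (not a) or (not b)
    pull_down = a and b
    assert pull_up != pull_down   # static CMOS: exactly one network conducts
    return pull_up                # output is high iff pulled up

inputs = [(False, False), (False, True), (True, False), (True, True)]
truth_table = [cmos_nand(a, b) for a, b in inputs]
# The switch model nails the one axis that matters for IC function (the logic)
# while saying nothing about everything else a SPICE model captures.
```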
  1. Thank you for that article, I don't know how it didn't come up when I was researching this. Others finding papers I should have been able to find alone is a continuous frustration of mine.
  2. I would love to live in a world where we have a few thousand template neurons and can just put them together based on a few easily identifiable factors (~3-10 genes, morphology, brain region) but until I find a paper that convincingly recreates the electrophysiology based on those things I have to entertain the idea that somewhere between 10 and 10^5 are relevant. I wou
... (read more)
Steven Byrnes
I think the genome builds a brain algorithm, and the brain algorithm (like practically every algorithm in your CS textbook) includes a number of persistent variables that are occasionally updated in such-and-such way under such-and-such circumstance. Those variables correspond to what the neuro people call plasticity—synaptic plasticity, gene expression plasticity, whatever. Some such occasionally-updated variables are parameters in within-lifetime learning algorithms that are part of the brain algorithm (akin to ML weights). Other such variables are not, instead they’re just essentially counter variables or whatever (see §2.3.3 here).

The “understanding the brain algorithm” research program would be figuring out what the brain algorithm is, how and why it works, and thus (as a special case) what the exact set of “persistent variables that are occasionally updated” is, and how they are stored in the brain. If you complete this research program, you get brain-like AGI, but you can’t upload any particular adult human. Then a different research program is: take an adult human brain, and go in with your microtome etc. and actually measure all those “persistent variables that are occasionally updated”, which comprise a person’s unique memories, beliefs, desires, etc.

I think the first research program (understanding the brain algorithm) doesn’t require a thorough understanding of neuron electrophysiology. For example (copying from §3.1 here), suppose that I want to model a transistor (specifically, a MOSFET). And suppose that my model only needs to be sufficient to emulate the calculations done by a CMOS integrated circuit. Then my model can be extremely simple—it can just treat the transistor as a cartoon switch.

Next, again suppose that I want to model a transistor. But this time, I want my model to accurately capture all measurable details of the transistor. Then my model needs to be mind-bogglingly complex, involving many dozens of obscure SPICE modeling parameters

Honestly, I'm not sure. I read about the Biosphere 2 experiments a while ago, and they pretty much failed to make a self-sustaining colony with only a few people and way more mass than we could practically get into space. I really want us as a species to keep working on that so we can solve any potential problems in parallel with our development of rockets or other launch systems. I could see a space-race-esque push getting us there in under a decade, but there currently isn't any political or commercial motivation to do that. I don't know if it would necess... (read more)

samuelshadrach
Got it. I’m not sure, but I think building a colony (or hiding in an existing colony) in a remote rainforest or mountainous region may be easier to achieve if the goal is just security through obscurity. Also easier to be self-sustaining, at least with today’s tech. There are many such groups of people that exist today, that are mostly self-sustaining yet don’t produce enough surplus that anyone else cares to find out what they’re doing. My guess is it’ll be one of the nuclear powers who will build the first space colony to begin with, so it’ll be theirs by default, no conquering needed. Also, the US defence establishment in particular has a history of wanting ownership and soft power over emerging technologies long before it’s obvious what the commercial value from it will be, and I don’t see that as irrational from their point of view.

I don't know about inevitable, but I imagine it is such an attractive option to governments that, if the technology gets there, it will be enacted before laws are passed preventing it, if any ever are. I would include a version of this where it is practically mandatory through incentives like greatly increased cost of insurance, near inability to defend yourself in a court or cross borders if you lack it, or it just becoming the social norm to give up as much data about yourself as possible.

That said, I also think that if things go well we will have goo... (read more)

samuelshadrach
Hey. Thanks for the reply.  “Self sustaining” seems like the key word here. The colony would need independent supply of food, water and energy, and it would need independent military and government. What time scale are you thinking around? And do you expect space colonies to obtain this level of political freedom from existing nuclear powers? If yes why? 

I have a vague sense that these two people live in my brain and are constantly arguing and that argument is fundamentally unproductive and actively harmful for whoever should be dominant, if either.

I am very interested in mind uploading

I want to do a PhD in a related field and comprehensively go through "Whole Brain Emulation: A Roadmap" and take notes on what has changed since it was published

If anyone knows relevant papers/researchers that would be useful to read for that, or so I can make an informed decision on where to apply to grad school next year, please let me know

Maybe someone has already done a comprehensive update on brain emulation; I would like to know, and I would still like to read more papers before I apply to grad school

Steven Byrnes
Good luck! I was writing about it semi-recently here. General comment: it’s also possible to contribute to mind uploading without getting a PhD—see the last section of that post. There are job openings that aren’t even biology, e.g. ML engineering. And you could also earn money and donate it; my impression is that there’s a desperate need.
Garrett Baker
Those invited to the Foresight workshop (also the 2023 one) are probably a good start, as well as Foresight’s 2023 and 2024 lectures on the subject.

My interpretation of that was: whenever you're having an opinion or discussion in which facts are relevant, make sure you actually know the statistics. An example is an argument (discussion?) my whole family had mid-COVID. The claim of some people was that, generally, COVID was only as bad as the flu. Relevant statistics were readily available for things like mortality rate and total deaths that some people making said claim were ignorant of (off by OOMs). With COVID it seems obvious, but for other things maybe not. Things people frequently have strong opinio... (read more)

I agree with a lot of what you said, but I am generally skeptical of any emulation that does not work from a bottom-up simulation of neurons. We really don't know how or what causes consciousness, and I think it can't be ruled out that something with the same inputs and outputs at a high level misses out on something important that generates consciousness. I don't necessarily believe in p-zombies, but if they are possible then it seems they would be built by creating something that copies the high-level behavior but not the low-level functions. Als... (read more)