Thanks for the clarification and your thoughts. In my view, the question is to what extent polymer gel embedding helps maintain morphomolecular structure, and whether that benefit is worth the trade-off of removing the lipids, which could themselves have information content. https://brainpreservation.github.io/Biomolecules#how-lipid-biomolecules-in-cell-membranes-could-affect-ion-flow
You are in good company in thinking that clearing and embedding the tissue in a hydrogel is the best approach. Others with expertise in the area ha...
Thanks for your interest!
Does OBP plan to eventually expand their services outside the USA?
In terms of our staff traveling to other locations to do the preservation procedure, unfortunately not in the immediate future. We don't have the funding for this right now.
And how much would it cost if you didn’t subsidize it?
There are so many factors. It depends a lot on where in the world we are talking about. If we are talking about someone who legally dies locally in Salem, perhaps a minimal estimated budget would be (off the top of my head, unoffici...
We discuss the possibility of fluid preservation after tissue clearing in our article:
An alternative option is to perform tissue clearing prior to long-term preservation (118). This would remove the lipids in the brain, but offer several advantages, including repeated non-invasive imaging, and potentially reduced oxidative damage over time (119).
Our fluid preservation article also has a whole section on it. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11058410/#S7
I'm not sure why this option is much more robust than formaldehyde fixation a...
I can't speak for Adele, but here is one somewhat recent article by neuroscientists discussing memory storage mechanisms: https://bmcbiol.biomedcentral.com/articles/10.1186/s12915-016-0261-6
DNA is discussed as one possible storage mechanism in the context of epigenetic alterations to neurons. See the section by Andrii Rudenko and Li-Huei Tsai.
This is an important question. While I don't have a full answer, my impression is that yes, it seems to preserve the important information present in DNA. More information here: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11058410/#S4.4
Thanks for the comment. I'm definitely not assuming that p(success) would be a monocausal explanation. I'm mostly presenting this data to give evidence against that assumption, because people frequently make statements such as "of course almost nobody wants cryonics, they don't expect it will work".
I also agree that "is being revived good in expectation / good with what probability" is another common concern. Personally, I think niplav has some good analysis of net-negative revival scenarios: https://niplav.site/considerations_on_cryonics.html
Btw, ac...
Very high-effort, comprehensive post. Any interest in turning some of your predictions into markets on Manifold or some other prediction market website? It might help add some quantification.
A simple solution is to just make doctors/hospitals liable for harm which occurs under their watch, period. Do not give them an out involving performative tests which don’t actually reduce harm, or the like. If doctors/hospitals are just generally liable for harm, then they’re incentivized to actually reduce it.
Can you explain more what you actually mean by this? Do you mean that if someone comes into the hospital and dies, the doctors are responsible regardless of why they died? If you mean that we figure out whether the doctors are responsible for whether th...
Out of curiosity, what makes you think that the initial freezing process causes too much information loss?
I agree with most of this post, but it doesn’t seem to address the possibility of whole brain emulation. However, many (perhaps most) would argue this is unlikely to play a major role because AGI will come first.
Thanks so much for putting this together Mati! If people are interested in cryonics/brain preservation and would like to learn about (my perspective on) the field from a research perspective, please feel free to reach out to me: https://andrewtmckenzie.com/
I also have some external links/essays available here: https://brainpreservation.github.io/
It seems to me like your model is not taking technical debt sufficiently into account. https://neurobiology.substack.com/p/technical-debt-probably-the-main-roadblack-in-applying-machine-learning-to-medicine
It seems to me like this is the main thing that will limit the extent to which foundation models can consistently beat newly trained specialized models.
Anecdotally, I know several people who don’t like to use ChatGPT because its training data cuts off in 2021. This seems like a form of technical debt.
I guess it depends on how easily ada...
Sounds good, can't find your email address, DM'd you.
Those sound good to me! I donated to your charity (the Animal Welfare Fund) to finalize it. Lmk if you want me to email you the receipt. Here's the Manifold market:
Bet
Andy will donate $50 to a charity of Daniel's choice now.
If, by January 2027, there is not a report from a reputable source confirming that at least three companies that would previously have relied upon programmers, and meet a defined level of success, are being run without the need for human programmers, due to the independent capabilities of an AI developed by Op...
Sounds good, I'm happy with that arrangement once we get these details figured out.
Regarding the human programmer formality, it seems like business owners would have to be really incompetent for this to be a factor. Plenty of managers have coding experience. If the programmers aren't doing anything useful then they will be let go or new companies will start that don't have them. They are a huge expense. I'm inclined to not include this since it's an ambiguity that seems implausible to me.
Regarding the potential ban by the government, I wasn't r...
Understandable. How about this?
Bet
Andy will donate $50 to a charity of Daniel's choice now.
If, by January 2027, there is not a report from a reputable source confirming that at least three companies that would previously have relied upon programmers, and meet a defined level of success, are being run without the need for human programmers, due to the independent capabilities of an AI developed by OpenAI or another AI organization, then Daniel will donate $100, adjusted for inflation as of June 2023, to a charity of Andy's choice.
Terms
Reputable Sourc...
I’m wondering if we could make this into a bet. If by remote workers we include programmers, then I’d be willing to bet that GPT-5/6, depending upon what that means (it might be easier to say the top LLMs or other models trained by anyone by 2026?), will not be able to replace them.
These curves are due to temporary plateaus, not permanent ones. Moore's law is an example of a constraint that seems likely to plateau. I'm talking about takeoff speeds, not eventual capabilities with no resource limitations, which I agree would be quite high and I have little idea of how to estimate (there will probably still be some constraints, like within-system communication constraints).
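For concreteness, here is a minimal sketch of what I mean by growth that looks exponential early on but plateaus once a constraint binds. The symbols x_0 (initial capability), r (growth rate), and K (the ceiling set by whichever constraint binds first) are purely illustrative labels of my own, not parameters from any particular takeoff model:

\[
x_{\mathrm{exp}}(t) = x_0 e^{rt},
\qquad
x_{\mathrm{log}}(t) = \frac{K}{1 + \frac{K - x_0}{x_0} e^{-rt}}
\]

\[
\lim_{t \to \infty} x_{\mathrm{exp}}(t) = \infty,
\qquad
\lim_{t \to \infty} x_{\mathrm{log}}(t) = K
\]

The two curves are nearly indistinguishable while x(t) is far below K, and they only diverge once the constraint starts to bind. On this framing, whether a given plateau is temporary or permanent comes down to whether the constraint setting K can itself be relaxed.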
Does anyone know of any AI-related predictions by Hinton?
Here's the only one I know of - "People should stop training radiologists now. It's just completely obvious within five years deep learning is going to do better than radiologists because it can get a lot more experience. And it might be ten years but we got plenty of radiologists already." - 2016, slightly paraphrased
This still seems like a testable prediction: by November 2026, radiologists should be completely replaceable by deep learning methods, setting aside regulatory requirements for trained physicians.
Thanks! I agree with you about all sorts of AI alignment essays being interesting and seemingly useful. My question was more about how to measure the net rate of AI safety research progress. But I agree with you that an/your expert inside view of how insights are accumulating is a reasonable metric. I also agree with you that the acceptance of TAI x-risk in the ML community as a real thing is useful and that - while I am slightly worried about the risk of overshooting, like Scott Alexander describes - this situation seems to be generally improving.
Re...
Good essay! Two questions if you have a moment:
1. Can you flesh out your view of how the community is making "slow but steady progress right now on getting ready"? In my view, much of the AI safety community seems to be doing things with unclear safety value, like (a) coordinating a pause in model training, which seems likely to me to make things less safe if implemented (because it could lead to algorithmic and hardware overhangs), or (b) converting to capabilities work (quite common, seems like an occupational hazard for someone with initially...
Can you flesh out your view of how the community is making "slow but steady progress right now on getting ready"?
I didn't realize you had put so much time into estimating take-off speeds. I think this is a really good idea.
This seems substantially slower than the implicit take-off speed estimates of Eliezer, but maybe I'm missing something.
I think the amount of time you described is probably shorter than I would guess. But I haven't put nearly as much time into it as you have. In the future, I'd like to.
Still, my guess is that this amount of time is enough that there are multiple competing groups, rather than only one. So it seems to me like there w...
Thanks for writing this up as a shorter summary Rob. Thanks also for engaging with people who disagree with you over the years.
Here's my main area of disagreement:
General intelligence is very powerful, and once we can build it at all, STEM-capable artificial general intelligence (AGI) is likely to vastly outperform human intelligence immediately (or very quickly).
I don't think this is likely to be true. Perhaps it is true of some cognitive architectures, but not for the connectionist architectures that are the only known examples of human-like ...
Agreed. A common failure mode in these discussions is to treat intelligence as equivalent to technological progress, instead of as an input to technological progress.
Yes, in five years we will likely have AIs that will be able to tell us exactly where it would be optimal to allocate our scientific research budget. Notably, that does not mean that all current systemic obstacles to efficient allocation of scarce resources will vanish. There will still be the same perverse incentive structure for funding allocated to scientific progress as there is toda...
I can see how both Yudkowsky's and Hanson's arguments can be problematic because they either assume fast or slow takeoff scenarios, respectively, and then nearly everything follows from that. So I can imagine why you'd disagree with every one of Hanson's paragraphs based on that. If you think there's something he said that is uncorrelated with the takeoff speed disagreement, I might be interested, but I don't agree with Hanson about everything either, so I'm mainly only interested if it's also central to AI x-risk. I don't want you to waste your time. ...
To clarify, when I mentioned growth curves, I wasn't talking about timelines, but rather takeoff speeds.
In my view, rather than indefinite exponential growth based on exploiting a single resource, real-world growth follows sigmoidal curves, eventually plateauing. A hypothetical AI at a human intelligence level would face constraints on the resources that allow it to improve, such as bandwidth, capital, skills, private knowledge, energy, space, robotic manipulation capabilities, material inputs, cooling requirements, legal and regulat...
I too was talking about takeoff speeds. The website I linked to is takeoffspeeds.com.
Me & the other LWers you criticize do not expect indefinite exponential growth based on exploiting a single resource; we are well aware that real-world growth follows sigmoidal curves. We are well aware of those constraints and considerations and are attempting to model them with things like the model underlying takeoffspeeds.com + various other arguments, scenario exercises, etc.
I agree that much of LW has moved past the foom argument and is solidly on Eliezer's side r...
Here's a nice recent summary by Mitchell Porter, in a comment on Robin Hanson's recent article (can't directly link to the actual comment unfortunately):
...Robin considers many scenarios. But his bottom line is that, even as various transhuman and posthuman transformations occur, societies of intelligent beings will almost always outweigh individual intelligent beings in power; and so the best ways to reduce risks associated with new intelligences, are socially mediated methods like rule of law, the free market (in which one is free to compete, but also
AIs can potentially trade with humans too, though; that's the whole point of the post.
Especially if the AIs have architectures/values that are human brain-like, and/or if humans have access to AI tools, intelligence augmentation, and/or whole brain emulation.
Also, it's not clear why AIs would find it easier to coordinate with one another than humans do with other humans, or humans with AIs. Coordination is hard for game-theoretic reasons.
These are all standard points, I'm not saying anything new here.
When you write "the AI" throughout this essay, it seems like there is an implicit assumption that there is a singleton AI in charge of the world. Given that assumption, I agree with you. But if that assumption is wrong, then I would disagree with you. And I think the assumption is pretty unlikely.
No need to relitigate this core issue everywhere, just thought this might be useful to point out.
I agree this is a very important point and line of research. This is how humans deal with sociopaths, after all.
Here’s me asking a similar question and Rob Bensinger’s response: https://www.lesswrong.com/posts/LLRtjkvh9AackwuNB/on-a-list-of-lethalities?commentId=J42Fh7Sc53zNzDWCd
One potential wrinkle is that in a very fast takeoff world, AIs could potentially coordinate very well because they would basically be the same AI, or close branches of the same AI.
"Science advances one funeral at a time" -> this seems to be both generally not true as well as being a harmful meme (because it is a common argument used to argue against life extension research).
https://www.lesswrong.com/posts/fsSoAMsntpsmrEC6a/does-blind-review-slow-down-science
Interesting, thanks. All makes sense and no need to apologize. I just like it when people write/think about schizophrenia and want to encourage it, even as a side project. IMO, it's a very important thing for our society to think about.
A lot of the quotes do find decreased connectivity, but some of them find increased connectivity between certain regions. That makes me think there might be something more complicated going on than just "increased or decreased" connectivity, perhaps involving specific types of connections. But that's just a guess, and I think an explanation that applies across all cortical connections is more parsimonious and therefore more likely a priori.
Of your criteria of "things to explain", here are some thoughts:
4.1 The onset of schizophrenia is typically in the late-tee...
Interesting theory and very important topic.
I think the best data source here is probably neuroimaging. Here's a recent review: https://www.frontiersin.org/articles/10.3389/fnins.2022.1042814/full. Here are some quotes from that:
...For functional studies, be they fluorodeoxyglucose positron emission tomography (FDG PET), rs-fMRI, task-based fMRI, diffusion tensor imaging (DTI) or MEG there generally is hypoactivation and disconnection between brain regions. ...
Histologically this gray matter reduction is accompanied by dendritic and synaptic densi
A quote I find relevant:
“A happy life is impossible, the highest thing that man can aspire to is a heroic life; such as a man lives, who is always fighting against unequal odds for the good of others; and wins in the end without any thanks. After the battle is over, he stands like the Prince in the re corvo of Gozzi, with dignity and nobility in his eyes, but turned to stone. His memory remains, and will be reverenced as a hero's; his will, that has been mortified all his life by toiling and struggling, by evil payment and ingratitude, is absorbed into Nirvana.” - Arthur Schopenhauer
Good point.
I know your question was probably just rhetorical, but to answer it regardless -- I was confused in part because it would have made sense to me if he had said it would be "better" if AGI timelines were short.
Lots of people want short AGI timelines because they think the alignment problem will be easy or otherwise aren't concerned about it and they want the perceived benefits of AGI for themselves/their family and friends/humanity (eg eliminating disease, eliminating involuntary death, abundance, etc). And he could have just said "better...
One of the main counterarguments here is that the existence of multiple AGIs allows them to compete with one another in ways that could benefit humanity. E.g. policing one another to ensure alignment of the AGI community with human interests. Of course, whether this actually would outweigh your concern in practice is highly uncertain and depends on a lot of implementation details.
You're right that the operative word in "seems more likely" is "seems"! I used the word "seems" because I find this whole topic really confusing and I have a lot of uncertainty.
It sounds like there may be a concern that I am using the absurdity heuristic or something similar against the idea of fast take-off and associated AI apocalypse. Just to be clear, I most certainly do not buy absurdity heuristic arguments in this space, would not use them, and find them extremely annoying. We've never seen anything like AI before, so our intuition (which might suggest that the situation seems absurd) is liable to be very wrong.
A few comments:
The biggest surprise to me was when he said that he thought short timelines were safer than long timelines. The reason for that is not obvious to me. Maybe something to do with contingent geopolitics.
What do you expect him to say? "Yeah, longer timelines and consolidated AGI development efforts are great, I'm shorting your life expectancies as we speak"? The only way you can be a Sam Altman is by convincing yourself that nuclear proliferation makes the world safer.
Got it. To avoid derailing with this object level question, I’ll just say that I think it seems helpful to be explicit about takeoff speeds in macrostrategy discussions. Ideally, specifying how different strategies work over distributions of takeoff speeds.
Thanks for this post. I agree with you that AI macrostrategy is extremely important and relatively neglected.
However, I'm having some trouble understanding your specific world model. Most concretely: can you link to or explain what your definition of "AGI" is?
Overall, I expect alignment outcomes to be significantly if not primarily determined by the quality of the "last mile" work done by the first AGI developer and other actors in close cooperation with them in the ~2 years prior to the development of AGI.
This makes me think that in your world...
OK, I get your point now better, thanks for clarifying -- and I agree with it.
In our current society, even if dogs could talk, I bet that we wouldn't allow humans to trade with them (or at least anything close to "free" trade), due to concerns about exploitation.
I quoted "And if she isn't a good girl, we genetically engineer and manufacture (ie. breed) an ex-wolf who is a good girl."
If genetically engineering a new animal would satisfy human goals, then this would imply that they don't care about their pets' preferences as individuals.
At the end of the day, no matter how many millions her trainer earns, Lassie just gets a biscuit & ear scritches for being such a good girl. And if she isn't a good girl, we genetically engineer and manufacture (ie. breed) an ex-wolf who is a good girl.
I don't think it's accurate to claim that humans don't care about their pets' preferences as individuals and try to satisfy them.
To point out one reason I think this: there are huge markets for pet welfare. There are even animal psychiatrists, and there are longevity companies for pets.
I'...
I also don't think that 'trade' necessarily captures the right dynamic. I think it's more like communism in the sense that families are often communist. But I also don't think that your comment, which sidesteps this important aspect of human-animal relations, is the whole story.
Indeed, 'trade' is not the whole story; it is none of the story - my point is that the human-animal relations, by design, sidestep and exclude trade completely from their story.
Now, how good that actual story is for dogs, or more accurately for the AI/human analogy, wolves, one c...
Thanks for this good post. A meta-level observation: the fact that people are grasping at straws like this is evidence that our knowledge of the causes of schizophrenia is quite limited.
“One day, one of the AGI systems improves to the point where it unlocks a new technology that can reliably kill all humans, as well as destroying all of its AGI rivals. (E.g., molecular nanotechnology.) I predict that regardless of how well-behaved it's been up to that point, it uses the technology and takes over. Do you predict otherwise?”
I agree with this, given your assumptions. But this seems like a fast takeoff scenario, right? My main question wasn't addressed: are we assuming a fast takeoff? I didn't see that explicitly discussed.
My understanding...
Thanks for the write-up. I have very little knowledge in this field, but I'm confused on this point:
...> 34. Coordination schemes between superintelligences are not things that humans can participate in (eg because humans can’t reason reliably about the code of superintelligences); a “multipolar” system of 20 superintelligences with different utility functions, plus humanity, has a natural and obvious equilibrium which looks like “the 20 superintelligences cooperate with each other but not with humanity”.
Yes. I am convinced that things like ‘oh
It is so great you are interested in this area! Thank you. Here are a few options for cryonics-relevant research:
- 21st Century Medicine: May be best to reach out to Brian Wowk (contact info here: https://pubmed.ncbi.nlm.nih.gov/25194588/) and/or Greg Fahy (possibly old contact info here: https://pubmed.ncbi.nlm.nih.gov/16706656/)
- Emil Kendziorra at Tomorrow Biostasis may know of opportunities. Contact info here: https://journals.plos.org/plosone/article/authors?id=10.1371/journal.pone.0244980
- Robert McIntyre at Nectome may know of opportunities. C...
But there’s also a significant utilitarian motivation - which is relevant here because utilitarianism doesn’t care about death for its own sake, as long as the dead are replaced by new people with equal welfare. Indeed, if our lives have diminishing marginal value over time (which seems hard to dispute if you’re taking our own preferences into account at all), and humanity can only support a fixed population size, utilitarianism actively prefers that older people die and are replaced.
I strongly disagree with this. I think the idea of human fungibility is f...
My point here is that this is a very strong claim about neuroscience -- that molecular structure doesn’t encode identity/memories.