
Given the shortening timelines (https://www.lesswrong.com/posts/K4urTDkBbtNuLivJx/why-i-think-strong-general-ai-is-coming-soon, https://www.lesswrong.com/posts/CvfZrrEokjCu3XHXp/ai-practical-advice-for-the-worried), perhaps it's time to think about what "plan B" should actually be if the "plan A" of solving the alignment problem does not succeed.

For many people the answer is simply "then we will all die". But let's suppose for a moment that we don't get off the hook that easily, and that AGI doomsday happens in such a way that there are survivors.

What can one do right now to maximize one's chances of being among the survivors if unaligned AGI is created? What do you think an AI prepper would be like, compared and contrasted with, for example, a nuclear war prepper?


There are a fair number of people who like the movie aesthetic of an underground bunker full of supplies, from which they will eventually emerge into the Fallout video game. In the context of AI risk, this is not remotely realistic. AI emergence scenarios mostly separate into worlds where everyone lives and worlds where everyone dies; near-misses are dystopias and weirdtopias, not most-people-die-but-not-everyone. So I don't think there's likely to be anything promising that would fall under the label "prepping", as conventionally understood.

I meant prepping metaphorically, in the sense of being willing to delve into the specifics of a scenario most other people would dismiss as unwinnable. The reason I posted this is that, while it's obvious the bunker approach isn't really the right one, I'm drawing a blank on what the right approach would even look like.

That being said, I figured one class of scenario might look identical to nuclear or biological war, only facilitated by AI. Are you saying scenarios where many but not all people die due to political/economic/environmental consequences of AI emergence are unlikely enough to disregard?

So let's talk about dystopias/weirdtopias. Do you see any categories into which these can be grouped? The question then becomes: who stands to lose the most, and who the least, under the various types of scenarios?

I figured one class of scenario might look identical to nuclear or biological war, only facilitated by AI.

After the nuclear war caused by the AI, there's likely still an unaligned AI out there. That AI is likely going to kill the survivors of the nuclear war. 

Few people have done this sort of thinking here, because this community mostly worries about risks from general, agentic AI and not narrow AI. We worry about systems that will decide to eliminate us, and have the superior intelligence to do so. Survivor scenarios are much more likely from risks of narrow AI accidentally causing major disaster. And that's mostly not what this community worries about.

What kind of AGI doomsday scenario do you have where there are human survivors? If you don't have a scenario, it's difficult to talk about how to prep for it.

I guess scenarios where humans occupy a niche analogous to animals that we don't value but either cannot exterminate or choose not to.

Humans need a lot of space to grow food and live in, space that an AGI could use for other things. Humans don't do "niche" well.

I think it is likely that in the case of AGI / ASI, removing humanity from the equation will be either a side effect of it pursuing its goals (it will take resources) or an instrumental goal in itself (for example, to remove risk or to avoid spending resources on defenses later).

In both cases it will likely optimize the trade-off between the resources spent on eliminating humanity and the effectiveness of the end result. That means there may be some survivors, probably not many, and technologically knocked back to the stone age at best.

Bunkers likely won't work. Living with stone tools and without electricity, in a hut in a forest far from any city and with no minable resources underfoot, may work for some time. The AGI likely won't bother tracking down remote camps of one or a few humans with no signs of technology in use.

Of course, that only holds if the AGI doesn't find a low-resource way to eliminate all humans, no matter where they are. That is possible, and in that case nothing will help; no prepping is possible.

I'm not sure that's the default, though. For very specialized options, like creating nanotechnology to wipe out humans in a synchronized way, it might well find that the time or computational resources needed to develop it through research and simulation are too great, and that it is not worth the cost compared to options that need fewer resources. Computational resources are not free and costless for an AGI (it won't pay in money, but it will do less research and thinking in other areas while working on this, and it may delay its plans by going this route). It is fairly likely it will use a less sophisticated but resource-efficient and fast solution that may not kill all humans, but enough.
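To put that trade-off in rough terms (a toy framing of my own, with made-up symbols, not a worked-out model): the AGI prefers the cheap, imperfect plan whenever

$$C_{\text{cheap}} + L_{\text{survivors}} < C_{\text{thorough}},$$

where $C_{\text{cheap}}$ is the cost (compute, experiments, time) of a fast plan that kills most humans, $C_{\text{thorough}}$ is the cost of a plan guaranteed to kill everyone, and $L_{\text{survivors}}$ is the expected loss it assigns to leaving some survivors around (future interference plus cleanup later). The claim above is just that for plans requiring genuinely new technology, $C_{\text{thorough}}$ can be large enough that this inequality holds.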

Edit: I want to add a reason why I think that. One might expect a very fast ASI to quickly develop a perfect way to remove all humans, with no one left (assuming that is the most sensible thing for it to do, whether to remove risk, to claim all needed resources, or for other instrumental reasons). I think this is wrong because even for an ASI there are bottlenecks. A sensible and quick plan that relies on advanced technology, such as nanomachinery or engineered proteins, requires research beyond what we humans already have and know. That means it needs more data and observations, and perhaps simulations, to gather that knowledge. An ASI might be very quick at reasoning, recalling, and thinking, but it will still be limited by data input, by the experimental machinery it can access, and by the computational power available for very detailed simulations. So it won't produce such a plan in detail in an instant by pure thought. It will therefore weigh the time and resources needed to work out the details and gather the data, which gives it an incentive to adopt a simpler, faster plan that removes most humans rather than a more complex one that removes all of them. An ASI should be good at optimizing such things, not over-focusing on instrumental goals (as often depicted in fiction).