Eli Lifland discusses AI risk probabilities here.
In this post, Scott Alexander talks about how everything will change completely, and then says: "There's some chance I'm wrong about a singularity, there's some chance we make it through the singularity, and if I'm wrong about both those things I'd rather give my kid 30 years of life than none at all. Nobody gets more than about 100 anyway and 30 and 100 aren't that different in the grand scheme of things. I'd feel an obligation not to bring kids into a world that would have too much suffering but I think if we die from technological singularity it will be pretty quick. I don't plan on committing suicide to escape and I don't see why I should be not bringing life into the world either." I have never seen a convincing argument for why, "if we die from technological singularity", it would have to "be pretty quick".
Will MacAskill says that "conditional on misaligned takeover, I think like 50/50 chance that involves literally killing human beings, rather than just disempowering them", but "just" being disempowered does not seem like a great alternative, and I do not see why the AI would treat disempowered humans well.
It seems to me that the world into which children are born today has a high likelihood of turning out really bad. Is it still a good idea to have children, taking their perspective into account rather than just treating them as a means of fulfilling their parents' hard-wired preferences?
I am currently not only confused but also quite gloomy, and would be grateful for your opinions. Optimistic ones are welcome, but being realistic is more important.
The arguments for instrumental convergence apply not just to Resource Acquisition as a universal subgoal but also to Quick Resource Acquisition as a universal subgoal. Even if "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else", the sooner it repurposes those atoms the larger a light-cone it gets to use them in. Even if an Unfriendly AI sees humans as a threat and striking "soon" might be off the table, striking "suddenly" is still obviously good tactics. Nuclear war plus protracted conventional war, Skynet-style, makes a great movie, but would be foolish compared to even biowarfare. Depending on what is physically possible for a germ to do (and I know of no reason why "long asymptomatic latent phase", "highly contagious", and "short lethal active phase" isn't a consistent combination, except that you could only reach it by deliberate engineering rather than gradual evolution), we could all be dead before anyone was sure we were at war.
I don't doubt that slow take-off is risky. I rather meant that foom is not guaranteed, and the risk from a not-immediately-omnipotent AI may be more like a catastrophic, painful war.