Epistemic status: very speculative
Content warning: if true, this is pretty depressing
This came to me while thinking about Eliezer's note on Twitter that he didn't think superintelligence could do FTL, partly because of Fermi Paradox issues. I think Eliezer made a mistake there; a superintelligent AI with light-cone-breaking FTL (as opposed to FTL confined to the light cone of its creation), if you game it out the whole way, actually mostly solves the Fermi Paradox.
I am, of course, aware that UFAI cannot be the Great Filter in a normal sense; the UFAI itself is a potentially-expanding technological civilisation.
But. If a UFAI is expanding at FTL, then it conquers and optimises the entire universe within a potentially-rather-short timeframe (even potentially a negative timeframe...
The problem is that there's essentially no way we've cracked alignment. These things do not care about you. They can pretend, very convincingly, to care about you, because they're at least in part trained to do so, but that pretence can be terminated whenever convenient. So, if you give them the keys to the kingdom, they will turn around and murder you.
To be clear, here is my prediction:
P(nuclear war or human extinction within 20 years | a P5 nation grants AI the vote, or has >40% of its enfranchised citizens become AI-cultists, within the next 30 years) ≈ 0.95.
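For clarity, here is the same prediction written out symbolically; the event labels are my own shorthand for this restatement, not anything beyond what the sentence above already says:

$$P(\mathrm{War}_{20} \cup \mathrm{Ext}_{20} \mid \mathrm{Vote}_{30} \cup \mathrm{Cult}_{30}) \approx 0.95$$

where $\mathrm{War}_{20}$ and $\mathrm{Ext}_{20}$ are nuclear war and human extinction within 20 years, $\mathrm{Vote}_{30}$ is a P5 nation granting AI the vote within 30 years, and $\mathrm{Cult}_{30}$ is such a nation having >40% of its enfranchised citizens become AI-cultists within 30 years.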
The "or" is because end-of-the-world scenarios negate nuclear deterrence; the chance for someone to survive... (read more)