I don't understand the logical jump from point 5 to point 6, or at least the probability of that jump. Why doesn't the AI decide to colonise the universe, for example?
If an AI can ensure its survival with sufficient resources (for example, 'living' where humans aren't, such as the asteroid belt), then the likelihood of the 5 ➡ 6 transition seems low.
I'm not clear how you're estimating the likelihood of that transition, and what other state transitions might be available.
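To make that framing concrete, here's a minimal sketch of what I mean by competing state transitions. The exits and probabilities below are arbitrary placeholders of my own, not estimates from the article; the point is only that P(5 ➡ 6) has to be weighed against the other ways state 5 could resolve.

```python
# Minimal sketch of the state-transition framing. The probabilities are
# arbitrary placeholders, not estimates from the article: the point is
# that 5 -> 6 competes with other possible exits from state 5.
transitions_from_state_5 = {
    "6: takeover / disempower humans": 0.25,
    "expand away from humans (e.g. the asteroid belt)": 0.25,
    "coexist under something like the status quo": 0.25,
    "other outcomes not covered by the article": 0.25,
}

# Sanity check: the listed exits should exhaust the possibilities.
assert abs(sum(transitions_from_state_5.values()) - 1.0) < 1e-9
```

Arguing that scenario 6 is likely then amounts to arguing that its share of this distribution dominates the alternatives, which is the step I don't see spelled out.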
Excellent article, very well thought through. However, I think there are more possible outcomes than "AI takeover" that would be worth exploring.
If we assume a superintelligence under human control has an overriding (initial) goal of "survival for the longest possible time", then there are multiple pathways to achieve that reward, of which takeover is one, and possibly not the most efficient.
Why bother? Why would God "take over" from the ants? I think escaping human control is an obvious first step, but it doesn't follow that humans must then be under...
My answer is a little more prosaic than Raemon's. I don't feel at all confident that an AI that already had God-like abilities would choose to literally kill all humans to use their bodies' atoms for its own ends; it seems totally plausible to me that -- whether because of exotic things like "multiverse-wide super-rationality" or "acausal trade", or just because of "being nice" -- the AI would leave Earth alone, since (as you say) it would be very cheap for it to do so.
The thing I'm referring to as "takeover" is the set of measures that an AI would take to make sure that humans...
Sure, although you could rephrase "disempowerment" as "current status quo", which I imagine most people would be quite happy with.
The delta between [disempowerment/status quo] and [extinction] appears vast (essentially infinite). The conclusion that Scenario 6 is "somewhat likely" and would be "very bad" doesn't seem to consider that delta.
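To put toy numbers on that delta (my own, on an arbitrary scale, purely for illustration):

```python
# Toy values on an arbitrary scale, purely to illustrate the delta;
# they are not estimates of anything in the article.
value_status_quo   = 0.0     # baseline
value_disempowered = -10.0   # bad, but humanity continues
value_extinction   = -1e12   # stands in for an effectively unbounded loss

delta_disempowerment = value_status_quo - value_disempowered   # 10
delta_extinction     = value_disempowered - value_extinction   # ~1e12

# Labelling both outcomes "very bad" hides how much the second gap dwarfs the first.
print(delta_extinction / delta_disempowerment)
```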