All of greg's Comments + Replies

Sure, although you could rephrase "disempowerment" as "current status quo", which I imagine most people would be quite happy with.

The delta between [disempowerment/status quo] and [extinction] appears vast (essentially infinite). The conclusion that Scenario 6 is "somewhat likely" and would be "very bad" doesn't seem to consider that delta.

Matthew Barnett
I agree with you here to some extent. I'm much less worried about disempowerment than extinction. But the way we get disempowered could also be really bad. Like, I'd rather humanity not be like a pet in a zoo.

I don't understand the logical jump from point 5 to point 6, or at least the probability of that jump. Why doesn't the AI decide to colonise the universe, for example?

If an AI can ensure its survival with sufficient resources (for example, 'living' where humans aren't, e.g. the asteroid belt), then the likelihood of the 5 ➡ 6 transition seems low.

I'm not clear how you're estimating the likelihood of that transition, and what other state transitions might be available.

Matthew Barnett
It could decide to do that. The question is just whether space colonization is performed in the service of human preferences or non-human preferences. If humans control 0.00001% of the universe, and we're only kept alive because a small minority of AIs pay some resources to preserve us, as if we were an endangered species, then I'd consider that "human disempowerment".
greg

Excellent article, very well thought through. However, I think there are more possible outcomes than "AI takeover" that would be worth exploring.

If we assume a superintelligence under human control has an overriding (initial) goal of "survival for the longest possible time", then there are multiple pathways to achieve that reward, of which takeover is one, and possibly not the most efficient.

Why bother? Why would God "take over" from the ants? I think escaping human control is an obvious first step, but it doesn't follow that humans must then be under...

My answer is a little more prosaic than Raemon's. I don't feel at all confident that an AI that already had God-like abilities would choose to literally kill all humans to use their bodies' atoms for its own ends; it seems totally plausible to me that -- whether because of exotic things like "multiverse-wide super-rationality" or "acausal trade" or just "being nice" -- the AI will leave Earth alone, since (as you say) it would be very cheap for it to do so.

The thing I'm referring to as "takeover" is the set of measures an AI would take to make sure that humans...

Raemon
The issue is that the Earth is made of resources the AI will, by default, want to use for its own goals.