Once AI is developed, it could "easily" colonise the universe.
I was wondering about that. I agree with the "could", but is there any discussion of how likely it is that an AI would actually decide to do that?
Let’s take it as a given that successful development of FAI will eventually lead to lots of colonization. But what about non-FAI? It seems like the most “common” cases of UFAI would be mistakes made while trying to create an FAI. (In a species with a psychology similar to ours, other contenders might be mistakes made while trying to create military AI, or intentional creation by “destroy the world” extremists or something.)
But if someone is trying to create an FAI and there is an accident with early prototypes, it seems likely that most of those prototypes would be programmed with only planet-local goals. Similarly, it doesn’t seem likely that an intentionally created military AI would be programmed to care about what happens outside the solar system, unless its creators belong to a civilization that already practices, or is at least attempting, interstellar travel. Creators who care about safety will probably try to restrict an AI’s focus, even if imperfectly, both to make reasoning about it easier and to limit potential damage, and weapons manufacturers will try to restrict its focus for efficiency.
Now, I realize that a badly done AI could decide to colonize the universe even if its creators didn’t program that goal into it initially, and that simple goals can have colonization as an unforeseen consequence (like the prototypical paperclip maximizer). But is there any discussion of how likely that is in a realistic setting? Perhaps the filter is that the vast majority of AIs limit themselves to their original solar system.


This depends very much on the definition of “original” and on notions of identity. You can’t expect these to behave in a common-sense manner in such a thought experiment.
Sure, but then why would you expect memory and experience to behave in a common-sense manner? (At least, that’s what I think you assumed in your first comment.)
I interpreted the OP as “I’m confused about memory and experience; let’s try a thought experiment about a very uncommon situation just to see what we think would happen”. And your first comment reads to me as “you picked a bad thought experiment, because you’re not describing a common situation”. That seems to miss the point entirely: the whole purpose of the thought experiment was to investigate the consequences of something very different from the situations where “common sense” has real experience to rely on.
The part about torturing children I don’t get at all. Wondering about something seems to me almost the opposite of the philosophy of “doing something because you think you know the answer”. Should we never do thought experiments, because someone might act on mistaken assumptions about the ideas involved? Not thinking about something before doing it sounds like exactly the opposite of the correct strategy.