Attempt at the briefest content-full Less Wrong post:
Once AI is developed, it could “easily” colonize the universe. So the Great Filter (preventing the emergence of star-spanning civilizations) must strike before AI could be developed. If AI is easy, we could conceivably have built it already, or we could be on the cusp of building it. So the Great Filter must predate us, unless AI is hard.
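Slightly unpacked, in notation the original doesn’t use: let $A$ = “some past civilization developed strong AI” and $V$ = “we observe star-spanning colonization”. The claim is that $P(V \mid A) \approx 1$; we observe $\neg V$, so $P(A \mid \neg V) \approx 0$, and whatever the Filter is, it must strike before the AI stage.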
I was wondering about that. I agree with the “could”, but is there any discussion of how likely it is that such an AI would actually decide to do that?
Let’s take it as a given that successful development of FAI will eventually lead to lots of colonization. But what about non-FAI? It seems like the most “common” cases of UFAI would be mistakes made while trying to create an FAI. (In a species with psychology similar to ours, other contenders might be mistakes made while trying to create military AI, or intentional creation by “destroy the world” extremists or the like.)
But if someone is trying to create an FAI and there is an accident with early prototypes, it seems likely that most of those prototypes would have been programmed with only planet-local goals. Similarly, an intentionally created weapon-AI seems unlikely to be programmed to care about what happens outside its solar system, unless its creators already practice, or are at least attempting, interstellar travel. Creators who care about safety will probably try to limit their AI’s focus, even if imperfectly, both to make reasoning about it easier and to limit damage, and weapons manufacturers will try to limit focus for efficiency.
Now, I realize that a badly done AI could decide to colonize the universe even if its creators didn’t program it for that initially, and that simple goals can have that as an unforeseen consequence (like the prototypical paperclip maximizer). But is there any discussion of how likely that is in a realistic setting? Perhaps the Filter is that the vast majority of AIs limit themselves to their original solar system.
Energy acquisition is a useful instrumental subgoal for nearly any final goal, and it is not confined to the home star system. This makes a strong AI that stays local implausible.
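A minimal sketch of why (my notation, not the commenter’s): write $V(r) = \max_\pi \mathbb{E}[U \mid \pi, r]$ for the best expected utility achievable with resources $r$. Any policy feasible with $r_1$ remains feasible with $r_2 \ge r_1$ (surplus resources can simply be left unused), so $V$ is nondecreasing in $r$. For almost any final goal $U$, then, an optimal agent never loses by acquiring more resources, and star-system boundaries carry no weight in that calculation.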