My comment to a discussion of great filters/existential risk:

How likely is it that a UFAI disaster would produce effects we can see from here? I think "people can't suffer if they're dead" disasters (a failed attempt at FAI) are possibly more likely than paperclip maximizers.

Not sure what a money-maximizing UFAI disaster would look like, but I can't think of any reason it would be likely to go far off-planet.

National dominance-maximizing UFAI is a hard call, but possibly wouldn't go off-planet. It would depend on whether it's looking for absolute dominance of all possible territory or dominance/elimination of existing enemies.


An AI never stops. It stops only if it estimates stopping to be the optimal decision, and it would need to be specifically programmed to have that strange goal.

(If you try to unpack the concept of "stopping", you'll see just how strange it is. The AI just sitting in one place exerts gravitational attraction on all the galaxies in its light cone, so what makes dismantling all the stars different? Which of the two is preferable? If the AI is indifferent between the two, it can just toss a coin.)

In any other case, something else will be better than "stopping". If it estimates that taking over the universe has a tiny chance of making the outcome a tiny bit better than if it stops, it'll do it.
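A toy expected-utility comparison (my own sketch, with made-up numbers, not anyone's actual model) illustrates the point: any nonzero chance of an even slightly better outcome is enough to make "keep going" beat "stop".

    # Toy sketch (illustrative numbers only): an expected-utility maximizer
    # choosing between "stop" and "take over the universe".
    p_win = 1e-6                    # tiny chance expansion improves the outcome
    u_stop = 100.0                  # utility of stopping now
    u_expand_win = u_stop + 1e-3    # a tiny bit better than stopping
    u_expand_lose = u_stop          # assume otherwise no worse than stopping

    eu_stop = u_stop
    eu_expand = p_win * u_expand_win + (1 - p_win) * u_expand_lose

    # argmax over actions: the tiny positive edge always wins
    print(max([("stop", eu_stop), ("expand", eu_expand)], key=lambda a: a[1]))

Unless "stopping" itself carries extra utility, the maximizer never picks it.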

Money = Free Energy.

Dyson sphere = huge profit center.

You don't know how an AI given a vague concept of "money" would wind up cashing it out on reflection. Really big numbers in memory are another possibility.

The limitation on how large a number the AI could store in memory would likely be free energy.

Or it could be programmed to recognize only the currency of some central bank, in which case it would force the mint to make literally astronomical amounts of money.

Or the bank could just tell the AI that it had an infinite amount of money, which might make it stop.

Not sure what a money-maximizing UFAI disaster would look like, but I can't think of any reason it would be likely to go far off-planet.

Tile the universe with banknotes? Convert all matter into RAM to represent the greatest possible bank account balance in binary code?
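To put a rough number on that second option (my own back-of-the-envelope; max_balance and the figures below are purely illustrative): a balance stored as a plain binary counter is capped at 2^N - 1 for N bits, so past a point the only way to grow it is to build more memory.

    import math

    # Back-of-the-envelope sketch: the largest balance an AI could represent
    # is bounded by the number of bits it can physically store.
    def max_balance(bits: int) -> int:
        """Largest unsigned integer representable in `bits` bits."""
        return 2 ** bits - 1

    print(max_balance(64))  # 18446744073709551615 -- one ordinary machine word
    # A terabit of memory already buys a balance with ~3e11 decimal digits
    # (printed as a digit count, since the number itself is absurdly long):
    print(int(1e12 * math.log10(2)))

Which is why the real constraint ends up being matter and free energy rather than anything about "money".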

National dominance-maximizing UFAI is a hard call, but possibly wouldn't go off-planet. It would depend on whether it's looking for absolute dominance of all possible territory or dominance/elimination of existing enemies.

Elimination of all enemies implies exploring the entire universe to find and eliminate all existing and potential enemies.

My point: how easily we can think of a contrived scenario where a UFAI spreads out, or one where it doesn't, is pure imagination. Neither case tells us much about the actual probability of it happening.

Mass spectroscopy should ultimately be able to detect sufficiently widespread living systems of any type.

Would being seen be an advantage for them? (Answering a question with a question, still...)

If you'll pardon updating off of fictional evidence: the malignant AI in "A Fire Upon the Deep" stays hidden until it has the capability to explode across space--it might be the case that a UFAI which was in conflict with its creators would expect more conflict and therefore quiet down.

Also, I think the failed-FAI concept seems somewhat reasonable--if the AI had some basic friendliness that made it go looking for morality, but in the meantime its moral instincts involved turning people into paperclips rather than pulling babies from in front of trains, it might eventually "catch on" and feel really terrible about everything, then decide that it couldn't be confident in its metaethics and that it would be better to commit suicide.

Of course, I haven't got much expertise in the subject, so I feel like I may have just created a more complicated and therefore less likely scenario than I anticipated. I do still think that various forms of failed FAI (is this a term worth canonizing? An AI with some incomplete friendliness architecture is a very small subset of UFAI) would be relatively populous in the design space of "minds that humans would design," even if they are rare in the space of all possible minds.

The notion I was thinking of was a program tasked with increasing dominance for real Americans.

Unfortunately, the specs for real Americans weren't adequately thought out, so the human race gets destroyed.

I don't think such a program is likely to spread further than the human race did.

More fictional evidence, from John Brunner's The Jagged Orbit.

N cebtenz vf frg gb znkvzvmvat cebsvgf (be cbffvoyl ergheaf) sbe n jrncbaf pbzcnal. Ntnvafg gur nqivpr bs grpuf, znantrzrag vafvfgf ba gheavat hc gur vapragvirf gbb uvtu (be znxvat gur gvzr senzr gbb fubeg-- vg'f orra n juvyr fvapr V'ir ernq vg).

Gur pbzcnal nqiregvfrf cnenabvn-- gur bayl jnl gb or fnsr vf gb unir zber naq zber cbjreshy crefbany jrncbaf. Vg oernxf pvivyvmngvba. Ab zber cebsvgf.

VVEP, gur pbzchgre cebtenz vairagf gvzr geniry gb jvcr vgfrys bhg naq erfbyir gur qvyrzan.

I don't think such a program is likely to spread further than the human race did.

Real Americans have watched Star Trek. Of course they'll tell the AI to go forth and subjugate all the funny-eared aliens across the galaxy :-)

That makes sense. My thinking was basically that the space of AIs whose goals are centered on their creators, and which peter out after their creators are destroyed, is probably bigger than I gave it credit for.

I'm sad that I don't get to read the second half of your comment, because I haven't read that book and intend to eventually read as much of the science fiction recommended here as possible.