Quick takes by NickH, 2nd Apr 2024


5 comments

Have I missed something, or is everyone ignoring the obvious problem with a superhuman AI that has a potentially limitless lifespan? It seems to me that such an AI, whatever its terminal goals, must prioritise, as an instrumental goal, seeking out and destroying any alien AI. In simple terms, the greatest threat to it tiling the universe with tiny smiling human faces is an alien AI set on tiling the universe with tiny smiling alien faces, and in a race for dominance every second counts.
The usual arguments about exponential discounting of future value do not seem appropriate for an immortal intelligence.
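As a minimal sketch of why the discounting assumption does the work here, suppose the AI values a constant per-period payoff $r$ that an unopposed rival AI would capture instead, and weights period $t$ by $\gamma^{t}$. With any discount factor $\gamma < 1$ the total loss from tolerating the rival is finite, so a costly race to eliminate it need not be worth the price; with no discounting ($\gamma = 1$), the natural limit for an agent that expects to exist forever, the loss is unbounded, and any finite cost paid now to remove the rival is justified:

$$\sum_{t=0}^{\infty}\gamma^{t}r \;=\; \frac{r}{1-\gamma} \;<\; \infty \quad (0 \le \gamma < 1), \qquad \sum_{t=0}^{\infty} r \;=\; \infty \quad (\gamma = 1,\; r > 0).$$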

This seems like a relatively standard argument, but I also struggle a bit to understand why this is a problem. If the AI is aligned, it will indeed try to spread through the universe as quickly as possible, eliminating all competition, but if it shares our values, that would be good, not bad (and if we value aliens, which I think I do, then we would presumably still somehow trade with them afterwards from a position of security and stability).

I'm not clear on what you're calling the "problem of superhuman AI"?

I've heard much about the problems of misaligned superhuman AI killing us all, but the long view seems to imply that even a "well aligned" AI will prioritise inhuman instrumental goals.

I'm not quite understanding yet. Are you saying that an immortal AGI will prioritize preparing to fight an alien AGI, to the point that it won't get anything else done? Or what?

Immortal expanding AGI is a part of classic alignment thinking, and we do assume it would either go to war or negotiate with an alien AGI if it encounters one, depending on the overlap in their alignment/goals.