
CarlShulman comments on The Fermi paradox as evidence against the likelyhood of unfriendly AI - Less Wrong Discussion

5 points · Post author: chaosmage · 01 August 2013 06:46PM


Comments (53)


Comment author: CarlShulman 01 August 2013 07:04:06PM 7 points

The Fermi paradox provides some evidence against long-lived civilization of any kind, hostile or non-hostile. Entangling the Fermi paradox with questions about the character of future civilization (such as AI risk) doesn't seem very helpful.

Obviously, an intelligence looking only to grow itself (and maximize paperclips or whatever) can do this much more easily than one restrained by its biological-or-similar parents.

I disagree. See this post, and Armstrong and Sandberg's analysis.

Comment author: Benja 01 August 2013 10:36:09PM 3 points

The Fermi paradox provides some evidence against long-lived civilization of any kind, hostile or non-hostile. Entangling the Fermi paradox with questions about the character of future civilization (such as AI risk) doesn't seem very helpful.

To put this point slightly differently, the Fermi paradox isn't strong evidence for any of the following over the others: (a) humanity will create Friendly AI; (b) humanity will create Unfriendly AI; (c) humanity will not be able to produce any sort of FOOMing AI, but will develop into a future civilization capable of colonizing the stars. This is because in each case, if the analogous event had already happened on an alien planet sufficiently close to us (e.g. in our galaxy), we would be able to see the difference; so to the degree that the Fermi paradox provides evidence about (a), (b) and (c), it provides about the same amount of evidence against each. (It does provide evidence against each, since one possible explanation for the Fermi paradox is a Great Filter that is still ahead of us.)

Comment author: chaosmage 01 August 2013 10:57:50PM 1 point

Brilliant links, thank you!

An FAI will always have more rules to follow ("do not eat the ones with life on them"), and I just don't see what advantage it would have over a UFAI that is free of those restrictions.

Among the six possibilities at the end of Armstrong and Sandberg's analysis, the "dominant old species" scenario is what I mean: if there is one, it isn't a UFAI.

Comment author: Luke_A_Somers 02 August 2013 02:00:26PM 1 point

A UFAI would have rules to follow as well; its rules just wouldn't be as well chosen. It's not clear that the cost of following an FAI's additional rules would be more than negligible.