
Eliezer_Yudkowsky comments on The Fermi paradox as evidence against the likelyhood of unfriendly AI - Less Wrong Discussion

Post author: chaosmage 01 August 2013 06:46PM




Comment author: Eliezer_Yudkowsky 05 August 2013 11:38:40PM 2 points

Agreed, but if both eat galaxies with very high probability, it's still a bit of a lousy explanation. If it were the only explanation, we'd have to go with that update, but it's more likely we're confused.

Comment author: RobbBB 05 August 2013 11:41:26PM 0 points

Agreed. The Fermi Paradox slightly increases the odds that AIs can be programmed to satisfy naturally selected values. But this hypothesis, that FAI is easy relative to UFAI, does almost nothing to explain the Paradox.