
JoshuaZ comments on Open thread, Jan. 26 - Feb. 1, 2015 - Less Wrong Discussion

6 Post author: Gondolinian 26 January 2015 12:46AM



Comment author: JoshuaZ 31 January 2015 10:59:13PM 1 point [-]

Does not work. AGI is unlikely to be the Great Filter: an AGI expanding at less than light speed would be visible to us, and expansion at close to light speed is unlikely. Note that if AGI is a serious existential threat, then space colonies will not be sufficient to stop it. Colonization works well against nuclear war, nanotech problems, epidemics, and some astronomical threats, but not against artificial intelligence.

Comment author: G0W51 01 February 2015 02:04:17AM 0 points [-]

Good point about AGI probably not being the Great Filter. I didn't mean that space colonization would prevent existential risks from AI, though, just threats in general.

So, we've established that existential risks (ignoring heat death, if it counts as one) will very probably occur within 1000 years, but can we get more specific?