JoshuaZ comments on Open thread, Jan. 26 - Feb. 1, 2015 - Less Wrong Discussion
Does not work. AGI is unlikely to be the Great Filter: an AGI expanding at substantially less than light speed would be visible to us, and expansion at close to light speed is unlikely. Note that if AGI is a serious existential threat, then space colonies will not be sufficient to stop it. Colonization works well against nuclear war, nanotech problems, epidemics, and some astronomical threats, but not against artificial intelligence.
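To see why slow expansion should be visible, here is a back-of-the-envelope sketch (a simplification assuming expansion from a single point at constant speed v for time T, in flat static space, ignoring cosmological expansion). The engulfed region has radius vT, while the expansion's first light has reached radius cT, so the ratio of the volume that can already see the expansion but has not yet been engulfed to the engulfed volume is

\[
\frac{(cT)^3 - (vT)^3}{(vT)^3} = \left(\frac{c}{v}\right)^{3} - 1 .
\]

For v = 0.5c this ratio is 7: most observers in causal contact with such an expansion would see it long before being overtaken. Only expansion at very nearly c keeps the front just behind its own light, which is why a sublight-expanding AGI should already be visible to us if one existed in our past light cone.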
Good point about AGI probably not being the Great Filter. I didn't mean that space colonization would prevent existential risks from AI, though, just threats more generally.
So, we've established that an existential catastrophe (ignoring heat death, if it counts as one) will very probably occur within 1000 years, but can we get more specific?