Kaj_Sotala comments on Why an Intelligence Explosion might be a Low-Priority Global Risk - Less Wrong

Post author: XiXiDu 14 November 2011 11:40AM




Comment author: Gedusa 14 November 2011 12:30:48PM 2 points

Due to the absence of any signs of intelligence out there, especially paper-clippers burning the cosmic commons, we might conclude that unfriendly AI could not be the most dangerous existential risk that we should worry about.

I view this as one of the single strongest arguments against risks from paperclippers. I'm a little concerned that it hasn't been dealt with properly by SIAI folks - aside from a few comments by Carl Shulman on Katja's blog.

I suspect the answer may have something to do with anthropics - but I'm not really certain exactly what it is.

Comment author: Kaj_Sotala 15 November 2011 12:15:15PM * 2 points

I view this as one of the single strongest arguments against risks from paperclippers. I'm a little concerned that it hasn't been dealt with properly by SIAI folks - aside from a few comments by Carl Shulman on Katja's blog.

The Fermi Paradox was considered a paradox even before anybody started talking about paperclippers. And even if we knew for certain that superintelligence was impossible, the Fermi Paradox would still remain a mystery; it's not paperclippers (one possible form of colonizer) in particular that are hard to reconcile with the Fermi Paradox, it's the idea of colonizers in general.

The mere fact that the paradox exists says little about the likelihood of paperclippers, though it does somewhat suggest that we might run into some even worse x-risk before the paperclippers show up. (What value you attach to that "somewhat" depends on whether you think it's reasonable to presume that we've already passed the Great Filter.)