Kaj_Sotala comments on Why an Intelligence Explosion might be a Low-Priority Global Risk - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (94)
I view this as one of the best arguments against risks from paperclippers. I'm a little concerned that it hasn't been dealt with properly by SIAI folks - aside from a few comments by Carl Shulman on Katja's blog.
I suspect the answer may be something to do with anthropics - but I'm not really certain of exactly what it is.
The Fermi Paradox was considered a paradox even before anybody started talking about paperclippers. And even if we knew for certain that superintelligence was impossible, the Fermi Paradox would still remain a mystery - it's not paperclippers (one possible form of colonizer) in particular that are hard to reconcile with the Fermi Paradox, it's the idea of colonizers in general.
Simply the fact that the paradox exists says little about the likelihood of paperclippers, though it does somewhat suggest that we might run into some even worse x-risk before the paperclippers show up. (What value you attach to that "somewhat" depends on whether you think it's reasonable to presume that we've already passed the Great Filter.)