Given the absence of any signs of intelligence out there, especially of paper-clippers burning the cosmic commons, we might conclude that unfriendly AI could not be the most dangerous existential risk we should worry about.
I view this as one of the single best arguments against risks from paperclippers. I'm a little concerned that it hasn't been dealt with properly by SIAI folks - aside from a few comments by Carl Shulman on Katja's blog.
I suspect the answer has something to do with anthropics - but I'm not certain exactly what it is.
Whilst I really, really like the last picture, it seems a little odd to include it in the article.
Isn't this meant to be a hard-nosed introduction for non-transhumanist/sci-fi people? And doesn't the picture act against that by being slightly sci-fi and weird?