timtyler comments on Why an Intelligence Explosion might be a Low-Priority Global Risk

Post author: XiXiDu 14 November 2011 11:40AM


Comment author: Gedusa 14 November 2011 12:30:48PM 2 points

Due to the absence of any signs of intelligence out there, especially paper-clippers burning the cosmic commons, we might conclude that unfriendly AI could not be the most dangerous existential risk that we should worry about.

I view this as one of the best arguments against risks from paperclippers. I'm a little concerned that it hasn't been dealt with properly by SIAI folks, aside from a few comments by Carl Shulman on Katja's blog.

I suspect the answer may have something to do with anthropics, but I'm not really certain what it is.

Comment author: timtyler 14 November 2011 12:57:29PM 0 points

Katja's blog post on the topic is here.

The significance of the argument there depends strongly on this, where I made some critical comments.