Punoxysm comments on Rodney Brooks talks about Evil AI and mentions MIRI [LINK] - Less Wrong Discussion

3 Post author: ike 12 November 2014 04:50AM

Comments (6)

Comment author: Punoxysm 12 November 2014 10:21:30PM 1 point

He is, perhaps, a little glib. And I would not dismiss the possibility of some left-field breakthrough in the next 25 years that brings us close to AI.

But other than that I agree with most of his statements. We are fundamental leaps away from understanding how to create strong AI. Research on safety is probably mostly premature, and worrying that existing projects, like Google's, have the capacity to be dangerous is nonsensical.

Comment author: torekp 13 November 2014 05:29:06PM 1 point

I place most of my probability weight on far-future AI too, but I would not endorse Brooks's call to relax. There is a lot of work to be done on safety, and the chances of successfully engineering safety go up if work starts early. Granted, much of that work needs to wait until it is clearer which approaches to AGI are promising. But not all of it.