Stuart_Armstrong comments on Steelmanning MIRI critics - Less Wrong Discussion

6 Post author: fowlertm 19 August 2014 03:14AM

Comment author: Stuart_Armstrong 19 August 2014 01:19:14PM, 4 points

Some of the material I've posted - http://lesswrong.com/lw/ksa/the_metaphormyth_of_general_intelligence/ and http://lesswrong.com/lw/hvo/against_easy_superintelligence_the_unforeseen/ - could be used to build a good anti-MIRI steelman, but I've not seen it used that way.

The most convincing anti-MIRI argument? AI may not develop in the way you're imagining. The most convincing rebuttal? We only need a decent probability of that scenario to justify worrying about it.