Will_Newsome comments on What is the best compact formalization of the argument for AI risk from fast takeoff? - Less Wrong Discussion

11 Post author: utilitymonster 13 March 2012 01:44AM

Comment author: Will_Newsome 13 March 2012 09:54:36PM 6 points

(I am currently writing up a post for my personal blog listing all the requirements that must jointly hold for SIAI to be the best choice for charitable giving.)

Be careful: it's very common for people to gerrymander such probability estimates by unjustifiably assuming that certain terms are completely independent or completely dependent. (This is true even when the "probability estimate" is only implicit in the qualitative structure of the argument.) If people think that's what you're doing, they're likely to disregard your conclusions even where those conclusions could have been supported by a weaker argument.
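The sensitivity being warned about can be made concrete with a toy calculation (a hedged sketch; the number of claims and the probabilities are illustrative and not from the comment):

```python
# Toy illustration: how the independence assumption drives a
# conjunctive estimate. Suppose an argument requires 10 claims,
# each judged 80% likely on its own.
p = 0.8
n = 10

# Assuming complete independence, the conjunction multiplies out
# and collapses toward zero as n grows:
independent = p ** n  # ~0.107

# Assuming complete dependence (the claims stand or fall together),
# the conjunction is just the shared marginal:
dependent = p  # 0.8

print(independent, dependent)
```

The true conjunctive probability for partially correlated claims lies somewhere between these extremes, which is why quietly picking either endpoint can swing the conclusion by nearly an order of magnitude.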

Comment author: gwern 13 March 2012 10:16:58PM 1 point

I've just pointed out something very similar.