CarlShulman comments on What I would like the SIAI to publish - Less Wrong

27 Post author: XiXiDu 01 November 2010 02:07PM




Comment author: CarlShulman 04 November 2010 07:49:15PM *  5 points [-]

So, I think that the formalization will lead to the conclusion that "we can NOT confidently say, now, that: Building advanced AGI without a provably Friendly design will almost certainly lead to bad consequences for humanity" and that "we can also NOT confidently say, now, that: Building advanced AGI without a provably Friendly design will almost certainly NOT lead to bad consequences for humanity."

I agree with both those statements, but think the more relevant question would be:

"Conditional on it turning out, to the enormous surprise of most everyone in AI, that this AGI design is actually very close to producing an 'artificial toddler', what is the sign of the expected effect on the probability of an OK outcome for the world, long-term and taking into account both benefits and risks?"
