Will_Newsome comments on Muehlhauser-Goertzel Dialogue, Part 1 - Less Wrong Discussion

Post author: lukeprog 16 March 2012 05:12PM

Comment author: Will_Newsome 19 March 2012 03:05:04AM

Then you say that it implies a further controversial conclusion that many around here disagree with

I'm not quite sure that many around here disagree with it as such. I may be misinterpreting User:timtyler, but the claim isn't necessarily that arbitrary superintelligences will contribute to "moral progress"; the claim is that the superintelligences actually likely to be developed some decades down the line are likely to contribute to "moral progress". Presumably, if SingInst's memetic strategies succeed or if the sanity waterline rises, this would at least be a reasonable expectation, especially given widely acknowledged uncertainty about the exact extent to which value is fragile and about which kinds of AI architectures are likely to win the race. This argument is somewhat different from the usual "AI will necessarily heed the ontologically fundamental moral law" argument, and I'm pretty sure User:timtyler agrees that caution is necessary when working on AGI.