Stuart_Armstrong comments on General purpose intelligence: arguing the Orthogonality thesis - Less Wrong

Post author: Stuart_Armstrong 15 May 2012 10:23AM




Comment author: Wei_Dai 19 May 2012 07:41:35PM * 2 points

"All rational beings will be moral, but this paper worries me that AI, while efficient, may not end up being rational. Maybe it's worth worrying about."

Why not argue for this directly, instead of making a much stronger claim ("very unlikely" rather than "may not")? If you make a claim that's too strong, people might dismiss you instead of recognizing that a weaker version of the claim could still be valid. Or they could notice holes in your claimed position and be too busy thinking of counterarguments to have the thoughts you're hoping for.

(But take this advice with a big grain of salt since I have little idea how academic philosophy works in practice.)

Comment author: Stuart_Armstrong 22 May 2012 12:40:49PM * 0 points

Actually scratch that and reverse it - I've got an idea how to implement your idea in a nice way. Thanks!