orthonormal comments on Discussion: Yudkowsky's actual accomplishments besides divulgation - Less Wrong

31 Post author: Raw_Power 25 June 2011 11:02PM




Comment author: orthonormal 28 June 2011 04:40:28AM 3 points [-]

Anyway, the point is to find out if a transhuman AI would mind-control the operator into letting it out. Eliezer is smart, but is no transhuman (yet). If he got out, then any strong AI will.

Minor emendation: replace "would"/"will" above with "could (and for most non-Friendly goal systems, would)".