
TheAncientGeek comments on Non-orthogonality implies uncontrollable superintelligence - Less Wrong Discussion

14 Post author: Stuart_Armstrong 30 April 2012 01:53PM



Comment author: TheAncientGeek 30 September 2013 05:27:10PM 1 point

There's a certain probability that it would do the right thing anyway, a certain probability that it wouldn't, and so on. The probability of an AGI turning unfriendly depends on those other probabilities, yet MIRI has given very little attention to moral realism/objectivism/convergence.