TheAncientGeek comments on Non-orthogonality implies uncontrollable superintelligence - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
There's a certain probability that it would do the right thing anyway, a certain probability that it wouldn't, and so on. The probability of an AGI turning unfriendly depends on those other probabilities, yet MIRI has given very little attention to moral realism/objectivism/convergence.