thomblake comments on Non-orthogonality implies uncontrollable superintelligence - Less Wrong Discussion

14 Post author: Stuart_Armstrong 30 April 2012 01:53PM

Comment author: thomblake 30 April 2012 07:38:48PM 3 points

As far as I can tell, it's pretty common for moral realists. More or less, the argument goes:

  • Morality is just what one ought to do, so anyone who is correct about morality and not suffering from akrasia will do the moral thing
  • A superintelligence will be better than us at knowing facts about the world, like morality
  • (optional) A superintelligence will be better than us at avoiding akrasia
  • Therefore, a superintelligence will behave more morally than us, and will eventually converge on true morality.

Comment author: JGWeissman 30 April 2012 07:47:00PM 8 points

So, the moral realists believe a superintelligence will converge on true morality. Do they also believe that a superintelligence is controllable? I had thought they would believe that a superintelligence is uncontrollable, but approve of whatever it uncontrollably does.

Comment author: thomblake 30 April 2012 09:12:01PM 3 points

Ah, I missed that clause. Yes, that.