JGWeissman comments on Non-orthogonality implies uncontrollable superintelligence - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
That seems obviously true, but what are your motivations for stating it? I was under the impression that people who make the claim accept the conclusion, think it's a good thing, and want to build an AI smart enough to find the "true universal morality" without worrying about all that Friendliness stuff.
It's useful for hitting certain philosophers with. Canonical examples: moral realists sceptical of the potential power of AI.
There are philosophers who believe that any superintelligence will inevitably converge to some true code of morality, and that superintelligence is controllable? Who?
As far as I can tell, it's pretty common for moral realists. More or less, the argument goes:
So, the moral realists believe a superintelligence will converge on true morality. Do they also believe that superintelligence is controllable? I had thought they would believe that superintelligence is uncontrollable, but approve of whatever it uncontrollably does.
Ah, I missed that clause. Yes, that.
Quite a few I know (not naming names, sorry!) who haven't thought through the implications. Hell, I've only put the two facts together recently in this form.