
XiXiDu comments on Non-orthogonality implies uncontrollable superintelligence - Less Wrong Discussion

14 Post author: Stuart_Armstrong 30 April 2012 01:53PM



Comment author: XiXiDu 30 April 2012 03:17:37PM 5 points

...if you claim that any superintelligence will inevitably converge to some true code of morality, then you are also claiming that no measures can be taken by its creators to prevent this convergence.

...if you claim that any superintelligent oracle will inevitably return the same answer given the same question, then you are also claiming that no measures can be taken by its creators to make it return a different answer.

Comment author: khafra 30 April 2012 04:07:45PM 3 points

Sounds uncontroversial to me. I wouldn't expect to be able to create a non-broken AI, even a comparatively trivial one, that thinks 1+1=3. On the other hand, I do think I could create comparatively trivial AIs that leverage their knowledge of arithmetic to accomplish widely varying ends. Simultaneous Localization and Mapping, for example, works for a search-and-rescue bot or a hunt/kill bot.
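The point above, that one navigation capability serves widely varying ends, can be sketched with a toy planner. This is a hypothetical illustration (plain breadth-first search on a grid, not actual SLAM); the agent names and grid are invented for the example. Only the goal cell differs between the two "agents":

```python
from collections import deque

def shortest_path(grid, start, goal):
    """BFS shortest path on a 4-connected grid; '#' cells are walls.

    Returns the path as a list of (row, col) cells, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        (r, c), path = frontier.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(((nr, nc), path + [(nr, nc)]))
    return None

grid = ["....",
        ".##.",
        "...."]

# The same planner serves opposite ends: the capability is goal-agnostic.
rescue_route = shortest_path(grid, (0, 0), (2, 3))  # navigate to a survivor
hunt_route = shortest_path(grid, (0, 0), (0, 3))    # navigate to a target
```

The planner neither knows nor cares which purpose the route serves; that fact is the whole of khafra's point about capabilities being separable from ends.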

Comment author: Stuart_Armstrong 30 April 2012 06:14:26PM 3 points

Not exactly true... You need to conclude "can be taken by its creators to make it return a different answer while it remains an Oracle". With that caveat inserted, I'm not sure what your point is... Depending on how you define the terms, either your implication is true by definition, or the premise is agreed to be false by pretty much everyone.

Comment author: XiXiDu 30 April 2012 06:32:41PM 3 points [-]

You need to conclude "can be taken by its creators to make it return a different answer while it remains an Oracle". With that caveat inserted, I'm not sure what your point is...

That was my point. If you accept the premise that superintelligence implies the adoption of some sort of objective moral conduct, then it is no different from an oracle returning correct answers. You can't change that behavior and retain superintelligence; you'd end up with a crippled intelligence.

I was just giving an analogous example that highlights the tautological nature of your post. But I suppose that was your intention anyway.

Comment author: Stuart_Armstrong 30 April 2012 06:58:53PM 3 points

Ah, ok :-) It just felt like it was pulling intuitions in a different direction!