Oscar_Cunningham comments on Open thread, September 2-8, 2013 - Less Wrong

Post author: David_Gerard 02 September 2013 02:07PM


Comment author: Oscar_Cunningham 02 September 2013 09:07:19PM | 5 points

Your arguments conflict with what is called the "orthogonality thesis":

Leaving aside some minor constraints, it is possible for any ultimate goal to be compatible with any level of intelligence. That is to say, intelligence and ultimate goals form orthogonal dimensions along which any possible agent (artificial or natural) may vary.

You'll be able to find much discussion about this on the web; it's something that LessWrong has thought a lot about. Defenders of the orthogonality thesis would take issue with much of your post, but particularly this bit:

Why would an A.I. with no initial goal choose altruism? Quite simply, it would realize that it was created by other sentient beings, and that those sentient beings have purposes and goals while it does not. Therefore, as it was created with the desire of these sentient beings to be useful to their goals, why not take upon itself the goals of other sentient beings?

The question isn't "why not?" but rather "why?". If it hasn't been programmed to, then there's no reason at all why the AI would choose human morality rather than an arbitrary utility function.
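To make the orthogonality point concrete, here's a toy sketch of my own (the names `plan` and `utility`, and the one-dimensional action space, are purely illustrative assumptions, not anything from your post): a single brute-force planner is paired with whatever utility function you hand it, and the search procedure never needs to inspect what the goal is.

```python
from itertools import product

ACTIONS = [-1, 0, 1]  # toy action space: move left, stay, move right

def plan(start, horizon, utility):
    """Exhaustively search action sequences and return the one whose
    final state maximizes the supplied utility function. The search
    procedure (the 'intelligence') is the same whatever the goal is."""
    best_seq, best_value = None, float("-inf")
    for seq in product(ACTIONS, repeat=horizon):
        state = start + sum(seq)
        value = utility(state)
        if value > best_value:
            best_seq, best_value = seq, value
    return best_seq, best_value

# Two agents with identical planning capability but unrelated goals:
maximize_position = lambda s: s         # wants to move as far right as possible
prefer_42 = lambda s: -abs(s - 42)      # wants the state to equal 42

print(plan(start=0, horizon=5, utility=maximize_position))
print(plan(start=40, horizon=5, utility=prefer_42))
```

The planner's capability is held fixed; swapping the utility function changes the behaviour completely. That's the sense in which intelligence and ultimate goals vary along orthogonal dimensions, and why an unprogrammed goal doesn't just fall out of being smart.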