JoshuaZ comments on Is friendly AI "trivial" if the AI cannot rewire human values? - Less Wrong

-5 Post author: Alerus 09 May 2012 05:48PM



Comment author: Alerus 09 May 2012 09:00:55PM *  0 points [-]

What is wrong with the statement? The idea I'm trying to convey is that I, as a person now, cannot go and forcefully rewire another person's values. The only means I have to affect them is to be persuasive in argument, or perhaps to be deceptive about certain things to move them toward a different position (e.g., consider the state of politics).

In contrast, one of the concerns for the future is that an AI may have the technological ability to manipulate a person more directly. So the question I'm asking is: is the future technology at the disposal of an AI the only reason it could behave "badly" under such a utility function?

Also, please avoid such comments. I am interested in having this discussion, but alluding to something wrong in what I have posted without saying what you think it is, is profoundly unhelpful and useless to discussion.

Comment author: drethelin 09 May 2012 09:23:15PM 4 points [-]

Consider that humans have modified human values to outcomes as different as Nazism and Jainism.

Comment author: DanArmak 09 May 2012 09:36:15PM 3 points [-]

Consider that every human who ever existed was shaped purely by environment + genes.

Consider how much humans have achieved merely by controlling the environment: converting people to insane religions they are willing to die and kill for, making torturers, "the banality of evil", and so on.

Now imagine what an entity could achieve with all of that plus 1) a complete understanding of how the brain is shaped by the environment, and/or 2) complete control of the environment (via VR, smart dust, whatever) for a human from age 0 onwards.

I think the conservative assumption is that any mind we would recognize as human, and many we wouldn't, could be produced by such an optimization process. You're not limiting your AI at all.