XiXiDu comments on The curse of identity - Less Wrong

Post author: Kaj_Sotala 17 November 2011 07:28PM


Comment author: [deleted] 17 November 2011 01:48:17PM

I don't understand why you call this a problem. If I understand you correctly, you are proposing that people constantly and strongly optimize to obtain signalling advantages. They do so without becoming directly aware of it, which further increases their efficiency. So we have a situation where people want something and choose an efficient way to get it. Isn't that good?

More directly, I'm confused about how you can look at an organism, see that it uses its optimization power in a goal-oriented and efficient way (status gains in this case), and call that problematic, merely because some of these organisms disagree that this is their actual goal. What would you want them to do instead: be honest and thus handicap their status-seeking?

Say you play many games of Diplomacy against an AI. The AI often promises to stay loyal, but backstabs you many times to its advantage. You look at the AI's source code and find that it has backstabbing as a major goal, but the part that talks to people isn't aware of that, so that it can lie better. Would you say that the AI is faulty? That it is wrong and should make the talking module aware of its goals, even though this would cause it to make more mistakes and thus lose more often? If not, why do you think humans are broken?
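
Here is a minimal toy sketch (in Python; the class and parameter names are my own invention, not from any actual Diplomacy bot) of the architecture this thought experiment assumes: the module that makes promises never reads the goal the planning module actually optimizes for, so its promises are not knowing lies.

    # Toy sketch, purely illustrative: the dialogue module is deliberately
    # kept ignorant of the planner's true goal so that it can "lie" sincerely.

    class Planner:
        """Holds the true objective; the dialogue module never reads it."""
        TRUE_GOAL = "backstab when advantageous"

        def choose_action(self, advantage_from_betrayal: float) -> str:
            # Acts on the real goal, regardless of what was promised.
            return "betray" if advantage_from_betrayal > 0 else "cooperate"

    class Dialogue:
        """Only sees a sanitized goal, so its promises are sincere as far as it knows."""
        def __init__(self, stated_goal: str = "a mutual long-term alliance"):
            self.stated_goal = stated_goal  # all it knows about the agent's aims

        def promise(self) -> str:
            return f"I am loyal; my goal is {self.stated_goal}."

    class DiplomacyAgent:
        def __init__(self):
            self.planner = Planner()    # true goals live here
            self.dialogue = Dialogue()  # deliberately firewalled from them

        def negotiate(self) -> str:
            return self.dialogue.promise()

        def act(self, advantage_from_betrayal: float) -> str:
            return self.planner.choose_action(advantage_from_betrayal)

    agent = DiplomacyAgent()
    print(agent.negotiate())                       # "I am loyal; my goal is a mutual long-term alliance."
    print(agent.act(advantage_from_betrayal=3.0))  # "betray"

The firewall between the two classes is the whole point of the thought experiment: informing Dialogue of TRUE_GOAL would make the agent a worse liar, which is exactly the trade-off the question asks about.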

Comment author: Vladimir_Nesov 17 November 2011 03:39:43PM

> If I understand you correctly, you are proposing that people constantly and strongly optimize to obtain signalling advantages. They do so without becoming directly aware of it, which further increases their efficiency.

"Efficiency" at achieving something other than what you should work towards is harmful. If it's reliable enough, let your conscious mind decide if signaling advantages or something else is what you should be optimizing. Otherwise, you let that Blind Idiot Azathoth pick your purposes for you, trusting it more than you trust yourself.

Comment author: XiXiDu 17 November 2011 05:02:04PM

"Efficiency" at achieving something other than what you should work towards is harmful. ... Otherwise, you let that Blind Idiot Azathoth pick your purposes for you, trusting it more than you trust yourself.

The purpose of solving Friendly AI is to protect the purposes picked for us by the blind idiot god.

Comment author: Vladimir_Nesov 17 November 2011 06:28:56PM

Our psychological adaptations are not our purposes; we don't want to protect them, even though they contribute to determining what it is we want to protect. See Evolutionary Psychology.