Kaj_Sotala comments on The curse of identity - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I don't understand why you call this a problem. If I understand you correctly, you are proposing that people constantly and strongly optimize to obtain signalling advantages. They do so without becoming directly aware of it, which further increases their efficiency. So we have a situation where people want something and choose an efficient way to get it. Isn't that good?
More directly, I'm confused about how you can look at an organism, see that it uses its optimization power in a goal-oriented and efficient way (status gains, in this case), and call that problematic merely because some of these organisms disagree that this is their actual goal. What would you want them to do - be honest and thus handicap their status seeking?
Say you play many games of Diplomacy against an AI, and the AI often promised to be loyal to you, but backstabbed you many times to its advantage. You look at the AI's source code and find out that it has backstabbing as a major goal, but the part that talks to people isn't aware of that, so that it can lie better. Would you say that the AI is faulty? That it is wrong and should make the talking module aware of its goals, even though this causes it to make more mistakes and thus lose more? If not, why do you think humans are broken?
For one, status-seeking is a zero-sum game and only indirectly causes overall gains. The world would be a much better place if people actually cared about things like saving the world or even helping others, and put a little thought into it.
Also, mismatches between our consciously-held goals and our behavior cause plenty of frustration and unhappiness, like in the case of the person who keeps stressing out because their studies don't progress.
Why do you want to save the world? To allow people, humans, to do what they like to do for much longer than they would otherwise be able to. Status-seeking is one of those things that people are especially fond of.
Ask yourself, would you have written this post after a positive Singularity? Would it matter if some people were engaged in status games all day long?
What you are really trying to tell people is that they should want to help solve friendly AI because it is universally instrumentally useful.
If you want to argue that status-seeking is bad no matter what, under any circumstances, then you have to explain why that is so. And if you are unable to ground utility in something that is physically measurable, like the maximization of certain brain states, then I don't think that you could convincingly demonstrate it to be a relatively undesirable human activity.
Umm. Sure, status-seeking may be fine once we have solved all possible problems anyway and we're living in a perfect utopia. But that's not very relevant if we want to discuss the world as it is today.
It is very relevant, because the reason why we want to solve friendly AI in the first place is to protect our complex values given to us by the Blind Idiot God.
If we're talking about Friendly AI design, sure. I wasn't.