muflax comments on The curse of identity - Less Wrong

Post author: Kaj_Sotala · 17 November 2011 07:28PM · 121 points


Comment author: [deleted] 17 November 2011 01:48:17PM 2 points

I don't understand why you call this a problem. If I understand you correctly, you are proposing that people constantly and strongly optimize to obtain signalling advantages. They do so without becoming directly aware of it, which further increases their efficiency. So we have a situation where people want something and choose an efficient way to get it. Isn't that good?

More directly, I'm confused about how you can look at an organism, see that it uses its optimization power in a goal-directed and efficient way (status gains, in this case), and call that problematic, merely because some of these organisms disagree that this is their actual goal. What would you want them to do - be honest and thus handicap their status seeking?

Say you play many games of Diplomacy against an AI, and the AI often promises to be loyal but backstabs you many times to its advantage. You look at the AI's source code and find out that it has backstabbing as a major goal, but the part that talks to people isn't aware of that, so that it can lie better. Would you say that the AI is faulty? That it is wrong and should make the talking module aware of its goals, even though this would cause it to make more mistakes and thus lose more? If not, why do you think humans are broken?

Comment author: Grognor 17 November 2011 02:59:52PM 1 point

Would you say that the AI is faulty?

Yes. It might be doing exactly what it was designed to do, but its designer was clearly stupid or cruel and had different goals than I'd prefer the AI to have.

Extrapolate this to humans. Humans wouldn't care so much about status if it weren't for flaws like scope insensitivity, self-serving bias, etc., as well as simply poor design "goals".

Comment author: [deleted] 17 November 2011 03:38:32PM -1 points

Yes. It might be doing exactly what it was designed to do, but its designer was clearly stupid or cruel and had different goals than I'd prefer the AI to have.

Where are you getting your goals from? What are you, except your design? You are what Azathoth built. There is no ideal you that you should have become, but which Azathoth failed to make.

Comment author: Grognor 17 November 2011 03:46:36PM 1 point

Azathoth designed me with conflicting goals. Subconsciously, I value status, but if I were offered a pill that made me care entirely about making the world better and nothing else, I would take it. Just because "evolution" built that into me doesn't make it bad, but it definitely did not give me a coherent volition. I have determined for myself which parts of humanity's design are counterproductive, based on the thousand shards of desire.

Comment author: fubarobfusco 18 November 2011 05:11:43AM 2 points

if I were to take a pill that made me care entirely about making the world better and nothing else, I would.

Would you sign up to be tortured so that others don't suffer dust specks?

("If we are here to make others happy, what are the others here for?")

Comment author: Gabriel 19 November 2011 02:44:21AM 1 point

A better analogy would be asking about a pill that caused pain asymbolia.

Comment author: Grognor 18 November 2011 10:45:09AM 1 point

Yes.