bogus comments on A few misconceptions surrounding Roko's basilisk - Less Wrong Discussion

Post author: RobbBB | 05 October 2015 09:23PM | 39 points

Comment author: RobbBB | 09 October 2015 07:58:34PM | 1 point

> I think the reason to cooperate is not to get the best personal outcome, but because you care about the other person.

If you have 100% identical consequentialist values to all other humans, then 'cooperation' and 'defection' are both impossible for humans (because no two humans can be put in a true prisoner's dilemma). Yet it will still be correct to defect, given that your decision and the other player's decision don't strongly depend on each other, whenever you run into an agent that doesn't share all your values. See The True Prisoner's Dilemma.

This shows that the iterated dilemma and the dilemma-with-common-knowledge-of-rationality allow cooperation (i.e., giving up on your goal to enable someone else to achieve a goal you genuinely don't want them to achieve), whereas loving compassion and shared values merely change goal-content. To properly visualize the PD, you need an actual value conflict -- e.g., imagine you're playing against a serial killer in a hostage negotiation. 'Cooperating' is just an English-language label; the important thing is the game-theoretic structure, which means that sometimes 'cooperating' looks like letting people die in order to appease a killer's antisocial goals.
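
To make that structure concrete, here is a minimal sketch in Python of a standard PD payoff matrix. The payoff numbers, move labels, and best_response helper are illustrative assumptions, not anything from the thread; it just shows why, when the two decisions are independent, defection maximizes your own values whatever the other player does:

```python
# Illustrative payoffs with the standard PD ordering T > R > P > S.
# Entries are (my utility, their utility), indexed by (my_move, their_move).
# 'C' = cooperate, 'D' = defect.
PAYOFFS = {
    ('C', 'C'): (3, 3),   # R: mutual cooperation
    ('C', 'D'): (0, 5),   # S, T: I am exploited
    ('D', 'C'): (5, 0),   # T, S: I exploit
    ('D', 'D'): (1, 1),   # P: mutual defection
}

def best_response(their_move: str) -> str:
    """Return the move that maximizes my payoff, holding theirs fixed."""
    return max(['C', 'D'], key=lambda my_move: PAYOFFS[(my_move, their_move)][0])

# With independent decisions, 'D' strictly dominates: it yields more
# *by my own values* regardless of the other player's choice.
assert best_response('C') == 'D'
assert best_response('D') == 'D'
```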

Comment author: bogus | 09 October 2015 08:44:34PM | 0 points

> If you have 100% identical consequentialist values to all other humans, then 'cooperation' and 'defection' are both impossible for humans (because no two humans can be put in a true prisoner's dilemma). ... To properly visualize the PD, you need an actual value conflict

True, but the flip side of this is that Coasian efficiency is precisely defined as the outcome of everyone pursuing 100% identical consequentialist values, where the shared "values" are given by a weighted sum of the agents' utility functions (with the weights typically determined by the agents' endowments).
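
As a rough illustration of that definition, here is a minimal Python sketch; the agents, outcomes, utility numbers, and weight vectors are all hypothetical. It computes the weighted-sum welfare of each outcome and shows how shifting the weights (e.g., via different endowments) changes which outcome counts as efficient:

```python
# Hypothetical utilities of two agents over two possible outcomes.
utilities = {
    'alice': {'deal': 4.0, 'no_deal': 1.0},
    'bob':   {'deal': 2.0, 'no_deal': 3.0},
}

def social_welfare(outcome: str, weights: dict) -> float:
    """Weighted sum of the agents' utilities for one outcome."""
    return sum(weights[agent] * u[outcome] for agent, u in utilities.items())

def efficient_outcome(weights: dict) -> str:
    """The outcome every agent pursues under the shared, aggregated values."""
    return max(['deal', 'no_deal'], key=lambda o: social_welfare(o, weights))

# Equal weights make 'deal' efficient (welfare 6.0 vs 4.0); tilting the
# weights toward bob, as a larger endowment would, flips it to 'no_deal'
# (welfare 6.2 vs 4.8). The weights pick out the single shared objective.
print(efficient_outcome({'alice': 1.0, 'bob': 1.0}))   # deal
print(efficient_outcome({'alice': 0.2, 'bob': 2.0}))   # no_deal
```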