

This is a special post for quick takes by ektimo.

Should you cooperate with your almost identical twin in the prisoner's dilemma? 

The question isn't how physically similar they are; it's how similar their logical thinking is. If I can solve a certain math problem in under 10 seconds, are they similar enough that I can be confident they will solve it in under 20? If I hate something, will they at least dislike it? If so, then I would cooperate, because I have a lot of margin in how much I favor us both choosing cooperate over any of the other outcomes. Even if my almost identical twin doesn't favor it quite as much, I can predict they will still choose cooperate given how much I favor it (and, more to the point, that they will approach the problem this same way; if I think they'll think "ha, this sounds like somebody I can take advantage of" or "reason dictates I must defect", then I wouldn't cooperate with them).
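To make the "margin" concrete, here is a minimal sketch. It assumes a standard prisoner's dilemma payoff matrix with illustrative values T=5, R=3, P=1, S=0 (not numbers from the original discussion) and computes the threshold at which cooperating beats defecting, as a function of how likely I think it is that my twin mirrors my choice:

```python
# Sketch: when does cooperating beat defecting against a correlated "twin"?
# Payoff values are illustrative, not from the original discussion.
T, R, P, S = 5, 3, 1, 0  # temptation, mutual cooperation, mutual defection, sucker's payoff

def expected_value(my_move, p_mirror):
    """Expected payoff if the twin makes the same move as me with probability p_mirror."""
    if my_move == "cooperate":
        return p_mirror * R + (1 - p_mirror) * S
    return p_mirror * P + (1 - p_mirror) * T

# Cooperating beats defecting exactly when p_mirror exceeds this threshold:
threshold = (T - S) / ((T - S) + (R - P))  # = 5/7 ≈ 0.714 with these numbers

for p in (0.5, 0.75, 0.95):
    choice = "cooperate" if expected_value("cooperate", p) > expected_value("defect", p) else "defect"
    print(f"p_mirror={p:.2f} -> {choice}")
```

With these assumed numbers, cooperating wins once I'm more than about 71% confident my twin will mirror my choice; the more I favor mutual cooperation over mutual defection (a larger R - P), the lower that threshold gets, which is the margin described above.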

> The question isn't how physically similar they are; it's how similar their logical thinking is.

A lot of discussion around here assumes that physical similarity (in terms of brain structure and weights) implies similarity of logical thinking. Mostly I see people talking about "copies" or "clones" rather than "human twins". For the prisoner's dilemma, the question is "will they make the same decision I will?", and for twins raised together the answer seems more likely to be yes than for strangers.

Note that your examples of thinking are PROBABLY symmetrical: if you don't think (or don't act on) "ha! this is somebody I can take advantage of", they are less likely to as well. With a perfect copy, you CANNOT decide differently, so you cooperate, knowing they will too. With an imperfect copy, you have to make estimates based on what you know of them and what the payoff matrix is.

Thanks for your reply! Yes, I meant identical as in atoms, not as in "human twin". I agree it would also depend on the payoff matrix. My margin would also be increased by the evidentialist wager.

There's an argument for cooperating with any agent in a class of quasi-rational actors, although I don't know how exactly to define that class. Basically, if you predict that the other agent will reason in the same way as you, then you should cooperate.

(This reminds me of Kant's argument for the basis of morality—all rational beings should reason identically, so the true morality must be something that all rational beings can arrive at independently. I don't think his argument quite works, but I believe there's a similar argument for cooperating on the prisoner's dilemma that does work.)

How about a voting system where everyone is given 1000 Influence Tokens to spend across all the items on the ballot? This lets voters exert more influence on the things they care more about. Has anyone tried something like this?

(There could be tweaks: if people avoid spending tokens on items they expect to win anyway, it could redistribute the margin of victory; if they avoid spending on items they expect to lose, it could redistribute tokens from losing items; and so on. But I'm not sure how much that kind of strategizing would actually happen. The more interesting question may be how it influences everyone's sense of what they are doing.)
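As a rough illustration of the mechanism (a sketch of the idea, not any existing system), here is what tallying could look like, assuming each ballot item is a yes/no measure, each voter gets 1000 tokens, and voters may split their tokens across items and sides however they like:

```python
# Minimal sketch of the "Influence Tokens" idea (illustrative only).
# Assumptions: yes/no ballot items, 1000 tokens per voter, tokens split freely.
from collections import defaultdict

TOKENS_PER_VOTER = 1000

def tally(ballots):
    """ballots: list of dicts mapping (item, 'yes'|'no') -> tokens spent."""
    totals = defaultdict(lambda: {"yes": 0, "no": 0})
    for ballot in ballots:
        assert sum(ballot.values()) <= TOKENS_PER_VOTER, "over budget"
        for (item, side), tokens in ballot.items():
            totals[item][side] += tokens
    # Ties are not handled here; they go to "no" in this sketch.
    return {item: ("yes" if t["yes"] > t["no"] else "no") for item, t in totals.items()}

# Example: voter A cares mostly about the park, voter B mostly about the road.
ballots = [
    {("park", "yes"): 800, ("road", "no"): 200},
    {("road", "yes"): 900, ("park", "no"): 100},
]
print(tally(ballots))  # {'park': 'yes', 'road': 'yes'}
```

The sketch also makes the trade-off visible: tokens spent on one item are unavailable for the others, so intensity of preference gets priced in.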

Imagine you have a button, and if you press it, it will run through every possible state of a human brain. (One post estimates a brain may have about 2 to the sextillion different states. I mean the union of all brains, so throw in some more orders of magnitude if you think there are a lot of differences in brain anatomy.) Each state would be experienced for one instant (which I could try to define, and which would be less than the number of states, but let's handwave for now; as long as you accept that a human mind can be represented by a computer, imagine the specs of the components, all the combinations of memory bits, and one "stream of consciousness" quantum).

If you could make a change, would you prioritize:

  1. Pruning the instances to reduce negative experiences
  2. Being able to press the button lots of times
  3. Making the experiences more real (for example, an experience could be "one instant of reminiscing over my memories of building a Dyson Sphere" even though nothing like that ever happened. One way to make it more real would be to create the set of all universe starting conditions needed to generate the set of all unique experiences; each universe will create duplicate experiences among its various inhabitants, but it will contain at least the one unique experience it is checking off, which would include the person reminiscing over building a Dyson Sphere who actually did build it. Or at least the experiences that can be generated in this fashion.)
  4. This is horrible, stop the train I want to get off.

(I'd probably go with 4 but curious if people have different opinions.)

This thought experiment is so far outside any experienceable reality that no answer is likely to make any sense.