Should you cooperate with your almost identical twin in the prisoner's dilemma?
The question isn't how physically similar they are; it's how similar their logical thinking is. If I can solve a certain math problem in under 10 seconds, are they similar enough that I can be confident they will solve it in under 20 seconds? If I hate something, will they at least dislike it? If so, then I would cooperate, because I have a lot of margin on how strongly I favor us both choosing to cooperate over any of the other outcomes. Even if my almost identical twin doesn't favor it quite as much, I can predict they will still choose to cooperate given how much I favor it (and, more so, that they will also approach the problem this same way; if I think they'll think "ha, this sounds like somebody I can take advantage of" or "reason dictates I must defect", then I wouldn't cooperate with them).
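One way to put numbers on that "margin" is to treat the twin as mirroring your choice with some probability p. A minimal sketch, with an assumed standard payoff matrix (T=5, R=3, P=1, S=0; the numbers are illustrative, not from the post):

```python
# Expected-value sketch for cooperating with an imperfect copy.
# Assumed (illustrative) payoffs: T = temptation, R = mutual cooperation,
# P = mutual defection, S = sucker's payoff.
T, R, P, S = 5, 3, 1, 0

def best_move(p):
    """p = probability the twin mirrors whatever choice you make."""
    ev_cooperate = p * R + (1 - p) * S  # they mirror you with prob p
    ev_defect    = p * P + (1 - p) * T
    return "cooperate" if ev_cooperate > ev_defect else "defect"

# Cooperation pays exactly when p exceeds (T - S) / ((T - S) + (R - P)).
threshold = (T - S) / ((T - S) + (R - P))  # 5/7 with these payoffs

print(best_move(0.9))  # a near-identical twin: cooperate
print(best_move(0.5))  # a coin-flip stranger: defect
```

With these payoffs the crossover is around p ≈ 0.71, which is the margin in question: an almost-identical twin can be noticeably less aligned than a perfect copy and cooperation still wins, while a different payout matrix shifts the threshold.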
The question isn't how physically similar they are, it's how similar their logical thinking is.
A lot of discussion around here assumes that physical similarity (in terms of brain structure and weights) implies logical thinking similarity. Mostly I see people talking about "copies" or "clones", rather than "human twins". For prisoner's dilemma, the question is "will they make the same decision I will", and for twins raised together, the answer seems more likely to be yes than for strangers.
Note that your examples of thinking are PROBABLY symmetrical - if you don't think (or don't act on) "ha! this is somebody I can take advantage of", they are less likely to as well. In a perfect copy, you CANNOT decide differently, so you cooperate, knowing they will too. In an imperfect copy, you have to make estimates based on what you know of them and what the payout matrix is.
Thanks for your reply! Yes, I meant identical as in atoms not as in "human twin". I agree it would also depend on what the payout matrix is. My margin would also be increased by the evidentialist wager.
There's an argument for cooperating with any agent in a class of quasi-rational actors, although I don't know how exactly to define that class. Basically, if you predict that the other agent will reason in the same way as you, then you should cooperate.
(This reminds me of Kant's argument for the basis of morality—all rational beings should reason identically, so the true morality must be something that all rational beings can arrive at independently. I don't think his argument quite works, but I believe there's a similar argument for cooperating on the prisoner's dilemma that does work.)
We can be virtually certain that 2+2=4 based on priors, because it's true in the vast multitude of universes. In fact, it's true in all the universes except the one universe that contains all the other universes. And I'm pretty sure that one doesn't exist anyway.
We can be virtually certain that 2+2=4 based on priors.
I don't understand this model. For me, 2+2=4 is an abstract analytic concept that sits outside Bayesian probability. For others, it may be "just" a probability, one they might be virtually certain of, but it won't be on priors; it'll be on mountains of evidence and literally zero counterevidence (presumably because every experience that contradicts it gets re-framed as having a different cause).
There's no way to update on evidence outside of your light cone, let alone on theoretical other universes or containing universes. Because there's no way to GET evidence from them.
I meant this as a joke: if there's one universe that contains all the other universes (since it isn't limited by logic), and that one doesn't exist, then I don't exist either and wouldn't have been able to post this. (Unless I only sort-of exist, in which case I'm only sort-of joking.)
How about a voting system where everyone is given 1000 Influence Tokens to spend across all the items on the ballot? This lets voters exert more influence on the things they care more about. Has anyone tried something like this?
(There could be tweaks: if people avoid spending on likely winners, the system could redistribute the margin of victory; if they avoid spending on likely losers, it could redistribute tokens from losing items; and so on. But I'm not sure how much of that would actually happen. The more interesting question may be how the system changes everyone's sense of what they are doing.)
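The basic mechanics can be sketched in a few lines (the ballot items, options, and the simple plurality-by-tokens rule are my assumptions, not a worked-out proposal):

```python
# Token-budget voting sketch: each voter splits 1000 Influence Tokens
# across ballot items; on each item, the option with the most tokens wins.
BUDGET = 1000

# Hypothetical ballots: {item: {option: tokens}}, one dict per voter.
ballots = [
    {"park": {"yes": 800}, "road": {"no": 200}},
    {"park": {"no": 100},  "road": {"yes": 900}},
    {"park": {"yes": 500}, "road": {"no": 500}},
]

# Validate each voter's budget, then tally tokens per (item, option).
tally = {}
for ballot in ballots:
    spent = sum(t for opts in ballot.values() for t in opts.values())
    assert spent <= BUDGET, "a voter overspent their tokens"
    for item, opts in ballot.items():
        for option, tokens in opts.items():
            tally.setdefault(item, {}).setdefault(option, 0)
            tally[item][option] += tokens

winners = {item: max(opts, key=opts.get) for item, opts in tally.items()}
print(winners)
```

Here the first voter cares far more about the park than the road, and that weighting shows up directly in the tally, which is the intended contrast with one-vote-per-item systems.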
Imagine you have a button, and if you press it, it will run through every possible state of a human brain. (One post estimates a brain may have about 2 to the sextillion different states. I mean the union of all brains, so throw in some more orders of magnitude if you think there are a lot of differences in brain anatomy.) Each state would be experienced for one instant (which I could try to define, and which would be less than the number of states, but let's handwave for now; as long as you accept that a human mind can be represented by a computer, imagine the specs of the components and all the combinations of memory bits and one "stream of consciousness" quantum).
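The count scales as 2^bits, which is why the estimate explodes so fast; a toy-scale version of the button (the 3-bit "brain" is mine, purely to show the enumeration):

```python
from itertools import product

# Toy version of the button: enumerate every state of a tiny k-bit "brain".
k = 3
states = list(product([0, 1], repeat=k))
print(len(states))  # 2**k = 8; a real brain swaps 3 for a vast number of bits
```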
If you could make a change would you prioritize:
(I'd probably go with 4 but curious if people have different opinions.)
This thought experiment is so far outside any experience-able reality that no answer is likely to make any sense.