shokwave comments on Can anyone explain to me why CDT two-boxes? - Less Wrong Discussion
If you ask a mathematician to find 0x + 1 for x = 3, they will answer 1. If you then ask the mathematician to find the 10th root of the factorial of the eighth Mersenne prime, multiplied by zero, plus one, they will answer 1. You may protest they didn't actually calculate the eighth Mersenne prime, find its factorial, or calculate the tenth root of that, but you can't deny they gave the right answer.
If you put CDT in a room with a million dollars in Box A and a thousand dollars in Box B (no Omega, just the boxes), and give it the choice of either A alone or both, it will take both, and walk away with $1,001,000. If you explain this whole Omega thing to CDT, then put it in the room, it will notice that it doesn't actually need to calculate the eighth Mersenne prime, etc., because once Omega leaves you are effectively multiplying by zero - all the fancy simulating is irrelevant because the room is just two boxes that may contain money, and you can take both.
Yes, CDT doesn't think it's playing Newcomb's Puzzle, it thinks it's playing "enter a room with money".
You're completely right, except that (assuming I understand you correctly) you're implying CDT only thinks it's playing "room with money", while in reality it would be playing Newcomb.
And that's the issue; in reality Newcomb cannot exist, and if in theory you think you're playing something, you are playing it.
Does that make sense?
Perfect sense. Theorising that CDT would lose because it's playing a different game is uninteresting as a thought experiment; if I theorise that any decision theory is playing a different game it will also lose; this is not a property of CDT but of the hypothetical.
Let's turn to the case of playing in reality, as it's the interesting one.
If you grant that Newcomb paradoxes might exist in reality, then there is a real problem: CDT can't distinguish between free-money boxes and Newcomb paradoxes, so when it encounters a Newcomb situation it underperforms.
If you claim Newcomb cannot exist in reality, then this is not a problem with CDT. I (and hopefully others, though I shan't speak for them) would accept that this is not a problem with CDT if it is shown that Newcomb's is not possible in real life - but we are arguing against you here because we think Newcomb is possible. (Okay, I did speak for them).
I disagree on two points: one, I think a simulator is possible (that is, Omega's impossibility comes from other powers we've given it; we can remove those powers and weaken Omega to a fits-in-reality definition without losing prediction), and two, I don't think the priors-and-payoffs approach to an empirical predictor is correct (for game-theoretic reasons which I can explicate if you'd like, but if it's not the point of contention it would only distract).
No, CDT can in fact distinguish very well. It always concludes that the money is there, and it is always right, because it never encounters Newcomb.
To clarify: You are talking about actual Newcomb with an omniscient being, yes? Because in that case, I think several posters have already stated they deem this impossible, and Nozick agrees.
If you're talking about empirical Newcomb, that certainly is possible, but it is impossible to do better than CDT without choosing differently in other situations, because if you've acted like CDT in the past, Omega is going to assume you are CDT, even if you're not.
I agree on the "we can remove those powers and weaken Omega to a fits-in-reality definition without losing prediction" part, but this will change what the "correct" answer is. For example, you could substitute Omega with a coin toss and repeat the game if Omega is wrong. This is still a one-time problem, because Omega is a coin and therefore has no memory, but CDT, which would two-box in empirical Newcomb, one-boxes in this case and takes the $1,000,000.
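The coin-toss variant can be sketched as a small simulation. This is a minimal illustration of the setup described above, not anything from the original discussion: the coin "predicts" with probability 1/2, and any round where the coin's prediction disagrees with the agent's actual choice is discarded and replayed. Function and variable names are my own invention.

```python
import random

def coin_toss_newcomb(choice):
    # The coin "predicts" one-boxing with probability 1/2. If the prediction
    # does not match the agent's actual choice, the round is discarded and
    # the game is replayed; only the matching round pays out.
    while True:
        predicted_one_box = random.random() < 0.5
        if (choice == "one-box") != predicted_one_box:
            continue  # the coin was "wrong": replay
        box_a = 1_000_000 if predicted_one_box else 0
        return box_a if choice == "one-box" else box_a + 1_000

print(coin_toss_newcomb("one-box"))   # always 1000000
print(coin_toss_newcomb("two-box"))   # always 1000
```

Conditional on a round counting, the coin's "prediction" always matches the choice, so the replay rule makes the box contents effectively depend on what the agent does - which is why even a causal reasoner one-boxes here.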
I don't think this is the point of contention, but after we've settled that, I would be interested in hearing your line of thought on this.
How about the version where agents are computer programs, and Omega runs a simulation of the agent facing the choice, observes its behavior, and fills the boxes accordingly?
I see no violation of causality in that version.
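That version can be sketched directly, assuming agents are deterministic programs. The names here (`one_boxer`, `omega_fills_boxes`, `play`) are illustrative, not from any library; Omega's "prediction" is just a prior run of the same program.

```python
# Simulation version of Newcomb's problem: Omega runs the agent once in
# simulation, fills the boxes based on the simulated choice, and then the
# agent faces the already-filled boxes for real.

def one_boxer():
    return "one-box"

def two_boxer():
    return "two-box"

def omega_fills_boxes(agent):
    """Simulate the agent, then fill Box A according to its choice."""
    predicted = agent()  # the simulation run
    box_a = 1_000_000 if predicted == "one-box" else 0
    box_b = 1_000
    return box_a, box_b

def play(agent):
    box_a, box_b = omega_fills_boxes(agent)
    choice = agent()     # the "real" run, after the boxes are filled
    return box_a if choice == "one-box" else box_a + box_b

print(play(one_boxer))   # 1000000
print(play(two_boxer))   # 1000
```

Because the simulated run and the real run are the same deterministic program, Omega's prediction is always right, yet the boxes are filled strictly before the real choice is made - no backwards causation is needed.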