Occasionally a wrong idea still leads to the right outcome. We know that one-boxing on Newcomb's problem is the right thing to do. Timeless decision theory proposes to justify this action by saying: act as if you control all instances of your decision procedure, including the instance that Omega used to predict your behavior.
But it's simply not true that you control Omega's actions in the past. If Omega predicted that you will one-box and filled the boxes accordingly, that's because, at the time the prediction was made, you were already a person who would foreseeably one-box. One way to be such a person is to be a TDT agent. But another way is to be a quasi-CDT agent with a superstitious belief that greediness is punished and modesty is rewarded, so you one-box precisely because two-boxing looks like it has the higher payoff!
That is an irrational belief, yet it still suffices to generate the better outcome. My thesis is that TDT is similarly based on an irrational premise. So what is actually going on? I now think that Newcomb's problem is simply an exceptional situation where there is an artificial incentive to employ something other than CDT, and that most such situations can be dealt with by being a CDT agent who can self-modify.
Eliezer's draft manuscript on TDT provides another example (page 20): a godlike entity - we could call it Alphabeta - demands that you choose according to "alphabetical decision theory", or face an evil outcome. In this case, the alternative to CDT that you are being encouraged to use is explicitly identified. In Newcomb's problem, no such specific demand is made, but the situation encourages you to make a particular decision - how you rationalize it doesn't matter.
We should fight the illusion that a TDT agent retrocausally controls Omega's choice. It doesn't. Omega's choice was controlled by the extrapolated dispositions of the TDT agent, as they were in the past. We don't need to replace CDT with TDT as our default decision theory; we just need to understand the exceptional situations in which it is expedient to replace CDT with something else. TDT will apply to some of those situations, but not all of them.
No, it's the same subjective experience. It's like an exact copy of an upload being made, then being run for 5 minutes in parallel with the original, and then terminated. There's no difference in utilities.
The actual way in which Omega makes its decision doesn't matter. What matters is the agent's knowledge that Omega always guesses right. If this is the case, then the agent can always represent Omega as a perfect simulator.
The logic doesn't work if the agent knows that Omega guesses right only with some probability p&lt;1. But in that case, the problem looks underspecified: the correct decision depends on the exact nature of the noise. If Omega makes its prediction by analyzing psychological tests the agent took in childhood, then the agent should two-box. And if Omega runs a perfect simulation and then adds random noise, the agent should one-box.
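The contrast between the two noise models can be made concrete with a quick expected-value calculation. This is only an illustrative sketch: the standard payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one) are assumed, and the two functions are my own labels for the two cases described above, not anything from the original problem statement.

```python
# Illustrative expected-value sketch for Newcomb's problem with an
# imperfect predictor. Payoffs are the conventional assumed values.
BIG, SMALL = 1_000_000, 1_000

def ev_noisy_simulation(p):
    """Omega simulates the agent perfectly, then adds noise, so the
    prediction matches the agent's actual choice with probability p.
    The correlation survives: accuracy p applies to whichever choice
    the agent ends up making."""
    ev_one_box = p * BIG                # opaque box filled iff prediction correct
    ev_two_box = SMALL + (1 - p) * BIG  # opaque box filled iff prediction wrong
    return ev_one_box, ev_two_box

def ev_fixed_evidence(q):
    """Omega predicts from childhood test scores. The opaque box's
    contents are a settled fact (filled with probability q) regardless
    of today's choice, so two-boxing adds SMALL in every state of the
    world and dominates."""
    ev_one_box = q * BIG
    ev_two_box = q * BIG + SMALL
    return ev_one_box, ev_two_box
```

In the noisy-simulation case, one-boxing has the higher expectation whenever p·BIG > SMALL + (1−p)·BIG, i.e. p > (BIG+SMALL)/(2·BIG) = 0.5005, so even a slightly-better-than-chance simulator favors one-boxing. In the fixed-evidence case, two-boxing wins by exactly SMALL for every value of q, which is just the dominance argument the CDT agent runs.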
This is always so: any incomplete model omits details whose state can decide the outcome as easily as your decision can. Gaining knowledge about those details lets you improve the decision, but absent that knowledge the only thing to do is figure out what the facts you do know suggest.