Occasionally a wrong idea still leads to the right outcome. We know that one-boxing on Newcomb's problem is the right thing to do. Timeless decision theory proposes to justify this action by saying: act as if you control all instances of your decision procedure, including the instance that Omega used to predict your behavior.
But it's simply not true that you control Omega's actions in the past. If Omega predicted that you will one-box and filled the boxes accordingly, that's because, at the time the prediction was made, you were already a person who would foreseeably one-box. One way to be such a person is to be a TDT agent. But another way is to be a quasi-CDT agent with a superstitious belief that greediness is punished and modesty is rewarded - so you one-box because two-boxing looks like it has the higher payoff!
That is an irrational belief, yet it still suffices to generate the better outcome. My thesis is that TDT is similarly based on an irrational premise. So what is actually going on? I now think that Newcomb's problem is simply an exceptional situation where there is an artificial incentive to employ something other than CDT, and that most such situations can be dealt with by being a CDT agent who can self-modify.
Eliezer's draft manuscript on TDT provides another example (page 20): a godlike entity - we could call it Alphabeta - demands that you choose according to "alphabetical decision theory", or face an evil outcome. In this case, the alternative to CDT that you are being encouraged to use is explicitly identified. In Newcomb's problem, no such specific demand is made, but the situation encourages you to make a particular decision - how you rationalize it doesn't matter.
We should fight the illusion that a TDT agent retrocausally controls Omega's choice. It doesn't. Omega's choice was controlled by the extrapolated dispositions of the TDT agent, as they were in the past. We don't need to replace CDT with TDT as our default decision theory, we just need to understand the exceptional situations in which it is expedient to replace CDT with something else. TDT will apply to some of those situations, but not all of them.
This is always so: any incomplete model omits details whose state can decide the outcome as easily as your decision can. Gaining knowledge of those details lets you improve the decision, but absent that knowledge, the only thing to do is to figure out what the facts you do know suggest.
If people used this consideration to consistently beat Omega, its accuracy couldn't be 90%. So, by contradiction, in that case they can't consistently beat Omega with this argument.
(If they don't use this consideration, then you could win, but this hypothetical is of little use if you don't know that. For example, it seems like a natural correction to specify that you know only the 90% figure, and nothing about how the correctness of Omega's guesses correlates with properties of the test subjects, and that the people sampled for that figure did not differ from you in any way you consider relevant to this problem.)
If no facts about the nature of the "noise" are specified, then the phrase "the probability of a correct decision by Omega is 0.9" does not make sense. It adds no knowledge beyond "Omega sometimes makes mistakes".
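To see what the 90% figure implies if taken at face value, here is a minimal expected-value sketch. It assumes the standard Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one) and that the 0.9 accuracy applies uniformly to one-boxers and two-boxers alike; both assumptions are mine, not specified in the comments above:

```python
def expected_payoffs(p, big=1_000_000, small=1_000):
    """Expected payoff for each strategy against a predictor of accuracy p.

    Assumes the standard Newcomb payoffs and that p is the same
    regardless of which kind of agent is being predicted.
    """
    # One-boxer: gets the big box iff Omega correctly predicted one-boxing.
    one_box = p * big
    # Two-boxer: always gets the small box, plus the big box
    # iff Omega mistakenly predicted one-boxing.
    two_box = small + (1 - p) * big
    return one_box, two_box

one, two = expected_payoffs(0.9)
print(one, two)  # 900000.0 vs 101000.0: one-boxing dominates at p = 0.9
```

Under these assumptions, one-boxing wins whenever p > (small + big) / (2 * big), about 0.5005 here, so even a barely-better-than-chance predictor would suffice. But that calculation is exactly what the comment above calls into question: without knowing how the errors correlate with the agents being predicted, a single uniform p may not describe your situation at all.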