Occasionally a wrong idea still leads to the right outcome. We know that one-boxing on Newcomb's problem is the right thing to do. Timeless decision theory proposes to justify this action by saying: act as if you control all instances of your decision procedure, including the instance that Omega used to predict your behavior.
But it's simply not true that you control Omega's actions in the past. If Omega predicted that you will one-box and filled the boxes accordingly, that's because, at the time the prediction was made, you were already a person who would foreseeably one-box. One way to be such a person is to be a TDT agent. But another way is to be a quasi-CDT agent with a superstitious belief that greediness is punished and modesty is rewarded - so you one-box because two-boxing looks like it has the higher payoff!
That is an irrational belief, yet it still suffices to generate the better outcome. My thesis is that TDT is similarly based on an irrational premise. So what is actually going on? I now think that Newcomb's problem is simply an exceptional situation where there is an artificial incentive to employ something other than CDT, and that most such situations can be dealt with by being a CDT agent who can self-modify.
Eliezer's draft manuscript on TDT provides another example (page 20): a godlike entity - we could call it Alphabeta - demands that you choose according to "alphabetical decision theory", or face an evil outcome. In this case, the alternative to CDT that you are being encouraged to use is explicitly identified. In Newcomb's problem, no such specific demand is made, but the situation encourages you to make a particular decision - how you rationalize it doesn't matter.
We should fight the illusion that a TDT agent retrocausally controls Omega's choice. It doesn't. Omega's choice was controlled by the extrapolated dispositions of the TDT agent, as they were in the past. We don't need to replace CDT with TDT as our default decision theory; we just need to understand the exceptional situations in which it is expedient to replace CDT with something else. TDT will apply to some of those situations, but not to all of them.
If no facts about the nature of the "noise" are specified, then the phrase "probability of correct decision by Omega is 0.9" does not make sense. It adds no knowledge beyond "sometimes Omega makes mistakes".
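For concreteness, here is a minimal sketch of the conditional (EDT-style) expected values under a 0.9-accurate Omega. The payoff amounts ($1,000,000 in the opaque box, $1,000 in the transparent box) are the conventional ones from the literature, not values fixed anywhere in this discussion, and the calculation simply treats 0.9 as P(Omega predicted whatever you actually do):

```python
# Sketch: conditional expected payoffs in Newcomb's problem with an
# imperfect predictor. Payoffs are the conventional ones (assumed here,
# not stated in the thread): $1,000,000 opaque box, $1,000 transparent.

def expected_payoffs(p_correct, big=1_000_000, small=1_000):
    """EDT-style expected value of each action, reading p_correct
    as P(Omega's prediction matches the action you take)."""
    # One-box: you get the big prize iff Omega predicted one-boxing.
    ev_one_box = p_correct * big
    # Two-box: you always get the small prize, plus the big prize
    # in the (1 - p_correct) cases where Omega predicted one-boxing.
    ev_two_box = (1 - p_correct) * big + small
    return ev_one_box, ev_two_box

one, two = expected_payoffs(0.9)
print(one, two)  # one-boxing dominates in conditional expected value
```

This is exactly the calculation that makes one-boxing look attractive to an EDT reasoner; whether conditioning on your own action is legitimate here is, of course, the point in dispute.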
If only 10% of the people use this consideration, then why not?
(AFAIU, the point in parentheses basically amounts to the idea that in the absence of any known causal links I should use EDT (=Bayesian reasoning))
You use all that is known about how events, including your own decision, depend on each other. Some of these dependencies can't withstand your interventions, which often themselves come out of the error terms. In this way, EDT is the same as TDT, its errors originating from a failure to recognize this effect of breaking correlations and (a flaw shared with CDT) from an unwillingness to include abstract comput...