I am currently learning the basics of decision theory, most of which is common knowledge on LW. I have a question related to why EDT is said not to work.
Consider the following Newcomblike problem: A study shows that most people who two-box in Newcomblike problems such as the following have a certain gene (and one-boxers don't have the gene). Now, Omega could put you into something like Newcomb's original problem, but instead of having run a simulation of you, Omega has only looked at your DNA: If you don't have the "two-boxing gene", Omega puts $1M into box B; otherwise box B is empty. And there is $1K in box A, as usual. Would you one-box (take only box B) or two-box (take boxes A and B)? Here's a causal diagram for the problem:
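To make the disagreement concrete, here is a minimal sketch of the expected-value calculations, using hypothetical numbers for the gene-action correlation (say the study found 99% of two-boxers carry the gene and 99% of one-boxers don't, with a 50% base rate); the specific probabilities are my stand-ins, not from the problem statement:

```python
# EDT vs. CDT expected values for the genetic Newcomb problem.
# All probabilities below are hypothetical stand-ins for the study's results.

P_NO_GENE_GIVEN_ONE_BOX = 0.99   # one-boxers almost never have the gene
P_NO_GENE_GIVEN_TWO_BOX = 0.01   # two-boxers almost always have it
P_NO_GENE_PRIOR = 0.50           # CDT uses a prior not conditioned on the act

BIG, SMALL = 1_000_000, 1_000    # contents of box B (if full) and box A

# EDT conditions on the action actually taken:
edt_one_box = P_NO_GENE_GIVEN_ONE_BOX * BIG
edt_two_box = SMALL + P_NO_GENE_GIVEN_TWO_BOX * BIG

# CDT holds the gene (and hence box B's contents) fixed,
# since the action does not cause the gene:
cdt_one_box = P_NO_GENE_PRIOR * BIG
cdt_two_box = SMALL + P_NO_GENE_PRIOR * BIG

print(edt_one_box > edt_two_box)   # EDT prefers one-boxing
print(cdt_two_box > cdt_one_box)   # CDT prefers two-boxing
```

Under any correlation strong enough to outweigh the $1K, EDT one-boxes, while CDT two-boxes regardless, since the extra $1K is guaranteed either way.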
Since Omega does little more than translate your genes into money under a box, it does not seem to hurt to leave it out:
I presume that most LWers would one-box. (And as I understand it, not only CDT but also TDT would two-box; am I wrong?)
Now, how does this problem differ from the smoking lesion or Yudkowsky's (2010, p. 67) chewing gum problem? Chewing gum (or smoking) is like taking box A to get the additional $1K, the two-boxing gene is like the CGTA gene, and the illness itself (the abscess or lung cancer) is like not having $1M in box B. Here's another causal diagram, this time for the chewing gum problem:
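The structural parallel can be spelled out with the same kind of calculation. A naive EDT agent, conditioning only on the act exactly as in the gene problem above, refuses the gum just as it one-boxes; the correlation strength and the toy utilities here are my own hypothetical choices:

```python
# The chewing-gum problem with the same structure: CGTA both causes
# throat abscesses and correlates with chewing gum. Numbers are hypothetical.

P_CGTA_GIVEN_CHEW = 0.9      # chewers usually carry CGTA
P_CGTA_GIVEN_ABSTAIN = 0.1   # abstainers usually don't
U_ABSCESS = -1_000_000       # toy utility of the illness (~ empty box B)
U_GUM = 1_000                # toy utility of the gum itself (~ taking box A)

# Naive EDT, conditioning on the act just as in the gene problem:
edt_chew = U_GUM + P_CGTA_GIVEN_CHEW * U_ABSCESS
edt_abstain = P_CGTA_GIVEN_ABSTAIN * U_ABSCESS

print(edt_abstain > edt_chew)  # naive EDT abstains, mirroring one-boxing
```

Whatever breaks this parallel has to come from something outside the two calculations, since they are term-for-term identical.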
As far as I can tell, the difference between the two problems is some additional, unstated intuition in the classic medical Newcomb problems. Maybe the additional assumption is that the actual evidence lies in the "tickle", or that knowing and thinking about the study results causes some complications. In EDT terms: the intuition is that neither smoking nor chewing gum gives the agent additional information.
That does seem like the tentative consensus, and I was unpleasantly surprised to see someone on LW who would not chew the gum.
We should be asking what decision procedure gives us more money, e.g. if we're writing a computer program to make a decision for us. You may be tempted to say that if Omega is physical - a premise not usually stated explicitly, but one I'm happy to grant - then it must be looking at some physical events linked to your action and not looking at the answer given by your abstract decision procedure. A procedure based on that assumption would lead you to two-box. This thinking seems likely to hurt you in analogous real-life situations, unless you have greater skill at lying or faking signals than (my model of) either a random human being or a random human of high intelligence. Discussing it, even 'anonymously', would constitute further evidence that you lack the skill to make this work.
Now TDT, as I understand it, assumes that we can include in our graph a node for the answer given by an abstract logical process. For example, to predict the effects of pushing some buttons on a calculator, we would look both at the result of a "timeless" logical process and at some physical nodes that determine whether or not the calculator follows that process.
Let's say you have a similar model of yourself. Then a counterfactual question about the abstract answer given by your decision procedure will tell you to two-box if and only if your model of the world says that this answer does not sufficiently determine Omega's action. But if Omega, when examining physical evidence, just looks at the physical nodes which (sufficiently) determine whether or not you will use TDT (or whatever decision procedure you're using), then presumably Omega knows what answer that process gives, which will help determine the result. A counterfactual question about the logical output would then tell you to one-box. TDT, I think, asks that question and gets that answer. UDT I barely understand at all.
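The two counterfactual questions above can be sketched as a toy computation; the function names and the payoff structure are my own illustrative choices, not anything from TDT's actual formalism:

```python
# Toy "logical node" counterfactual: if Omega's prediction reads the output
# of the decision procedure, intervening on that output also changes
# whether box B is filled.

def payoff(act: str, predicted_act: str) -> int:
    """Box B holds $1M iff Omega predicts one-boxing; box A always holds $1K."""
    box_b = 1_000_000 if predicted_act == "one-box" else 0
    box_a = 1_000 if act == "two-box" else 0
    return box_b + box_a

def counterfactual_value(act: str, prediction_tracks_output: bool,
                         fixed_prediction: str = "two-box") -> int:
    """Value of counterfactually setting the logical output to `act`.

    If the graph says Omega's prediction tracks the logical output, the
    intervention changes the prediction too; otherwise the prediction
    stays fixed at some value independent of the output.
    """
    pred = act if prediction_tracks_output else fixed_prediction
    return payoff(act, pred)

# Prediction tracks the logical output: one-boxing wins.
print(counterfactual_value("one-box", True) > counterfactual_value("two-box", True))
# Prediction fixed independently of the output: two-boxing dominates.
print(counterfactual_value("two-box", False) > counterfactual_value("one-box", False))
```

This is just the statement above made executable: the answer flips depending on whether the model treats Omega's prediction as downstream of the logical output.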
(The TDT answer to the OP's problem depends on how we interpret "two-boxing gene".)