I am currently learning about the basics of decision theory, most of which is common knowledge on LW. I have a question related to why EDT is said not to work.
Consider the following Newcomblike problem: A study shows that most people who two-box in Newcomblike problems such as the following have a certain gene (and one-boxers don't have the gene). Now, Omega could put you into something like Newcomb's original problem, but instead of having run a simulation of you, Omega has only looked at your DNA: if you don't have the "two-boxing gene", Omega puts $1M into box B; otherwise box B is empty. And there is $1K in box A, as usual. Would you one-box (take only box B) or two-box (take both boxes A and B)? Here's a causal diagram for the problem:
Since Omega does not do much other than translating your genes into money under a box, it does not seem to hurt to leave it out:
I presume that most LWers would one-box. (And as I understand it, not only CDT but also TDT would two-box; am I wrong?)
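To make the disagreement concrete, here's a minimal sketch of the two calculations in Python. The study statistics (the 0.99/0.01 conditional probabilities) and the 50% prior are numbers I'm making up purely for illustration; they aren't part of the problem statement:

```python
# A minimal sketch of the EDT vs. CDT calculation for the genetic Newcomb
# problem. The conditional probabilities and the prior are assumed numbers.

P_GENE_GIVEN_TWO_BOX = 0.99   # assumed: the study says most two-boxers have the gene
P_GENE_GIVEN_ONE_BOX = 0.01   # assumed: the study says one-boxers lack the gene
P_GENE_PRIOR = 0.5            # assumed prior, used by CDT (the action can't change it)

BOX_A = 1_000        # $1K, always in box A
BOX_B = 1_000_000    # $1M, in box B iff you lack the gene

def payoff(action, has_gene):
    """Money received: box B is empty iff the agent has the two-boxing gene."""
    b = 0 if has_gene else BOX_B
    return b + (BOX_A if action == "two-box" else 0)

def edt_value(action):
    """EDT conditions on the action as evidence about the gene."""
    p = P_GENE_GIVEN_TWO_BOX if action == "two-box" else P_GENE_GIVEN_ONE_BOX
    return p * payoff(action, True) + (1 - p) * payoff(action, False)

def cdt_value(action):
    """CDT keeps the gene probability fixed, since the action doesn't cause the gene."""
    p = P_GENE_PRIOR
    return p * payoff(action, True) + (1 - p) * payoff(action, False)

for name, value in [("EDT", edt_value), ("CDT", cdt_value)]:
    for action in ("one-box", "two-box"):
        print(f"{name} {action}: ${value(action):,.0f}")
```

With these numbers, EDT prefers one-boxing by a wide margin (~$990K vs. ~$11K), while CDT prefers two-boxing by exactly the $1K in box A, whatever prior you plug in.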
Now, how does this problem differ from the smoking lesion or Yudkowsky's (2010, p. 67) chewing gum problem? Chewing gum (or smoking) seems to be like taking box A to get the guaranteed additional $1K; the two-boxing gene is like the CGTA gene; and the illness itself (the abscess or lung cancer) is like not having $1M in box B. Here's another causal diagram, this time for the chewing gum problem:
As far as I can tell, the difference between the two problems is some additional, unstated intuition in the classic medical Newcomb problems. Maybe the additional assumption is that the actual evidence lies in the "tickle", or that knowing and thinking about the study results causes some complications. In EDT terms: the intuition is that neither smoking nor chewing gum gives the agent additional information.
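As a rough sketch of what "no additional information" would mean formally, assume a causal chain gene → tickle → urge to chew (this chain and all the numbers below are my own illustrative assumptions, not something stated in the problems): once the agent has observed its own tickle, the act of chewing is screened off from the gene.

```python
# A rough sketch of the screening-off claim, under an assumed causal chain
# gene -> tickle -> urge to chew. All probabilities are made up for
# illustration. Once the tickle is observed, chewing carries no further
# information about the gene.

P_GENE = 0.5                                      # assumed prior
P_TICKLE_GIVEN_GENE = {True: 0.95, False: 0.05}   # assumed
P_CHEW_GIVEN_TICKLE = {True: 0.90, False: 0.10}   # assumed: chewing depends only on the tickle

def joint(gene, tickle, chew):
    """P(gene, tickle, chew) under the assumed chain gene -> tickle -> chew."""
    p = P_GENE if gene else 1 - P_GENE
    p *= P_TICKLE_GIVEN_GENE[gene] if tickle else 1 - P_TICKLE_GIVEN_GENE[gene]
    p *= P_CHEW_GIVEN_TICKLE[tickle] if chew else 1 - P_CHEW_GIVEN_TICKLE[tickle]
    return p

def p_gene_given(tickle, chew):
    """P(gene | tickle, chew), by conditioning on the joint distribution."""
    return joint(True, tickle, chew) / sum(joint(g, tickle, chew) for g in (True, False))

# Given the tickle, chewing changes nothing about the probability of the gene:
print(p_gene_given(tickle=True, chew=True))    # 0.95
print(p_gene_given(tickle=True, chew=False))   # 0.95 as well
```

The action drops out of the comparison only because, in this model, it depends on the gene solely through the tickle.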
"I don't believe in a gene that controls my decision" refers to reality, and of course I don't believe in the gene either. The disagreement is whether or not such a gene is possible in principle, not whether or not there is one in reality. We both agree there is no gene like this in real life.
As you note, if an AI could read its source code and saw that it says "one-box", then it would still one-box, because it simply does what it is programmed to do. First of all, this violates the conditions as proposed (I said the AIs cannot look at their source code, and Caspar42 stated that you do not know whether or not you have the gene).
But for the sake of argument we can allow looking at the source code, or at the gene. You believe that if you saw you had the gene that says "one-box", then you could still two-box, so it couldn't work the same way. You are wrong. Just as the AI would predictably end up one-boxing if it had that code, so you would predictably end up one-boxing if you had the gene. It is just a question of how this would happen. Perhaps you would go through your decision process, decide to two-box, and then suddenly become overwhelmed with a desire to one-box. Perhaps you would think again and change your mind. But one way or another you would end up one-boxing. And this "doesn't constrain my decision so much as predict it": obviously, both in the case of the AI and in the case of the gene, causality in reality does indeed run from the source code, or from the gene, to one-boxing. But it is entirely the same in both cases: causality runs only from past to future, yet for you it feels just like a normal choice that you make in the normal way.
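As a toy illustration of "it simply does what it is programmed to do" (every detail below is invented): the agent goes through a deliberation step, but whatever that step produces, the hard-coded answer is what comes out, which is how the gene carrier predictably ends up one-boxing.

```python
# Toy sketch: an agent whose source hard-codes its answer. It can deliberate
# all it likes, but it predictably ends up outputting what the source says.

HARD_CODED_ANSWER = "one-box"   # plays the role of the gene / the line in the source

def deliberate():
    """The agent's reasoning process; it may even conclude two-boxing looks better."""
    return "two-box"

def decide():
    tentative = deliberate()
    # Whatever the deliberation produced, the source has the last word;
    # from the inside this just looks like changing one's mind.
    return HARD_CODED_ANSWER

print(decide())  # -> "one-box", every time
```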
I was referring to "in principle", not to reality.
Yes. I think that if I couldn't do that, it wouldn't be me. If we don't permit people without the two-boxing gene to two-box (the question as originally written did permit it, but we don't have to), then this isn't a game I can possibly be offered. You can't take me, add a spooky influence which forces me to make a certain decision one way or the other, even when I know it's the wrong way, and say...