I am currently learning about the basics of decision theory, most of which is common knowledge on LW. I have a question, related to why EDT is said not to work.
Consider the following Newcomblike problem: A study shows that most people who two-box in Newcomblike problems such as the following have a certain gene (and one-boxers don't have the gene). Now, Omega could put you into something like Newcomb's original problem, but instead of having run a simulation of you, Omega has only looked at your DNA: If you don't have the "two-boxing gene", Omega puts $1M into box B; otherwise box B is empty. And there is $1K in box A, as usual. Would you one-box (take only box B) or two-box (take box A and B)? Here's a causal diagram for the problem:
Since Omega does little more than translate your genes into money under a box, it does not seem to hurt to leave it out:
I presume that most LWers would one-box. (And as I understand it, not only CDT but also TDT would two-box, am I wrong?)
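To make the disagreement concrete, here is a toy sketch of how EDT and CDT score the two options in this genetic Newcomb problem. All the probabilities are made-up assumptions for illustration, not figures from the study:

```python
# Toy expected-value comparison of EDT and CDT in the "genetic"
# Newcomb problem above. Probabilities are illustrative assumptions.

P_GENE_GIVEN_TWOBOX = 0.95   # assumed strength of the study's correlation
P_GENE_GIVEN_ONEBOX = 0.05
BOX_A = 1_000
BOX_B = 1_000_000            # Omega fills box B iff the gene is absent

def payoff(action, has_gene):
    """Money received, given the action and whether the gene is present."""
    b = 0 if has_gene else BOX_B
    return b + (BOX_A if action == "two-box" else 0)

def edt_value(action):
    """EDT conditions on the action: choosing is evidence about the gene."""
    p = P_GENE_GIVEN_TWOBOX if action == "two-box" else P_GENE_GIVEN_ONEBOX
    return p * payoff(action, True) + (1 - p) * payoff(action, False)

def cdt_value(action, p_gene=0.5):
    """CDT holds the gene probability fixed: choosing can't change your DNA."""
    return p_gene * payoff(action, True) + (1 - p_gene) * payoff(action, False)

for act in ("one-box", "two-box"):
    print(act, edt_value(act), cdt_value(act))
```

Under these numbers EDT prefers one-boxing (conditioning on one-boxing makes the $1M likely), while CDT prefers two-boxing (the extra $1K dominates whatever the fixed gene probability is) — which is exactly the split the problem is designed to expose.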
Now, how does this problem differ from the smoking lesion or Yudkowsky's (2010, p. 67) chewing gum problem? Chewing gum (or smoking) seems to be like taking box A to get the additional $1K, the two-boxing gene is like the CGTA gene, and the illness itself (the abscess or lung cancer) is like not having $1M in box B. Here's another causal diagram, this time for the chewing gum problem:
As far as I can tell, the difference between the two problems is some additional, unstated intuition in the classic medical Newcomb problems. Maybe the additional assumption is that the actual evidence lies in the "tickle", or that knowing and thinking about the study results introduces complications. In EDT terms: the intuition is that neither smoking nor chewing gum gives the agent additional information about whether he has the gene.
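The "tickle" intuition can be sketched numerically. Suppose the gene influences the action only through a felt urge (the tickle); then once you condition on the tickle, the action itself carries no further evidence about the gene. The probabilities below are made-up assumptions, purely for illustration:

```python
# Toy model of the "tickle" intuition in the chewing gum / smoking lesion
# case: gene -> tickle -> action. Conditioning on the tickle screens the
# action off from the gene. All numbers are illustrative assumptions.

P_GENE = 0.5
P_TICKLE_GIVEN_GENE = {True: 0.9, False: 0.1}
P_ACT_GIVEN_TICKLE = {True: 0.8, False: 0.2}   # chew/smoke given the urge

def joint(gene, tickle, act):
    """Joint probability under the gene -> tickle -> action chain."""
    p = P_GENE if gene else 1 - P_GENE
    pt = P_TICKLE_GIVEN_GENE[gene]
    p *= pt if tickle else 1 - pt
    pa = P_ACT_GIVEN_TICKLE[tickle]
    p *= pa if act else 1 - pa
    return p

def p_gene_given(tickle, act):
    """P(gene | tickle, action) by Bayes' rule."""
    num = joint(True, tickle, act)
    den = num + joint(False, tickle, act)
    return num / den

# Once the tickle is known, acting or refraining leaves P(gene) unchanged:
print(p_gene_given(tickle=True, act=True))
print(p_gene_given(tickle=True, act=False))
```

In this toy model the two printed values are identical, so an EDT agent who already knows its own urge gains no evidence from the act itself — whereas in the genetic Newcomb problem above, no analogous intermediate "tickle" is stipulated.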
I think this is addressed by my top level comment about determinism.
But if you don't see how it applies, then imagine an AI reasoning like you have above.
"My programming is responsible for me reasoning the way I do rather than another way. If Omega is fond of people with my programming, then I'm lucky. But if he's not, then acting like I have the kind of programming he likes isn't going to help me. So why should I one-box? That would be acting like I had one-box programming. I'll just take everything that is in both boxes, since it's not up to me."
Of course, when I examined the thing's source code, I knew it would reason this way, and so I did not put the million in box B.
So I think where we differ is that I don't believe in a gene that controls my decision in the same way that you do. I don't know how well I can articulate myself, but:
As an AI, I can choose whether my programming makes me one-box or not, by one-boxing or not. My programming isn't responsible for my reasoning, it is my reasoning. If Omega looks at my source code and works out what I'll do, then there are no worlds where Omega thinks I'll one-box, but I actually two-box.
But imagine that all AIs have a constant variable in their source code, unhelpfully named...