It is taking its *prediction* of your decision into account in the weaker version, and it is good enough at prediction that the prediction is, for all intents and purposes, equivalent to your decision. The state is no longer explained by the decision itself, but by the prediction of that decision, and by the state derived from it. Introduce a 0.0001% chance of error and the difference becomes easier to see: the state is determined by the probability of your decision, given the information the being has available to it.
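To make the imperfect-predictor case concrete, here is a minimal sketch of the expected payoffs, assuming the standard Newcomb amounts ($1M in box B if one-boxing is predicted, $1K always in box A) and the 0.0001% error rate mentioned above:

```python
# Expected payoffs in Newcomb's problem with a near-perfect predictor.
# Payoffs assume the standard setup: $1M in box B iff one-boxing is
# predicted, and a guaranteed $1K in box A.
ERROR = 0.0001 / 100  # a 0.0001% chance the prediction is wrong
ACCURACY = 1 - ERROR

def expected_value(one_box: bool) -> float:
    if one_box:
        # Box B is full iff the being correctly predicted one-boxing.
        return ACCURACY * 1_000_000
    # Box B is full only if the being wrongly predicted one-boxing.
    return ERROR * 1_000_000 + 1_000

print(expected_value(True))   # ~$999,999 for one-boxing
print(expected_value(False))  # ~$1,001 for two-boxing
```

Even with the small error rate, conditioning on your decision makes one-boxing look far better, which is the sense in which the prediction stands in for the decision itself.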
(Although, reading the article, it appears that reverse causality, in the case where the being is God, is an accepted, though not canonical, explanation of the being's predictive powers.)
Imagine a Prisoner's Dilemma between two exact clones of you, with one difference: one clone is created a minute after the other, and is informed that the first clone has already made its decision. Both clones are told the exact nature of the test (that is, that the only difference is that one clone decides first). Does this additional information change your decision?
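The point of the clone setup can be sketched numerically. Assuming standard (invented) PD payoffs and that exact clones make perfectly correlated choices, whichever move you make, your clone makes too, regardless of who decides first:

```python
# Prisoner's Dilemma between two exact clones whose decisions are
# perfectly correlated. Payoff numbers are the usual illustrative ones,
# higher is better for the row player.
PAYOFF = {  # (my move, clone's move) -> my payoff
    ("C", "C"): 3,
    ("C", "D"): 0,
    ("D", "C"): 5,
    ("D", "D"): 1,
}

def my_payoff(my_move: str) -> int:
    clone_move = my_move  # perfect correlation: the clone mirrors me
    return PAYOFF[(my_move, clone_move)]

print(my_payoff("C"))  # cooperating clones each get 3
print(my_payoff("D"))  # defecting clones each get 1
```

Under perfect correlation the off-diagonal outcomes are unreachable, so cooperation wins; the question is whether learning that the other clone has already decided breaks that reasoning.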
I am currently learning about the basics of decision theory, most of which is common knowledge on LW. I have a question, related to why EDT is said not to work.
Consider the following Newcomblike problem: A study shows that most people who two-box in Newcomblike problems such as the following have a certain gene (and one-boxers don't have the gene). Now, Omega could put you into something like Newcomb's original problem, but instead of having run a simulation of you, Omega has only looked at your DNA: if you don't have the "two-boxing gene", Omega puts $1M into box B; otherwise box B is empty. And there is $1K in box A, as usual. Would you one-box (take only box B) or two-box (take boxes A and B)? Here's a causal diagram for the problem:
Since Omega does not do much other than translating your genes into money under a box, it does not seem to hurt to leave it out:
I presume that most LWers would one-box. (And as I understand it, not only CDT but also TDT would two-box; am I wrong?)
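Here is a rough sketch of why EDT and CDT come apart on this problem. The conditional probabilities are invented for illustration; the structural point is only that EDT treats the choice as evidence about the gene, while CDT holds the gene fixed (your choice can't change your DNA):

```python
# EDT vs CDT in the "two-boxing gene" problem. All probabilities are
# invented for illustration; only their ordering matters.
P_GENE_GIVEN_TWO_BOX = 0.9  # the study: most two-boxers have the gene
P_GENE_GIVEN_ONE_BOX = 0.1  # and most one-boxers don't
P_GENE = 0.5                # base rate, used by CDT

def payoff(gene: bool, one_box: bool) -> int:
    box_b = 0 if gene else 1_000_000  # Omega fills B iff you lack the gene
    box_a = 0 if one_box else 1_000
    return box_a + box_b

def edt_value(one_box: bool) -> float:
    # EDT: condition the gene's probability on the chosen action.
    p = P_GENE_GIVEN_ONE_BOX if one_box else P_GENE_GIVEN_TWO_BOX
    return p * payoff(True, one_box) + (1 - p) * payoff(False, one_box)

def cdt_value(one_box: bool) -> float:
    # CDT: the action has no causal influence on the gene.
    return P_GENE * payoff(True, one_box) + (1 - P_GENE) * payoff(False, one_box)

print(edt_value(True), edt_value(False))  # EDT prefers one-boxing
print(cdt_value(True), cdt_value(False))  # CDT prefers two-boxing
```

With these numbers EDT one-boxes and CDT two-boxes, mirroring the standard Newcomb disagreement even though the correlation here runs through a gene rather than a simulation.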
Now, how does this problem differ from the smoking lesion or Yudkowsky's (2010, p.67) chewing gum problem? Chewing gum (or smoking) seems to be like taking box A to get a guaranteed additional $1K, having the two-boxing gene is like having the CGTA gene, and the illness itself (the abscess or lung cancer) is like not having $1M in box B. Here's another causal diagram, this time for the chewing gum problem:
As far as I can tell, the difference between the two problems is some additional, unstated intuition in the classic medical Newcomb problems. Maybe the additional assumption is that the actual evidence lies in the "tickle", or that knowing and thinking about the study results introduces complications. In EDT terms: the intuition is that neither smoking nor chewing gum gives the agent additional information about the gene.
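That "tickle" intuition can be sketched as a screening-off claim. Assuming (numbers and utilities invented) that the gene works by producing a felt urge, then once the urge is observed, the action carries no further evidence about the gene, and EDT happily chews:

```python
# The "tickle" intuition, sketched. Assumption: CGTA causes a felt urge
# to chew, so conditioning on the urge screens the action off from the
# gene. All numbers are invented for illustration.
P_GENE_GIVEN_URGE = 0.8
P_GENE_GIVEN_NO_URGE = 0.05
CHEW_BENEFIT = 10      # small utility of chewing
ABSCESS_COST = 1_000   # disutility of the throat abscess

def p_gene(urge: bool, chews: bool) -> float:
    # The action argument is deliberately ignored: given the urge,
    # chewing adds no information about the gene.
    return P_GENE_GIVEN_URGE if urge else P_GENE_GIVEN_NO_URGE

def edt_value(urge: bool, chews: bool) -> float:
    return (CHEW_BENEFIT if chews else 0) - p_gene(urge, chews) * ABSCESS_COST

print(p_gene(True, True) == p_gene(True, False))    # screening off holds
print(edt_value(True, True) > edt_value(True, False))  # so EDT chews
```

In the two-boxing-gene problem as stated, no such tickle is available to condition on, which may be exactly the unstated difference.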