Jeremy, I think the apparent disagreement here stems from unclarity about the point of my argument. The point was *not* that this situation can't be analyzed with decision theory; it certainly can, and I did so. The point is that different decisions have to be made in two situations where *the probabilities* are the same.

Your discussion seems to equate "probability" with "utility", and the whole point of the example is that, in this case, they are not the same.

To me, the part that stands out most is the AI's computation of P().

As described, P is essentially omniscient: it knows the location and velocity of every particle in the universe, and it has unlimited computational power. Regardless of whether possessing and computing with such information is physically possible, the AI will model P as literally omniscient. I see no reason P couldn't hypothetically run the laws of physics in reverse as well as forward, and so it would always return 1 or 0 for any statement about reality.
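Here is a toy sketch of that worry (all names and the toy "physics" are hypothetical, purely for illustration): given the exact microstate of a deterministic world and unlimited compute, P can answer any factual query by simulation, so its output is always exactly 0.0 or 1.0, never anything graded.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Statement:
    time: int                       # tick at which the claim is evaluated
    holds: Callable[[int], bool]    # predicate on the world state

def omniscient_p(world_state, step, statement):
    """With the full microstate and unlimited compute, P just simulates
    forward; the answer is always exactly 0.0 or 1.0."""
    state = world_state
    for _ in range(statement.time):
        state = step(state)
    return 1.0 if statement.holds(state) else 0.0

# Toy deterministic world: the state is an integer, physics adds 1 per tick.
p = omniscient_p(world_state=0, step=lambda s: s + 1,
                 statement=Statement(time=5, holds=lambda s: s % 2 == 0))
print(p)  # 0.0 -- no query ever yields a value strictly between 0 and 1
```

Nothing in such a P ever produces an intermediate probability, which is exactly why the setup needs extra machinery to make it behave like a graded predictor.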

Of course, you could add noise to P's inputs, put a strict limit on P's computational power, or model P as a hypothetical sensor array that is very fine-grained but not omniscient. But each of those introduces another free variable into the model, in addition to lambda; any one of them, set wrong, could completely undo the entire setup, and there's no natural choice for any of them.
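A quick sketch of the noise option shows how sensitive it is (the query, threshold, and noise scale here are all my own hypothetical choices): the same question gets wildly different answers from P depending on the noise parameter sigma, and nothing in the setup picks a sigma for you.

```python
import random

def noisy_p(true_value, threshold, sigma, trials=10_000, seed=0):
    """Monte Carlo estimate of P('measurement exceeds threshold')
    when P sees the true value plus Gaussian input noise."""
    rng = random.Random(seed)
    hits = sum(true_value + rng.gauss(0, sigma) > threshold
               for _ in range(trials))
    return hits / trials

# Same underlying fact, three choices of the free noise parameter:
for sigma in (0.01, 0.5, 5.0):
    print(sigma, noisy_p(true_value=1.0, threshold=1.1, sigma=sigma))
```

With tiny noise the answer is essentially 0; with large noise it drifts toward 0.5. The "probability" P reports is mostly a function of sigma, a parameter with no principled value.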