For what it's worth, this formulation appears to me substantially more confusing than the ordinary Counterfactual Mugging. It requires a hypothetical world with multiple confusing features (time travel! prophets! prophecies that are absolutely inevitable... except, wait, no they aren't! or maybe they are but just might not have been!). And for extra confusion, you introduce the idea that I might believe the prophecy immutable when in fact it isn't, while (if I'm understanding right) asking me just to take on trust that in the real world (er, the real world of this hypothetical situation) it really truly definitely is immutable.
The ordinary Counterfactual Mugging is hard to think about, but (at least for me) it's reasonably clear what situation it's describing, whereas here I had to read your description several times before I was confident I'd correctly understood the problem statement (and I'm still not quite certain I have).
I'm also not sure this is equivalent to ordinary CM (is it meant to be?). Ordinary CM says there was a 50% chance of Omega's coin flip coming up either way, but here nothing seems quite to correspond to that. In particular, the 50% reduction in Pr(I perform the unwise action) in your scenario doesn't seem like it plays quite the same role. But maybe I'm misunderstanding something?
Well, your confusion means my original goal has failed, and I suppose that's that. I am pretty sure this is equivalent to CM in the sense that only UDT wins -- I'd be happy to explain further if you'd like, but otherwise, thanks for your help!
Edit as of June 13th, 2016: I no longer believe this to be easier to understand than traditional CM, but stand by the rest of it. Minor aesthetic edits made.
First post on the LW discussion board. Not sure if something like this has already been written; I need your feedback to let me know if I'm doing something wrong or breaking useful conventions.
An alternative to the counterfactual mugging, since people often require it explained a few times before they understand it -- I think this one will be faster for most people to comprehend because it arose organically, rather than seeming specifically contrived to create a dilemma between decision theories:
Pretend you live in a world where time travel exists, and Time can create realities with acausal loops, ordinary linear chronology, or some other structure, so long as there is no paradox -- only self-consistent timelines can be generated.
In your timeline, there are prophets. A prophet (known to you to be honest and truly prophetic) tells you that you will commit an act which seems horrendously imprudent or problematic -- an act whose effect will be on the scale of losing $10,000, one you never would have taken ordinarily. Fight the prophecy all you want: it is self-fulfilling, and you definitely live in a timeline where the act gets committed. However, if it weren't for the prophecy being immutably correct, you could have spent $100 and, even having heard the prophecy (even having believed it would be immutable), the probability of you taking that action would be reduced by, say, 50%. So fighting the prophecy by spending $100 would mean that there were 50% fewer self-consistent (possible) worlds where you lost the $10,000, because it's just much less likely for you to end up taking that action if you fight it rather than succumb to it.
You may feel that there is no reason to spend $100 averting a decision that you know you're going to make, and no reason to care about counterfactual worlds where you don't lose the $10,000. But the fact of the matter is that if you could have precommitted to fight the prophecy, you would have: across the worlds where that prophecy could have been presented to you, fighting decreases the average disutility by $10,000 × 0.5 - $100 = $4,900. Refusing to follow a precommitment that you would have made to prevent the exact situation you're now in, precisely because you wouldn't have followed it, seems an obvious failure mode; UDT successfully does the calculation shown above and tells you to fight the prophecy. The simple fact that updateless decision theorists actually do better on average than CDT proponents should tell causal decision theorists that converting to UDT is the causally optimal decision.
(You may also assume that your timeline is the only one that exists, so that the problem isn't further complicated by how much you empathize with your selves in other existing timelines. A worked version of the calculation above follows.)
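For concreteness, here is a minimal sketch of the expected-value comparison above. The dollar figures and the 50% reduction come from the scenario itself; the model structure -- treating "fight" and "don't fight" as policies fixed before the prophecy, and assuming the act otherwise occurs with certainty in the timelines where this prophecy could be delivered -- is an illustrative assumption, not part of the original problem.

```python
# A sketch of the "fight vs. don't fight" comparison, evaluated from the
# updateless (pre-prophecy) perspective. Hypothetical model: among the
# timelines where this prophecy could be delivered, the imprudent act
# otherwise occurs with certainty; paying the fee halves that probability.

LOSS = 10_000  # scale of the imprudent act's effect (from the post)
FEE = 100      # cost of fighting the prophecy (from the post)
CUT = 0.5      # fighting halves the act's probability (from the post)

def average_loss(fight: bool, p_act: float = 1.0) -> float:
    """Average loss across self-consistent timelines under a fixed policy."""
    p = p_act * (CUT if fight else 1.0)
    return p * LOSS + (FEE if fight else 0.0)

print("fight:      ", average_loss(True))   # 0.5 * 10000 + 100 = 5100.0
print("don't fight:", average_loss(False))  # 1.0 * 10000       = 10000.0
print("savings:    ", average_loss(False) - average_loss(True))  # 4900.0
```

The $4,900 gap is exactly the average-disutility reduction computed above: the updateless comparison favors fighting, even though after hearing the prophecy you know the act will occur in your own timeline.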