Do you see how this scenario rules out the possibility of me deciding rationally?
EDIT: In fact, let me explain now, before you answer. Give me a second and I'll re-edit.
EDIT2: If the rational decision is to two-box, and Omega has set me to one-box, then I must not be deciding rationally. Correct?
If the rational decision is to one-box, and Omega has set me to two-box, then I must not be deciding rationally. Correct?
Now, assuming I will not decide rationally, as I know I will not, I need waste no time thinking. I'll do whichever I feel like.
You can substitute "the laws of physics" for "Omega" in your argument: if it proves that you will not decide rationally in the Omega situation, then it proves that you will not decide --anything-- rationally in real life.
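To make the one-box/two-box tension concrete, here is a minimal sketch of the expected payoffs under the standard Newcomb setup (these payoff values and the accuracy parameter are assumptions of mine; the comment above does not state them): the transparent box always holds $1,000, and the opaque box holds $1,000,000 if and only if Omega predicted one-boxing, with Omega predicting correctly with probability p.

```python
def expected_value(choice: str, p: float) -> float:
    """Evidential expected payoff of a choice, given predictor accuracy p.

    Assumed payoffs (not from the original comment):
    transparent box = $1,000; opaque box = $1,000,000 iff
    Omega predicted one-boxing.
    """
    if choice == "one-box":
        # With probability p, Omega foresaw one-boxing and filled the opaque box.
        return p * 1_000_000
    elif choice == "two-box":
        # The transparent box is guaranteed; the opaque box pays out
        # only when Omega mispredicted.
        return 1_000 + (1 - p) * 1_000_000
    raise ValueError(f"unknown choice: {choice}")

for p in (0.5, 0.9, 0.99):
    print(p, expected_value("one-box", p), expected_value("two-box", p))
```

With an accurate predictor (p near 1) one-boxing dominates in expectation, while a coin-flipping predictor (p = 0.5) makes two-boxing strictly better, which is one way of seeing why the problem splits causal and evidential decision theorists.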
This is part of a sequence titled "An introduction to decision theory". The previous post was Newcomb's Problem: A problem for Causal Decision Theories
For various reasons I've decided to finish this sequence on a separate blog. This is principally because a large number of people seemed to feel that this sequence either wasn't up to the Less Wrong standard or was simply covering ground that had already been covered on Less Wrong.
The decision to continue it on another blog, rather than simply discontinue it, came down to the fact that other people seemed to feel the sequence had value. Those people can continue reading it at "The Smoking Lesion: A problem for evidential decision theory".
Alternatively, there is a sequence index available: Less Wrong and decision theory: sequence index