Omega has appeared to us in puzzles, games, and questions. The basic concept behind Omega is that it is (a) a perfect predictor and (b) not malevolent. The practical implications of these points are that (a) it doesn't make mistakes and (b) you can trust its motives in the sense that it really, honestly doesn't care about you. This bugger is True Neutral and is good at it. And it doesn't lie.
A quick peek at Omega's presence on LessWrong reveals Newcomb's problem and Counterfactual Mugging as the most prominent examples. For those who missed them, other articles include Bead Jars and The Lifespan Dilemma.
Counterfactual Mugging was the most annoying for me, however, because I thought the answer was completely obvious and apparently it isn't. Instead of going around in circles with a complicated scenario, I decided to find a simpler version that reveals what I consider to be the fundamental confusion about Omega.
Suppose that Omega, as defined above, appears before you and says that it predicted you will give it $5. What do you do? If Omega is a perfect predictor, and it predicted you will give it $5... will you give it $5?
The answer to this question is probably obvious, but I am curious whether we all end up with the same obvious answer.
The fundamental problem behind Omega is how to resolve a claim by a perfect predictor that includes a decision you and you alone are responsible for making. This invokes all sorts of assumptions about choice and free will, but in terms of phrasing the question these assumptions do not matter. I care about how you will act. What action will you take? However you label the source of these actions is your prerogative. The question doesn't care how you got there; it cares about the answer.
My answer is that you will give Omega $5. If you weren't going to, Omega wouldn't have made the prediction. If Omega made the prediction AND you don't give $5, then the definition of Omega is flawed and we have to redefine Omega.
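To make the structure of that argument concrete, here is a minimal sketch of my own (not part of the original scenario, and the variable names are invented for the example) that treats "perfect predictor" as a consistency constraint on possible world-histories:

```python
# Each candidate world-history records whether Omega predicts you will pay $5
# and whether you in fact pay $5.
histories = [
    {"omega_predicts_pay": True,  "you_pay": True},
    {"omega_predicts_pay": True,  "you_pay": False},
    {"omega_predicts_pay": False, "you_pay": True},
    {"omega_predicts_pay": False, "you_pay": False},
]

# A perfect predictor rules out every history in which the prediction
# and the action disagree.
consistent = [h for h in histories if h["omega_predicts_pay"] == h["you_pay"]]

# Condition on the set-up: Omega has in fact predicted that you will pay.
remaining = [h for h in consistent if h["omega_predicts_pay"]]

print(remaining)  # the only history left is the one in which you hand over the $5
```

Nothing in the sketch says how you come to hand over the money; it only shows that, given the set-up, the histories in which you don't are already off the table.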
A possible objection to the scenario is that the prediction itself is impossible to make: if Omega is a perfect predictor, it follows that it would never make an impossible prediction, and the prediction "you will give Omega $5" is impossible. This objection is invalid, however, as long as you can think of at least one scenario in which you have a good reason to give Omega $5. Omega would show up in that scenario and ask for $5.
If this scenario includes a long argument about why you should give it $5, so be it. If it means Omega gives you $10 in return, so be it. But it doesn't matter for the sake of the question. It matters for the answer, but the question doesn't need to include these details because the underlying problem is still the same. Omega made a prediction and now you need to act. All of the excuses and whining and arguing will eventually end with you handing Omega $5. Omega's prediction will have included all of this bickering.
This question is essentially the same as saying, "If you have a good reason to give Omega $5, then you will give Omega $5." It should be a completely uninteresting, obvious question. It has some implications for other scenarios involving Omega, but those are for another time. Those implications should have no bearing on the answer to this question.
Any puzzlement we feel when reading such thought experiments would, I suspect, evaporate if we paid more attention to pragmatics.
The set-up of the scenario ("Suppose that Omega, etc.") presupposes some things. The question "What do you do?" presupposes other things. Not too surprisingly, these two sets of presuppositions are in conflict.
Specifically, the question "What do you do" presupposes, as part of its felicity conditions, that it follows a set-up in which all of the relevant facts have been presented. There is no room left to spring further facts on you later, and we would regard that as cheating. ("You will in fact give $5 to Omega because he has slipped a drug into your drink which causes you to do whatever he suggests you will do!")
The presuppositions of "What do you do" lead us to assume that we are going about our normal lives, when suddenly some guy appears before us, introduces himself as Omega, says "You will now give me $5", and looks at us expectantly. Whereupon we nod politely (or maybe say something less polite), and go on our way. From which all we can deduce is that this wasn't in fact the Omega about which the Tales of Newcomb were written, since he's just been shown up as an imperfect predictor.
The presuppositions carried by "Omega is a perfect predictor" are of an entirely different order. Logically, whatever predictions Omega makes will in fact turn out to have been correct. But these presuppositions simply don't match up with those of the "What do you do?" question, in which what determines your behaviour is only the ordinary facts of the world as you know it, plus whatever facts are contained in the scenario that constitutes the set-up of the question.
If Omega is a perfect predictor, all we have is a possible world history, where Omega at some time t appears, makes a prediction, and at some time t' that prediction has been fulfilled. There is no call to ask a "What do you do" question. The answers are laid out in the specification of the world history.
One-boxing is the correct choice in the original problem, because we are asked to say in which of two world-histories we walk away with $1M, and we are given the stipulation that there exist no world-histories to choose from in which we walk away with both the $1M and the two boxes. We're just led astray by the pragmatics of "What do you do?".
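For concreteness, here is a small sketch of my own (using the standard Newcomb amounts of $1,000 in the transparent box and $1,000,000 in the opaque box, which the comment doesn't spell out) of the payoffs once the perfect-predictor stipulation ties Omega's prediction to the choice actually made:

```python
def payoff(one_box: bool) -> int:
    # Perfect prediction: Omega's prediction matches the choice actually made.
    omega_predicted_one_box = one_box
    opaque_box = 1_000_000 if omega_predicted_one_box else 0
    transparent_box = 1_000
    return opaque_box if one_box else opaque_box + transparent_box

print(payoff(one_box=True))   # 1000000
print(payoff(one_box=False))  # 1000
```

With the "two boxes and $1M" histories ruled out by stipulation, one-boxing picks out the world-history in which you walk away with the $1M.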
[EDIT: in case it isn't clear, and because you said you were curious what people thought the obvious answer was, I think the obvious answer is "get lost"; similarly, the obvious answer to the original problem is "I take the two boxes". The obvious answer just happens to be the incorrect choice. I have changed the paragraph about one-boxing above to say "the correct choice" instead of "the correct answer".
Also, in the previous paragraph I assume I want the $1M, and it is that which makes one-boxing the correct choice. Of course it's presented as a free-will question, that is, one in which more than one possible world-history is available, and so I can't rule out unlikely worlds in which I want the $1M but mistakenly pick the wrong world-history.]
Recording an oops: when I wrote the above I didn't really understand Newcomb's Problem. I retract pretty much all of the above comment.
I'm now partway through Gary Drescher's Good and Real and glad that it's given me a better handle on Newcomb, and that I can now classify my mistake (in my above description of the "original problem") as "evidentialist".