Any puzzlement we feel when reading such thought experiments would, I suspect, evaporate if we paid more attention to pragmatics.
The set-up of the scenario ("Suppose that Omega, etc.") presupposes some things. The question "What do you do?" presupposes other things. Not too surprisingly, these two sets of presuppositions are in conflict.
Specifically, the question "What do you do" presupposes, as parts of its conditions of felicity, that it follows a set-up in which all of the relevant facts have been presented. There is no room left to spring further facts on you later, and we would regard that as cheating. ("You will in fact give $5 to Omega because he has slipped a drug into your drink which causes you to do whatever he suggests you will do!")
The presuppositions of "What do you do" lead us to assume that we are going about our normal lives, when suddenly some guy appears before us, introduces himself as Omega, says "You will now give me $5", and looks at us expectantly. Whereupon we nod politely (or maybe say something less polite), and go on our way. From which all we can deduce is that this wasn't in fact the Omega described above.
If we agree to treat "Omega predicts X" as being equivalent to "X is true", then "Suppose Omega predicts that you'll give it $5" means "Suppose that you'll give Omega $5". Then, the question
Suppose that Omega, as defined above, appears before you and says that it predicted you will give it $5. What do you do? If Omega is a perfect predictor, and it predicted you will give it $5... will you give it $5?
becomes
Suppose that you will give Omega $5. What will you do?
If whenever Omega predicts I will give it $5, I don't give it $5, then I will never observe Omega predicting I will give it $5, which I don't want to happen. Therefore, I don't give the $5. If Omega makes the prediction anyways, this is a problem with Omega, not my decision.
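To make that reasoning concrete, here is a minimal sketch (my own framing; the policy and function names are invented for illustration): an agent whose policy is never to pay is simply never shown the prediction by a perfect predictor.

```python
# Illustrative sketch only: a perfect predictor never announces a prediction
# that its model of the agent says will come out false.

def my_policy(omega_announced_prediction):
    """Policy: refuse to pay no matter what Omega announces."""
    return "keep $5"

def perfect_omega_would_announce(policy):
    """Omega announces 'you will give me $5' only if the policy,
    on hearing that announcement, actually pays."""
    return policy(True) == "pay $5"

# An agent with this policy never observes the prediction being made.
print(perfect_omega_would_announce(my_policy))  # False
```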
I like this article, but agree the title is off. Perhaps "My Fundamental Question about Omega" or even "Omega: I Just Don't Get It" would be more karma-encouraging. I suspect that at least some people (not me) are taking the current title to mean that you have some sort of new mathematical proof about TDT and then are voting you down in disappointment when they see this. ;-)
[Edit to add, for latecomers: the post I'm replying to was originally titled "The Fundamental Problem Behind Omega"]
This question is essentially the same as saying, "If you have a good reason to give Omega $5 then you will give Omega $5."
The statement also seems to be just like, "If Omega has good reason to predict that you will give it $5, you will give it $5."
Suppose that Omega, as defined above, appears before you and says that it predicted you will give it $5. What do you do?
Maul the prick with a sock filled with 500 pennies.
Omega has appeared to us inside of puzzles, games, and questions. The basic concept behind Omega is that it is (a) a perfect predictor and (b) not malevolent. The practical implications behind these points are that (a) it doesn't make mistakes and (b) you can trust its motives in the sense that it really, honestly doesn't care about you. This bugger is True Neutral and is good at it.
(a) is correct. (b) does not apply: in many cases Omega is a benefactor, but it can also be used in scenarios where Omega causes a net harm. The important point is that Omega is perfectly honest; the rules of the scenario are exactly what Omega says they are.
This may be too trivial for here, but I just watched a Derren Brown show on Channel 4. I think it's very likely that he could do a stage show in which he plays the part of Omega and consistently guesses correctly, and if that were to happen, I'd love to know whether those who one-box or two-box when faced with Omega would make the same decision when faced with Derren Brown. I would one-box.
F = the facts about you and your situation.
OP = Omega's prediction.
YD = Your decision.
F --> OP
F --> YD
Your decision does not bootstrap itself out of nothing; it is a function of F. All causality here is forwards in time. By the definition of Omega, OP and YD always match, and the causality chain is self-consistent, for a single timeline. Most confusion that I have seen around Omega or Newcomb seems to be confusion about at least one of these things.
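For concreteness, here is a minimal sketch of that structure (my own illustration; the facts dictionary and function names are invented): both OP and YD are computed forward from the same F, so they always match on a single timeline.

```python
# Illustrative sketch: Omega's prediction (OP) and your decision (YD) are both
# functions of the prior facts F; causality only runs forward, and by the
# definition of Omega the two always match.

def your_decision(facts):            # YD as a function of F
    return "give $5" if facts["would_pay_if_asked"] else "refuse"

def omegas_prediction(facts):        # OP as a function of the same F
    return your_decision(facts)      # a perfect predictor computes the same answer

for facts in ({"would_pay_if_asked": True}, {"would_pay_if_asked": False}):
    assert omegas_prediction(facts) == your_decision(facts)  # OP == YD, consistently
```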
My answer is that you will give Omega $5. If you don't, Omega wouldn't have made the prediction. If Omega made the prediction AND you don't give $5, then the definition of Omega is flawed and we have to redefine Omega.
I agree with that. I don't expect a perfect predictor to make that prediction, though, but if it were made, then I'd find myself handing over the $5 for some reason or other.
Actually if Omega literally materialized out of thin air before me, I would be amazed and consider him a very powerful and perhaps supernatural entity, so would probably pay him just to stay on his good side. Depending on how literally we take the "Omega appears" part of this thought experiment, it may not be as absurd as it seems.
Even if Omega just steps out of a taxi or whatever, some people in some circumstances would pay him. The Jim Carrey movie "Yes Man" is supposedly based on a true story of someone who decided to say yes to everything, and had very good results. Omega would only appear to such people.
I had this sitting in my drafts folder and noticed another long discussion about two-boxing versus one-boxing and realized that the next step in the conversation was similar to the point I was trying to make here.
In the original statement of Newcomb's Paradox, it was stated that Omega is "almost certainly" correct. When did Omega go from being "almost certainly" correct to an arbiter of absolute truth?
Suppose that Omega, as defined above, appears before you and says that it predicted you will give it $5. What do you do? If Omega is a perfect predictor, and it predicted you will give it $5... will you give it $5?
A mugger will soon come up to me with a gun and make me choose between my life and $5 for his buddy Omega. That's my prediction.
I need to ask: Is this post wrong? Not, is this post stupid or boring or whatever. Is it wrong?
As best as I can tell, there are a handful of objections to the post itself, but there seems to be mostly agreement with its conclusion.
The two main objections are these:
The basic concept behind Omega is that it is (a) a perfect predictor
I disagree. Omega can have various properties as needed to simplify various thought experiments, but for the purposes of Newcomb-like problems Omega is a very good predictor, one that may even have a perfect record, yet is not a perfect predictor in the sense of being infallible in principle.
If Omega were a perfect predictor, the whole dilemma inherent in Newcomb-like problems would cease to exist, and that would short-circuit the entire point of posing those types of problems.
This invokes all sorts of assumptions about choice and free-will, but in terms of phrasing the question these assumptions do not matter.
I would recommend skipping ahead in the sequences to http://wiki.lesswrong.com/wiki/Free_will_(solution)
Are you postulating that Omega never lies? You didn't mention this in your post, but without it your problem is trivial.
If Omega never lies, and if Omega makes all predictions by running perfect simulations, then the scenario you gave is inconsistent. For Omega to predict that you will give it $5 after being told that you will give it $5, it must run a simulation of you in which it tells you that it has predicted that you will give it $5. But since it runs this simulation before making the prediction, Omega is lying in the simulation.
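A toy way to see that inconsistency (my own construction, not the commenter's code; the function names are invented): when Omega simulates you hearing "I predicted you will give me $5", no such prediction exists yet, so the simulated announcement is false at the moment it is made.

```python
# Illustrative sketch of the lying-in-the-simulation problem for a
# simulation-based Omega.

prediction_exists = False  # when the simulation runs, no prediction has been made yet

def simulated_you(announcement):
    """The simulated you hears the announcement and (say) decides to pay."""
    # Inside the simulation the announcement claims a prediction already exists,
    # but prediction_exists is still False: the simulated Omega is lying.
    assert announcement == "I predicted you will give me $5" and not prediction_exists
    return "give $5"

# Omega only commits to the prediction after the simulation says you would pay:
if simulated_you("I predicted you will give me $5") == "give $5":
    prediction_exists = True
```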
Omega doesn't have to simulate people. It just has to know. For example, I know that if Omega says to you "Please accept a million dollars" you'll take it. I didn't have to simulate you or Omega to know that.
I don't think Omega is a perfect predictor or benevolent. (Edit: or neutral/'not malevolent'. He may well be malevolent, but a million dollars is a million dollars. :-) )
Omega doesn't lie and is very powerful and smart. Sometimes he predicts wrongly. He only says something will happen if he is certain of his prediction. If he is at all uncertain, he will say only that he predicted it. (He may also say he predicted it when he is certain, as that is also true.)
"Perfect predictor" leads us somewhat astray. "Bloody good predictor" would be enough (same reason to avoid probabilites 1 and 0, except as a shorthand).
Then if Omega shows up and predicts you will give it $5, and you don't feel like it, don't. Omega made a mistake, which is possible, as he's only nearly perfect.
Could Omega microwave a burrito so hot that he himself could not eat it?
and my personal favorite: http://www.smbc-comics.com/index.php?db=comics&id=1778#comic
Omega has appeared to us inside of puzzles, games, and questions. The basic concept behind Omega is that it is (a) a perfect predictor and (b) not malevolent. The practical implications behind these points are that (a) it doesn't make mistakes and (b) you can trust its motives in the sense that it really, honestly doesn't care about you. This bugger is True Neutral and is good at it. And it doesn't lie.
A quick peek at Omega's presence on LessWrong reveals Newcomb's problem and Counterfactual Mugging as the most prominent examples. For those that missed them, other articles include Bead Jars and The Lifespan Dilemma.
Counterfactual Mugging was the most annoying for me, however, because I thought the answer was completely obvious, and apparently it isn't. Instead of going around in circles with a complicated scenario, I decided to find a simpler version that reveals what I consider to be the fundamental confusion about Omega.
Suppose that Omega, as defined above, appears before you and says that it predicted you will give it $5. What do you do? If Omega is a perfect predictor, and it predicted you will give it $5... will you give it $5?
The answer to this question is probably obvious but I am curious if we all end up with the same obvious answer.
The fundamental problem behind Omega is how to resolve a claim by a perfect predictor that includes a decision you and you alone are responsible for making. This invokes all sorts of assumptions about choice and free-will, but in terms of phrasing the question these assumptions do not matter. I care about how you will act. What action will you take? However you label the source of these actions is your prerogative. The question doesn't care how you got there; it cares about the answer.
My answer is that you will give Omega $5. If you don't, Omega wouldn't have made the prediction. If Omega made the prediction AND you don't give $5, then the definition of Omega is flawed and we have to redefine Omega.
A possible objection to the scenario is that the prediction itself is impossible to make: if Omega is a perfect predictor, it follows that it would never make an impossible prediction, and the prediction "you will give Omega $5" is impossible. This objection fails, however, as long as you can think of at least one scenario where you have a good reason to give Omega $5. Omega would show up in that scenario and ask for the $5.
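To make that concrete, here is a minimal sketch (the scenarios and responses below are invented for illustration, not taken from the post): as long as at least one scenario exists in which you would pay, Omega can simply appear in that one.

```python
# Illustrative sketch: Omega, knowing how you would respond in each scenario,
# only shows up and makes the prediction where it will be borne out.

scenarios = {
    "Omega appears with no further context": "refuse",
    "Omega offers $10 in exchange for the $5": "give $5",
    "Omega presents a long, persuasive argument": "give $5",
}

viable = [s for s, response in scenarios.items() if response == "give $5"]

# The prediction is impossible only if no such scenario exists.
print("Omega shows up when:", viable[0] if viable else "never")
```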
If this scenario includes a long argument about why you should give it $5, so be it. If it means Omega gives you $10 in return, so be it. But it doesn't matter for the sake of the question. It matters for the answer, but the question doesn't need to include these details because the underlying problem is still the same. Omega made a prediction and now you need to act. All of the excuses and whining and arguing will eventually end with you handing Omega $5. Omega's prediction will have included all of this bickering.
This question is essentially the same as saying, "If you have a good reason to give Omega $5 then you will give Omega $5." It should be a completely uninteresting, obvious question. It holds some implications on other scenarios involving Omega but those are for another time. Those implications should have no bearing on the answer to this question.