Omega has appeared to us inside of puzzles, games, and questions. The basic concept behind Omega is that it is (a) a perfect predictor and (b) not malevolent. The practical implications behind these points are that (a) it doesn't make mistakes and (b) you can trust its motives in the sense that it really, honestly doesn't care about you. This bugger is True Neutral and is good at it. And it doesn't lie.

A quick peek at Omega's presence on LessWrong reveals Newcomb's problem and Counterfactual Mugging as the most prominent examples. For those that missed them, other articles include Bead Jars and The Lifespan Dilemma.

Counterfactual Mugging was the most annoying for me, however, because I thought the answer was completely obvious and apparently it isn't. Instead of going around in circles with a complicated scenario, I decided to find a simpler version that reveals what I consider to be the fundamental confusion about Omega.

Suppose that Omega, as defined above, appears before you and says that it predicted you will give it $5. What do you do? If Omega is a perfect predictor, and it predicted you will give it $5... will you give it $5?

The answer to this question is probably obvious but I am curious if we all end up with the same obvious answer.

The fundamental problem behind Omega is how to resolve a claim by a perfect predictor that includes a decision you and you alone are responsible for making. This invokes all sorts of assumptions about choice and free-will, but in terms of phrasing the question these assumptions do not matter. I care about how you will act. What action will you take? However you label the source of these actions is your prerogative. The question doesn't care how you got there; it cares about the answer.

My answer is that you will give Omega $5. If you weren't going to, Omega wouldn't have made the prediction. If Omega made the prediction AND you don't give $5, then the definition of Omega is flawed and we have to redefine Omega.

A possible objection to the scenario is that the prediction itself is impossible to make: if Omega is a perfect predictor, it follows that it would never make an impossible prediction, and the prediction "you will give Omega $5" is (the objection goes) impossible. This objection fails, however, as long as you can think of at least one scenario in which you have a good reason to give Omega $5. Omega would show up in that scenario and ask for $5.

If this scenario includes a long argument about why you should give it $5, so be it. If it means Omega gives you $10 in return, so be it. But it doesn't matter for the sake of the question. It matters for the answer, but the question doesn't need to include these details because the underlying problem is still the same. Omega made a prediction and now you need to act. All of the excuses and whining and arguing will eventually end with you handing Omega $5. Omega's prediction will have included all of this bickering.

This question is essentially the same as saying, "If you have a good reason to give Omega $5 then you will give Omega $5." It should be a completely uninteresting, obvious question. It has some implications for other scenarios involving Omega, but those are for another time. Those implications should have no bearing on the answer to this question.
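To make the filtering argument concrete, here is a minimal Python sketch; the scenario list and the stand-in decision function are invented for illustration and are not part of the post. A perfect predictor only makes the prediction in scenarios where it comes true, so conditional on the prediction being made, you pay.

    # Toy model: Omega surveys possible scenarios and only appears (predicting
    # "you will give me $5") in the ones where you do, in fact, end up paying.
    def you_pay(scenario):
        # Stand-in for however you actually behave; the details don't matter.
        return scenario in ("omega offers $10 in return", "omega argues persuasively")

    scenarios = [
        "omega offers $10 in return",
        "omega argues persuasively",
        "omega just asks and you shrug",
    ]

    # Omega's "selection": the only worlds in which the prediction is announced.
    omega_appears_in = [s for s in scenarios if you_pay(s)]

    # In every world where the prediction is actually made, it comes true.
    assert all(you_pay(s) for s in omega_appears_in)
    print(omega_appears_in)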

153 comments

Any puzzlement we feel when reading such thought experiments would, I suspect, evaporate if we paid more attention to pragmatics.

The set-up of the scenario ("Suppose that Omega, etc.") presupposes some things. The question "What do you do?" presupposes other things. Not too surprisingly, these two sets of presuppositions are in conflict.

Specifically, the question "What do you do" presupposes, as parts of its conditions of felicity, that it follows a set-up in which all of the relevant facts have been presented. There is no room left to spring further facts on you later, and we would regard that as cheating. ("You will in fact give $5 to Omega because he has slipped a drug into your drink which causes you to do whatever he suggests you will do!")

The presuppositions of "What do you do" lead us to assume that we are going about our normal lives, when suddenly some guy appears before us, introduces himself as Omega, says "You will now give me $5", and looks at us expectantly. Whereupon we nod politely (or maybe say something less polite), and go on our way. From which all we can deduce is that this wasn't in fact the Omega abo...

4Morendil14y
Recording an oops: when I wrote the above I didn't really understand Newcomb's Problem. I retract pretty much all of the above comment. I'm now partway through Gary Drescher's Good and Real and glad that it's given me a better handle on Newcomb, and that I can now classify my mistake (in my above description of the "original problem") as "evidentialist".
0MrHen14y
I think I understand your point. A reiteration in my words: The question "What do you do?" implies that the answer is not locked in. If a perfect predictor has made a prediction about what I will do, then the question "What do you do?" is nonsensical. Am I close? EDIT: No, this was not a correct interpretation of Morendil's post. See below. EDIT2: And it has nothing to do with what I think is true.
9Eliezer Yudkowsky14y
If you don't know what the prediction is, it's not nonsensical. You still have to decide what to do. If you do know what the prediction is, then the way in which you react to that prediction determines which prediction you'll hear. For example, if I walk up to someone and say, "I'm good at predicting people in simple problems, I'm truthful, and I predict you'll give me $5," they won't give me anything. Since I know this, I won't make that prediction. If people did decide to give me $5 in this sort of situation, I might well go around making such predictions. You seem to be confused about free will. Keep reading the Sequences and you won't be.
6MrHen14y
I don't know how to respond to this or Morendil's second comment. I feel like I am missing something obvious to everyone else but when I read explanations I feel like they are talking about a completely unrelated topic. Things like this:

Confuse me because as far as I can tell, this has nothing to do with free will. I don't care about free will. I care about what happens when a perfect predictor enters the room. Is such a thing just completely impossible? I wouldn't have expected the answer to this to be Yes.

Okay, yeah, so restrict yourself only to the situations where people will give you the $5 even though you told them the prediction. This is a good example of my frustration. I feel like your response is completely irrelevant. Experience tells me this is highly unlikely.

So what am I missing? Some key component to free will? A bad definition of "perfect predictor"? What? To me the scenario seems to be as simple as: If Omega predicts X, X will happen. If X wouldn't have happened, Omega wouldn't predict X. I don't see how including "knowledge of the prediction" into X makes any difference. I don't see how whatever definition of free will you are using makes any difference.

"Go read the Sequences" is fair enough, but I wouldn't mind a hint as to what I am supposed to be looking for. "Free will" doesn't satiate my curiosity. Can you at least tell me why Free Will matters here? Is it something as simple as, "You cannot predict past a free will choice?" As it is right now, I haven't learned anything other than, "You're wrong."
3SilasBarta14y
I sympathize with your frustration at those who point you to references without adequate functional summaries. Unfortunately, I struggle with some of the same problems you're asking about.

Still, I can point you to the causal map that Eliezer_Yudkowsky believes captures this problem accurately (ETA: That means Newcomb's problem, though this discussion started off on a different one). The final diagram in this post shows how he views it. He justifies this causal model by the constraints of the problem, which he states here. Also, here's my expanded, modified network to account for a few other things.

ETA: Bolding was irritating, so I've decided to separately list what his criteria for a causal map are, given the problem statement. (The implication for the causal graph follows each one in parentheses.)

* Must have nodes corresponding to logical uncertainty (Self-explanatory)
* Omega's decision on box B correlates to our decision of which boxes to take (Box decision and Omega decision are d-connected)
* Omega's act lies in the past. (Actions after Omega's act are uncorrelated with actions before Omega's act, once you know Omega's act.)
* Omega's act is not directly influencing us (No causal arrow directly from Omega to us/our choice.)
* We have not found any other property which would screen off this uncertainty even when we inspect our own source code / psychology in advance of knowing our actual decision, and that our computation is the only direct ancestor of our logical output. (Seem to be saying the same thing: arrow from computation directly to logical output.)
* Our computation is the only direct ancestor of our logical output. (Only arrow pointing to our logical output comes from our computation.)
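The listed constraints can be sanity-checked against a toy encoding of such a graph. The node names below are my own stand-ins, not the exact diagram from the linked post:

    # A rough encoding of the causal structure described above: one "computation"
    # node is the only direct parent of your decision, and it also feeds Omega's
    # prediction, so the two correlate without Omega directly influencing you.
    parents = {
        "YourComputation": [],
        "OmegaPrediction": ["YourComputation"],
        "YourDecision":    ["YourComputation"],
        "BoxBContents":    ["OmegaPrediction"],
        "Payoff":          ["YourDecision", "BoxBContents"],
    }

    # No causal arrow directly from Omega to us/our choice:
    assert "OmegaPrediction" not in parents["YourDecision"]
    # Our computation is the only direct ancestor of our logical output:
    assert parents["YourDecision"] == ["YourComputation"]
    # The prediction and the decision share an ancestor, so they can correlate:
    assert set(parents["OmegaPrediction"]) & set(parents["YourDecision"])
    print("toy graph satisfies the listed constraints")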
1MrHen14y
Ah, okay, thanks. I can start reading those, then.
2Jack14y
I think the way you phrased some things in the OP and the fact that you called the post "The Fundamental Problem Behind Omega" has confused a lot of people. Afaict your position is exactly right... but the title suggests a problem. What is that problem?!
2MrHen14y
"Problem" as in "Puzzle" not "Problem" as in "Broken Piece." Would changing the title to Puzzle help?
5Jack14y
So the fundamental puzzle of Omega is: what do you do if he tells you he has predicted you will give him $5? And the answer is, "Whatever you want to do, but you want to give him $5." I guess I'm missing the significance of all this.
0SilasBarta14y
Yes, but it's also clear that that would be a non-problem. What I mean is, there is no decision to make in such a problem, because, by assumption, the "you" referred to is a "you" that will give $5. There's no need to think about what you "would" do because that's already known. But likewise, in Newcomb's problem, the same thing is happening: by assumption, there is no decision left to make. At most, I can "decide" right now, so I make a good choice when the problem comes up, but for the problem as stated, my decision has already been made. (Then again, it sounds like I'm making the error of fatalism there, but I'm not sure.)
0JGWeissman14y
The problem I see is that then you (together with Omega's prediction about you) becomes something like self-PA.
0MrHen14y
I thought it was obvious, but people are disagreeing with me, so... I don't know what that means.
1Sideways14y
When a human brain makes a decision, certain computations take place within it and produce the result. Those computations can be perfectly simulated by a sufficiently-more-powerful brain, e.g. Omega. Once Omega has perfectly simulated you for the relevant time, he can make perfect predictions concerning you.

Perfectly simulating any computation requires at least as many resources as the computation itself (1), so AFAICT it's impossible for anything, even Omega, to simulate itself perfectly. So a general "perfect predictor" may be impossible. But in this scenario, Omega doesn't have to be a general perfect predictor; it only has to be a perfect predictor of you.

From Omega's perspective, after running the simulation, your actions are determined. But you don't have access to Omega's simulation, nor could you understand it even if you did. There's no way for you to know what the results of the computations in your brain will be, without actually running them. If I recall the Sequences correctly, something like the previous sentence would be a fair definition of Eliezer's concept of free will.

(1) ETA: On second thought this need not be the case. For example, f(x) = ((x*10)/10)+1 is accurately modeled by f(x) = x+1. Presumably Omega is a "well-formed" mind without any such rent-shirking spandrels.
1Morendil14y
Keep in mind that I might be confused about either free will or Newcomb problems. My first comment above isn't really intended as an explanation of Newcomb's original problem, just an explanation of why they elicit feelings of confusion. My own initial confusion regarding them has (I think) partly evaporated as a result of considering pragmatics, and partly too as a result of reading Julian Barbour's book on timeless physics on top of the relevant LW sequences.
0MrHen14y
Okay. That helps, thanks.
0Eliezer Yudkowsky14y
Sounds like you might be having confusion resulting from circular mental causal models. You've got an arrow from Omega to X. Wrong direction. You want to reason, "If X is likely to happen, Omega will predict X."
9Cyan14y
I believe the text you quote is intended to be interpreted as material implication, not causal arrows.
4MrHen14y
Sure. So, X implies that Omega will predict X.

The four possible states of the universe, where X is "You will give Omega $5 if Y happens" and Y is "Omega appears, tells you it predicted X, and asks you for $5":

1) X is true; Omega does Y
2) X is false; Omega does Y
3) X is true; Omega does not do Y
4) X is false; Omega does not do Y

Number two will not happen because Omega will not predict X when X is false. Omega doesn't even appear in options 3 and 4, so they aren't relevant. The last remaining option is: X is true; Omega does Y. Filling it out: X is "You will give Omega $5 if Omega appears, tells you it predicted X, and asks you for $5."

Hmm... that is interesting. X includes a reference to X, which isn't a problem in language, but could be a problem with the math. The problem is not as simple as putting "you will give Omega $5" in for X because that isn't strictly what Omega is asking. The easiest simplification is to take out the part about Omega telling you it predicted X... but that is such a significant change that I consider it a different puzzle entirely.

Is this your objection?
3pengvado14y
That is an interesting math problem. And the math problem has a solution, which is called a quine. So the self-referentialness of the prediction is not by itself a sufficient objection to your scenario.
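For readers who haven't met one, a quine is a program that prints its own source code, which is the standard demonstration that this kind of self-reference is constructible. A minimal Python example:

    # Prints an exact copy of its own source.
    s = 's = %r\nprint(s %% s)'
    print(s % s)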
1MrHen14y
Nice, thanks.
0Morendil14y
If by locked in you mean, only a subset of all possible world states are available, then yes, your first sentence is on target.

As to the second, it's not really a matter of the question making sense. It's a well-formed English sentence, its meaning is clear, it can be answered, and so on. It is just that the question will reliably induce answers which are answers to something different from the scenario as posed, in which a) Omega is understood to be a perfect predictor, and b) all the relevant facts are only the ordinary state of the world plus a). In your scenario, the answer I want to give - in fact the answer I would give - is "I tell Omega to get lost." I would answer as if you'd asked "What do you want to answer", or "What outcome would you prefer, if you were free to disregard the logical constraints on the scenario."

Suppose I ask you to choose a letter string which conforms to the pattern (B|Z)D?. The letter B is worth $1M and the letter D is worth $1K. You are to choose the best possible string. Clearly the possibilities are BD, ZD, B, Z. Now we prefix the strings with one letter, which gives the length of your choice: 2BD, 2ZD, 1B, 1Z.

The original Newcomb scenario boils down to this: conditional on the string not containing both 2 and B (and not containing both 1 and Z), which string choice has the highest expected value? You're disguising this question, which has an obvious and correct answer of "1B", as another ("What do you do"). It doesn't matter that 2BD has the highest expected value of all. It doesn't matter that there seems to be a "timing" consideration, in which Omega has "already" chosen the second letter in the string, and you're "choosing" the number "afterwards". The information that Omega is a perfect predictor is a logical constraint on the strings that you can pick from, i.e. on the "end states" that you can experience. Your "decision" has to be compatible with one of these end states.
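The string game above is small enough to enumerate directly; a quick Python sketch, with the letter values encoded as described:

    # B is worth $1M, D is worth $1K, Z is worth nothing; the leading digit is
    # just the number of letters (boxes) chosen. The Omega constraint forbids
    # strings containing both 2 and B, and strings containing both 1 and Z.
    VALUES = {"B": 1_000_000, "D": 1_000, "Z": 0}
    candidates = ["2BD", "2ZD", "1B", "1Z"]

    def allowed(s):
        return not ("2" in s and "B" in s) and not ("1" in s and "Z" in s)

    def value(s):
        return sum(VALUES[c] for c in s if c in VALUES)

    best = max((s for s in candidates if allowed(s)), key=value)
    print(best, value(best))  # -> 1B 1000000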
0MrHen14y
Why? I don't understand why the answers are disconnected from the scenario. Why isn't all of this included in the concept of a perfect predictor? So... what if the scenario allows for you to want to give $5? The scenario you are talking about is impossible because Omega wouldn't have asked you in that scenario. It would have been able to predict your response and would have known better than to ask. Hmm. Okay, that makes sense. Are you saying that it doesn't matter for the question, "Which string choice has the highest expected value?" or the question, "What do you do?" My guess is the latter. Okay, but I don't understand how this distinguishes the two questions. If I asked, "What do you do?" what am I asking? Since it's not "Which string scores best?" My impression was that asking, "What do you do?" is asking for a decision between all possible end states. Apparently this was a bad impression?
0Morendil14y
From a standpoint of the psychology of language, when you ask "What do you do", you're asking me to envision a plausible scenario - basically to play a movie in my head. If I can visualize myself two-boxing and somehow defying Omega's prediction, my brain will want to give that answer. When you ask "What do you do", you're talking to the parts of my brain who consider all of 2BD, 2ZD, 1B and 1Z as relevant possibilities (because they have been introduced in the description of the "problem"). If you formalize first then ask me to pick one of 2ZD or 1B, after pointing out that the other possibilities are eliminated by the Omega constraint, I'm more likely to give the correct answer.
1MrHen14y
Oh. Okay, yeah, I guess I wasn't looking for an answer in terms of "What verbal response do you give to my post?" I was looking for an answer strictly in terms of possible scenarios. Is there a better way to convey that than "What do you do?" Or am I still missing something? Or... ?

If we agree to treat "Omega predicts X" as being equivalent to "X is true", then "Suppose Omega predicts that you'll give it $5" means "Suppose that you'll give Omega $5". Then, the question

Suppose that Omega, as defined above, appears before you and says that it predicted you will give it $5. What do you do? If Omega is a perfect predictor, and it predicted you will give it $5... will you give it $5?

becomes

Suppose that you will give Omega $5. What will you do?

7MrHen14y
I have no problems with that. Do you?

If whenever Omega predicts I will give it $5, I don't give it $5, then I will never observe Omega predicting I will give it $5, which I don't want to happen. Therefore, I don't give the $5. If Omega makes the prediction anyways, this is a problem with Omega, not my decision.

2MrHen14y
If Omega asked you for $5 and promised you $10, would you do it?
5JGWeissman14y
Do you mean that Omega explains that it will give me $10 if and only if I give it $5? Then, yes, I would give it $5. I see where this is going, and you are ignoring the conventional implicit "All else being equal". Do you agree that Omega declaring its prediction is not what causes me to give it $5, and that making such predictions does not make the subject a money pump?
2MrHen14y
Yep. This isn't counterfactual mugging and isn't intended to be. The point in this post will apply to counterfactual mugging, but the information in this post will not turn the subject into a money pump. I didn't ignore it. I made this point in the post: The expected response to this post is, "Well, yeah."

I like this article, but agree the title is off. Perhaps "My Fundamental Question about Omega" or even "Omega: I Just Don't Get It" would be more karma-encouraging. I suspect that at least some people (not me) are taking the current title to mean that you have some sort of new mathematical proof about TDT and then are voting you down in disappointment when they see this. ;-)

[Edit to add, for latecomers: the post I'm replying to was originally titled "The Fundamental Problem Behind Omega"]

5MrHen14y
Ooh, I like that much better. Thanks for the tip.

This question is essentially the same as saying, "If you have a good reason to give Omega $5 then you will give Omega $5."

The statement also seems to be just like, "If Omega has good reason to predict that you will give it $5, you will give it $5."

2MrHen14y
Yes.

Suppose that Omega, as defined above, appears before you and says that it predicted you will give it $5. What do you do?

Maul the prick with a sock filled with 500 pennies.

Omega has appeared to us inside of puzzles, games, and questions. The basic concept behind Omega is that it is (a) a perfect predictor and (b) not malevolent. The practical implications behind these points are that (a) it doesn't make mistakes and (b) you can trust its motives in the sense that it really, honestly doesn't care about you. This bugger is True Neutral and is good at it.

(a) is correct. (b) does not apply: in many cases Omega is a benefactor, but Omega can also be used in scenarios where it causes a net harm. The important point is that Omega is perfectly honest; the rules of the scenario are exactly what Omega says they are.

5MrHen14y
Omega is not malevolent in that it isn't out to get you. Not malevolent is different than benevolent.
0JGWeissman14y
Sometimes, Omega is malevolent.
5MrHen14y
For the sake of the point in the article, claiming that Omega is not malevolent cleans up annoying, irrelevant questions. Any application of this point would only apply to non-malevolent Omegas, sure, but I am happy with that. Once we deal with the non-malevolent Omegas we can take care of the malevolent ones. In other words, I am not trying to strictly define Omega. I am trying to find a stepping stone to solving non-malevolent Omega problems. The reason I stated it the way I did in the article is because most of the articles using Omega include some such clause. Solving end cases helps solve all cases.
5JGWeissman14y
You are missing the point of Omega, which is to factor out considerations of uncertainty. Omega is a perfect predictor so that we can be certain that its predictions are accurate. Omega is perfectly honest, and explains the rules of the scenario, so that we can be certain of the rules. We don't have to worry about Omega's motivations at all, because, in a proper Omega scenario, Omega's actions in response to every possible state of the scenario are exactly specified.
2MrHen14y
Right. I used the term "not malevolent" for this. What term would you have used?
3JGWeissman14y
"Has exactly specified behavior" would work.
0MrHen14y
Sure, that works. How about, "(b) has explicitly defined behavior." Does that translate okay?

This may be too trivial for here, but I just watched a Derren Brown show on Channel 4. I think it's very likely that he could do a stage show in which he plays the part of Omega and consistently guesses correctly, and if that were to happen, I'd love to know whether those who one-box or two-box when faced with Omega would make the same decision when faced with Derren Brown. I would one-box.

  • F = Factors that feed into your decision process.
  • OP = Omega's prediction.
  • YD = Your decision.

  • F --> OP

  • F --> YD

Your decision does not bootstrap itself out of nothing; it is a function of F. All causality here is forwards in time. By the definition of Omega, OP and YD always match, and the causality chain is self-consistent, for a single timeline. Most confusion that I have seen around Omega or Newcomb seems to be confusion about at least one of these things.
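A toy sketch of this model may help; the factor names and the decision rule below are invented for illustration. Both the prediction and the decision are computed from the same F, so they always match without any backwards causation.

    def decision(f):
        # Stand-in for however your decision actually depends on the factors F.
        return "pay $5" if f["has_cash"] and f["finds_reason_compelling"] else "refuse"

    def omega_prediction(f):
        # Omega predicts by evaluating the same function of the same factors.
        return decision(f)

    for f in ({"has_cash": True,  "finds_reason_compelling": True},
              {"has_cash": True,  "finds_reason_compelling": False},
              {"has_cash": False, "finds_reason_compelling": True}):
        assert omega_prediction(f) == decision(f)  # OP and YD always match
    print("prediction and decision agree in every case")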

3MrHen14y
Yeah, I agree with that. The catch is that Omega isn't going to show up if it predicts you aren't going to pay. If it showed up, then it must have predicted you are going to pay.
0roland14y
Ooops, as soon as Omega tells you his prediction the above has to change because now there is a new element in F.
2AndyWood14y
I think this is the same self-referential problem Mr. Hen calls out in this comment. I think I agree with Sly. If Omega spilling the beans influences your decision, then it is part of F, and therefore Omega must model that. If Omega fails to predict that revealing his prediction will cause you to act contrarily, then he fails at being Omega. I can't tell whether this makes Omega logically impossible or not. Anyone?
2byrnema14y
This doesn't make Omega logically impossible unless we make him tell his prediction. (In order to be truthful, Omega would only tell a prediction that is stable upon telling, and there may not be one.)
0whpearson14y
I don't think it makes Omega logically impossible in all situations; I think it depends upon whether F-->YD (or a function based on it that can be recursively applied) has a fixed point or not. I'll try and hash it out tomorrow in Haskell. But now it is late. See also the fixed point combinator if you want to play along at home.
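In the spirit of the Haskell experiment promised above, here is a rough Python sketch of the fixed-point question; the toy agents are my own inventions. Omega can only truthfully announce a prediction that is stable when told, and for some agents no such announcement exists.

    def stable_predictions(react, options=("pay", "refuse")):
        # Announcements that survive being told: react(p) == p.
        return [p for p in options if react(p) == p]

    # A contrarian who does the opposite of whatever is announced: no fixed point.
    contrarian = lambda p: "refuse" if p == "pay" else "pay"
    # Someone who pays whenever told they will: "pay" is a fixed point.
    suggestible = lambda p: "pay"

    print(stable_predictions(contrarian))   # [] -> Omega can't announce anything
    print(stable_predictions(suggestible))  # ['pay']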
0Sly14y
I would assume that Omega telling you his prediction was already factored into the Omega Prediction F.

My answer is that you will give Omega $5. If you weren't going to, Omega wouldn't have made the prediction. If Omega made the prediction AND you don't give $5, then the definition of Omega is flawed and we have to redefine Omega.

I agree with that. I don't expect a perfect predictor to make that prediction, though, but if it were made, then I'd find myself handing over the $5 for some reason or other.

2Stuart_Armstrong14y
Yes, you would expect that you would find yourself handing over the money. If told "Omega will soon predict that you will give him $5", then you divide the universe into two categories - I will give over $5, or I won't - and assign much greater probability to the first option. But that is not a reason to give him $5 if you otherwise wouldn't. It's a reason to expect that there will be compelling reasons to make you do it - but if these compelling reasons don't materialise, there is no reason for you to act as if they were there.
0CronoDAS14y
Yes, I agree with this.

Actually if Omega literally materialized out of thin air before me, I would be amazed and consider him a very powerful and perhaps supernatural entity, so would probably pay him just to stay on his good side. Depending on how literally we take the "Omega appears" part of this thought experiment, it may not be as absurd as it seems.

Even if Omega just steps out of a taxi or whatever, some people in some circumstances would pay him. The Jim Carrey movie "Yes Man" is supposedly based on a true story of someone who decided to say yes to everything, and had very good results. Omega would only appear to such people.

I had this sitting in my drafts folder and noticed another long discussion about two-boxing versus one-boxing and realized that the next step in the conversation was similar to the point I was trying to make here.


In the original statement of Newcomb's Paradox, it was stated that Omega is "almost certainly" correct. When did Omega go from being "almost certainly" correct to an arbiter of absolute truth?

3Morendil14y
I think that's more of a simplifying assumption. I've seen statements of the puzzle with varying degrees of certainty in Omega's predictions (total, "almost certain", 99%, etc.). I'm pretty sure you could use, instead of Omega, a human psychologist with a 90% track record in predicting two-boxers (and predicting if you'll use a coinflip just to tick her off). The expected value of two-boxing vs one-boxing then requires a more sophisticated calculation. But I don't think that changes the structure of the puzzle.
4Kevin14y
I think it's a good simplifying assumption, but I wonder how much of the confusion that results in philosophers deciding to one box is not understanding what a perfect predictor is. Are there any defenses of one boxing from people that believe Omega is a perfect predictor?
0Stuart_Armstrong14y
It changes the structure tremendously. A world in which Omega predicts you will give it $5 and you don't suddenly has a non-zero probability. If Omega is perfect, you may as well hand over the $5 right now. If he isn't, you still know that most likely you will give over the $5, but you might as well wait around to see why. And the decision "I will not hand over $5" is no longer inconsistent.
2DanielVarga14y
That feels just like being mugged. I KNOW that eventually I will give Omega $5, but I prefer it not to happen by some unforeseeable process that may cause irreparable damage to me, like epileptic seizure or lightning strike. So I just hand over the cash. By the way, this reasoning applies regardless of Omega's accuracy level.
7Eliezer Yudkowsky14y
Then you're much more likely to be told this by Omega in the first place, for no better reason than that you were frightened enough to hand over the cash.
0DanielVarga14y
What do you mean by the likelihood of Omega saying something? You condition on something different from what I condition on, but I don't understand what it is. Anyway, what I wrote stands even if we explicitly state that Omega does not say anything except "I am Omega. You will soon give me 5 dollars."
1tut14y
He conditions on your response. It is like a simplified version of Newcomb's paradox. You choose a decision theory, then Omega tells you to give him $5 iff your decision theory is such that you will give him $5 upon being told that. If you think the way you talked in the grandparent, then you will pay up.
0DanielVarga14y
tut, that's correct, and I don't feel bad about your conclusion at all. We have no disagreement, although I think your terminology obscures the fact that "my chosen decision theory" can in fact be a sudden, unforeseen brain hemorrhage during my conversation with Omega. So let me simply ask: If Omega appeared right now, and said "I am Omega. You will give me 5 dollars in one minute.", what would you actually do during that minute? (Please don't answer that this is impossible because of your chosen decision theory. You can't know your own decision theory.) Of course you can't predict any of the strange or not so strange things that could happen to you during the time, all perfectly transparent to Omega. But that's not what I'm asking. I'm asking about your current plan.
1tut14y
I would try to get Omega to teach me psychology. Or just ask questions. I would not give him anything if he would not answer.
1DanielVarga14y
All right, you are committed. :) At least admit that you would be frightened in the last five seconds of the minute. Does it change anything if Omega tells you in advance that it will not help you with any sort of information or goods?
0tut14y
I can only think about Omega in far mode, so I cannot predict that accurately. But I feel that I would be more curious than anything else.
0Stuart_Armstrong14y
Good point. That's a terrifying thought - and may be enough to get me to hand over the cash right away. I might put the cash in one of twenty black boxes, and hand one of them over to Omega at random.
0[anonymous]14y
It shouldn't feel like being mugged. All that making Omega a perfect predictor does is prevent it from bugging you if you are not willing to pay $5. It means Omega will ask less, not that you will pay more.
1DanielVarga14y
Your analysis is one-sided. Please try to imagine the situation with a one minute time limit. Omega appears, and tells you that you will give it 5 dollars in one minute. You decide that you will not give it the money. You are very determined about this, maybe because you are curious about what will happen. The clock is ticking...

The fewer seconds left in the minute, the more worried you should objectively be, because eventually you WILL hand over the money, and the fewer seconds remain, the more disruptive the change that eventually causes you to reconsider will be. Note that Omega didn't give any promises about being safe during the one minute.

If you think that e.g. causing you brain damage would be unfair of Omega, then we are already in the territory of ethics, not decision theory. Maybe it wasn't Omega that caused the brain damage; maybe it appeared before you exactly because it predicted that it would happen to you. With Omegas, it is not always possible to disentangle cause and effect.
0MrHen14y
Whoop, sorry, I deleted the comment before you replied.

Let us assume that you will never, under any circumstances, hand over $5 unless you feel good and happy and marvelous about it. Omega can easily pick a circumstance where you feel good, happy, marvelous about handing it $5. In this scenario, by definition, you will not feel mugged.

On the other hand, let us assume that you can be bullied into handing over $5 by Omega appearing and demanding $5 in one minute. If this works, which we are assuming it does, Omega can appear and get its $5. You will feel like you were just mugged, but the only way this can happen is if you are the sort of person that will actually hand over $5 without understanding why. Omega is a "jerk" in the sense that it made you feel like you were being mugged, but this doesn't imply anything about the scenario or Omega. It implies something about the situations in which you would hand Omega $5. (And that Omega doesn't care about being a jerk.)

The point is this: If you made a steadfast decision to never hand Omega $5 without feeling happy about it, Omega would never ask you for $5 without making you feel happy about it. If you decide to never, ever hand over $5 unless you feel happy about it, then you will never see a non-mugging scenario.

Note: This principle is totally limited to the scenario discussed in the OP. This has no bearing on Newcomb's or Counterfactual Mugging or anything else.
0MrHen14y
This is true but it doesn't change how frequently you would give Omega $5. It changes Omega's success rate, but only in the sense that it won't play the game if you aren't willing to give $5.

If A = You pay Omega $5 and O = Omega asks for $5:

p(A|O) = p(O|A) * p(A) / (p(O|A) * p(A) + p(O|~A) * p(~A))

Making Omega a perfect predictor sets p(Omega asks|You don't pay) to 0, so p(O|~A) = 0.

p(A|O) = p(O|A) * p(A) / (p(O|A) * p(A) + 0 * p(~A))
p(A|O) = p(O|A) * p(A) / (p(O|A) * p(A))
p(A|O) = 1

Therefore, p(You pay Omega $5|Omega asks for $5) is 1. If Omega asks, you will pay. Big whoop. This is a restriction on Omega asking, not on you giving.
1Stuart_Armstrong14y
Yes, but consider what happens when you start conditioning on the statement B = "I do not intend to give Omega $5". If Omega is perfect, this is irrelevant; you will hand over the cash. If Omega is not perfect, then the situation changes.

Use A and O as above; then a relevant question is: how many of Omega's errors have B (nearly all of them) versus how many of Omega's successes have B (nearly none of them). Basically, you're trying to estimate the relative sizes of (B&A)|O versus (B&~A)|O. Now A|O is very large while ~A|O is very small, but (B&A)|O is tiny in A|O while (B&~A)|O makes up most of ~A|O. So I'd crudely estimate that those two sets are generally of pretty comparable size. If Omega is only wrong one in a million, I'd estimate I'd have even odds of handing him the $5 if I didn't want to.
1MrHen14y
Right, when Omega is perfect, this isn't really a useful distinction. The correlation between B and A is irrelevant for the odds of p(A|O). It does get more interesting when asking:

p(A|B)
p(~A|B)
p(O|B)

These are still interesting even when Omega is perfect. If, as you suggest, we look at the relationship between A, B, and O when Omega isn't perfect, your questions are dead on in terms of what matters.
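Stuart_Armstrong's "even odds" estimate above can be checked with a quick Bayes calculation. The specific numbers below are illustrative assumptions, not his:

    # O: Omega appears and predicts payment; A: you pay; B: you intend not to pay.
    p_err         = 1e-6   # P(~A | O): Omega wrong one time in a million
    p_B_given_err = 1.0    # P(B | ~A, O): essentially all errors involve B
    p_B_given_hit = 1e-6   # P(B | A, O): assumed tiny share of payers who had intended not to

    # P(A | B, O) by Bayes' rule:
    num = p_B_given_hit * (1 - p_err)
    den = num + p_B_given_err * p_err
    print(num / den)  # ~0.5: roughly even odds of paying despite intending not to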

Suppose that Omega, as defined above, appears before you and says that it predicted you will give it $5. What do you do? If Omega is a perfect predictor, and it predicted you will give it $5... will you give it $5?

A mugger will soon come up to me with a gun and make me choose between my life and $5 for his buddy Omega. That's my prediction.

I need to ask: Is this post wrong? Not, is this post stupid or boring or whatever. Is it wrong?

As best as I can tell, there are a handful of objections to the post itself, but there seems to be mostly agreement in its conclusion.

The two main detractors are such:

  • Morendil, who seems to be saying that the question, "What do you do?" will "reliably induce answers which are answers to something different from the scenario as posed." Namely, the answer given to that question will be the same as if I had asked "What do you want to answer..."

The basic concept behind Omega is that it is (a) a perfect predictor

I disagree, Omega can have various properties as needed to simplify various thought experiments, but for the purpose of Newcomb-like problems Omega is a very good predictor and may even have a perfect record but is not a perfect predictor in the sense of being perfect in principle or infallible.

If Omega were a perfect predictor then the whole dilemma inherent in Newcomb-like problems ceases to exist and that short circuits the entire point of posing those types of problems.

1byrnema14y
I voted this comment down, and would like to explain why. Right, we don't want people distracted by whether Omega's prediction could be incorrect in their case or whether the solution should involve tricking Omega, etc. We say that Omega is a perfect predictor not because it's so very reasonable for him to be a perfect predictor, but so that people won't get distracted in those directions. We must disagree about what is the heart of the dilemma. How can it be all about whether Omega is wrong with some fractional probability? Rather it's about whether logic (2-boxing seems logical) and winning are at odds. Or perhaps whether determinism and choice are at odds, if you are operating outside a deterministic world-view. Or perhaps a third thing, but nothing -- in this problem -- about what kinds of Omega powers are reasonable or possible. Omega is just a device being used to set up the dilemma.
2Dan_Moore14y
My difficulty is in understanding why the concept of a perfect predictor is relevant to artificial intelligence. Also, 2-boxing is indicated by inductive logic based on non-Omega situations. Given the special circumstances of Newcomb's problem, it would seem unwise to rely on that. Deductive logic leads to 1-boxing.
3Cyan14y
You don't need perfect prediction to develop an argument for one-boxing. If the predictor's probability of correct prediction is p and the utility of the contents of the one-box is k times the utility of the contents of the two-box, then the expected utility of one-boxing is greater than that of two-boxing if p is greater than (k + 1) / (2k).
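Spelling that out numerically, under the payoff convention I am assuming here (the one-box prize is k times the small certain amount, and the predictor is right with probability p whichever way you choose):

    def expected_values(p, k, small=1.0):
        one_box = p * k * small                  # prize only if predicted correctly
        two_box = (1 - p) * k * small + small    # small amount, plus prize on a miss
        return one_box, two_box

    k = 1000                        # e.g. $1,000,000 prize vs $1,000 certain
    threshold = (k + 1) / (2 * k)   # = 0.5005
    for p in (0.50, 0.5005, 0.51, 0.99):
        one, two = expected_values(p, k)
        print(p, one > two)         # False at or below the threshold, True above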
0thomblake14y
I agree that in general this is how it works. It's rather like POAT that way... some people see it as one kind of problem, and other people see it as another kind of problem, and neither side can make sense of the other's position.
2Cyan14y
I've heard this sentiment expressed a fair bit, but I think I understand the argument for two-boxing perfectly, even though I'd one-box.
0Morendil14y
POAT?
0Cyan14y
Plane on a treadmill. (I'd pull out LMGTFY again, but I try to limit myself to one jerk-move per day.)
5Paul Crowley14y
Er, did you actually Google it before saying that? For me it's not even defined that way on the front page.
0Cyan14y
Yep. For me the first link (at work, second link now at home) is urbandictionary.com, and it's the second definition.
0Paul Crowley14y
I don't think it counts as a matter for LMGTFY unless the answer pretty much screams at you on the results page before you even start clicking the links...
0Cyan14y
I personally ask for a link if two minutes of Googling and link-clicking gets me nothing; my standard for LMGTFY follows as a corollary.
4Morendil14y
Making the assumption that the person you're responding to hasn't invested those two minutes can be risky, as the present instance shows. Maybe they have, but got different results. Another risky assumption is that the other person is using the same Google that you are using. By default the search bar in Firefox directs me to the French Google (I've even looked for a way to change that, without success). So you could end up looking like an ass, rather than a jerk, when you pull a LMGTFY and the recipient still doesn't see what you're seeing. It only works as a status move if you're confident that most search options and variations will still pull up the relevant result. More importantly, this is yet another data point in favor of the 10x norm. Unless of course we want LW to be Yet Another Internet Forum (complete with avatars). (ETA: yes, in the comment linked here the 10X norm is intended to apply to posts, not comments. I favor the stronger version that applies to comments as well: look at the length of this comment thread, infer the time spent writing these various messages, the time wasted by readers watching Recent Comments, and compare with how long it would have taken to spell it out.)
0Cyan14y
'Strue. Those occurred to me about five minutes after I first replied to ciphergoth, when the implications of the fact that the link position changed based on where I was when I Googled finally penetrated my cerebral cortex. I considered noting it in an ETA, but I didn't expect the comment thread to continue as far as it has.
0Paul Crowley14y
Oh, note also that Cyan's first use of LMGTFY was I think legit - finding my blog through Google is pretty straightforward from my username.
0Paul Crowley14y
I don't think it's fair to count the meta-discussion against Cyan when weighing this up. Anything can spark meta-discussion here.
1Paul Crowley14y
If it takes two full minutes for my readership to find out what the terms mean, the onus is on me to link to it; if that only takes me three minutes and it saves two readers Googling, then it's worth it. The LMGTFY boundary is closer to ten seconds or less. Another option would have been to spell it out - that way a lot of readers would have known without Googling, and those who didn't would have got answers right away.
0Cyan14y
I don't disagree with this. My "corollary" comment above was too facile -- when I recall my own behavior, it's my standard for peevishly thinking LMGTFY, not actually linking it.
1dv82matt14y
First, thanks for explaining your down vote and thereby giving me an opportunity to respond.

The problem is that it is not a fair simplification; it disrupts the dilemma in such a way as to render it trivial. If you set the accuracy of the prediction to 100%, many of the other specific details of the problem become largely irrelevant. For example you could then put $999,999.99 into box A and it would still be better to one-box. It's effectively the same thing as lowering the amount in box A to zero or raising the amount in box B to infinity. And one could break the problem in the other direction by lowering the accuracy of the prediction to 50% or equalizing the amount in both boxes. It's because the probability of a correct prediction must be between 50% and 100% or it breaks the structure of the problem in the sense that it makes the answer trivial to work out.

I suppose it is true that some people have intuitions that persist in leading them astray even when the probability is set to 100%. In that sense it may still have some value if it helps to isolate and illuminate these biases.

My objection here doesn't have to do with whether it is reasonable for Omega to possess such powers but with the over-simplification of the dilemma to the point where it is trivial.
0byrnema14y
I see we really are talking about different Newcomb "problem"s. I took back my down vote. So one of our problems should have another name, or at least a qualifier. I don't think Newcomb's problem (mine) is so trivial. And I wouldn't call belief in the triangle inequality a bias.

The contents of box 1 = (a >= 0)
The contents of box 2 = (b >= 0)

2-boxing is the logical deduction that ((a+b) >= a) and ((a+b) >= b).

I do 1-box, and do agree that this decision is a logical deduction. I find it odd though that this deduction works by repressing another logical deduction and don't think I've ever seen this before. I would want to argue that any and every logical path should work without contradiction.
0MrHen14y
Perhaps I can clarify: I specifically intended to simplify the dilemma to the point where it was trivial. There are a few reasons for this, but the primary reason is so I can take the trivial example expressed here, tweak it, and see what happens. This is not intended to be a solution to any other scenario in which Omega is involved. It is intended to make sure that we all agree that this is correct.
0dv82matt14y
I’m finding "correct" to be a loaded term here. It is correct in the sense that your conclusions follow from your premises, but in my view it bears only a superficial resemblance to Newcomb’s problem. Omega is not defined the way you defined it in Newcomb-like problems and the resulting difference is not trivial. To really get at the core dilemma of Newcomb’s problem in detail one needs to attempt to work out the equilibrium accuracy (that is the level of accuracy required to make one-boxing and two-boxing have equal expected utility) not just arbitrarily set the accuracy to the upper limit where it is easy to work out that one-boxing wins.
0MrHen14y
I don't care about Newcomb's problem. This post doesn't care about Newcomb's problem. The next step in this line of questioning still doesn't care about Newcomb's problem. So, please, forget about Newcomb's problem. At some point, way down the line, Newcomb's problem may show up again, but when it does this: Will certainly be taken into account. Namely, it is exactly because the difference is not trivial that I went looking for a trivial example. The reason you find "correct" to be loaded is probably because you are expecting some hidden "Gotcha!" to pop out. There is no gotcha. I am not trying to trick you. I just want an answer to what I thought was a simple question.
-1Dan_Moore14y
I agree. A perfect predictor is either Laplace’s Demon or a supernatural being. I don’t see why either concept is particularly useful for a rationalist.

This invokes all sorts of assumptions about choice and free-will, but in terms of phrasing the question these assumptions do not matter.

I would recommend skipping ahead in the sequences to http://wiki.lesswrong.com/wiki/Free_will_(solution)

3MrHen14y
The wiki tells me I should try to solve the problem on my own. I assume that this is a serious request, so I will not read through that sequence yet.
0Kevin14y
Then you can read the set-up post, at least. http://lesswrong.com/lw/of/dissolving_the_question/
4Jonathan_Graehl14y
Omega obviously knows this about you.

Are you postulating that Omega never lies? You didn't mention this in your post, but without it your problem is trivial.

If Omega never lies, and if Omega makes all predictions by running perfect simulations, then the scenario you gave is inconsistent. For Omega to predict that you will give it $5 after being told that you will give it $5, it must run a simulation of you in which it tells you that it has predicted that you will give it $5. But since it runs this simulation before making the prediction, Omega is lying in the simulation.

0MrHen14y
No, I assumed not malevolent would cover that, but I guess it really doesn't. I added a clause to explicitly point out that Omega isn't lying.

I don't understand this. Breaking it down:

* Omega predicts I will give it $5
* Omega appears and tells me it predicted I will give it $5
* Telling me about the prediction implies that the telling was part of the original prediction
* If the telling was part of the original prediction, then it was part of a simulation of future events
* The simulation involves Omega telling me but...

This is where I lose the path. But what? I don't understand where the lie is. If I translate this to real life:

* I predict Sally will give me $5
* I walk up to Sally and tell her I predict she will give me $5
* I then explain that she owes me $5 and she already told me she would give me the $5 today
* Sally gives me $5 and calls me weird

Where did I lie?

* Omega predicts I will give it $5
* Omega appears and tells me it predicted I will give it $5
* Omega tells me why I will give it $5
* I give Omega $5

I don't see how including the prediction in the prediction is a lie. It is completely trivial for me, a completely flawed predictor, to make a prediction that includes itself.

Essentially: No he isn't, because the simulation is assuming that the statement will be made in the future. Thinking, "Tomorrow, I will say it is Thursday," does not make me a liar today. You can even say, "Tomorrow, I will say it is today," and not be lying because "today" is relative to the "tomorrow" in the thought. Omega saying, "I predict you will act as such when I tell you I have predicted you will act as such," has no lie.
0Peter_de_Blanc14y
The simulated Omega says, "I have predicted blah blah blah," when Omega has made no such prediction yet. That's a lie.

Omega doesn't have to simulate people. It just has to know. For example, I know that if Omega says to you "Please accept a million dollars" you'll take it. I didn't have to simulate you or Omega to know that.

-1MrHen14y
No it isn't because the simulated Omega will be saying that after the prediction was made. When the simulated Omega says "I" it is referring to the Omega that made the prediction. If Omega runs a simulation for tomorrow that includes it saying, "Today is Thursday," the Omega in the simulation is not lying. If Omega runs a simulation that includes it saying, "I say GROK. I have said GROK," the simulation is not lying, even if Omega has not yet said GROK. The "I" in "I have said" is referring to the Omega of the future. The one that just said GROK. If Omega runs a simulation that includes it doing X and then saying, "I have done X." there is no lie. If Omega runs a simulation that includes it predicting an event and then saying, "I have predicted this event," there is no lie.
0Peter_de_Blanc14y
Does the simulated Omega runs its own simulation in order to make its prediction? And does that simulation run its own simulation too?
1MrHen14y
Either way, I don't see a lie.
1Cyan14y
If Omega runs a simulation in some cases (say, due to insufficiency of lesser predictive techniques), and in some of those cases the simulated individual tells Omega to buzz off, has Omega lied to those simulated individuals? (I phrase this as a question because I haven't been closely following your reasoning, so I'm not arguing for or against anything you've written so far -- it's a genuine inquiry, not rhetoric.)
1prase14y
Omega has to make a prediction of your behaviour, so it has to simulate you, not itself. Your decision arguments are simulated inside Omega's processor, with input "Omega tells you that it predicts X". There is no need for Omega to simulate its own decision process, since it is completely irrelevant to this scenario. As an analogy, I can "simulate" the physics of boiling water to predict that if I put my hand in, the water will cool down a few degrees, even if I know that I will not put my hand in. I don't have to simulate a copy of myself which actually puts the hand in, and so you can't use my prediction to falsify the statement "I never harm myself". Of course, if Omega simulates itself, it may run into all sorts of self-referential problems, but that isn't the point of Omega, and it has nothing to do with "Omega never lies".
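A small sketch of this simulate-and-only-announce-what-sticks reading; the simulated agent and the candidate messages are invented for illustration:

    def simulated_you(announcement):
        # Stand-in model of the subject: pays only given a persuasive enough message.
        return "pay" if "I'll hand you $10 in return" in announcement else "refuse"

    def omega_announcement(candidates):
        # Omega only makes an announcement that its model confirms will come true;
        # otherwise it never shows up at all.
        for msg in candidates:
            if simulated_you(msg) == "pay":
                return msg
        return None

    print(omega_announcement([
        "I predict you will give me $5.",
        "I predict you will give me $5, and I'll hand you $10 in return.",
    ]))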
1Cyan14y
I used the phrase "simulated individual"; it was MrHen who was talking about Omega simulating itself, not me. Shouldn't this reply descend from that comment?
0prase14y
Probably it should, but I was unable (too lazy) to trace the moment where the idea of Omega simulating himself first appeared. Thanks for correction.
0MrHen14y
This isn't strictly true. But I agree with the rest of your point.
0Cyan14y
It's true by hypothesis in my original question. It's possible we're talking about an empty case -- perhaps humans just aren't that complicated.
0MrHen14y
Yep. I am just trying to make the distinction clear. Your question relates to prediction via simulation. My original point makes no assumption about how Omega predicts. In the above linked comment, EY noted that simulation wasn't strictly required for prediction.
0Cyan14y
We are in violent agreement.
0DanielVarga14y
Very clever. The statement "Omega never lies." is apparently much less innocent than it seems. But I don't think there is such a problem with the statement "Omega will not lie to you during the experiment."
0MrHen14y
I would say no.
0DanielVarga14y
Why would you say such a weird thing?
0MrHen14y
What do you mean?
0DanielVarga14y
I'm sorry. :) I mean that it is perfectly obvious to me that in Cyan's thought experiment Omega is indeed telling a falsehood to the simulated individuals. How would you argue otherwise? Of course, the simulated individual has an information disadvantage: she does not know that she is inside a simulation. This permits Omega many ugly lawyery tricks. ("Ha-ha, this is not a five dollar bill, this is a SIMULATED five dollar bill. By the way, you are also simulated, and now I will shut you down, cheapskate.") Let me note that I completely agree with the original post, and Cyan's very interesting question does not invalidate your argument at all. It only means that the source of Omega's stated infallibility is not simulate-and-postselect.
0MrHen14y
I didn't see Cyan's question as offering any particular position so I didn't feel obligated to give a reason more thorough than what I wrote elsewhere in the thread. Omega isn't assigned the status of Liar until it actually does something. I can imagine myself lying all the time but this doesn't mean that I have lied. When Omega simulates itself, it can simulate invalid scenarios and then check them off the list of possible outcomes. Since Omega will avoid all scenarios where it will lie, it won't actually lie. This doesn't mean that it cannot simulate what would happen if it did lie.
0DanielVarga14y
Simulating somebody is doing something, especially from the point of view of the simulated. (Note that in Cyan's thought experiment she has a consciousness and all.) We postulated that Omega never lies. The simulated consciousness hears a lie. Now, as far as I can see, you have two major ways out of the contradiction. The first is that it is not Omega that does this lying, but simulated-Omega. The second is that lying to a simulated consciousness does not count as lying, at least not in the real world. The first is perfectly viable, but it highlights what for me was the main take-home message from Cyan's thought experiment: That "Omega never lies." is harder to formalize than it appears. The second is also perfectly viable, but it will be extremely unpopular here at LW.
0MrHen14y
Perhaps I am not fully understanding what you mean by simulation. If I create a simulation, what does this mean?
0Cyan14y
In this context, something along the lines of whole brain emulation.
0tut14y
The simulated prediction doesn't need to be accurate. Omega just doesn't make the prediction to the real you if it is proven inaccurate for the simulated you.
-1JGWeissman14y
In this sort of scenario, the prediction is not interesting, because it does not affect anything. The subject would give the $5 whether the prediction was made or not.
2MrHen14y
It doesn't matter if the prediction is interesting. The prediction is accurate. This comment is directly addressing the statement:
0JGWeissman14y
By "the prediction is not interesting", I mean that it does not say anything about predictions, or general scenarios involving Omega. It does not illustrate any problem with Omega.
0MrHen14y
Okay. To address this point I need to know what, specifically, you were referring to when you said, "this sort of scenario."
0JGWeissman14y
I mean, when Omega has some method, independent of declaring predictions about it, of convincing the subject to give it $5, so it appears, declares the prediction, and then proceeds to use the other method.
1MrHen14y
Omega isn't using mind-control. Omega just knows what is going to happen. Using the prediction itself as an argument for giving it $5 is a complication of the question that I happen to be addressing. In other words, it doesn't matter why you give Omega $5. I said this in the original post: All of the Omega scenarios are more complicated than the one I am talking about. That, exactly, is why I am talking about this one.
0JGWeissman14y
In the other Omega scenarios, the predictions are an integral part of the scenario. Remove the prediction and the whole thing falls apart. In your scenario, the prediction doesn't matter. Remove the prediction, and everything else is exactly the same. It is therefore absurd that you think your scenario says something about the others because they all involve predictions.
2MrHen14y
The specific prediction isn't important here, but the definition of Omega as a perfect predictor sure is important. This is exactly what I wanted to do: ignore the details of the prediction and talk about Omega. Removing the prediction entirely would cause the scenario to fall apart because then we could replace Omega with anything. Omega needs to be here and it needs to be making some prediction.

The prediction itself is a causal fact only in the sense that Omega wouldn't appear before you if it didn't expect to get $5. It's a tautology, and that is my point. The only time Omega would ever appear is if its request would be granted. In my opinion, it is more accurate to say that the reason behind your action is completely irrelevant. It doesn't matter that the prediction itself isn't what is causing you to give Omega $5.

It isn't really absurd. Placing restrictions on the scenario will cause things to go crazy, and it is this craziness that I want to look at. People still argue about one-boxing. The most obvious, direct application of this post is to show why one-boxing is the correct answer. Newcomb's problem is actually why I ended up writing this. Every time I started working on the math behind Newcomb's I would bump into the claim presented in this post and realize that people were going to object. So, instead of talking about this claim inside of a post on Newcomb's, I isolated it and presented it on its own. And people still objected to it, so I am glad I did this.

I don't think Omega is a perfect predictor or benevolent. (Edit: or neutral/'not malevolent'. He may well be malevolent, but a million dollars is a million dollars. :-) )

Omega doesn't lie and is very powerful and smart. Sometimes he predicts wrongly. He only says something will happen if he is certain of his prediction. If he is at all uncertain, he will say that he predicted it. (He may also say he predicted it when he is certain, as that is true.)

"Perfect predictor" leads us somewhat astray. "Bloody good predictor" would be enough (same reason to avoid probabilites 1 and 0, except as a shorthand).

Then if Omega shows up and predicts you will give it $5, and you don't feel like it, then don't. Omega made a mistake - which is possible, as he's only nearly perfect.

Could Omega microwave a burrito so hot, that he himself could not eat it?

and my personal favorite: http://www.smbc-comics.com/index.php?db=comics&id=1778#comic
