JGWeissman comments on My Fundamental Question About Omega - Less Wrong

Post author: MrHen 10 February 2010 05:26PM


Comment author: MrHen 10 February 2010 06:34:10PM 0 points

No, I assumed "not malevolent" would cover that, but I guess it really doesn't. I added a clause to explicitly point out that Omega isn't lying.

If Omega never lies, and if Omega makes all predictions by running perfect simulations, then the scenario you gave is inconsistent. For Omega to predict that you will give it $5 after being told that you will give it $5, it must run a simulation of you in which it tells you that it has predicted that you will give it $5. But since it runs this simulation before making the prediction, Omega is lying in the simulation.

I don't understand this. Breaking it down:

  • Omega predicts I will give it $5
  • Omega appears and tells me it predicted I will give it $5
  • Telling me about the prediction implies that the telling was part of the original prediction
  • If the telling was part of the original prediction, then it was part of a simulation of future events
  • The simulation involves Omega telling me but...

This is where I lose the path. But what? I don't understand where the lie is. If I translate this to real life:

  • I predict Sally will give me $5
  • I walk up to Sally and tell her I predict she will give me $5
  • I then explain that she owes me $5 and she already told me she would give me the $5 today
  • Sally gives me $5 and calls me weird

Where did I lie?

  • Omega predicts I will give it $5
  • Omega appears and tells me it predicted I will give it $5
  • Omega tells me why I will give it $5
  • I give Omega $5

I don't see how including the prediction in the prediction is a lie. It is completely trivial even for me, a thoroughly flawed predictor, to include a prediction within the prediction itself.

Essentially:

But since it runs this simulation before making the prediction, Omega is lying in the simulation.

No he isn't, because the simulation is assuming that the statement will be made in the future. Thinking, "Tomorrow, I will say it is Thursday," does not make me a liar today. You can even say, "Tomorrow, I will say it is today," and not be lying because "today" is relative to the "tomorrow" in the thought.

Omega saying, "I predict you will act as such when I tell you I have predicted you will act as such," has no lie.
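This "prediction that includes its own announcement" can be read as a fixed-point check. The following is a toy sketch, not anything from the thread itself; the subject's policy, the announcement string, and all function names are made-up illustrations:

```python
def subject(announcement: str) -> str:
    """A toy subject who pays up whenever Omega announces this prediction."""
    if announcement == "I predict you will give me $5":
        return "gives $5"
    return "keeps the $5"

def omega_predicts(policy) -> str:
    # Omega simulates the subject *as the subject would behave after the
    # announcement*. The announcement is part of the simulated future, so
    # simulating it involves no lie in the present -- just as thinking
    # "tomorrow I will say it is Thursday" is not a lie today.
    candidate = "I predict you will give me $5"
    simulated_outcome = policy(candidate)
    # The prediction is consistent (a fixed point) exactly when announcing
    # it produces the very behavior that was predicted.
    if simulated_outcome == "gives $5":
        return candidate
    return "no consistent prediction"

print(omega_predicts(subject))  # prints: I predict you will give me $5
```

Under this toy model, Omega only "appears" (returns the candidate announcement) when the announcement is self-fulfilling, which matches the tautology described below: Omega only shows up if its request would be granted.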

Comment author: JGWeissman 10 February 2010 06:45:55PM -1 points

In this sort of scenario, the prediction is not interesting, because it does not affect anything. The subject would give the $5 whether the prediction was made or not.

Comment author: MrHen 10 February 2010 06:51:59PM 2 points

It doesn't matter if the prediction is interesting. The prediction is accurate.

This comment is directly addressing the statement:

But since it runs this simulation before making the prediction, Omega is lying in the simulation.

Comment author: JGWeissman 10 February 2010 06:59:32PM 0 points

By "the prediction is not interesting", I mean that it does not say anything about predictions, or general scenarios involving Omega. It does not illustrate any problem with Omega.

Comment author: MrHen 10 February 2010 07:01:12PM 0 points

Okay. To address this point I need to know what, specifically, you were referring to when you said, "this sort of scenario."

Comment author: JGWeissman 10 February 2010 07:07:05PM 0 points

I mean a scenario where Omega has some method, independent of declaring predictions, of convincing the subject to give it $5; so it appears, declares the prediction, and then proceeds to use the other method.

Comment author: MrHen 10 February 2010 07:16:12PM 1 point

I mean a scenario where Omega has some method, independent of declaring predictions, of convincing the subject to give it $5; so it appears, declares the prediction, and then proceeds to use the other method.

Omega isn't using mind-control. Omega just knows what is going to happen. Using the prediction itself as an argument to give you $5 is a complication on the question that I happen to be addressing.

In other words, it doesn't matter why you give Omega $5.

I said this in the original post:

If this scenario includes a long argument about why you should give it $5, so be it. If it means Omega gives you $10 in return, so be it. But it doesn't matter for the sake of the question. It matters for the answer, but the question doesn't need to include these details because the underlying problem is still the same. Omega made a prediction and now you need to act. All of the excuses and whining and arguing will eventually end with you handing Omega $5. Omega's prediction will have included all of this bickering.

All of the Omega scenarios are more complicated than the one I am talking about. That, exactly, is why I am talking about this one.

Comment author: JGWeissman 10 February 2010 07:42:12PM 0 points

All of the Omega scenarios are more complicated than the one I am talking about. That, exactly, is why I am talking about this one.

In the other Omega scenarios, the predictions are an integral part of the scenario. Remove the prediction and the whole thing falls apart.

In your scenario, the prediction doesn't matter. Remove the prediction, and everything else is exactly the same.

It is therefore absurd that you think your scenario says something about the other because they all involve predictions.

Comment author: MrHen 10 February 2010 08:08:27PM 2 points

In the other Omega scenarios, the predictions are an integral part of the scenario. Remove the prediction and the whole thing falls apart.

In your scenario, the prediction doesn't matter. Remove the prediction, and everything else is exactly the same.

The specific prediction isn't important here, but the definition of Omega as a perfect predictor sure is important. This is exactly what I wanted to do: Ignore the details of the prediction and talk about Omega.

Removing the prediction entirely would cause the scenario to fall apart because then we could replace Omega with anything. Omega needs to be here and it needs to be making some prediction. The prediction itself is a causal fact only in the sense that Omega wouldn't appear before you if it didn't expect to get $5.

It's a tautology, and that is my point. The only time Omega would ever appear is if its request would be granted.

In my opinion, it is more accurate to say that the reason behind your action is completely irrelevant. It doesn't matter whether or not the prediction itself is what causes you to give Omega $5.

It is therefore absurd that you think your scenario says something about the other because they all involve predictions.

It isn't really absurd. Placing restrictions on the scenario will cause things to go crazy and it is this craziness that I want to look at.

People still argue about one-boxing. The most obvious, direct application of this post is to show why one-boxing is the correct answer. Newcomb's problem is actually why I ended up writing this. Every time I started working on the math behind Newcomb's I would bump into the claim presented in this post and realize that people were going to object.

So, instead of talking about this claim inside of a post on Newcomb's, I isolated it and presented it on its own. And people still objected to it, so I am glad I did this.