Peter_de_Blanc comments on My Fundamental Question About Omega - Less Wrong

6 Post author: MrHen 10 February 2010 05:26PM


Comment author: Peter_de_Blanc 10 February 2010 06:08:27PM 0 points [-]

Are you postulating that Omega never lies? You didn't mention this in your post, but without it your problem is trivial.

If Omega never lies, and if Omega makes all predictions by running perfect simulations, then the scenario you gave is inconsistent. For Omega to predict that you will give it $5 after being told that you will give it $5, it must run a simulation of you in which it tells you that it has predicted that you will give it $5. But since it runs this simulation before making the prediction, Omega is lying in the simulation.

Comment author: MrHen 10 February 2010 06:34:10PM 0 points [-]

No, I assumed "not malevolent" would cover that, but I guess it really doesn't. I added a clause to explicitly point out that Omega isn't lying.

If Omega never lies, and if Omega makes all predictions by running perfect simulations, then the scenario you gave is inconsistent. For Omega to predict that you will give it $5 after being told that you will give it $5, it must run a simulation of you in which it tells you that it has predicted that you will give it $5. But since it runs this simulation before making the prediction, Omega is lying in the simulation.

I don't understand this. Breaking it down:

  • Omega predicts I will give it $5
  • Omega appears and tells me it predicted I will give it $5
  • Telling me about the prediction implies that the telling was part of the original prediction
  • If the telling was part of the original prediction, then it was part of a simulation of future events
  • The simulation involves Omega telling me but...

This is where I lose the path. But what? I don't understand where the lie is. If I translate this to real life:

  • I predict Sally will give me $5
  • I walk up to Sally and tell her I predict she will give me $5
  • I then explain that she owes me $5 and she already told me she would give me the $5 today
  • Sally gives me $5 and calls me weird

Where did I lie?

  • Omega predicts I will give it $5
  • Omega appears and tells me it predicted I will give it $5
  • Omega tells me why I will give it $5
  • I give Omega $5

I don't see how including the prediction in the prediction is a lie. It is completely trivial for me, a completely flawed predictor, to include a prediction in my own prediction.

Essentially:

But since it runs this simulation before making the prediction, Omega is lying in the simulation.

No he isn't, because the simulation is assuming that the statement will be made in the future. Thinking, "Tomorrow, I will say it is Thursday," does not make me a liar today. You can even say, "Tomorrow, I will say it is today," and not be lying because "today" is relative to the "tomorrow" in the thought.

Omega saying, "I predict you will act as such when I tell you I have predicted you will act as such," has no lie.

Comment author: Peter_de_Blanc 10 February 2010 06:40:42PM 0 points [-]

The simulated Omega says, "I have predicted blah blah blah," when Omega has made no such prediction yet. That's a lie.

Comment author: Eliezer_Yudkowsky 10 February 2010 07:52:53PM 10 points [-]

Omega doesn't have to simulate people. It just has to know. For example, I know that if Omega says to you "Please accept a million dollars" you'll take it. I didn't have to simulate you or Omega to know that.

Comment author: MrHen 10 February 2010 06:49:40PM 0 points [-]

No, it isn't, because the simulated Omega will be saying that after the prediction was made.

When the simulated Omega says "I" it is referring to the Omega that made the prediction.

If Omega runs a simulation for tomorrow that includes it saying, "Today is Thursday," the Omega in the simulation is not lying.

If Omega runs a simulation that includes it saying, "I say GROK. I have said GROK," the simulation is not lying, even if Omega has not yet said GROK. The "I" in "I have said" is referring to the Omega of the future. The one that just said GROK.

If Omega runs a simulation that includes it doing X and then saying, "I have done X," there is no lie.

If Omega runs a simulation that includes it predicting an event and then saying, "I have predicted this event," there is no lie.

Comment author: Peter_de_Blanc 10 February 2010 08:26:14PM 0 points [-]

If Omega runs a simulation that includes it predicting an event and then saying, "I have predicted this event," there is no lie.

Does the simulated Omega run its own simulation in order to make its prediction? And does that simulation run its own simulation too?
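The regress Peter_de_Blanc is pointing at can be made concrete. A minimal sketch (all names hypothetical, not from the thread), assuming Omega's only prediction method is to simulate a world that already contains its own announced prediction — the definition is circular and never bottoms out:

```python
def subject(announcement):
    # The subject's deterministic response to whatever Omega says.
    return "$5" if "predicted" in announcement else "nothing"

def naive_predict():
    # To predict, this Omega simulates a world that already contains its
    # own announced prediction -- so it must predict before it can predict.
    announcement = "I have predicted you will give me " + naive_predict()
    return subject(announcement)

regressed = False
try:
    naive_predict()
except RecursionError:
    # The unconditional self-reference recurses until Python gives up.
    regressed = True
print("naive self-simulation never bottoms out" if regressed else "converged")
```

The later comments by prase and tut each describe a way out of exactly this circularity.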

Comment author: MrHen 10 February 2010 08:54:01PM 1 point [-]

Does the simulated Omega run its own simulation in order to make its prediction? And does that simulation run its own simulation too?

Either way, I don't see a lie.

Comment author: Cyan 10 February 2010 08:56:40PM *  1 point [-]

If Omega runs a simulation in some cases (say, due to insufficiency of lesser predictive techniques), and in some of those cases the simulated individual tells Omega to buzz off, has Omega lied to those simulated individuals? (I phrase this as a question because I haven't been closely following your reasoning, so I'm not arguing for or against anything you've written so far -- it's a genuine inquiry, not rhetoric.)

Comment author: prase 12 February 2010 01:37:47PM *  1 point [-]

Omega has to make a prediction of your behaviour, so it has to simulate you, not itself. Your decision arguments are simulated inside Omega's processor, with the input "Omega tells you that it predicts X". There is no need for Omega to simulate its own decision process, since it is completely irrelevant to this scenario.

In an analogy, I can "simulate" the physics of boiling water to predict that if I put my hand in, the water will cool down a few degrees, even if I know that I will not put my hand in. I don't have to simulate a copy of myself which actually puts its hand in, and so you can't use my prediction to falsify the statement "I never harm myself".

Of course, if Omega simulates itself, it may run into all sorts of self-referential problems, but that isn't the point of Omega, and it has nothing to do with "Omega never lies".
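prase's scheme can be sketched in a few lines (names are illustrative, not from the thread): the hypothetical announcement is fed to a model of the subject *as input*, and no copy of Omega's own decision process appears anywhere in the simulation, so nothing circular remains.

```python
def subject(announcement):
    # Model of the subject's decision procedure, as simulated by Omega.
    return "$5" if "I predict you will give me $5" in announcement else "nothing"

# Omega feeds the hypothetical announcement to the model as plain input;
# Omega itself is not part of the simulated world.
hypothetical = "I predict you will give me $5"
prediction = subject(hypothetical)
print(prediction)
```

Nothing here asserts that the announcement has already been made, which is why no lie occurs inside the simulation.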

Comment author: Cyan 12 February 2010 01:47:00PM 1 point [-]

I used the phrase "simulated individual"; it was MrHen who was talking about Omega simulating itself, not me. Shouldn't this reply descend from that comment?

Comment author: prase 15 February 2010 09:11:54AM *  0 points [-]

Probably it should, but I was unable (too lazy) to trace the moment where the idea of Omega simulating itself first appeared. Thanks for the correction.

Comment author: MrHen 12 February 2010 02:57:17PM 0 points [-]

Omega has to make a prediction of your behaviour, so it has to simulate you, not itself.

This isn't strictly true.

But I agree with the rest of your point.

Comment author: Cyan 12 February 2010 05:08:07PM 0 points [-]

It's true by hypothesis in my original question. It's possible we're talking about an empty case -- perhaps humans just aren't that complicated.

Comment author: DanielVarga 11 February 2010 09:43:13PM 0 points [-]

Very clever. The statement "Omega never lies" is apparently much less innocent than it seems. But I don't think there is such a problem with the statement "Omega will not lie to you during the experiment."

Comment author: MrHen 10 February 2010 09:15:30PM 0 points [-]

I would say no.

Comment author: DanielVarga 11 February 2010 09:43:36PM 0 points [-]

Why would you say such a weird thing?

Comment author: MrHen 11 February 2010 09:51:00PM 0 points [-]

What do you mean?

Comment author: tut 10 February 2010 08:35:00PM 0 points [-]

The simulated prediction doesn't need to be accurate. Omega just doesn't make the prediction to the real you if it is proven inaccurate for the simulated you.
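tut's filtering idea can also be sketched (hypothetical names, assuming deterministic subjects): Omega simulates the subject hearing the announcement, and only speaks to the real subject when the simulated response confirms the prediction, so the announcement that actually gets made is never inaccurate.

```python
def omega_speaks(subject):
    announcement = "I have predicted you will give me $5"
    # Simulate the subject hearing the announcement; speak to the real
    # subject only if the simulated response matches the prediction.
    return announcement if subject(announcement) == "$5" else None

compliant = lambda msg: "$5"        # simulated subject hands over the $5
defiant = lambda msg: "nothing"     # simulated subject tells Omega to buzz off

print(omega_speaks(compliant))
print(omega_speaks(defiant))
```

In the `defiant` case Omega simply stays silent, which is the sense in which the simulated prediction "doesn't need to be accurate."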

Comment author: JGWeissman 10 February 2010 06:45:55PM -1 points [-]

In this sort of scenario, the prediction is not interesting, because it does not affect anything. The subject would give the $5 whether the prediction was made or not.

Comment author: MrHen 10 February 2010 06:51:59PM 2 points [-]

It doesn't matter if the prediction is interesting. The prediction is accurate.

This comment is directly addressing the statement:

But since it runs this simulation before making the prediction, Omega is lying in the simulation.

Comment author: JGWeissman 10 February 2010 06:59:32PM 0 points [-]

By "the prediction is not interesting", I mean that it does not say anything about predictions, or general scenarios involving Omega. It does not illustrate any problem with Omega.

Comment author: MrHen 10 February 2010 07:01:12PM 0 points [-]

Okay. To address this point I need to know what, specifically, you were referring to when you said, "this sort of scenario."

Comment author: JGWeissman 10 February 2010 07:07:05PM 0 points [-]

I mean, when Omega has some method, independent of declaring predictions about it, of convincing the subject to give it $5, so it appears, declares the prediction, and then proceeds to use the other method.

Comment author: MrHen 10 February 2010 07:16:12PM 1 point [-]

I mean, when Omega has some method, independent of declaring predictions about it, of convincing the subject to give it $5, so it appears, declares the prediction, and then proceeds to use the other method.

Omega isn't using mind-control. Omega just knows what is going to happen. Using the prediction itself as an argument to give you $5 is a complication on the question that I happen to be addressing.

In other words, it doesn't matter why you give Omega $5.

I said this in the original post:

If this scenario includes a long argument about why you should give it $5, so be it. If it means Omega gives you $10 in return, so be it. But it doesn't matter for the sake of the question. It matters for the answer, but the question doesn't need to include these details because the underlying problem is still the same. Omega made a prediction and now you need to act. All of the excuses and whining and arguing will eventually end with you handing Omega $5. Omega's prediction will have included all of this bickering.

All of the Omega scenarios are more complicated than the one I am talking about. That, exactly, is why I am talking about this one.

Comment author: JGWeissman 10 February 2010 07:42:12PM 0 points [-]

All of the Omega scenarios are more complicated than the one I am talking about. That, exactly, is why I am talking about this one.

In the other Omega scenarios, the predictions are an integral part of the scenario. Remove the prediction and the whole thing falls apart.

In your scenario, the prediction doesn't matter. Remove the prediction, and everything else is exactly the same.

It is therefore absurd that you think your scenario says something about the others because they all involve predictions.

Comment author: MrHen 10 February 2010 08:08:27PM 2 points [-]

In the other Omega scenarios, the predictions are an integral part of the scenario. Remove the prediction and the whole thing falls apart.

In your scenario, the prediction doesn't matter. Remove the prediction, and everything else is exactly the same.

The specific prediction isn't important here, but the definition of Omega as a perfect predictor sure is important. This is exactly what I wanted to do: Ignore the details of the prediction and talk about Omega.

Removing the prediction entirely would cause the scenario to fall apart because then we could replace Omega with anything. Omega needs to be here and it needs to be making some prediction. The prediction itself is a causal fact only in the sense that Omega wouldn't appear before you if it didn't expect to get $5.

It's a tautology, and that is my point. The only time Omega would ever appear is if its request would be granted.

In my opinion, it is more accurate to say that the reason behind your action is completely irrelevant. It doesn't matter whether the prediction itself is what causes you to give Omega $5.

It is therefore absurd that you think your scenario says something about the others because they all involve predictions.

It isn't really absurd. Placing restrictions on the scenario will cause things to go crazy and it is this craziness that I want to look at.

People still argue about one-boxing. The most obvious, direct application of this post is to show why one-boxing is the correct answer. Newcomb's problem is actually why I ended up writing this. Every time I started working on the math behind Newcomb's I would bump into the claim presented in this post and realize that people were going to object.

So, instead of talking about this claim inside of a post on Newcomb's, I isolated it and presented it on its own. And people still objected to it, so I am glad I did this.