MrHen comments on My Fundamental Question About Omega - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
No, I assumed "not malevolent" would cover that, but I guess it really doesn't. I added a clause to explicitly point out that Omega isn't lying.
I don't understand this. Breaking it down:
This is where I lose the path. But what? I don't understand where the lie is. If I translate this to real life:
Where did I lie?
I don't see how including the prediction in the prediction is a lie. It is completely trivial for me, a thoroughly flawed predictor, to make a prediction that includes itself.
Essentially:
No, he isn't, because the simulation is assuming that the statement will be made in the future. Thinking, "Tomorrow, I will say it is Thursday," does not make me a liar today. You can even say, "Tomorrow, I will say it is today," and not be lying, because "today" is relative to the "tomorrow" in the thought.
Omega saying, "I predict you will act as such when I tell you I have predicted you will act as such," has no lie.
The simulated Omega says, "I have predicted blah blah blah," when Omega has made no such prediction yet. That's a lie.
Omega doesn't have to simulate people. It just has to know. For example, I know that if Omega says to you "Please accept a million dollars" you'll take it. I didn't have to simulate you or Omega to know that.
No, it isn't, because the simulated Omega will be saying that after the prediction has been made.
When the simulated Omega says "I" it is referring to the Omega that made the prediction.
If Omega runs a simulation for tomorrow that includes it saying, "Today is Thursday," the Omega in the simulation is not lying.
If Omega runs a simulation that includes it saying, "I say GROK. I have said GROK," the simulation is not lying, even if Omega has not yet said GROK. The "I" in "I have said" is referring to the Omega of the future. The one that just said GROK.
If Omega runs a simulation that includes it doing X and then saying, "I have done X," there is no lie.
If Omega runs a simulation that includes it predicting an event and then saying, "I have predicted this event," there is no lie.
Does the simulated Omega run its own simulation in order to make its prediction? And does that simulation run its own simulation too?
Either way, I don't see a lie.
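The point about GROK above can be made concrete with a toy sketch (my own construction, not from the thread): a statement made inside a simulated timeline is evaluated against that timeline's own history, not against the real timeline at the moment the simulation runs.

```python
# Toy model of a simulated timeline. In the real timeline Omega has not
# yet said GROK, but the simulated claim "I have said GROK" is checked
# against the *simulated* history, where the utterance precedes the claim.

def run_simulation():
    history = []                          # events in the simulated timeline
    history.append("Omega says GROK")     # the simulated Omega says GROK
    claim = "Omega says GROK" in history  # then claims "I have said GROK"
    return claim

print(run_simulation())  # -> True: the claim is true relative to its timeline
```

The indexical "I" resolves to the Omega at that point in the simulated history, which is why no lie appears.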
If Omega runs a simulation in some cases (say, due to insufficiency of lesser predictive techniques), and in some of those cases the simulated individual tells Omega to buzz off, has Omega lied to those simulated individuals? (I phrase this as a question because I haven't been closely following your reasoning, so I'm not arguing for or against anything you've written so far -- it's a genuine inquiry, not rhetoric.)
Omega has to make a prediction of your behaviour, so it has to simulate you, not itself. Your decision algorithm is simulated inside Omega's processor, with the input "Omega tells you that it predicts X". There is no need for Omega to simulate its own decision process, since it is completely irrelevant to this scenario.
In an analogy, I can "simulate" the physics of boiling water to predict that if I put my hand in, the water will cool down a few degrees, even if I know that I will not put my hand in. I don't have to simulate a copy of myself which actually puts its hand in, and so you can't use my prediction to falsify the statement "I never harm myself".
Of course, if Omega simulates itself, it may run into all sorts of self-referential problems, but that isn't the point of Omega, and has nothing to do with "Omega never lies".
I used the phrase "simulated individual"; it was MrHen who was talking about Omega simulating itself, not me. Shouldn't this reply descend from that comment?
Probably it should, but I was unable (too lazy) to trace the moment where the idea of Omega simulating itself first appeared. Thanks for the correction.
This isn't strictly true.
But I agree with the rest of your point.
It's true by hypothesis in my original question. It's possible we're talking about an empty case -- perhaps humans just aren't that complicated.
Yep. I am just trying to make the distinction clear.
Your question relates to prediction via simulation.
My original point makes no assumption about how Omega predicts.
In the above linked comment, EY noted that simulation wasn't strictly required for prediction.
Very clever. The statement "Omega never lies." is apparently much less innocent than it seems. But I don't think there is such a problem with the statement "Omega will not lie to you during the experiment."
I would say no.
Why would you say such a weird thing?
What do you mean?
I'm sorry. :) I mean that it is perfectly obvious to me that in Cyan's thought experiment Omega is indeed telling a falsehood to the simulated individuals. How would you argue otherwise?
Of course, the simulated individual has an information disadvantage: she does not know that she is inside a simulation. This permits Omega many ugly lawyerly tricks. ("Ha-ha, this is not a five dollar bill, this is a SIMULATED five dollar bill. By the way, you are also simulated, and now I will shut you down, cheapskate.")
Let me note that I completely agree with the original post, and Cyan's very interesting question does not invalidate your argument at all. It only means that the source of Omega's stated infallibility is not simulate-and-postselect.
The simulated prediction doesn't need to be accurate. Omega just doesn't make the prediction to the real you if it proves inaccurate for the simulated you.
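That postselection step can be sketched as follows (a minimal toy, with a hypothetical deterministic subject model of my own invention): Omega first runs the simulated subject against the announcement, and only makes the announcement in reality when the simulated run confirms it.

```python
# Hedged sketch of simulate-and-postselect. The subject model here is a
# stand-in: a toy deterministic agent that complies with any announced
# prediction. Real subjects would be far more complicated.

def subject_model(announcement):
    """Toy subject: hands over $5 whenever told a prediction was made."""
    return "give $5"

def maybe_approach(model):
    prediction = "give $5"
    announcement = f"I have predicted you will {prediction}."
    if model(announcement) == prediction:
        return announcement  # simulation confirmed: Omega appears and speaks
    return None              # simulation failed: Omega never shows up at all

print(maybe_approach(subject_model))  # -> "I have predicted you will give $5."
```

On this scheme Omega's announcement is accurate by construction for every real subject it approaches, since the failed cases are filtered out before any real announcement is made.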
In this sort of scenario, the prediction is not interesting, because it does not affect anything. The subject would give the $5 whether the prediction was made or not.
It doesn't matter if the prediction is interesting. The prediction is accurate.
This comment is directly addressing the statement:
By "the prediction is not interesting", I mean that it does not say anything about predictions, or general scenarios involving Omega. It does not illustrate any problem with Omega.
Okay. To address this point I need to know what, specifically, you were referring to when you said, "this sort of scenario."
I mean a case where Omega has some method, independent of declaring predictions, of convincing the subject to give it $5. So it appears, declares the prediction, and then proceeds to use that other method.
Omega isn't using mind-control. Omega just knows what is going to happen. Using the prediction itself as an argument to give you $5 is a complication on the question that I happen to be addressing.
In other words, it doesn't matter why you give Omega $5.
I said this in the original post:
All of the Omega scenarios are more complicated than the one I am talking about. That, exactly, is why I am talking about this one.
In the other Omega scenarios, the predictions are an integral part of the scenario. Remove the prediction and the whole thing falls apart.
In your scenario, the prediction doesn't matter. Remove the prediction, and everything else is exactly the same.
It is therefore absurd that you think your scenario says something about the others just because they all involve predictions.
The specific prediction isn't important here, but the definition of Omega as a perfect predictor sure is important. This is exactly what I wanted to do: Ignore the details of the prediction and talk about Omega.
Removing the prediction entirely would cause the scenario to fall apart because then we could replace Omega with anything. Omega needs to be here and it needs to be making some prediction. The prediction itself is a causal fact only in the sense that Omega wouldn't appear before you if it didn't expect to get $5.
It's a tautology, and that is my point. The only time Omega would ever appear is if its request would be granted.
In my opinion, it is more accurate to say that the reason behind your action is completely irrelevant. It doesn't matter whether or not the prediction itself is what causes you to give Omega $5.
It isn't really absurd. Placing restrictions on the scenario will cause things to go crazy and it is this craziness that I want to look at.
People still argue about one-boxing. The most obvious, direct application of this post is to show why one-boxing is the correct answer. Newcomb's problem is actually why I ended up writing this. Every time I started working on the math behind Newcomb's I would bump into the claim presented in this post and realize that people were going to object.
So, instead of talking about this claim inside of a post on Newcomb's, I isolated it and presented it on its own. And people still objected to it, so I am glad I did this.