Vladimir_Nesov comments on Formalizing Newcomb's - Less Wrong

Post author: cousin_it 05 April 2009 03:39PM


Comment author: Vladimir_Nesov 05 April 2009 09:22:26PM 0 points

The usual setting: you are a sufficiently simple mere human, not building your own Omegas in the process, going through the procedure in a controlled environment if that helps make the case stronger, and Omega is able to predict your actual final decision by whatever means it pleases. What the Omega does to predict your decision doesn't affect you, shouldn't concern you, it looks like only that it's usually right is relevant.

Comment author: byrnema 05 April 2009 09:53:33PM * 2 points

"What the Omega does to predict your decision doesn't affect you, shouldn't concern you, it looks like only that it's usually right is relevant."

Is this the least convenient world? What Omega does to predict my decision does concern me, because it determines whether I should one-box or two-box. However, I'm willing to allow that in an LCW I'm not given enough information. Is this the Newcomb "problem", then -- how to make a rational decision when you're not given enough information?

Comment author: cousin_it 05 April 2009 09:31:52PM * 0 points

No perfectly rational decision theory can be applied in this case, just as you can't play chess perfectly rationally on a desktop PC. Several comments above I outlined a good approximation that I would use and would recommend a computer use. This case is just... uninteresting. It doesn't raise any question marks in my mind. Should it?

Comment author: byrnema 05 April 2009 09:55:00PM 2 points

Can you please explain why a rational decision theory cannot be applied?

Comment author: cousin_it 05 April 2009 10:33:01PM * 0 points

As I understand it, perfect rationality in this scenario requires that we assume some Bayesian prior over all possible implementations of Omega and do a ton of computation for each case. For example, some Omegas could be of type 3 and deceivable with non-zero probability; we would have to determine how. If we know which implementation we're up against, the calculations are a little easier, e.g. in the "simulating Omega" case we just one-box without thinking.
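For concreteness, the procedure described above can be sketched with toy numbers. Everything below (the Omega types, the prior weights, and the accuracy figures) is an illustrative assumption, not part of the problem statement:

```python
# Expected-value sketch of the "prior over Omega implementations" procedure.
# All type names, prior weights, and accuracies here are hypothetical.
PRIOR = {           # P(Omega is of this type)
    "simulator": 0.6,   # types 1/2: effectively always right
    "scanner":   0.3,   # very accurate physical predictor
    "type3":     0.1,   # deceivable with non-zero probability
}
ACCURACY = {        # P(Omega predicts your choice correctly | type)
    "simulator": 1.0,
    "scanner":   0.99,
    "type3":     0.7,
}

def expected_payoff(one_box: bool) -> float:
    """Average payoff over Omega types, weighted by the prior."""
    total = 0.0
    for omega, p in PRIOR.items():
        acc = ACCURACY[omega]
        if one_box:
            # Box B holds $1,000,000 iff Omega predicted one-boxing.
            payoff = acc * 1_000_000
        else:
            # Two-boxers always get $1,000, plus the million only
            # when Omega wrongly predicted one-boxing.
            payoff = 1_000 + (1 - acc) * 1_000_000
        total += p * payoff
    return total

print(expected_payoff(True), expected_payoff(False))
```

Under these made-up numbers one-boxing dominates by a wide margin; the point is only that the "compute per implementation, weight by the prior" step is mechanical once the prior is fixed.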

Comment author: Eliezer_Yudkowsky 06 April 2009 11:55:16AM 2 points

By that definition of "perfect rationality" no two perfect rationalists can exist in the same universe, or any material universe in which the amount of elapsed time before a decision is always finite.

Comment author: cousin_it 06 April 2009 01:31:34PM * 0 points

Yes, it's true. Perfectly playing any non-mathematical "real world" game (the formulation Vladimir Nesov insists on) requires great powers. If you can translate the game into maths to make it solvable, please do.

Comment author: Vladimir_Nesov 06 April 2009 02:24:00PM 0 points

The decision theory must allow approximations: a ranking that makes it possible to find (recognize) as good a solution as possible, given the practical limitations.

Comment author: cousin_it 06 April 2009 02:36:06PM * 0 points

You are reasoning from the faulty assumption that "surely it's possible to formalize the problem somehow and do something". The problem statement is self-contradictory. We need to resolve the contradiction. It's only possible by making some part of the problem statement false. That's what the prior over Omegas is for. We've been told some bullshit, and need to determine which parts are true. Note how my Omegas of type 1 and 2 banish the paradox: in case 1 "the money is already there anyway" has become a plain simple lie, and in case 2 "Omega has already predicted your choice" becomes a lie when you're inside Omega. I say the real world doesn't have contradictions. Don't ask me to reason approximately from contradictory assumptions.

Comment author: Vladimir_Nesov 06 April 2009 02:48:15PM 0 points

You gotta decide something when faced with the situation. You don't seem to be arguing that Newcomb's test itself literally can't be set up. So what do you mean by contradictions? The physical system itself can't be false, only its description can. Whatever contradictions you perceive in the test come from problems of interpretation; the only relevant part of this whole endeavor is computing the decision.

Comment author: cousin_it 06 April 2009 04:02:24PM * 0 points

The physical system can't be false, but Omega seems to be lying to us. How do you, as a rationalist, deal when people contradict themselves verbally? You build models, like I did in the original post.

Comment author: Vladimir_Nesov 06 April 2009 04:40:29PM 0 points

Omega doesn't lie by the statement of the problem. It doesn't even assert anything, it just places the money in the box or doesn't.

Comment author: cousin_it 06 April 2009 04:42:18PM * 0 points

What's wrong with you? If Omega tells us the conditions of the experiment (about "foretelling" and stuff), then Omega is lying. If someone else, then someone else. Let's wrap this up, I'm sick.

Comment author: Vladimir_Nesov 05 April 2009 10:07:12PM * 0 points

The problem setting itself shouldn't raise many questions. If you agree that the right answer in this setting is to one-box, you probably understand the test. Next, look at the popular decision theories that calculate that the "correct" answer is to two-box. Find what's wrong with those theories, or with the ways of applying them, and find a way to generalize them to handle this case and other cases correctly.

Comment author: cousin_it 05 April 2009 10:28:26PM * 0 points

There's nothing wrong with those theories. They are wrongly applied, selectively ignoring the part of the problem statement that explicitly says you can't two-box if Omega decided you would one-box. Any naive application will do that because all standard theories assume causality, which is broken in this problem. Before applying decision theories we must work out what causes what. My original post was an attempt to do just that.
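The "naive application" described here can be made concrete. A minimal sketch of the causal dominance step, the exact move that ignores how the box contents got correlated with the decision (payoff numbers are the standard ones; the sketch itself is illustrative, not from the thread):

```python
# Dominance reasoning, applied naively: hold the (already-fixed) contents
# of box B constant and compare the two actions state by state.
# This is the step that "selectively ignores" the predictor.
for box_b in (0, 1_000_000):        # contents fixed before you choose
    one_box_payoff = box_b
    two_box_payoff = box_b + 1_000
    # In every state, two-boxing looks exactly $1,000 better...
    assert two_box_payoff - one_box_payoff == 1_000
# ...but this comparison is valid only if your choice and box_b are
# independent, and Omega's prediction is precisely what breaks that
# independence.
```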

What other cases?

Comment author: Vladimir_Nesov 06 April 2009 01:07:24AM * 0 points

"There's nothing wrong with those theories. They are wrongly applied, selectively ignoring the part of the problem statement that explicitly says you can't two-box if Omega decided you would one-box."

The decision is yours, Omega only foresees it. See also: Thou Art Physics.

"Any naive application will do that because the problem statement is contradictory on the surface. Before applying decision theories, the contradiction has to be resolved somehow as we work out what causes what. My original post was an attempt to do just that."

Do that for the standard setting that I outlined above, instead of constructing its broken variations. What it means for something to cause something else, and how one should go about describing the situations in that model should arguably be a part of any decision theory.

Comment author: Relsqui 22 September 2010 09:26:34PM * 0 points

"the problem statement ... explicitly says you can't two-box if Omega decided you would one-box."

"The decision is yours, Omega only foresees it."

These stop contradicting each other if you rephrase a little more precisely. It's not that you can't two-box if Omega decided you would one-box--you just don't, because in order for Omega to have decided that, you must have also decided that. Or rather, been going to decide that--and if I understand the post you linked correctly, its point is that the difference between "my decision" and "the predetermination of my decision" is not meaningful.

As far as I can tell--and I'm new to this topic, so please forgive me if this is a juvenile observation--the flaw in the problem is that it cannot be true both that the contents of the boxes are determined by your choice (via Omega's prediction), and that the contents have already been determined when you are making your choice. The argument for one-boxing assumes that, of those contradictory premises, the first one is true. The argument for two-boxing assumes that the second one is true.

The potential flaw in my description, in turn, is whether my simplification just now ("determined by your choice via Omega") is actually equivalent to the way it's put in the problem ("determined by Omega based on a prediction of you"). I think it is, for the reasons given above, but what do I know?

(I feel comfortable enough with this explanation that I'm quite confident I must be missing something.)
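The two premises can also be compared quantitatively. A small sketch, using the standard payoffs and treating Omega's accuracy as a free parameter (the parameterization is illustrative, not from the thread):

```python
# How accurate must Omega be for one-boxing to have the higher
# expected value? Standard Newcomb payoffs.
MILLION, THOUSAND = 1_000_000, 1_000

def ev_one_box(p: float) -> float:
    # You get the million iff Omega correctly predicted one-boxing.
    return p * MILLION

def ev_two_box(p: float) -> float:
    # You always get the thousand, plus the million when Omega errs.
    return THOUSAND + (1 - p) * MILLION

# Break-even accuracy: p*M = T + (1-p)*M  =>  p = (M + T) / (2*M)
break_even = (MILLION + THOUSAND) / (2 * MILLION)
print(break_even)  # break-even is 0.5005: barely better than a coin flip
```

Under this expected-value framing, a predictor even slightly better than chance already favors one-boxing, which is why the problem puts so much weight on Omega being "usually right".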

Comment author: cousin_it 06 April 2009 10:14:08AM * 0 points

An aspiring Bayesian rationalist would behave as I did in the original post: assume some prior over the possible implementations of Omega and work out what to do. So taboo "foresee" and propose some mechanisms, as I, ciphergoth and Toby Ord did.