cousin_it comments on Formalizing Newcomb's - Less Wrong

18 Post author: cousin_it 05 April 2009 03:39PM


Comment author: byrnema 05 April 2009 09:55:00PM 2 points [-]

Can you please explain why a rational decision theory cannot be applied?

Comment author: cousin_it 05 April 2009 10:33:01PM *  0 points [-]

As I understand it, perfect rationality in this scenario requires we assume some Bayesian prior over all possible implementations of Omega and do a ton of computation for each case. For example, some Omegas could be type 3 and deceivable with non-zero probability; we have to determine how. If we know which implementation we're up against, the calculations are a little easier, e.g. in the "simulating Omega" case we just one-box without thinking.
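The expected-value computation gestured at here can be made concrete. Below is a minimal sketch in Python; the payoff amounts, the accuracy values, and the uniform prior over Omega implementations are illustrative assumptions, not from the thread:

```python
# Toy comparison of one-boxing vs two-boxing under a prior over how
# accurately Omega predicts. All numbers are illustrative assumptions.

BOX_B = 1_000_000   # opaque box contents if Omega predicted one-boxing
BOX_A = 1_000       # transparent box contents (always present)

def expected_values(p_correct):
    """Expected payoff of each choice if Omega's prediction is
    correct with probability p_correct."""
    one_box = p_correct * BOX_B                 # money present iff prediction was right
    two_box = (1 - p_correct) * BOX_B + BOX_A   # money present iff prediction was wrong
    return one_box, two_box

# Average over a toy uniform prior on Omega's accuracy.
prior = [0.5, 0.9, 0.99, 1.0]
one = sum(expected_values(p)[0] for p in prior) / len(prior)
two = sum(expected_values(p)[1] for p in prior) / len(prior)
```

Under these toy numbers one-boxing wins on average; more generally it wins whenever the believed accuracy exceeds roughly 0.5005, which is why pinning down the prior over Omega implementations is where all the work lies.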

Comment author: Eliezer_Yudkowsky 06 April 2009 11:55:16AM 2 points [-]

By that definition of "perfect rationality" no two perfect rationalists can exist in the same universe, or any material universe in which the amount of elapsed time before a decision is always finite.

Comment author: cousin_it 06 April 2009 01:31:34PM *  0 points [-]

Yes, it's true. Perfectly playing any non-mathematical "real world" game (the formulation Vladimir Nesov insists on) requires great powers. If you can translate the game into maths to make it solvable, please do.

Comment author: Vladimir_Nesov 06 April 2009 02:24:00PM 0 points [-]

The decision theory must allow approximations: a ranking that lets us find (recognize) as good a solution as possible, given the practical limitations.

Comment author: cousin_it 06 April 2009 02:36:06PM *  0 points [-]

You are reasoning from the faulty assumption that "surely it's possible to formalize the problem somehow and do something". The problem statement is self-contradictory, and resolving the contradiction is only possible by making some part of the statement false. That's what the prior over Omegas is for: we've been told some bullshit, and we need to determine which parts are true. Note how my Omegas of type 1 and 2 banish the paradox: in case 1, "the money is already there anyway" becomes a plain simple lie, and in case 2, "Omega has already predicted your choice" becomes a lie when you're inside Omega. I say the real world doesn't have contradictions. Don't ask me to reason approximately from contradictory assumptions.

Comment author: Vladimir_Nesov 06 April 2009 02:48:15PM 0 points [-]

You've got to decide something when faced with the situation. It doesn't look like you're arguing that Newcomb's test literally can't be set up, so what do you mean by contradictions? The physical system itself can't be false, only its description. Whatever contradictions you perceive in the test come from problems of interpretation; the only relevant part of this whole endeavor is computing the decision.

Comment author: cousin_it 06 April 2009 04:02:24PM *  0 points [-]

The physical system can't be false, but Omega seems to be lying to us. How do you, as a rationalist, deal with people who contradict themselves verbally? You build models, like I did in the original post.

Comment author: Vladimir_Nesov 06 April 2009 04:40:29PM 0 points [-]

Omega doesn't lie by the statement of the problem. It doesn't even assert anything, it just places the money in the box or doesn't.

Comment author: cousin_it 06 April 2009 04:42:18PM *  0 points [-]

What's wrong with you? If Omega tells us the conditions of the experiment (about "foretelling" and stuff), then Omega is lying. If someone else tells us, then someone else is lying. Let's wrap this up, I'm sick.

Comment author: Vladimir_Nesov 06 April 2009 04:48:38PM 0 points [-]

As has been pointed out numerous times, it may well be possible to foretell your actions, even by some variation on just reading this forum and looking at what people claim to choose in the given situation. That you came up with specific examples that ridicule the claim of being able to predict your decision doesn't mean that there is literally no way to do it. Another, more detailed example is what you listed as the (2) simulation approach.
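The "simulating Omega" case referred to here can be made concrete with a toy model. A minimal sketch (function names and payoff amounts are illustrative assumptions): Omega predicts by running the agent's own decision procedure, then fills the opaque box accordingly:

```python
# Toy model of a "simulating Omega": the prediction is made by running
# the agent's decision procedure before the agent chooses for real.

def omega_fill_box(agent):
    """Omega simulates the agent and puts $1M in box B iff the
    simulated agent one-boxes."""
    predicted = agent()   # run the agent's own decision procedure
    return 1_000_000 if predicted == "one-box" else 0

def payoff(agent):
    box_b = omega_fill_box(agent)   # prediction and filling happen first
    choice = agent()                # then the agent actually chooses
    # Two-boxing also takes the $1,000 in the transparent box.
    return box_b if choice == "one-box" else box_b + 1_000

one_boxer = lambda: "one-box"
two_boxer = lambda: "two-box"
```

Against such an Omega, `payoff(one_boxer)` is $1,000,000 while `payoff(two_boxer)` is only $1,000, matching the earlier remark that in the "simulating Omega" case one just one-boxes without thinking.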

Comment author: cousin_it 06 April 2009 05:05:57PM *  0 points [-]

some variation on just reading this forum and looking at what people claim to choose in the given situation

Case 3, "terminating Omega", demonstrable contradiction.

Another, more detailed example is what you listed as the (2) simulation approach.

I already explained where a "simulator Omega" has to lie to you.

Sorry, I don't want to spend any more time on this discussion. Goodbye.