cousin_it comments on Formalizing Newcomb's - Less Wrong

Post author: cousin_it, 05 April 2009 03:39PM




Comment author: cousin_it, 05 April 2009 08:30:51PM, 2 points

Annoyance has it right but is too cryptic: it's the other way around. If your decision theory fails on this test ground but works perfectly well in the real world, maybe you need to work some more on the test ground. For now it seems I've adequately demonstrated that your available options depend on the implementation of Omega and look nothing like the situations handled by the decision theories we find effective in reality. Good sign?

Comment author: Vladimir_Nesov, 05 April 2009 08:48:47PM, 1 point

Annoyance has it right but is too cryptic: it's the other way around. If your decision theory fails on this test ground but works perfectly well in the real world, maybe you need to work some more on the test ground.

Not quite. The failure of a strong decision theory on a test is a reason to start doubting the adequacy of both the test problem and the decision theory. The decision to amend one or the other must always come through you, unless you already trust something else more than you trust yourself. The paradox doesn't care what you do; it is merely a building block toward a better explication of what kinds of decisions you consider correct.

Comment author: cousin_it, 05 April 2009 09:00:01PM, 2 points

Whoa, let's have some common sense here instead of preaching. I have good reasons to trust accepted decision theories. What reason do I have to trust Newcomb's problem? Given how much in my analysis turned out to depend on the implementation of Omega, I don't trust the thing at all anymore. Do you? Why?

Comment author: Vladimir_Nesov, 05 April 2009 09:08:15PM, 1 point

You are not asked to trust anything. You have a paradox; resolve it, understand it. What do you refer to, when using the word "trust" above?

Comment author: cousin_it, 05 April 2009 09:13:27PM, 0 points

Uh, didn't I convince you that, given any concrete implementation of Omega, the paradox utterly disappears? Let's go at it again. What kind of Omega do you offer me?

Comment author: Vladimir_Nesov, 05 April 2009 09:22:26PM, 0 points

The usual setting: you are a sufficiently simple mere human, not building your own Omegas in the process, going through the procedure in a controlled environment if that helps make the case stronger, and Omega is able to predict your actual final decision by whatever means it pleases. What Omega does to predict your decision doesn't affect you and shouldn't concern you; it seems that only the fact that it's usually right is relevant.
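To make one version of this setting concrete, here is a toy model (my own construction, purely for illustration) of the "simulating Omega" variant cousin_it mentions elsewhere in the thread: Omega predicts by literally running the agent's decision procedure once, so a deterministic agent cannot diverge from its own prediction.

```python
# Toy "simulating Omega" (an illustrative sketch, not from the post):
# Omega runs the agent's decision procedure once to predict, fills box B
# accordingly, and then the agent makes its real choice.

def play(agent):
    prediction = agent()  # Omega's simulation of the agent
    box_b = 1_000_000 if prediction == "one-box" else 0
    choice = agent()      # the agent's actual decision
    return box_b + (1_000 if choice == "two-box" else 0)

# A deterministic agent can't do otherwise than its simulation did,
# so one-boxers walk away with $1,000,000 and two-boxers with $1,000.
```

Against this particular Omega the prediction is always right by construction, which is why one-boxing is trivially correct in the "simulating Omega" case.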

Comment author: byrnema, 05 April 2009 09:53:33PM, 2 points

"What Omega does to predict your decision doesn't affect you and shouldn't concern you; it seems that only the fact that it's usually right is relevant."

Is this the least convenient world? What Omega does to predict my decision does concern me, because it determines whether I should one-box or two-box. However, I'm willing to allow that in a LCW, I'm not given enough information. Is this the Newcomb "problem", then -- how to make a rational decision when you're not given enough information?

Comment author: cousin_it, 05 April 2009 09:31:52PM, 0 points

No perfectly rational decision theory can be applied in this case, just as you can't play chess perfectly rationally on a desktop PC. Several comments above I outlined a good approximation that I would use and would recommend a computer use. This case is just... uninteresting. It doesn't raise any question marks in my mind. Should it?

Comment author: byrnema, 05 April 2009 09:55:00PM, 2 points

Can you please explain why a rational decision theory cannot be applied?

Comment author: cousin_it, 05 April 2009 10:33:01PM, 0 points

As I understand it, perfect rationality in this scenario requires we assume some Bayesian prior over all possible implementations of Omega and do a ton of computation for each case. For example, some Omegas could be type 3 and deceivable with non-zero probability; we have to determine how. If we know which implementation we're up against, the calculations are a little easier, e.g. in the "simulating Omega" case we just one-box without thinking.

Comment author: Eliezer_Yudkowsky, 06 April 2009 11:55:16AM, 2 points

By that definition of "perfect rationality" no two perfect rationalists can exist in the same universe, or any material universe in which the amount of elapsed time before a decision is always finite.

Comment author: Vladimir_Nesov, 06 April 2009 02:24:00PM, 0 points

The decision theory must allow approximations: a ranking that lets you find (or recognize) as good a solution as possible given the practical limitations.

Comment author: Vladimir_Nesov, 05 April 2009 10:07:12PM, 0 points

The problem setting itself shouldn't raise many questions. If you agree that the right answer in this setting is to one-box, you probably understand the test. Next, look at the popular decision theories that calculate that the "correct" answer is to two-box. Find what's wrong with those theories, or with the ways of applying them, and find a way to generalize them to handle this case and other cases correctly.
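A minimal sketch (mine, with an assumed 99% predictive accuracy) of how the two standard ways of applying decision theory diverge here: evidential reasoning conditions on the chosen action, while the naive causal/dominance argument evaluates actions with the box contents held fixed.

```python
# Illustrative contrast (assumed numbers, not from the post): evidential
# expected value conditions on the action, while the naive dominance
# argument treats box B's contents as already fixed.

ACCURACY = 0.99  # assumed predictive accuracy of Omega

def evidential_value(action):
    # P(box B filled | action) tracks Omega's accuracy
    if action == "one-box":
        return ACCURACY * 1_000_000
    return ACCURACY * 1_000 + (1 - ACCURACY) * 1_001_000

def dominance_value(action, box_b_filled):
    # Box contents held fixed; two-boxing always adds the visible $1,000
    base = 1_000_000 if box_b_filled else 0
    return base + (1_000 if action == "two-box" else 0)

# Evidential reasoning recommends one-boxing, while dominance recommends
# two-boxing in both fixed-contents cases -- hence the apparent paradox.
```

The "what's wrong with those theories" question is then the question of which of these two calculations is the legitimate application of decision theory to the problem.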

Comment author: cousin_it, 05 April 2009 10:28:26PM, 0 points

There's nothing wrong with those theories. They are wrongly applied, selectively ignoring the part of the problem statement that explicitly says you can't two-box if Omega decided you would one-box. Any naive application will do that because all standard theories assume causality, which is broken in this problem. Before applying decision theories we must work out what causes what. My original post was an attempt to do just that.

What other cases?

Comment author: Vladimir_Nesov, 06 April 2009 01:07:24AM, 0 points

There's nothing wrong with those theories. They are wrongly applied, selectively ignoring the part of the problem statement that explicitly says you can't two-box if Omega decided you would one-box.

The decision is yours, Omega only foresees it. See also: Thou Art Physics.

Any naive application will do that because the problem statement is contradictory on the surface. Before applying decision theories, the contradiction has to be resolved somehow as we work out what causes what. My original post was an attempt to do just that.

Do that for the standard setting I outlined above, instead of constructing broken variations of it. What it means for something to cause something else, and how one should go about describing situations in that model, should arguably be part of any decision theory.