Qiaochu_Yuan comments on Why do theists, undergrads, and Less Wrongers favor one-boxing on Newcomb? - Less Wrong

15 Post author: CarlShulman 19 June 2013 01:55AM




Comment author: Qiaochu_Yuan 19 June 2013 05:42:11AM * 6 points

The "practical" question is whether you in fact expect there to be things in the universe that specifically punish TDT agents. Omega in Newcomb's problem is doing something that plausibly is very general, namely attempting to predict the behavior of other agents: this is plausibly a general thing that agents in the universe do, as opposed to specifically punishing TDT agents.

TDT also isn't perfect; Eliezer has examples of (presumably, in his eyes, fair) problems where it gives the wrong answer (although I haven't worked through them myself).

Comment author: [deleted] 19 June 2013 06:01:58AM * 2 points

Omega in Newcomb's problem is doing something that plausibly is very general

This seems to be the claim under dispute, and the question of fairness should be distinguished from the claim that Omega is doing something realistic or unrealistic. I think we agree that Newcomb-like situations are practically possible. But it may be that my unfair game is practically possible too, and that in principle no decision theory can come out maximizing utility in every practically possible game.

One response might be to say Newcomb's problem is more unfair than the problem of simply choosing between two boxes containing different amounts of money, because Newcomb's distribution of utility makes mention of the decision. Newcomb's is unfair because it goes meta on the decider. My TDT punishing game is much more unfair than Newcomb's because it goes one 'meta' level up from there, making mention of the decision theories.

You could argue that even if no decision theory can maximise in every arbitrarily unfair game, there are degrees of unfairness related to the degree to which the problem 'goes meta'. We should just prefer the decision theory that can maximise at the highest level of unfairness. This could probably be supported by the observation that while all these unfair games are practically possible, the more unfair a game is, the less likely we are to encounter it outside of a philosophy paper. You could probably come up with a formalization of unfairness, though it might be tricky to argue that it's relevantly exhaustive and linear.

EDIT: (Just a note, you could argue all this without actually granting that my unfair game is practically possible, or that Newcomb's problem is unfair, since the two-boxer will provide those premises.)

Comment author: FeepingCreature 19 June 2013 11:11:19AM 3 points

A theory that is incapable of dealing with agents that make decisions based on the projected reactions of other players is worthless in the real world.

Comment author: Decius 20 June 2013 02:39:08AM 0 points

However, an agent that makes decisions based on perfectly predicting the reactions of other players does not exist in the real world.

Comment author: FeepingCreature 20 June 2013 04:54:57AM 1 point

Newcomb does not require a perfect predictor.

Comment author: Decius 20 June 2013 05:13:05AM 0 points

I know that in the canonical case the numbers work out to a required accuracy of .5005, which is within noise of random.
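That .5005 figure follows from equating expected values under the canonical payoffs ($1,000,000 in the opaque box if one-boxing is predicted, $1,000 always in the transparent box); a quick sketch, assuming those standard amounts:

```python
# Break-even predictor accuracy for one-boxing in the canonical Newcomb setup.
# BIG is the opaque-box prize, SMALL the transparent-box amount.
BIG, SMALL = 1_000_000, 1_000

def ev_one_box(p):
    # Predictor is correct with probability p, so one-boxing gets BIG that often.
    return p * BIG

def ev_two_box(p):
    # If correct (prob p), opaque box is empty: SMALL only.
    # If wrong (prob 1 - p), both boxes pay out.
    return p * SMALL + (1 - p) * (BIG + SMALL)

# Setting ev_one_box(p) == ev_two_box(p) and solving:
# p*BIG = p*SMALL + (1 - p)*(BIG + SMALL)  =>  p = (BIG + SMALL) / (2 * BIG)
p_star = (BIG + SMALL) / (2 * BIG)
print(p_star)  # 0.5005
```

So any predictor reliably better than 50.05% accurate already makes one-boxing the higher-expected-value choice, which is the sense in which the threshold is "within noise of random."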