Comment author: Cyan 11 February 2010 04:07:20PM *  1 point [-]

What do you mean?

I mean that as part of the specification of the problem, Omega has all the information necessary to determine what you will choose before you know yourself. There are causal arrows that descend from the situation specified by that information to (i) your choice, and (ii) the contents of the box.

why almost no one here seems to see the point of 2-boxing, and the amazing overconfidence is beyond me.

You stated that "the game is rigged". The reasoning behind 2-boxing ignores that fact. In common parlance, a rigged game is unwinnable, but this game is knowably winnable. So go ahead and win without worrying about whether the choice has the label "rational" attached!

Comment author: underling 11 February 2010 04:24:25PM 1 point [-]

Sadly, we seem to make no progress in any direction. Thanks for trying.

Comment author: Cyan 11 February 2010 03:45:24PM *  2 points [-]

Yes, that's true. Now chase "however obtained" up a level -- after all, you have all the information necessary to do so.

Comment author: underling 11 February 2010 03:57:49PM 0 points [-]

What do you mean? It could have created and run a copy, for instance, but anyhow, there would be no causal link. That's probably the whole point of the 2-Boxer-majority.

I can see a rationale behind one-boxing, and it might even be a standoff, but why almost no one here seems to see the point of 2-boxing, and the amazing overconfidence is beyond me.

Comment author: Cyan 11 February 2010 03:17:09PM *  1 point [-]

No. The method's output depends on its input, which by hypothesis is a specification of the situation that includes all the information necessary to determine the output of the individual's decision algorithm. Hence the decision algorithm is a causal antecedent of the contents of the boxes.

Comment author: underling 11 February 2010 03:41:31PM 0 points [-]

I mean, the actual token, the action, the choice, the act of my choosing does not determine the contents. It's Omega's belief (however obtained) that this algorithm is such-and-such that led it to fill the boxes accordingly.

Comment author: Cyan 11 February 2010 02:45:26PM 2 points [-]

It's better for the thief to two-box because it isn't the thief's decision algorithm that determined the contents of the boxes.

Comment author: underling 11 February 2010 03:09:07PM 0 points [-]

Is it not rather Omega's undisclosed method that determines the contents? That seems to make all the difference.

Comment author: Cyan 11 February 2010 02:20:20PM 2 points [-]

You have the same choice as me...

If Omega fills the boxes according to its prediction of the choice of the person being offered the boxes, and not of the person who ends up with the boxes, then the above statement is where your argument breaks down.

Comment author: underling 11 February 2010 02:36:46PM 0 points [-]

You have the same choice as me: Take one box or both. (Or, if you assume there are no choices in this possible world because of determinism: It would be rational to 2-box, because I, the thief, do 2-box, and my strategy is dominant)

Comment author: byrnema 11 February 2010 12:54:16PM *  0 points [-]

My point of view is that the winning thing to do here and the logical thing to do are the same.

If you want to understand my point of view or if you want me to understand your point of view, you need to tell me where you think logical and winning diverge. Then I tell you why I think they don't, etc.

You've mentioned 'backwards causality' which isn't assumed in our one-box solution to Newcomb. How comfortable are you with the assumption of determinism? (If you're not, how do you reconcile that Omega is a perfect predictor?)

Comment author: underling 11 February 2010 02:09:01PM 0 points [-]

You've mentioned 'backwards causality' which isn't assumed in our one-box solution to Newcomb.

Only to rule it out as a solution. No problem here.

How comfortable are you with the assumption of determinism?

In general, very. Concerning Newcomb, I don't think it's essential, and as far as I recall, it isn't mentioned in the original problem.

you need to tell me where you think logical and winning diverge

I'll try again: I think you can show with simple counterexamples that winning is neither necessary nor sufficient for being logical (your term for my rational, if I understand you correctly).

Here we go: it's not necessary, because you can be unlucky. Your strategy might be best, but you might lose as soon as luck is involved. It's not sufficient, because you can be lucky. You can win a game even if you're not perfectly rational.

1-boxing seems a variant of the second case: instead of (bad) luck, the game is rigged.

Comment author: Jordan 11 February 2010 09:43:38AM *  1 point [-]

Imagine a simple but related scenario that involves no backwards causation:

You're a 12-year-old kid, and you know your mom doesn't want you to play with your new Splogomax unless an adult is with you. Your mom leaves you alone for an hour to run to the store, telling you she'll punish you if you play with the Splogomax, and that, whether or not there's any evidence of it when she returns, she knows you well enough to know if you're going to play with it, although she'll refrain from passing judgement until she has just gotten back from the store.

Assuming you fear punishment more than you enjoy playing with your Splogomax, do you decide to play or not?

Edit: now I feel stupid. There's a much simpler way to get my point across. Just imagine Omega doesn't fill any box until after you've picked up one or two boxes and walked away, but that he doesn't look at your choice when filling the boxes.

Comment author: underling 11 February 2010 12:43:17PM 1 point [-]

So what is your point? That no backwards causation is involved is assumed in both cases. If this scenario is meant for dialectic purposes, it fails: it is equally clear, if not clearer, that my actual choice has no effect on the contents of the boxes.

For what it's worth, let me reply with my own story:

Omega puts the two boxes in front of you, and says the usual. Just as you're about to pick, I come along, grab both boxes, and run. I do this every time Omega confronts someone with his boxes, and I always do as well as a two-boxer and better than a one-boxer. You have the same choice as me: just two-box. Why won't you?
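The thief scenario can be sketched as a toy model (this is an illustration only, assuming, as discussed above, that Omega's prediction concerns the person originally offered the boxes, not the thief):

```python
# Toy model of the thief scenario: Omega fills the boxes based on its
# prediction of the *chooser's* decision, then a thief grabs both boxes.
# The thief's haul therefore depends on the chooser's algorithm, not on
# anything the thief does.

def box_contents(chooser_prediction: str) -> tuple[int, int]:
    """Return (opaque, transparent) box contents in dollars."""
    opaque = 1_000_000 if chooser_prediction == "one-box" else 0
    return opaque, 1_000

def thief_take(chooser_prediction: str) -> int:
    opaque, transparent = box_contents(chooser_prediction)
    return opaque + transparent  # the thief always grabs both boxes

print(thief_take("one-box"))   # 1001000
print(thief_take("two-box"))   # 1000
```

The thief does best robbing predicted one-boxers, which illustrates why the thief's situation differs from the chooser's: the thief's decision algorithm never entered Omega's prediction.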

Comment author: Alicorn 10 February 2010 04:44:46PM 5 points [-]

Yes. But it was filled, or not, based on a prediction about what you would do. We are not such tricksy creatures that we can unpredictably change our minds at the last minute and two-box without Omega anticipating this, so the best way to make sure the one box has the goodies in it is to plan to actually take only that box.

Comment author: underling 11 February 2010 08:58:04AM 0 points [-]

so the best way to make sure the one box has the goodies in it is to plan to actually take only that box.

If we rule out backwards causation, then why on earth should this be true???

Comment author: byrnema 10 February 2010 04:55:43PM *  1 point [-]

By rational, I think you mean logical. (We tend to define 'rational' as 'winning' around here.*)

... and -- given a certain set of assumptions -- it is absolutely logical that (a) Omega has already made his prediction, (b) the stuff is already in the boxes, (c) you can only maximize your payoff by choosing both boxes. (This is what I meant by this line of reasoning isn't incorrect, it's just unproductive in finding the solution to this dilemma.)

But consider what other assumptions have already snuck into the logic above. We're not familiar with outcomes that depend upon our decision algorithm; we're not used to optimizing over this. The productive direction to think along is this one: unlike a typical situation, the content of the boxes depends upon the algorithm that outputs your choice, and only indirectly on the choice itself.

You're halfway to the solution of this problem if you can see both ways of thinking about the problem as reasonable. You'll feel some frustration that you can alternate between them -- like flip-flopping between different interpretations of an optical illusion -- and they're contradictory. Then the second half of the solution is to notice that you can choose which way to think about the problem as a willful choice -- make the choice that results in the win. That is the rational (and logical) thing to do.

Let me know if you don't agree with the part where you're supposed to see both ways of thinking about the problem as reasonable.


* But the distinction doesn't really matter because we haven't found any cases where rational and logical aren't the same thing.

Comment author: underling 11 February 2010 08:54:41AM 0 points [-]

May I suggest again that defining rational as winning may be the problem?

Comment author: thomblake 10 February 2010 04:49:15PM 5 points [-]

I deny that 1-boxing nets more money - ceteris paribus.

Then you're simply disagreeing with the problem statement. If you 1-box, you get $1M. If you 2-box, you get $1k. If you 2-box because you're considering the impossible possible worlds where you get $1.001M or $0, you still get $1k.
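The payoff claim above can be made concrete with a minimal sketch (assuming a perfect predictor, as the problem statement stipulates, so the prediction always matches the actual choice):

```python
# Newcomb payoffs with a perfect predictor: Omega puts $1M in the opaque
# box only if it predicts you will one-box. Perfect prediction means the
# prediction equals your actual choice, so the "impossible possible
# worlds" ($1.001M and $0) never occur.

def payoff(choice: str) -> int:
    prediction = choice  # perfect predictor: prediction matches choice
    opaque = 1_000_000 if prediction == "one-box" else 0
    transparent = 1_000
    if choice == "one-box":
        return opaque
    return opaque + transparent

print(payoff("one-box"))   # 1000000
print(payoff("two-box"))   # 1000
```

Under this assumption the one-boxer walks away with $1M and the two-boxer with $1k, exactly as stated.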

At this point, I no longer think you're adding anything new to the discussion.

Comment author: underling 11 February 2010 08:37:48AM 0 points [-]

I never said I could add anything new to the discussion. The problem is: judging by the comments so far, nobody here can, either. And since most experts outside this community agree on 2-boxing (or am I wrong about this?), my original question stands.
