Alicorn comments on Open Thread: February 2010 - Less Wrong

1 Post author: wedrifid 01 February 2010 06:09AM


Comment author: Alicorn 10 February 2010 02:59:07PM 1 point [-]

Perhaps it will make sense if you view the argument as more of a reason to be the kind of person who one-boxes, rather than an argument to one-box per se.

Comment author: underling 10 February 2010 04:20:22PM 0 points [-]

That's too cryptic for me. Where's the connection to your first comment?

As I said in reply to byrnema, I don't dispute that it's rational to want to be the kind of person who 1-boxes in iterated games, or to commit in advance. But in a one-shot game? I don't see it. What's the rationale behind it?

Comment author: Alicorn 10 February 2010 04:33:53PM 1 point [-]

You have the information that in Newcomblike problems, it is better to (already) be inclined to predictably one-box, because the game is "rigged". So, if you (now) become predictably and generally inclined to one-box, you can win at Newcomblike problems if you encounter them in the future. Even if you only ever run into one.

Of course, Omega is imaginary, so it's entirely a thought experiment, but it's interesting anyway!

Comment author: underling 10 February 2010 04:42:40PM 0 points [-]

Agree completely.

But the crucial difference is: in the one-shot case, the box is already filled or not.

Comment author: Alicorn 10 February 2010 04:44:46PM 5 points [-]

Yes. But it was filled, or not, based on a prediction about what you would do. We are not such tricksy creatures that we can unpredictably change our minds at the last minute and two-box without Omega anticipating this, so the best way to make sure the one box has the goodies in it is to plan to actually take only that box.

Comment author: brazil84 13 February 2010 02:07:01AM 1 point [-]

I agree. I would add that situations can and do arise in real life where the other fellow can predict your behavior better than you can predict it yourself.

For example, suppose that your wife announces she is going on a health kick. She is joining a gym; she will go 4 or 5 times a week; she will eat healthy; and she plans to get back into the shape she was in 10 years ago. You might ask her what she thinks her probability of success is, and she might honestly tell you she thinks there is a 60 or 70% chance her health kick will succeed.

On the other hand, you, her husband, know her pretty well and know that she has a hard time sticking to diets and such. You estimate her probability of success at no more than 10%.

Whose probability estimate is better? I would guess it's the husband's.

Well, in the Newcomb experiment, the AI is like the husband who knows you better than you know yourself. Trying to outguess and/or surprise such an entity is a huge uphill battle. So, even if you don't believe in backwards-causality, you should probably choose as if backwards causality exists.

JMHO
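The expected-value arithmetic behind this argument can be made concrete. The sketch below is a hypothetical illustration (the accuracy values are assumptions, not from the thread); it uses the standard Newcomb payoffs of $1,000,000 in the opaque box iff one-boxing was predicted, and $1,000 always in the transparent box:

```python
# Expected payoff of each strategy against a predictor with accuracy p.
# Standard Newcomb payoffs: $1,000,000 in the opaque box iff one-boxing
# was predicted; $1,000 always sits in the transparent box.

def expected_value(p: float) -> dict:
    """Return expected payoffs of one-boxing vs two-boxing."""
    one_box = p * 1_000_000                 # predictor right -> opaque box full
    two_box = 1_000 + (1 - p) * 1_000_000   # predictor wrong -> opaque box full
    return {"one_box": one_box, "two_box": two_box}

# Even a merely decent predictor (like the husband above, nowhere near
# Omega-level) makes one-boxing the better bet:
print(expected_value(0.9))
# One-boxing wins whenever p > 0.5005 -- barely better than a coin flip.
```

Nothing here requires backwards causation; it only requires that the prediction and the choice are correlated, which is the point brazil84 is making.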

Comment author: Alicorn 13 February 2010 02:08:32AM 3 points [-]

you, her husband

I do not anticipate ever becoming someone's husband.

Comment author: brazil84 13 February 2010 02:10:43AM 2 points [-]

Well, it's just a hypothetical. If you like, you can switch the roles of wife and husband. Or substitute domestic partners, or anything you like :)

Comment author: Clippy 13 February 2010 03:23:10AM 1 point [-]

Neither do I. That would be stupid. Why would anyone ever want to become anyone's husband?

Comment author: Kevin 13 February 2010 05:33:45AM 0 points [-]

Maybe your wife-to-be is a wealthy heiress?

Comment author: Unknowns 13 February 2010 06:57:09AM 3 points [-]

I think Clippy's point was that becoming a husband doesn't generate paperclips.

Comment author: underling 11 February 2010 08:58:04AM 0 points [-]

so the best way to make sure the one box has the goodies in it is to plan to actually take only that box.

If we rule out backwards causation, then why on earth should this be true???

Comment author: Jordan 11 February 2010 09:43:38AM *  1 point [-]

Imagine a simple but related scenario that involves no backwards causation:

You're a 12-year-old kid, and you know your mom doesn't want you to play with your new Splogomax unless an adult is with you. Your mom leaves you alone for an hour to run to the store, telling you she'll punish you if you play with the Splogomax. Whether or not there's any evidence of it when she returns, she knows you well enough to tell whether you're going to play with it, although she'll refrain from passing judgement until she has just gotten back from the store.

Assuming you fear punishment more than you enjoy playing with your Splogomax, do you decide to play or not?

Edit: now I feel stupid. There's a much simpler way to get my point across. Just imagine Omega doesn't fill any box until after you've picked up one or two boxes and walked away, but that he doesn't look at your choice when filling the boxes.

Comment author: underling 11 February 2010 12:43:17PM 1 point [-]

So what is your point? That no backwards causation is involved is assumed in both cases. If this scenario is meant for dialectical purposes, it fails: it is equally clear, if not clearer, that my actual choice has no effect on the contents of the boxes.

For what it's worth, let me reply with my own story:

Omega puts the two boxes in front of you, and says the usual. Just as you're about to pick, I come along, grab both boxes, and run. I do this every time Omega confronts someone with his boxes, and I always do as well as a two-boxer and better than a one-boxer. You have the same choice as me: just two-box. Why won't you?

Comment author: Cyan 11 February 2010 02:20:20PM 2 points [-]

You have the same choice as me...

If Omega fills the boxes according to its prediction of the choice of the person being offered the boxes, and not the person who ends up with the boxes, then the above statement is where your argument breaks down.

Comment author: underling 11 February 2010 02:36:46PM 0 points [-]

You have the same choice as me: take one box or both. (Or, if you assume there are no choices in this possible world because of determinism: it would be rational to 2-box, because I, the thief, 2-box, and my strategy is dominant.)

Comment author: MrHen 10 February 2010 04:35:43PM 0 points [-]

The one-shot game still has all of the information for the money in the boxes. If you walked in and picked both boxes you wouldn't be surprised by the result. If you walked in and picked one box you wouldn't be surprised by the result. Picking one box nets more money, so pick one box.

Comment author: underling 10 February 2010 04:44:20PM 0 points [-]

I deny that 1-boxing nets more money - ceteris paribus.

Comment author: thomblake 10 February 2010 04:49:15PM 5 points [-]

I deny that 1-boxing nets more money - ceteris paribus.

Then you're simply disagreeing with the problem statement. If you 1-box, you get $1M. If you 2-box, you get $1k. If you 2-box because you're considering the impossible possible worlds where you get $1.001M or $0, you still get $1k.
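That payoff structure can be sketched in a few lines. This is only an illustration of the problem statement as thomblake gives it, assuming a perfect predictor, so the "mixed" worlds ($1,001,000 and $0) are simply unreachable:

```python
# Newcomb payoffs with a perfect predictor: the opaque box's contents
# are fixed by the (always-correct) prediction of your choice, so only
# two of the four conceivable outcomes can actually occur.

def payoff(choice: str) -> int:
    """Payoff for a fixed disposition, given a perfect predictor."""
    opaque = 1_000_000 if choice == "one" else 0  # filled per prediction
    transparent = 1_000                            # always present
    return opaque if choice == "one" else opaque + transparent

print(payoff("one"))  # 1000000
print(payoff("two"))  # 1000
```

Reasoning about the $1,001,000 or $0 worlds means conditioning on the predictor being wrong, which the problem statement rules out.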

At this point, I no longer think you're adding anything new to the discussion.

Comment author: underling 11 February 2010 08:37:48AM 0 points [-]

I never said I could add anything new to the discussion. The problem is: judging by the comments so far, nobody here can, either. And since most experts outside this community agree on 2-boxing (or am I wrong about this?), my original question stands.

Comment author: Alicorn 10 February 2010 04:47:19PM 3 points [-]

Ceteris ain't paribus. That's the whole point.