Qiaochu_Yuan comments on Why one-box? - Less Wrong

Post author: PhilosophyStudent 30 June 2013 02:38AM | 7 points

Comment author: Qiaochu_Yuan 30 June 2013 02:59:41AM * 18 points

Two-boxers think that decisions are things that can just fall out of the sky uncaused. (This can be made precise by a suitable description of how two-boxers set up the relevant causal diagram; I found Anna Salamon's explanation of this particularly clear.) This is a view of how decisions work driven by intuitions that should be dispelled by sufficient knowledge of cognitive science and/or computer science. I think acquiring such background will make you more sympathetic to the perspective that one should think in terms of winning agent types and not winning decisions.
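For concreteness, here is a minimal Python sketch of the "winning agent type" framing (the 0.99 predictor accuracy and the dollar payoffs are assumed for illustration): Omega's prediction depends only on the agent's type, and the one-boxing type comes out far ahead on average.

    import random

    # Payoffs in the standard setup (assumed values for illustration).
    BOX_A = 1_000          # the transparent box, always containing $1,000
    BOX_B = 1_000_000      # the opaque box, filled only if Omega predicts one-boxing

    def omega_prediction(agent_type, accuracy=0.99):
        """Omega predicts the agent's choice from its type, with the given accuracy."""
        if random.random() < accuracy:
            return agent_type
        return "two-box" if agent_type == "one-box" else "one-box"

    def play(agent_type):
        box_b = BOX_B if omega_prediction(agent_type) == "one-box" else 0
        return box_b if agent_type == "one-box" else box_b + BOX_A

    def average_payoff(agent_type, trials=100_000):
        return sum(play(agent_type) for _ in range(trials)) / trials

    print("one-boxer:", average_payoff("one-box"))   # roughly 990,000
    print("two-boxer:", average_payoff("two-box"))   # roughly 11,000

None of this is an argument by itself; it just locates the quantity being maximised on the agent's type rather than on a decision considered in isolation.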

I also think there's a tendency among two-boxers not to take the stakes of Newcomb's problem seriously enough. Suppose that instead of offering you a million dollars Omega offers to spare your daughter's life. Now what do you do?

Comment author: PhilosophyStudent 30 June 2013 03:14:09AM * 1 point

Thanks for the reply, more interesting arguments.

Two-boxers think that decisions are things that can just fall out of the sky uncaused.

I'm not sure that's a fair description of two-boxers. Two-boxers think that the best way to model the causal effects of a decision is by intervention or something similar. At no point do two-boxers need to deny that decisions are caused. Rather, they just need to claim that the way you figure out the causal effects of an action is by intervention-like modelling.

I also think there's a tendency among two-boxers not to take the stakes of Newcomb's problem seriously enough. Suppose that instead of offering you a million dollars Omega offers to spare your daughter's life. Now what do you do?

I don't claim to be a two-boxer so I don't know. But I don't think this point really undermines the strength of the two-boxing arguments.

Comment author: Qiaochu_Yuan 30 June 2013 03:30:19AM * 5 points

Two-boxers think that the best way to model the causal effects of a decision is by intervention or something similar.

Yes, that's what I mean by decisions falling out of the sky uncaused. When a two-boxer models the causal effects of deciding to two-box even if Omega predicts that they one-box, they're positing a hypothetical in which Omega's prediction is wrong even though they know this to be highly unlikely or impossible depending on the setup of the problem. Are you familiar with how TDT sets up the relevant causal diagram?
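To make this concrete, here is a rough Python sketch of the intervention-style calculation (for illustration only; the prior over Omega's prediction is an assumed parameter, and this is not TDT's diagram): the decision is varied while the prediction is held fixed, so two-boxing comes out ahead by exactly the contents of the small box, whatever prior is used.

    # CDT-style expected value under an intervention do(decision): the
    # prediction is treated as fixed, independent of the decision being varied.
    BOX_A = 1_000
    BOX_B = 1_000_000

    def cdt_expected_value(decision, p_predicted_one_box):
        ev_opaque_box = p_predicted_one_box * BOX_B
        return ev_opaque_box + (BOX_A if decision == "two-box" else 0)

    for p in (0.01, 0.5, 0.99):
        print(p,
              cdt_expected_value("one-box", p),
              cdt_expected_value("two-box", p))
    # For every prior p, two-boxing scores exactly BOX_A higher -- which is
    # just to say the calculation lets the decision vary while the prediction
    # stays put, i.e. it entertains the possibility that Omega is wrong.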

But I don't think this point really undermines the strength of the arguments I outline above.

I think it undermines their attractiveness. I would say unhesitatingly that one-boxing is the correct decision in that scenario because it's the one that saves my daughter, and I would furthermore say this even if I didn't have a decision theory that returned that as the correct decision.

If I write down a long argument that returns a conclusion I know is wrong, I can conclude that there's something wrong with my argument even if I can't point to a particular step in my argument I know to be wrong.

Comment author: PhilosophyStudent 30 June 2013 03:40:49AM 0 points

Yes, that's what I mean by decisions falling out of the sky uncaused. When a two-boxer models the causal effects of deciding to two-box even if Omega predicts that they one-box, they're positing a hypothetical in which Omega's prediction is wrong even though they know this to be highly unlikely or impossible depending on the setup of the problem.

The two-boxer claims that causal consequences are what matters. If this is false, the two-boxer is already in trouble; but if it is true, then it seems unclear (to me) why the fact that the correct way of modelling causal consequences involves interventions should be a problem. So I'm unclear as to whether there's really an independent challenge here. But I will have to think on this more, so I don't have anything more to say for now (and my opinion may change on further reflection, as I can see why this argument feels compelling).

And yes, I'm aware of how TDT sets up the causal diagrams.

I think it undermines their attractiveness. I would say unhesitatingly that one-boxing is the correct decision in that scenario because it's the one that saves my daughter, and I would furthermore say this even if I didn't have a decision theory that returned that as the correct decision.

In response, the two-boxer would say that it isn't your decision that saves your daughter (it's your agent type) and they're not talking about agent type. Now I'm not saying they're right to say this but I don't think that this line advances the argument (I think we just end up where we were before).

Comment author: Qiaochu_Yuan 30 June 2013 03:46:49AM * 1 point

Okay, but why does the two-boxer care about decisions when agent type appears to be what causes winning (on Newcomblike problems)? Your two-boxer seems to want to split so many hairs that she's willing to let her daughter die for it.

Comment author: PhilosophyStudent 30 June 2013 03:52:51AM 2 points

No argument here. I'm very open to the suggestion that the two-boxer is answering the wrong question (perhaps they should be interested in rational agent type rather than rational decisions), but it is often suggested on LW that two-boxers are not answering the wrong question but rather are getting the wrong answer (that is, it is suggested that one-boxing is the rational decision, not that it is uninteresting whether this is the case).

Comment author: Qiaochu_Yuan 30 June 2013 03:57:55AM * 1 point

One-boxing is the rational decision; in LW parlance "rational decision" means "the thing that you do to win." I don't think splitting hairs about this is productive or interesting.

Comment author: PhilosophyStudent 30 June 2013 04:05:36AM * 2 points

I agree. A semantic debate is uninteresting. My original assumption about the differences between two-boxing philosophers and one-boxing LWers was that the two groups used words differently and were engaged in different missions.

If you think the difference is just:

(a) semantic; (b) a difference of missions; (c) a different view of which missions are important

then I agree, and I also agree that a long hair-splitting debate is uninteresting.

However, my impression was that some people on LW seem to think there is more than a semantic debate going on (for example, my impression was that this is what Eliezer thought). This assumption is what motivated the writing of this post. If you think this assumption is wrong, it would be great to know; if that is the case, then I now understand what is going on.

Comment author: Qiaochu_Yuan 30 June 2013 04:10:44AM 3 points

There is more than a semantic debate going on to the extent that two-boxers are of the opinion that if they faced an actual Newcomb's problem, then what they should actually do is to actually two-box. This isn't a disagreement about semantics but about what you should actually do in a certain kind of situation.

Comment author: PhilosophyStudent 30 June 2013 04:27:02AM 0 points

Okay, that clarifies things. So, to return to:

Okay, but why does the two-boxer care about decisions when agent type appears to be what causes winning (on Newcomblike problems)?

The two-boxer cares about decisions because they use the word "decision" to refer to those things we can control. So they say that we can't control our past agent type, but we can control whether we take one box or two. Of course, a long argument can be had about what notion of "control" we should appeal to here, but it's not immediately obvious to me that the two-boxer is wrong to care about decisions in their sense. So they would say that which things we should care about depends not only on which things can cause the best outcome but also on whether we can exert control over those things. The basic claim here seems reasonable enough.

Comment author: ChristianKl 02 July 2013 10:43:16AM 0 points

In response, the two-boxer would say that it isn't your decision that saves your daughter (it's your agent type) and they're not talking about agent type.

I think that's again about decisions falling out of the sky. The agent type causes decisions to happen. People can't make decisions that are inconsistent with their own agent type.
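A toy way to see this, assuming for illustration that an agent type is just a deterministic program: the "decision" is whatever the program outputs, and Omega's prediction is computed from that same program, so there is no run on which the decision comes apart from the type.

    # Toy model: an agent type is a function from a problem to a choice,
    # and Omega (a perfect predictor here) predicts by running that function.
    def one_boxer(problem):
        return "one-box"

    def two_boxer(problem):
        return "two-box"

    def payoff(agent_type):
        box_b = 1_000_000 if agent_type("newcomb") == "one-box" else 0
        choice = agent_type("newcomb")
        return box_b if choice == "one-box" else box_b + 1_000

    print(payoff(one_boxer))   # 1,000,000
    print(payoff(two_boxer))   # 1,000
    # The decision never falls out of the sky: it is whatever the agent's
    # type computes, and the prediction is a function of that same type.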

Comment author: buybuydandavis 30 June 2013 08:58:39AM 0 points

Two-boxers think that decisions are things that can just fall out of the sky uncaused.

Yes, every two-boxer I've ever known has said exactly that a thousand times.

Comment author: framsey 01 July 2013 04:25:02PM * 1 point

Two-boxers think that decisions are things that can just fall out of the sky uncaused.

But don't LW one-boxers think that decision ALGORITHMS are things that can just fall out of the sky uncaused?

As an empirical matter, I don't think humans are psychologically capable of time-consistent decisions in all cases. For instance, TDT implies that one should one-box even in a version of Newcomb's in which one can SEE the content of the boxes. But would a human being really leave the other box behind, if the contents of the boxes were things they REALLY valued (like the lives of close friends), and they could actually see their contents? I think that would be hard for a human to do, even if ex ante they might wish to reprogram themselves to do so.

Comment author: notsonewuser 05 July 2013 12:18:00PM 1 point

For instance, TDT implies that one should one-box even in a version of Newcomb's in which one can SEE the content of the boxes. But would a human being really leave the other box behind, if the contents of the boxes were things they REALLY valued (like the lives of close friends), and they could actually see their contents?

Probably not, and thus s/he would probably never see the second box as anything but empty. His/her loss.

Comment author: ChristianKl 02 July 2013 12:07:16PM 0 points

I think that would be hard for a human to do, even if ex ante they might wish to reprogram themselves to do so.

I think it's hard because most humans don't live their lives according to principles. They care more about the lives of close friends than they care about their principles.

In the end, reprogramming yourself in that way is about being a good Stoic.

Comment author: [deleted] 01 July 2013 02:22:42PM 0 points

Thank you for referencing Anna Salamon's diagrams. I would have one-boxed in the first place, but I really think that those help make it much clearer in general.

Comment author: Dan_Moore 01 July 2013 03:47:40PM -1 points

Two-boxers think that decisions are things that can just fall out of the sky uncaused.

It seems that 2-boxers make this assumption, whereas some 1-boxers (including me) apply a Popperian approach to selecting a model of reality consistent with the empirical evidence.