Qiaochu_Yuan comments on Why one-box? - Less Wrong
Yes, that's what I mean by decisions falling out of the sky uncaused. When a two-boxer models the causal effects of deciding to two-box even if Omega predicts that they one-box, they're positing a hypothetical in which Omega's prediction is wrong even though they know this to be highly unlikely or impossible depending on the setup of the problem. Are you familiar with how TDT sets up the relevant causal diagram?
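Not part of the original exchange, but the disagreement can be made concrete with a toy calculation. Assuming the standard Newcomb payoffs ($1,000,000 in the opaque box iff Omega predicted one-boxing, $1,000 always in the transparent box) and an assumed predictor accuracy p, the evidential calculation conditions the prediction on the action, while the causal (interventional) calculation holds the already-made prediction fixed:

```python
# Standard Newcomb payoffs (assumed for illustration): the opaque box
# holds $1,000,000 iff Omega predicted one-boxing; the transparent box
# always holds $1,000.
M, K = 1_000_000, 1_000
p = 0.99  # assumed predictor accuracy

# Evidential expected utility: condition Omega's prediction on the action.
eu_one_box = p * M            # opaque box is full with probability p
eu_two_box = (1 - p) * M + K  # opaque box is full only with probability 1-p

# Causal (interventional) expected utility: the prediction is already
# fixed, so let q be the probability the opaque box is full regardless
# of what we now do.
def cdt_eu(q):
    return {"one-box": q * M, "two-box": q * M + K}

# Under intervention, two-boxing dominates by exactly K for every q,
# which is why a causal reasoner two-boxes even when p is near 1 --
# the hypothetical in which the prediction stays fixed while the action
# varies is exactly the "decision falling out of the sky" being debated.
```

With p = 0.99 the evidential calculation favors one-boxing ($990,000 vs $11,000), while the interventional one favors two-boxing for every fixed q.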
I think it undermines their attractiveness. I would say unhesitatingly that one-boxing is the correct decision in that scenario because it's the one that saves my daughter, and I would furthermore say this even if I didn't have a decision theory that returned that as the correct decision.
If I write down a long argument that returns a conclusion I know is wrong, I can conclude that there's something wrong with my argument even if I can't point to a particular step in my argument I know to be wrong.
The two-boxer claims that causal consequences are what matters. If this is false, the two-boxer is already in trouble; but if it is true, then it seems unclear (to me) why the fact that the correct way of modelling causal consequences involves interventions should be a problem. So I'm not sure there's really an independent challenge here. But I will have to think on this more, so I don't have anything further to say for now (and my opinion may change on reflection, as I can see why this argument feels compelling).
And yes, I'm aware of how TDT sets up the causal diagrams.
In response, the two-boxer would say that it isn't your decision that saves your daughter (it's your agent type), and that they're not talking about agent type. Now, I'm not saying they're right to say this, but I don't think this line advances the argument (I think we just end up where we were before).
Okay, but why does the two-boxer care about decisions when agent type appears to be what causes winning (on Newcomblike problems)? Your two-boxer seems to want to split so many hairs that she's willing to let her daughter die for it.
No argument here. I'm very open to the suggestion that the two-boxer is answering the wrong question (perhaps they should be interested in rational agent types rather than rational decisions). But it is often suggested on LW that two-boxers are not answering the wrong question but rather are getting the wrong answer (that is, it is suggested that one-boxing is the rational decision, not that the question is uninteresting).
One-boxing is the rational decision; in LW parlance "rational decision" means "the thing that you do to win." I don't think splitting hairs about this is productive or interesting.
I agree. A semantic debate is uninteresting. My original assumption about the differences between two-boxing philosophers and one-boxing LWers was that the two groups used words differently and were engaged in different missions.
If you think the difference is just:
(a) semantic; (b) a difference of missions; (c) a different view of which missions are important
then I agree and I also agree that a long hair splitting debate is uninteresting.
However, my impression was that some people on LW think there is more than a semantic debate going on (for example, my impression was that this is what Eliezer thought). That assumption is what motivated this post. If you think the assumption is wrong, it would be great to know, since in that case I now understand what is going on.
There is more than a semantic debate going on to the extent that two-boxers are of the opinion that if they faced an actual Newcomb's problem, then what they should actually do is to actually two-box. This isn't a disagreement about semantics but about what you should actually do in a certain kind of situation.
Okay. Clarified, so to return to:
The two-boxer cares about decisions because they use the word "decision" to refer to those things we can control. So they say that we can't control our past agent type, but we can control our taking of one or two boxes. Of course, a long argument can be had about which notion of "control" we should appeal to here, but it's not immediately obvious to me that the two-boxer is wrong to care about decisions in their sense. So they would say that which things we care about depends not only on which things can cause the best outcome, but also on whether we can exert control over those things. The basic claim here seems reasonable enough.
Yes, and then their daughters die. Again, if a long argument outputs a conclusion you know is wrong, you know there's something wrong with the argument even if you don't know what it is.
It's not clear to me that the argument outputs the wrong conclusion. Their daughters die because of their agent type at the time of prediction, not because of their decision; and since they can't control their agent type at that past time, they don't try to. It's unclear that someone is irrational for exerting the best influence they can. Of course, this is all old debate, so I don't think we're really progressing things here.
I think that's again about decisions falling out of the sky. The agent type causes decisions to happen. People can't make decisions that are inconsistent with their own agent type.
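The point that agent type causes both the prediction and the decision can be made concrete with a toy model (my own illustrative sketch, not anything from the thread): Omega predicts by running the agent's own decision procedure, and the agent then runs that same procedure, so it cannot act against its type.

```python
# Toy model (assumed): an "agent type" is just a decision procedure.
# Omega's prediction and the agent's actual choice are both produced by
# running that same procedure, so decisions never fall out of the sky.
M, K = 1_000_000, 1_000  # standard Newcomb payoffs (assumed)

def play(agent):
    prediction = agent()  # Omega reads the agent's type and predicts
    opaque = M if prediction == "one-box" else 0
    action = agent()      # the agent can't decide against its own type
    return opaque if action == "one-box" else opaque + K

one_boxer = lambda: "one-box"
two_boxer = lambda: "two-box"
```

In this model `play(one_boxer)` yields $1,000,000 and `play(two_boxer)` yields $1,000: the hypothetical in which a two-boxing decision meets a one-boxing prediction simply never arises, because one agent type produces both.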