
I have sympathy with both one-boxers and two-boxers in Newcomb's problem. Contrary to this, however, many people on Less Wrong seem to be staunch and confident one-boxers. So I'm turning to you guys to ask for help figuring out whether I should be a staunch one-boxer too. Below is an imaginary dialogue setting out my understanding of the arguments normally advanced on LW for one-boxing and I was hoping to get help filling in the details and extending this argument so that I (and anyone else who is uncertain about the issue) can develop an understanding of the strongest arguments for one-boxing.

One-boxer: You should one-box because one-boxing wins (that is, a person that one-boxes ends up better off than a person that two-boxes). Not only does it seem clear that rationality should be about winning generally (that a rational agent should not be systematically outperformed by irrational agents) but Newcomb's problem is normally discussed within the context of instrumental rationality, which everyone agrees is about winning.

Me: I get that, and that's one of the main reasons I'm sympathetic to the one-boxing view, but the two-boxer has a response to these concerns. The two-boxer agrees that rationality is about winning and they agree that winning means ending up with the most utility. The two-boxer should also agree that the rational decision theory to follow is one that will one-box on all future Newcomb's problems (those where the prediction has not yet occurred) and can also agree that the best timeless agent type is a one-boxing type. However, the two-boxer also claims that two-boxing is the rational decision.

O: Sure, but why think they're right? After all, two-boxers don't win.

M: Okay, those with a two-boxing agent type don't win but the two-boxer isn't talking about agent types. They're talking about decisions. So they are interested in what aspects of the agent's winning can be attributed to their decision and they say that we can attribute the agent's winning to their decision if this is caused by their decision. This strikes me as quite a reasonable way to apportion the credit for various parts of the winning. (Of course, it could be said that the two-boxer is right but they are playing a pointless game and should instead be interested in winning simpliciter rather than winning decisions. If this is the claim then the argument is dissolved and there is no disagreement. But I take it this is not the claim).

O: But this is a strange convoluted definition of winning. The agent ends up worse off than one-boxing agents so it must be a convoluted definition of winning that says that two-boxing is the winning decision.

M: Hmm, maybe... But I'm worried that relevant distinctions aren't being made here (you've started talking about winning agents rather than winning decisions). The two-boxer relies on the same definition of winning as you and so agrees that the one-boxing agent is the winning agent. They just disagree about how to attribute winning to the agent's decisions (rather than to other features of the agent). And their way of doing this strikes me as quite a natural one. We credit the decision with the winning that it causes. Is this the source of my unwillingness to jump fully on board with your program? Do we simply disagree about the plausibility of this way of attributing winning to decisions?

Meta-comment (a): I don't know what to say here. Is this what's going on? Do people just intuitively feel that this is a crazy way to attribute winning to decisions? If so, can anyone suggest why I should adopt the one-boxer perspective on this?

O: But then the two-boxer has to rely on the claim that Newcomb's problem is "unfair" to explain why the two-boxing agent doesn't win. It seems absurd to say that a scenario like Newcomb's problem is unfair.

M: Well, the two-boxing agent means something very particular by "unfair". They simply mean that in this case the winning agent doesn't correspond to the winning decision. Further, they can explain why this is the case without saying anything that strikes me as crazy. They simply say that Newcomb's problem is a case where the agent's winnings can't entirely be attributed to the agent's decision (ignoring a constant value). But if something else (the agent's type at time of prediction) also influences the agent's winning in this case, why should it be a surprise that the winning agent and the winning decision come apart? I'm not saying the two-boxer is right here but they don't seem to me to be obviously wrong either...

Meta-comment (b): Interested to know what response should be given here.

O: Okay, let's try something else. The two-boxer focuses only on causal consequences but in doing so they simply ignore all the logical non-causal consequences of their decision algorithm outputting a certain decision. This is an ad hoc, unmotivated restriction.

M: Ad hoc? I'm not sure I see why. Think about the problem with evidential decision theory. The proponent of EDT could say a similar thing (that the proponent of two-boxing ignores all the evidential implications of their decision). The two-boxer will respond that these implications just are not relevant to decision making. When we make decisions we are trying to bring about the best results, not get evidence for these results. Equally, they might say, we are trying to bring about the best results, not derive the best results in our logical calculations. Now I don't know what to make of the point/counter-point here but it doesn't seem to me that the one-boxing view is obviously correct here and I'm worried that we're again going to end up just trading intuitions (and I can see the force of both intuitions here).

Meta-comment: Again, I would love to know whether I've understood this argument and whether something can be said to convince me that the one-boxing view is the clear cut winner here.

End comments: That's my understanding of the primary argument advanced for one-boxing on LW. Are there other core arguments? How can these arguments be improved and extended?

Two-boxers think that decisions are things that can just fall out of the sky uncaused. (This can be made precise by a suitable description of how two-boxers set up the relevant causal diagram; I found Anna Salamon's explanation of this particularly clear.) This is a view of how decisions work driven by intuitions that should be dispelled by sufficient knowledge of cognitive and / or computer science. I think acquiring such background will make you more sympathetic to the perspective that one should think in terms of winning agent types and not winning decisions.

I also think there's a tendency among two-boxers not to take the stakes of Newcomb's problem seriously enough. Suppose that instead of offering you a million dollars Omega offers to spare your daughter's life. Now what do you do?
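Here's a minimal sketch of the "winning agent types" point from above; the predictor accuracy values and the Monte Carlo set-up are my own toy assumptions, not part of the canonical problem:

```python
import random

def payoff(action, prediction):
    # Standard Newcomb payoffs: the opaque box holds $1,000,000 iff the
    # predictor expected one-boxing; the transparent box always holds $1,000.
    opaque = 1_000_000 if prediction == "one-box" else 0
    return opaque if action == "one-box" else opaque + 1_000

def average_winnings(agent_type, accuracy, trials=100_000):
    # The predictor reads the agent's type and is right with probability `accuracy`.
    other = "two-box" if agent_type == "one-box" else "one-box"
    total = 0
    for _ in range(trials):
        prediction = agent_type if random.random() < accuracy else other
        total += payoff(agent_type, prediction)
    return total / trials

for accuracy in (0.99, 0.9, 0.6):
    print(accuracy,
          round(average_winnings("one-box", accuracy)),
          round(average_winnings("two-box", accuracy)))
```

For any accuracy above roughly 50.05% the one-boxing type comes out ahead on average, which is the sense in which one-boxing "wins"; the two-boxer's reply, of course, is that this shows the winning agent type rather than the winning decision.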

2framsey11y
But don't LW one-boxers think that decision ALGORITHMS are things that can just fall out of the sky uncaused? As an empirical matter, I don't think humans are psychologically capable of time-consistent decisions in all cases. For instance, TDT implies that one should one-box even in a version of Newcomb's in which one can SEE the content of the boxes. But would a human being really leave the other box behind, if the contents of the boxes were things they REALLY valued (like the lives of close friends), and they could actually see their contents? I think that would be hard for a human to do, even if ex ante they might wish to reprogram themselves to do so.
1[anonymous]11y
Probably not, and thus s/he would probably never see the second box as anything but empty. His/her loss.
0ChristianKl11y
I think it's hard because most humans don't live their lives according to principles. They care more about the lives of close friends than they care about their principles. In the end, reprogramming yourself in that way is about being a good Stoic.
2PhilosophyStudent11y
Thanks for the reply, more interesting arguments. I'm not sure that's a fair description of two-boxers. Two-boxers think that the best way to model the causal effects of a decision is by intervention or something similar. At no point do two-boxers need to deny that decisions are caused. Rather, they just need to claim that the way you figure out the causal effects of an action is by intervention-like modelling. I don't claim to be a two-boxer so I don't know. But I don't think this point really undermines the strength of the two-boxing arguments.
8Qiaochu_Yuan11y
Yes, that's what I mean by decisions falling out of the sky uncaused. When a two-boxer models the causal effects of deciding to two-box even if Omega predicts that they one-box, they're positing a hypothetical in which Omega's prediction is wrong even though they know this to be highly unlikely or impossible depending on the setup of the problem. Are you familiar with how TDT sets up the relevant causal diagram? I think it undermines their attractiveness. I would say unhesitatingly that one-boxing is the correct decision in that scenario because it's the one that saves my daughter, and I would furthermore say this even if I didn't have a decision theory that returned that as the correct decision. If I write down a long argument that returns a conclusion I know is wrong, I can conclude that there's something wrong with my argument even if I can't point to a particular step in my argument I know to be wrong.
0PhilosophyStudent11y
The two-boxer claims that causal consequences are what matters. If this is false, the two-boxer is already in trouble but if this is true then it seems unclear (to me) that the fact that the correct way of modelling causal consequences involves interventions should be a problem. So I'm unclear as to whether there's really an independent challenge here. But I will have to think on this more so don't have anything more to say for now (and my opinion may change on further reflection as I can see why this argument feels compelling). And yes, I'm aware of how TDT sets up the causal diagrams. In response, the two-boxer would say that it isn't your decision that saves your daughter (it's your agent type) and they're not talking about agent type. Now I'm not saying they're right to say this but I don't think that this line advances the argument (I think we just end up where we were before).
2Qiaochu_Yuan11y
Okay, but why does the two-boxer care about decisions when agent type appears to be what causes winning (on Newcomblike problems)? Your two-boxer seems to want to split so many hairs that she's willing to let her daughter die for it.
2PhilosophyStudent11y
No argument here. I'm very open to the suggestion that the two-boxer is answering the wrong question (perhaps they should be interested in rational agent type rather than rational decisions) but it is often suggested on LW that two-boxers are not answering the wrong question but rather are getting the wrong answer (that is, it is suggested that one-boxing is the rational decision, not that it is uninteresting whether this is the case).
1Qiaochu_Yuan11y
One-boxing is the rational decision; in LW parlance "rational decision" means "the thing that you do to win." I don't think splitting hairs about this is productive or interesting.
2PhilosophyStudent11y
I agree. A semantic debate is uninteresting. My original assumption about the differences between two-boxing philosophers and one-boxing LWers was that the two groups used words differently and were engaged in different missions. If you think the difference is just (a) semantic, (b) a difference of missions, or (c) a different view of which missions are important, then I agree and I also agree that a long hair-splitting debate is uninteresting. However, my impression was that some people on LW seem to think there is more than a semantic debate going on (for example, my impression was that this is what Eliezer thought). This assumption is what motivated the writing of this post. If you think this assumption is wrong, it would be great to know, since if that is the case, I now understand what is going on.
5Qiaochu_Yuan11y
There is more than a semantic debate going on to the extent that two-boxers are of the opinion that if they faced an actual Newcomb's problem, then what they should actually do is to actually two-box. This isn't a disagreement about semantics but about what you should actually do in a certain kind of situation.
0PhilosophyStudent11y
Okay, clarified. So to return to the point: the two-boxer cares about decisions because they use the word decision to refer to those things we can control. So they say that we can't control our past agent type but can control our taking of the one or two boxes. Of course, a long argument can be held about what notion of "control" we should appeal to here but it's not immediately obvious to me that the two-boxer is wrong to care about decisions in their sense. So they would say that what thing we care about depends not only on what things can cause the best outcome but also on whether we can exert control over these things. The basic claim here seems reasonable enough.
2Qiaochu_Yuan11y
Yes, and then their daughters die. Again, if a long argument outputs a conclusion you know is wrong, you know there's something wrong with the argument even if you don't know what it is.
2PhilosophyStudent11y
It's not clear to me that the argument outputs the wrong conclusion. Their daughters die because of their agent type at time of prediction not because of their decision and they can't control their agent type at this past time so they don't try to. It's unclear that someone is irrational for exerting the best influence they can. Of course, this is all old debate so I don't think we're really progressing things here.
2Qiaochu_Yuan11y
But if they didn't think this, then their daughters could live. You don't think, in this situation, you would even try to stop thinking this way? I'm trying to trigger a shut up and do the impossible intuition here, but if you insist on splitting hairs, then I agree that this conversation won't go anywhere.
2PhilosophyStudent11y
Yes, if the two boxer had a different agent type in the past then their daughters would live. No disagreement there. But I don't think I'm splitting hairs by thinking this doesn't immediately imply that one-boxing is the rational decision (rather, I think you're failing to acknowledge the possibility of potentially relevant distinctions). I'm not actually convinced by the two-boxing arguments but I don't think they're as obviously flawed as you seem to. And yes, I think we now agree on one thing at least (further conversation will probably not go anywhere) so I'm going to leave things at that.
1CoffeeStain11y
As the argument goes, you can't control your past selves, but that isn't the form of the experiment. The only self that you're controlling is the one deciding whether to one-box (equivalently, whether to be a one-boxer). See, that is the self that past Omega is paying attention to in order to figure out how much money to put in the box. That's right, past Omega is watching current you to figure out whether or not to kill your daughter / put money in the box. It doesn't matter how he does it, all that matters is whether or not your current self decides to one-box. To follow a thought experiment I found enlightening here, how is it that past Omega knows whether or not you're a one-boxer? In any simulation he could run of your brain, the simulated you could just know it's a simulation and then Omega wouldn't get the correct result, right? But, as we know, he does get the result right, almost all of the time. Ergo, the simulation must be indistinguishable from the real thing: the simulated you looks outside and sees a bird on a tree; if it uses the bathroom, the toilet might clog. Any giveaway might let the selfish you try to two-box in the simulation while still one-boxing in real life. The point? How do you know that current you isn't the simulation past Omega is using to figure out whether to kill your daughter? Are philosophical claims about the irreducibility of intentionality enough to take the risk?
0ChristianKl11y
I think that's again about decisions falling out of the sky. The agent type causes decisions to happen. People can't make decisions that are inconsistent with their own agent type.
0[anonymous]11y
Thank you for referencing Anna Salamon's diagrams. I would have one boxed in the first place, but I really think that those help make it much more clear in general.
0buybuydandavis11y
Yes, every two boxer I've ever known has said exactly that a thousand times.
-2Dan_Moore11y
It seems that 2-boxers make this assumption, whereas some 1-boxers (including me) apply a Popperian approach to selecting a model of reality consistent with the empirical evidence.

Basically: EDT/UDT has simple arguments in its favor and seems to perform well. There don't seem to be any serious arguments in favor of CDT, and the human intuition in its favor seems quite debunkable. So it seems like the burden of proof is on CDT, to justify why it isn't crazy. It may be that CDT has met that burden, but I'm not aware of it.

A. The dominance arguments in favor of two-boxing seem quite weak. They tend to apply verbatim to playing prisoner's dilemmas against a mirror (if the mirror cooperates you'd prefer to defect, if the mirror defects you'd prefer to defect, so regardless of the state of nature you'd prefer to defect; see the toy payoff check below). So why do you not accept the dominance argument for a mirror, but accept it in the case of Newcomb-like problems? To discriminate the cases it seems you need to make an assumption of no causal connection, or a special role for time, in your argument.

This begs the question terribly---why is a causal connection privileged? Why is the role of time privileged? As far as I can tell these two things are pretty arbitrary and unimportant. I'm not aware of any strong philosophical arguments for CDT, besides "it seems intuitively sensible to a human," and...
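To make the mirror comparison concrete, here's a toy check (standard prisoner's dilemma payoff numbers assumed purely for illustration): dominance reasoning compares across the opponent's possible moves, but against a mirror only the diagonal outcomes are reachable.

```python
# Prisoner's dilemma payoffs to "me" (C = cooperate, D = defect); higher is better.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0,
          ("D", "C"): 5, ("D", "D"): 1}

# Dominance reasoning: holding the other player's move fixed, defecting always pays more.
for their_move in ("C", "D"):
    assert PAYOFF[("D", their_move)] > PAYOFF[("C", their_move)]

# Against a mirror the other move is never fixed independently of mine,
# so the only outcomes I can actually bring about are the diagonal ones.
for my_move in ("C", "D"):
    print(my_move, PAYOFF[(my_move, my_move)])   # C -> 3, D -> 1
```

Cooperating ends up with 3 rather than 1 even though defecting dominates in every column; the dominance argument smuggles in the assumption that the other move is independent of mine, which is exactly what the mirror (and, arguably, Omega) denies.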

I'm happy to learn that you consider UDT a variant of EDT, because after thinking about these issues for awhile my current point of view is that some form of EDT is obviously the correct thing to do, but in standard examples of EDT failing the relevant Bayesian updates are being performed incorrectly. The problem is that forcing yourself into a reference class by performing an action doesn't make it reasonable for you to reason as if you were a random sample from that reference class, because you aren't: you introduced a selection bias. Does this agree with your thoughts?

2Robert_Unwin11y
"why is a causal connection privileged?" I agree with everything here. What follows is merely history. Historically, I think that CDT was meant to address the obvious shortcomings of choosing to bring about states that were merely correlated with good outcomes (as in the case of whitening one's teeth to reduce lung cancer risk). When Pearl advocates CDT, he is mainly advocating acting based on robust connections that will survive the perturbation of the system caused by the action itself. (e.g. Don't think you'll cure lung cancer by making your population brush their teeth, because that is a non-robust correlation that will be eliminated once you change the system). The centrality of causality in decision making was obvious intuitively but wasn't reflected in formal Bayesian decision theory. This was because of the lack of a good formalism linking probability and causality (and some erroneous positivistic scruples against the very idea of causality). Pearl and SGS's work on causality has done much to address this, but I think there is much to be done. There is a very annoying historical accident where EDT was taken to be the 'one-boxing' decision theory. First, any use of probability theory in the NP with infallible predictor is suspicious, because the problem can be specified in a logically complete way with no room for empirical uncertainty. (This is why dominance reasoning is brought in for CDT. What should the probabilities be?). Second, EDT is not easy to make coherent given an agent who knows they follow EDT. (The action that EDT disfavors will have probability zero and so the agent cannot condition on it in traditional probability theory). Third, EDT just barely one-boxes. It doesn't one-box on Double Transparent Newcomb, nor on Counterfactual Mugging. It's also obscure what it does on PD. (Again, I can play the PD against a selfish clone of myself, with both agents having each other's source code. There is no empirical uncertainty here, and so applying pro
0Protagoras11y
I wonder if David Lewis (perhaps the most notorious philosophical two-boxer) was skeptical that any human had a sufficiently strong self-model. I think there are very few who have better self-models than he did, so it's quite interesting if he did think this. His discussion of the "tickle defence" in his paper "Causal Decision Theory" may point that way.

There are no two-boxers in foxholes.

The intuition pump that got me to be a very confident one-boxer is the idea of submitting computer code that makes a decision, rather than just making a decision.

In this version, you don't need an Omega - you just need to run the program. It's a lot more obvious that you ought to submit a program that one-boxes than it is obvious that you ought to one-box. You can even justify this choice on causal decision-theory grounds.

With the full Newcomb problem, the causality is a little weird. Just think of yourself as a computer program with partial self-awareness. Deciding whether to one-box or two-box updates the "what kind of decision-making agent am I" node, which also caused Omega to either fill or not fill the opaque box.

Yes, it's wonky causality - usually the future doesn't seem to affect the past. Omega is just so unlikely that given that you're talking to Omega, you can justify all sorts of slightly less unlikely things.
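For what it's worth, here's a sketch of the submitted-program version; the function names and the "host just runs your program" mechanism are my own stand-ins for Omega:

```python
def one_boxer():
    return "one-box"

def two_boxer():
    return "two-box"

def run_newcomb(program):
    # No superintelligence needed: the host "predicts" you by running the
    # very program you submitted, then fills the boxes accordingly.
    prediction = program()
    opaque = 1_000_000 if prediction == "one-box" else 0
    action = program()          # the same program is what gets run for real
    return opaque if action == "one-box" else opaque + 1_000

print(run_newcomb(one_boxer))   # 1000000
print(run_newcomb(two_boxer))   # 1000
```

The tempting move of "get predicted as a one-boxer but run two-boxing code" just isn't available, because the thing that gets predicted and the thing that gets run are the same object - which is the whole point of the analogy.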

1PhilosophyStudent11y
Okay. As a first point, it's worth noting that the two-boxer would agree that you should submit one-boxing code because they agree that one-boxing is the rational agent type. However, they would disagree that one-boxing is the rational decision. So I agree that this is a good intuition pump but it is not one that anyone denies. But you go further, you follow this claim up by saying that we should think of causation in Newcomb's problem as being a case where causality is weird (side note: Huw Price presents an argument of this sort, arguing for a particular view of causation in these cases). However, I'm not sure I feel any "intuition pump" force here (I don't see why I should just intuitively find these claims plausible).
1ThrustVectoring11y
Running one-boxing code is analogous to showing Omega your decision algorithm and then deciding to one-box. If you think you should run code that one-boxes, then by analogy you should decide to one-box.
0PhilosophyStudent11y
Yes. Personally, I think the analogy is too close to pump intuitions (or it doesn't pump my intuitions, though perhaps this is just my failure). The two-boxer will say that if you can choose what code to submit, you should submit one-boxing code but that you shouldn't later run this code. This is the standard claim that you should precommit to one-boxing but should two-box in Newcomb's problem itself.
6Creutzer11y
But the very point is that you can't submit one piece of code and run another. You have to run what you submitted. That, again, is the issue that decisions don't fall from the sky uncaused. The reason why CDT can't get Newcomb's right is that due to its use of surgery on the action node, it cannot conceive of its own choice as predetermined. You are precommitted already just in virtue of what kind of agent/program you are.
1PhilosophyStudent11y
Yes. So the two-boxer says that you should precommit to later making an irrational decision. This does not require them to say that the decision you are precommitting to is later rational. So the two-boxer would submit the one-boxing code despite the fact that one unfortunate effect of this would be that they would later irrationally run the code (because there are other effects which counteract this). I'm not saying your argument is wrong (nor am I saying it's right). I'm just saying that the analogy is too close to the original situation to pump intuitions. If people don't already have the one-boxing intuition in Newcomb's problem then the submitting code analogy doesn't seem to me to make things any clearer.
2pjeby11y
I think the piece that this hypothetical two-boxer is missing is that they are acting as though the problem is cheating, or alternatively, that the premises can be cheated. That is, that you are able to make a decision that wasn't predictable beforehand. If your decision is predictable, two boxing is irrational, even considered as a single decision. Try this analogy: instead of predicting your decision in advance, Omega simply scans your brain to determine what to put in the boxes, at the very moment you make the decision. Does your hypothetical two-boxer still argue that one-boxing in this scenario is "irrational"? If so, I cannot make sense of their answer. But if not, then the burden falls on the two boxer to explain how this scenario is any different from a prediction made a fraction of a millisecond sooner. How far before or after the point of decision does the decision become "rational" or "irrational" in their mind? (I use quotes here because I cannot think of any coherent definition of those terms that's still consistent with the hypothetical usage.)
1PhilosophyStudent11y
The two-boxer never assumes that the decision isn't predictable. They just say that the prediction can no longer be influenced and so you may as well gain the $1000 from the transparent box. In terms of your hypothetical scenario, the question for the two-boxer will be whether the decision causally influences the result of this brain scan. If yes, then, the two-boxer will one-box (weird sentence). If no, the two-boxer will two-box.
4pjeby11y
How would it not causally influence the brain scan? Are you saying two-boxers can make decisions without using their brains? ;-) In any event, you didn't answer the question I asked, which was at what point in time does the two-boxer label the decision "irrational". Is it still "irrational" in their estimation to two-box, in the case where Omega decides after they do? Notice that in both cases, the decision arises from information already available: the state of the chooser's brain. So even in the original Newcomb's problem, there is a causal connection between the chooser's brain state and the boxes' contents. That's why I and other people are asking what role time plays: if you are using the correct causal model, where your current brain state has causal influence over your future decision, then the only distinction two-boxers can base their "irrational" label on is time, not causality. The alternative is to argue that it is somehow possible to make a decision without using your brain, i.e., without past causes having any influence on your decision. You could maybe do that by flipping a coin, but then, is that really a "decision", let alone "rational"? If a two-boxer argues that their decision cannot cause a past event, they have the causal model wrong. The correct model is one of a past brain state influencing both Omega's decision and your own future decision. For me, the simulation argument made it obvious that one-boxing is the rational choice, because it makes clear that your decision is algorithmic. "Then I'll just decide differently!" is, you see, still a fixed algorithm. There is no such thing as submitting one program to Omega and then running a different one, because you are the same program in both cases -- and it's that program that is causal over both Omega's behavior and the "choice you would make in that situation". Separating the decision from the deciding algorithm is incoherent. As someone else mentioned, the only way the two-boxer's statem
0PhilosophyStudent11y
Time is irrelevant to the two-boxer except as a proof of causal independence so there's no interesting answer to this question. The two-boxer is concerned with causal independence. If a decision cannot help but causally influence the brain scan then the two-boxer would one-box. Two-boxers use a causal model where your current brain state has causal influence on your future decisions. They are interested in the causal effects of the decision not the brain state and hence the causal independence criterion does distinguish the cases in their view and they need not appeal to time. They have the right causal model. They just disagree about which downstream causal effects we should be considering. No-one denies this. Everyone agrees about what the best program is. They just disagree about what this means about the best decision. The two-boxer says that unfortunately the best program leads us to make a non-optimal decision which is a shame (but worth it because the benefits outweigh the cost). But, they say, this doesn't change the fact that two-boxing is the optimal decision (while acknowledging that the optimal program one-boxes). I suspect that different two-boxers would respond differently as anthropic style puzzles tend to elicit disagreement. Well, they're saying that the optimal algorithm is a one-boxing algorithm while the optimal decision is two-boxing. They can explain why as well (algorithms have different causal effects to decisions). There is no immediate contradiction here (it would take serious argument to show a contradiction like, for example, an argument showing that decisions and algorithms are the same thing). For example, imagine a game where I choose a colour and then later choose a number between 1 and 4. With regards to the number, if you pick n, you get $n. With regards to the colour, if you pick red, you get $0, if you pick blue you get $5 but then don't get a choice about the number (you are presumed to have picked 1). It is not contradictor
3pjeby11y
Taboo "optimal". The problem here is that this "optimal" doesn't cash out to anything in terms of real world prediction, which means it's alberzle vs. bargulum all over again. A and B don't disagree about predictions of what will happen in the world, meaning they are only disagreeing over which definition of a word to use. In this context, a two boxer has to have some definition of "optimal" that doesn't cash out the same as LWers cash out that word. Because our definition is based on what it actually gets you, not what it could have gotten you if the rules were different. And what you just described is a decision algorithm, and it is that algorithm which Omega will use as input to decide what to put in the boxes. "Decide to use algorithm X" is itself an algorithm. This is why it's incoherent to speak of a decision independently - it's always being made by an algorithm. "Just decide" is a decision procedure, so there's actually no such thing as "just choosing for this occasion". And, given that algorithm, you lose on Newcomb's problem, because what you described is a two-boxing decision algorithm: if it is ever actually in the Newcomb's problem situation, an entity using that decision procedure will two-box, because "the prediction has occurred". It is therefore trivial for me to play the part of Omega here and put nothing under the box when I play against you. I don't need any superhuman predictive ability, I just need to know that you believe two boxing is "optimal" when the prediction has already been made. If you think that way, then your two-boxing is predictable ahead of time, and there is no temporal causation being violated. Barring some perverse definition of "optimal", you can't think two-boxing is coherent unless you think that decisions can be made without using your brain - i.e. that you can screen off the effects of past brain state on present decisions. Again, though, this is alberzle vs bargulum. It doesn't seem there is any argument about the

So I'm turning to you guys to ask for help figuring out whether I should be a staunch one-boxer too.

Only if you like money.

The optimal thing would be to have Omega think that you will one-box, but you actually two box. You'd love to play Omega for a fool, but the problem explicitly tells you that you can't, and that Omega can somehow predict you.

Omega has extremely good predictions. If you've set your algorithm in such a state that Omega will predict that you one-box, you will be unable to do anything but one-box - your neurons are set in place, causal lines have already ensured your decision, and free will doesn't exist in the sense that you can change your decision after the fact.

0Decius11y
In the strictest sense, that requires information to travel faster than light. Otherwise I'm going to bring in a cosmic ray detector and two-box iff the time between the second and third detection is less than the time between the first and second.

The problem is no free lunch. Any decision theory is going to fail somewhere. The case for privileging Newcomb as a success goal over all other considerations has not, in fact, been made.

[-][anonymous]11y110

So I raised this problem too, and I got a convincing answer to it. The way I raised it was to say that it isn't fair to fault CDT for failing to maximise expected returns in Newcomb's problem, because Newcomb's problem was designed to defeat CDT and we can design a problem to defeat any decision theory. So that can't be a standard.

The response I got (at least, my interpretation of it) was this: It is of course possible to construct a problem in which any decision theory is defeated, but not all such problems are equal. We can distinguish in principle between problems that can defeat any decision procedure (such as 'omega gives you an extra million for not using X', where X is the decision procedure you wish to defeat) and problems which defeat certain decision procedures but cannot be constructed so as to defeat others. Call the former type 1 problems, and the latter type 2 problems. Newcomb's problem is a type 2 problem, as is the prisoner's dilemma against a known psychological twin. Both defeat CDT, but not TDT, and cannot be constructed so as to defeat TDT without becoming type 1. TDT is aimed (though I think not yet successful) at being able to solve all type 2 problems.

So if ...
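A rough way to picture the type 1 / type 2 distinction above (the toy payoff functions and the perfectly reliable prediction are my own simplifications): a type 1 problem pays out based on which algorithm you are, while a type 2 problem pays out based only on what you would choose.

```python
def type1_problem(agent):
    # Pays out based on which algorithm you are, regardless of what you choose.
    # Any decision theory can be singled out and "defeated" this way.
    return 0 if agent.__name__ == "tdt_agent" else 1_000_000

def type2_problem(agent):
    # Newcomb-like: the payout depends only on what the agent would choose
    # (the prediction is assumed perfectly reliable here).
    prediction = agent()
    opaque = 1_000_000 if prediction == "one-box" else 0
    return opaque if agent() == "one-box" else opaque + 1_000

def tdt_agent():
    return "one-box"

print(type1_problem(tdt_agent))   # 0 -- punished purely for its name
print(type2_problem(tdt_agent))   # 1000000 -- rewarded for what it chooses
```

Only problems of the second kind look like a fair test of a decision theory, since there the agent's choices fully determine the payout.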

0Decius11y
Can you construct a problem that defeats TDT that cannot be constructed to defeat CDT? (I think I can - the Pirates' problem against psychological twins.)
0[anonymous]11y
No, I don't have any such thing in mind. Could you explain how TDT and CDT get different results?
0Decius11y
The CDT result is pretty well known: the first pirate gets almost everything. The TDT result is hard for me, but if the first pirate gets anything then more than half of the other pirates had a strategy that could be trivially improved.

[Saying same thing as everyone else, just different words. Might work better, might not.]

Suppose once Omega explains everything to you, you think 'now either the million dollars are there or aren't and my decision doesn't affect shit.' True, your decision now doesn't affect it - but your 'source code' (neural wiring) contains the information 'will in this situation think thoughts that support two-boxing and accept them.' So, choosing to one-box is the same as being the type of agent who'll one-box.
The distinction between agent type and decision is artificial...

0PhilosophyStudent11y
Two-boxing definitely entails that you are a two-boxing agent type. That's not the same claim as the claim that the decision and the agent type are the same thing. See also my comment here. I would be interested to know your answer to my questions there (particularly the second one).
1Ronak11y
When I said 'A and B are the same,' I meant that it is not possible for one of A and B to have a different truth-value from the other. Two-boxing entails you are a two-boxer, but being a two-boxer also entails that you'll two-box. But let me try and convince you based on your second question, treating the two as at least conceptually distinct. Imagine a hypothetical time when people spoke about statistics in terms of causation rather than correlation (and suppose no one had done Pearl's work). As you can imagine, the paradoxes would write themselves. At one point, someone would throw up his/her arms and tell everyone to stop talking about causation. And then the causalists would rebel, because causality is a sacred idea. The correlators would probably reply by constructing a situation where a third, unmeasured C caused both A and B. Newcomb's is that problem for decision theory. CDT is in a sense right when it says one-boxing doesn't cause there to be a million dollars in the box, that what does cause the money to be there is being a one-boxer. But, it ignores the fact that the same thing that caused there to be the million dollars also causes you to one-box - so, while there may not be a causal link, there very definitely is a correlation. 'C causing both A and B' is an instance of the simplest and most intuitive way in which correlation can be not causation, and CDT fails. EDT is looking at correlations between decisions and consequences and using that to decide. Aside: You're right, though, that the LW idea of a decision is somewhat different from the CDT idea. You define it as "a proposition that the agent can make true or false at will." That definition has this really enormous black box called will - and if Omega has an arbitrarily high predictive accuracy, then it must be the case that that black box is a causal link going from Omega's raw material for prediction (brain state) to decision. CDT, when it says that you ought to only look at causal arrows that begin a

My problem with causal decision theory is that it treats the past differently from the future for no good reason. If you read the quantum physics sequence, particularly the part about timeless physics, you will find that time is most likely not even an explicit dimension. The past is more likely to be known, but it's not fundamentally different from the future.

The probability of box A having money in it is significantly higher given that you one-box than the probability given that you do not. What more do you need to know?
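Spelling that out with numbers (writing p for the predictor's accuracy, which is my own placeholder): E[one-box] = p × $1,000,000, while E[two-box] = $1,000 + (1 − p) × $1,000,000, so conditioning on your own choice favours one-boxing whenever p > 0.5005 - that is, for any predictor even slightly better than chance.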

0PhilosophyStudent11y
This seems like an interesting point. If either time or causation doesn't work in the way we generally tend to think it does then the intuitions in favour of CDT fall pretty quickly. However, timeless physics is hardly established science and various people are not very positive about the QM sequence. So while this seems interesting I don't know that it helps me personally to come to a final conclusion on the matter.
8DanielLC11y
Consider this altered form of the problem: Omega offers you two boxes. One is empty and the other has one thousand dollars. He offers you a choice of taking just the empty box or both boxes. If you just take the empty box, he will put a million dollars in it. You decide that you can't change the big bang, and given the big bang his choice of whether or not to put a million dollars in the box is certain, so you can't influence his decision to put the money in the box. As such, you might as well take both boxes. How can you have control over the future but not the past if the two are correlated?

Does Omega one-box against Omega?

Here is another way to think about this problem.

Imagine if instead of Omega you were on a futuristic game show. As you go onto the show, you enter a future-science brain scanner that scans your brain. After scanning, the game show hosts secretly put the money into the various boxes behind stage.

You now get up on stage and choose whether to one or two box.

Keep in mind that before you got up on the show, 100 other contestants played the game that day. All of the two-boxers ended up with less money than the one-boxers. As an avid watcher of the show, you c...

0Decius11y
I disagree. Just because Rock lost every time it was played doesn't mean that it's inferior to Paper or Scissors, to use a trivial example.
0Sly11y
I disagree. If rock always lost when people used it, that would be evidence against using rock. Just like if you flip a coin 1000000 times and keep getting heads that is evidence of a coin that won't be coming up tails anytime soon.
-1Decius11y
Playing your double: Evidence that your opponent will not use rock is evidence that you should not use paper. If you don't use rock, and don't use paper, then you must use scissors and tie with your opponent who followed the same reasoning. Updating on evidence that rock doesn't win when it is used means rock wins. EDIT: consider what you would believe if you tried to call a coin a large number of times and were always right. Then consider what you would believe if you were always wrong.
-4Sly11y
"Rock lost every time it was played " "rock doesn't win when it is used means rock wins." One of these things is not like the other.
1Decius11y
Those aren't both things that I said. For rock to lose consistently means that somebody isn't updating properly, or is using a failing strategy, or a winning strategy. For example, if I tell my opponent "I'm going to play only paper", and I do, rock will always lose when played. That strategy can still win over several moves, if I am not transparent; all I have to do is correctly predict that my opponent will predict that the current round is the one in which I change my strategy. If they believe (through expressed preferences, assuming that they independently try to win each round) that rock will lose against me, rock will win against them.

Didn't we have a thread about this really recently?

Anyhow, to crib from the previous thread - an important point is reflective equilibrium. I shouldn't be able to predict that I'll do badly - if I know that, and the problem is "fair" in that it's decision-determined, I can just make the other decision. Or if I'm doing things a particular way, and I know that another way of doing things would be better, and the problem is "fair" in that I can choose how to do things, I can just do things the better way. To sit and stew and lose anyhow...

2Nornagest11y
Yeah, CarlSchulman put up a couple of threads on Newcomb a couple weeks ago, here and here. The original Newcomb's Problem and Regret of Rationality thread has also been getting some traffic recently. Offhand I don't see anything in this thread that hasn't been covered by those, but I may be missing relevant subtleties; I don't find this debate especially interesting past the first few rounds.

Okay, those with a two-boxing agent type don't win but the two-boxer isn't talking about agent types. They're talking about decisions. So they are interested in what aspects of the agent's winning can be attributed to their decision and they say that we can attribute the agent's winning to their decision if this is caused by their decision. This strikes me as quite a reasonable way to apportion the credit for various parts of the winning.

Do I understand it correctly that you're trying to evaluate the merits of a decision (to two-box) in isolation of the decision procedure that produced it? Because that's simply incoherent if the payoffs of the decision depend on your decision procedure.

To put it succinctly, Omega knows me far better than I know myself. I'm not going to second guess him/her.

The case of CDT vs Newcomb-like problems to me has a lot of similarity with the different approaches to probability theory.
In CDT you are considering only one type of information, i.e. causal dependency, to construct decision trees. This is akin to defining probability as the frequency of some process, so that probability relations become causal ones. Other approaches like TDT construct decisions using causal and logical dependency, as the inductive logic approach to probability does.
Newcomb's problem is not designed to be "unfair" to CDT; it is designed...

Okay, those with a two-boxing agent type don't win but the two-boxer isn't talking about agent types. They're talking about decisions.

The problem doesn't care whether you are the type of agent who talks about agent types or the type of agent who talks about decisions. The problem only cares about which actions you choose.

0Creutzer11y
The problem does care about what kind of agent you are, because that's what determined Omega's prediction. It's just that kinds of agents are defined by what you (would) do in certain situations.
2[anonymous]11y
Right. If you can be a one-boxer without one-boxing, that's obviously what you do. Problem is, Omega is a superintelligence and you aren't.
0Creutzer11y
I don't see how being a superintelligence would help. Even a superintelligence can't do logically impossible things: you can't be a one-boxer without one-boxing, because one-boxing is what constitutes being a one-boxer.
1[anonymous]11y
Omega is just a superintelligence. Presumably, he can't see the future and he's not omniscient; so it's hypothetically possible to trick him, to make him think you'll one-box when in reality you're going to two-box. I'm not sure if I have the vocabulary yet to solve the problem of identity vs. action, and I study philosophy, not decision theory, so for me that's a huge can of worms. (I've already had to prevent myself from connecting the attempted two-boxer distinction between 'winning' and 'rational' to Nietzsche's idea of a Hinterwelt -- but that's totally something that could be done, by someone less averse to sounding pretentious.) But I think that, attempting to leave the can closed, the distinction I drew above between one-boxing and being a one-boxer really refers to the distinction between actually one-boxing when it comes time to open the box and making Omega think you'll one-box -- which may or may not be identical to making Omega think you're the sort of person who will one-box. And the problem I raised above is that nobody's managed to trick him yet, so by simple induction, it's not reasonable to bet a million dollars on your being able to succeed where everyone else failed. So maybe the superintelligence thing doesn't even enter into it...? (Would it make a difference if it were just a human game show, that still displayed the same results? Would anyone one-box for Omega but two-box in the game show?)

Consider the following two mechanisms for a Newcomb-like problem.

A. T-Omega offers you the one or two box choice. You know that T-Omega used a time machine to see if you picked one or two boxes, and used that information to place/not place the million dollars.

B. C-Omega offers you the one or two box choice. You know that C-Omega is a con man who pretends to great predictive powers on each planet he visits. Usually he fails, but on Earth he gets lucky. C-Omega uses a coin flip to place/not place the million dollars.

I claim the correct choice is to one box T-Omega...

0shminux11y
There is a contradiction here between "lucky" and "coin flip". Why does he get lucky on Earth? In the original problem Omega runs a simulation of you, which is equivalent to T-Omega.
0anotherblackhat11y
I don't see the contradiction. C-Omega tries the same con on billions and billions of planets, and it happens that out of those billions of trials, on Earth his predictions all came true. Asking why Earth is rather like asking why Regina Jackson won the lottery - it was bound to happen somewhere, and wherever that was you could ask the same question. I could not find the word "simulation" mentioned in any of the summaries nor the full restatements that are found on LessWrong, in particular Newcomb's problem. Nor was I able to find that word in the formulation as it appeared in Martin Gardner's column published in Scientific American, nor in the rec.puzzles archive. Perhaps it went by some other term? Can you cite something that mentions simulation as the method used (or for that matter, explicitly states any method Omega uses)?

[Two boxers] are interested in what aspects of the agent's winning can be attributed to their decision and they say that we can attribute the agent's winning to their decision if this is caused by their decision. This strikes me as quite a reasonable way to apportion the credit for various parts of the winning.

What do you mean by "the agent's winning can be attributed to their decision"? The agent isn't winning! Calling losing winning strikes me as a very unreasonable way to apportion credit for winning.

It would be helpful to me if you defined...

0PhilosophyStudent11y
I was using winning to refer to something that comes in degrees. The basic idea is that each agent ends up with a certain amount of utility (or money) and the question is which bits of this utility can you attribute to the decision. So let's say you wanted to determine how much of this utility you can attribute to the agent having blue hair. How would you do so? One possibility (that used by the two-boxer) is that you ask what causal effect the agent's blue hair had on the amount of utility received. This doesn't seem an utterly unreasonable way of determining how the utility received should be attributed to the agent's hair type.
0Strilanc11y
I still don't follow. The causal effect of two-boxing is getting 1000$ instead of 1000000$. That's bad. How are you interpreting it, so that it's good? Because they're following a rule of thumb that's right under different circumstances?
0PhilosophyStudent11y
One-boxers end up with 1 000 000 utility.
Two-boxers end up with 1 000 utility.
So everyone agrees that one-boxers are the winning agents (1 000 000 > 1 000). The question is how much of this utility can be attributed to the agent's decision rather than type. The two-boxer says that to answer this question we ask about what utility the agent's decision caused them to gain. So they say that we can attribute the following utility to the decisions:
One-boxing: 0
Two-boxing: 1000
And the following utility to the agent's type (there will be some double counting because of overlapping causal effects):
One-boxing type: 1 000 000
Two-boxing type: 1 000
So the proponent of two-boxing says that the winning decision is two-boxing and the winning agent type is a one-boxing type. I'm not interpreting it so that it's good (for a start, I'm not necessarily a proponent of this view, I'm just outlining it). All I'm discussing is the two-boxer's response to the accusation that they don't win. They say they are interested not in winning agents but winning decisions and that two-boxing is the winning decision (because 1000 > 0).
4Robert_Unwin11y
The LW approach has focused on finding agent types that win on decision problems. Lots of the work has been in trying to formalize TDT/UDT, providing sketches of computer programs that implement these informal ideas. Having read a fair amount of the philosophy literature (including some of the recent stuff by Egan, Hare/Hedden and others), I think that this agent/program approach has been extremely fruitful. It has not only given compelling solutions to a large number of problems in the literature (Newcomb's, trivial coordination problems like Stag Hunt that CDT fails on, PD playing against a selfish copy of yourself) but it also has elucidated the deep philosophical issues that the Newcomb Problem dramatizes (concerning pre-commitment, free will / determinism and uncertainty about purely apriori/logical question). The focus on agents as programs has brought to light the intricate connection between decision making, computability and logic (esp. Godelian issues) --- something merely touched on in the philosophy literature. These successes provide a sufficient reason to push the agent-centered approach (even if there were no compelling foundational argument that the 'decision' centered approach was incoherent). Similarly, I think there is no overwhelming foundational argument for Bayesian probability theory but philosophers should study it because of its fruitfulness in illuminating many particular issues in the philosophy of science and the foundations of statistics (not to mention its success in practical machine learning and statistics). This response may not be very satisfying but I can only recommend the UDT posts (http://wiki.lesswrong.com/wiki/Updateless_decision_theory) and the recent MIRI paper http://intelligence.org/files/RobustCooperation.pdf.) Rough arguments against the decision-centered approach: Point 1 Suppose I win the lottery after playing 10 times. My decision of which numbers to pick on the last lottery was the cause of winning money. (Where
0PhilosophyStudent11y
Generally agree. I think there are good arguments for focusing on decision types rather than decisions. A few comments: Point 1: That's why rationality of decisions is evaluated in terms of expected outcome, not actual outcome. So actually, it wasn't just your agent type that was flawed here but also your decisions. But yes, I agree with the general point that agent type is important. Point 2: Agree Point 3: Yes. I agree that there could be ways other than causation to attribute utility to decisions and that these ways might be superior. However, I also think that the causal approach is one natural way to do this and so I think claims that the proponent of two-boxing doesn't care about winning are false. I also think it's false to say they have a twisted definition of winning. It may be false but I think it takes work to show that (I don't think they are just obviously coming up with absurd definitions of winning).
2Creutzer11y
That's the wrong question, because it presupposes that the agent's decision and type are separable.
0PhilosophyStudent11y
By decision, the two-boxer means something like a proposition that the agent can make true or false at will (decisions don't need to be analysed in terms of propositions but it makes the point fairly clearly). In other words, a decision is a thing that an agent can bring about with certainty. By agent type, in the case of Newcomb's problem, the two-boxer is just going to mean "the thing that Omega based their prediction on". Let's say the agent's brain state at the time of prediction. Why think these are the same thing? If these are the same thing, CDT will one-box. Given that, is there any reason to think that the LW view is best presented as requiring a new decision theory rather than as requiring a new theory of what constitutes a decision?
1Creutzer11y
They are not the same thing, but they aren't independent. And they are not only causally dependent, but logically - which is why CDT intervention at the action node, leaving the agent-type node untouched, makes no sense. CDT behaves as if it were possible to be one agent type for the purpose of Omega's prediction, and then take an action corresponding to another agent type, even though that is logically impossible. CDT is unable to view its own action as predetermined, but its action is predetermined by the algorithm that is the agent. TDT can take this into account and reason with it, which is why it's such a beautiful idea.
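A toy numerical rendering of that point (the 50% prior and the payoff bookkeeping are my own placeholders, and this is only a rough caricature of TDT-style reasoning, not its actual formalism):

```python
PRIZE, BONUS = 1_000_000, 1_000

def ev_cdt(p_predicted_one_box):
    # Surgery on the action node: the prediction is a fixed background fact
    # with some prior probability, unaffected by which action is chosen.
    ev_one_box = p_predicted_one_box * PRIZE
    ev_two_box = p_predicted_one_box * PRIZE + BONUS   # +$1,000 whatever the prior
    return ev_one_box, ev_two_box

def ev_timeless(accuracy):
    # Rough timeless reasoning: the prediction node is logically tied to the
    # output of the very algorithm doing this calculation.
    ev_one_box = accuracy * PRIZE
    ev_two_box = (1 - accuracy) * PRIZE + BONUS
    return ev_one_box, ev_two_box

print(ev_cdt(0.5))        # two-boxing comes out ahead by exactly $1,000, for ANY prior
print(ev_timeless(0.99))  # one-boxing comes out ahead by a wide margin
```

Under the surgery, two-boxing wins by exactly $1,000 no matter what prior you feed in, which is why CDT can't help itself; once the agent-type node is allowed to move with the decision, one-boxing wins.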
0Strilanc11y
In that case: the two-boxer isn't just wrong, they're double-wrong. You can't just come up with some related-but-different function ("caused gain") to maximize. The problem is about maximizing the money you receive, not "caused gain". For example, I've seen some two-boxers justify two-boxing as a moral thing. They're willing to pay $999,000 for the benefit of throwing the fact that they were predicted back in the predictor's face, somehow. Fundamentally, they're making the same mistake: fighting the hypothetical by saying the payoffs are different from what was stated in the problem.
-2PhilosophyStudent11y
The two-boxer is trying to maximise money (utility). They are interested in the additional question of which bits of that money (utility) can be attributed to which things (decisions/agent types). "Caused gain" is a view about how we should attribute the gaining of money (utility) to different things. So they agree that the problem is about maximising money (utility) and not "caused gain". But they are interested in not just which agents end up with the most money (utility) but also which aspects of those agents is responsible for them receiving the money. Specifically, they are interested in whether the decisions the agent makes are responsible for the money they receive. This does not mean they are trying to maximise something other than money (utility). It means they are interested in maximising money and then also in how you can maximise money via different mechanisms.
2Robert_Unwin11y
An additional point (discussed in intelligence.org/files/TDT.pdf) is that CDT seems to recommend modifying oneself to a non-CDT based decision theory. (For instance, imagine that the CDTer contemplates for a moment the mere possibility of encountering NPs and can cheaply self-modify). After modification, the interest in whether decisions are responsible causally for utility will have been eliminated. So this interest seems extremely brittle. Agents able to modify and informed of the NP scenario will immediately lose the interest. (If the NP seems implausible, consider the ubiquity of some kind of logical correlation between agents in almost any multi-agent decision problem like the PD or stag hunt). Now you may have in mind a two-boxer notion distinct from that of a CDTer. It might be fundamental to this agent to not forgo local causal gains. Thus a proposed self-modification that would preclude acting for local causal gains would always be rejected. This seems like a shift out of decision theory into value theory. (I think it's very plausible that absent typical mechanisms of maintaining commitments, many humans would find it extremely hard to resist taking a large 'free' cash prize from the transparent box. Even prior schooling in one-boxing philosophy might be hard to stick to when face to face with the prize. Another factor that clashes with human intuitions is the predictor's infallibility. Generally, I think grasping verbal arguments doesn't "modify" humans in the relevant sense and that we have strong intuitions that may (at least in the right presentation of the NP) push us in the direction of local causal efficacy.) EDIT: fixed some typos.
0Qiaochu_Yuan11y
To many two-boxers, this isn't the question. At least some two-boxing proponents in the philosophical literature seem to distinguish between winning decisions and rational decisions, the contention being that winning decisions can be contingent on something stupid about the universe. For example, you could live in a universe that specifically rewards agents who use a particular decision theory, and that says nothing about the rationality of that decision theory.
2PhilosophyStudent11y
I'm not convinced this is actually the appropriate way to interpret most two-boxers. I've read papers that say things that sound like this claim but I think the distinction that is generally being gestured at is the distinction I'm making here (with different terminology). I even think we get hints of that with the last sentence of your post, where you start to talk about agents being rewarded for their decision theory rather than their decision.
[-][anonymous]11y-10

The one problem I had with Yudkowsky's TDT paper (which I didn't read very attentively, mind you, so correct me if I'm wrong) was the part where he staged a dramatic encounter where a one-boxer was pleading with a wistful two-boxing agent who wished he was a one-boxer to change his algorithm to choose just one box. It occurred to me that even if the two-boxer listened to her, then his algorithm would have been altered by totally external factors. For the superintelligence setting up the problem to have predicted his change of mind, he would have had to simulate...

[This comment is no longer endorsed by its author]