While the consensus on Less Wrong is that one boxing on Newcomb’s Problem is the rational decision, my understanding is that this opinion is not necessarily held uniformly amongst philosophers
That's correct. See, for instance, the PhilPapers Survey of 931 philosophy professors, which found that only 21% favored one boxing vs. 31% who favored two boxing; 43% said other (mostly undecided or insufficiently familiar with the issue). Among the 31 philosophers who specialize in decision theory, there was a big shift from other (down to 13%) to two boxing (u...
An issue that often occurs to me when discussing these questions. I one-box, cooperate in one-shot PDs, and pay the guy who gave me a lift out of the desert. I've no idea what decision theory I'm using when I decide to do these things, but I still know that I'd do them. I'm pretty sure that's what most other people would do as well.
Does anyone actually know how Human Decision Theory works? I know there are examples of problems where HDT fails miserably and CDT comes out better, but is there a coherent explanation for how HDT manages to get all of th...
Almost everyone I've tried it on has one-boxed. Even though I left out the part in the description about being a really accurate predictor, and pre-seeded the boxes before I even knew who would be the one choosing.
What?!? You offer people two boxes with essentially random amounts of money in them, and they choose to take one of the boxes instead of both? And these people are otherwise completely sane?
Could you maybe give us details of how exactly you present the problem? I can't imagine any presentation that would make anyone even slightly tempted to one-box this variant. (Maybe if I knew I'd get to play again one day...)
Thanks for a great post Adam, I'm looking forward to the rest of the series.
This might be missing the point, but I just can't get past it. How does a rational agent come to believe that the being they're facing is "an unquestionably honest, all knowing agent with perfect powers of prediction"?
I have the suspicion that a lot of the bizarreness of this problem comes out of transporting our agent into an epistemologically unattainable state.
Is there a way to phrase a problem of this type in a way that does not require such a state?
It's not perfect, per se, but try this:
There's a fellow named James Omega who (with the funding of certain powerful philosophy departments), travels around the country offering random individuals the chance to participate in Newcomb's problem, with James as Omega. Rather than scanning your brain with his magic powers, he spends a day observing you in your daily life, and uses this info to make his decision. Here's the catch: he's done this 300 times, and never once mis-predicted. He's gone up against philosophers and lay-people, people that knew they were being observed and people that didn't, but it makes no difference: he just has an intuition that good. When it comes time to do the experiment, it's set up in such a way that you can be totally sure (and other very prestigious parties have verified) that the amounts in the box do not change after your decision.
So when you're selected, what do you do? Nothing quite supernatural is going on, we just have the James fellow with an amazing track record, and you with no particular reason to believe that you'll be his first failure. Even if he is just human, isn't it rational to assume the ridiculously likely thing (301/302 chance according to Laplace's Law) that he'll guess you correctly? Even if we adjust for the possibility of error, the payoff matrix is still so lopsided that it seems crazy to two-box.
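To put rough numbers on how lopsided that payoff matrix is, here's a quick back-of-the-envelope sketch in Python (just an illustration, assuming the standard $1 000 000 / $1000 amounts and taking 301/302 as James's chance of predicting you correctly):

```python
# Rule of succession: after 300 correct predictions out of 300 attempts,
# the estimated chance of another correct prediction is (300 + 1) / (300 + 2).
accuracy = 301 / 302

# One-boxing pays the $1 000 000 only when James predicted you correctly.
ev_one_box = accuracy * 1_000_000
# Two-boxing pays the $1 000 000 only when he mis-predicted you,
# plus the guaranteed $1000 from the second box either way.
ev_two_box = (1 - accuracy) * 1_000_000 + 1_000

print(round(ev_one_box), round(ev_two_box))  # roughly 996689 vs 4311
```

By this reckoning the break-even accuracy is only about 50.05%, so the expected payoffs stay tilted toward one-boxing even if you think James is far less reliable than his record suggests.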
See if that helps, and of course everyone else is free to offer improvements if I've missed something. You know, help get this Least Convenient Possible World going.
Please link to previous discussions of Newcomb's Problem on LW. They contain many valuable insights that new readers will otherwise have to regenerate (possibly poorly).
A kind of funny way in which something like this might (just about) happen in reality occurs to me: Possible time delay in human awareness of decision making. Suppose when you make a conscious decision, your brain starts to become committed to that decision before you become aware of it, so if you suddenly decide to press a button then your brain was going through the process of committing you to pressing it before you actually knew you were going to press it. That would mean that every time you took a conscious decision to act, based on some rational grou...
I'm considering continuing this sequence on an external blog. There's been some positive responses to these posts but there are also a lot of people who plainly consider that the quality of the posts isn't up to scratch. Moving them to an external site would let people follow them if they wanted to but would stop me from bombarding LW with another five or six posts.
Opinions?
Does decision theory still matter in a world where there's an agent who's already predicted your choices? Once Omega exists, "decision" is the wrong word - it's really a discovery mechanism for your actions.
And a Less Wrong wiki article on the problem with further links.
At first, I Thought It Meant that you'd add more links, but that's a bad idea, and here's an article on why.
all knowing agent with perfect powers of prediction
The existence of an all-knowing agent with perfect powers of prediction makes a mockery of the very idea of causality, at least as I understand it. (I won't go into details here, because it doesn't really matter, as you'll see.) Obviously causal decision theory doesn't work if causality doesn't make sense. However, since I assign negligible probability to the existence of such a being, I can still think that CDT is correct for practical purposes, while remembering that it can break down in extreme si...
I realized that I’d been asking the wrong question. I had been asking which decision would give the best payoff at the time and saying it was rational to make that decision. Instead, I should have been asking which decision theory would lead to the greatest payoff.
I wonder if it is possible to go one more step: instead of asking which decision theory to use (to make decisions), we should ask which meta-decision theory we should use (to choose decision theories). In that case, maybe we would find ourselves using EDT for Newcomb-like problems (and winning...
Causal Decision Theory isn't fatally flawed in this case; it's simply harder to apply properly.
A sufficiently advanced superintelligence could perfectly replicate you or me in a simulation. In fact, I can't currently conceive of a more reliable method of prediction.
Which is where the explanation comes in for Causal Decision Theory. You may be the simulation, if you are the simulation then which box you take DOES affect what is in the boxes.
Remember that decision theoryTheory tells us to calculate the expected utility
Typo here.
I would also like to see subheadings for "causal says" and "evidential says", probably changing "Decision theory and Newcomb’s problem" just to make it neat. That would make the flow of the text readable at a glance.
Since you are making posts that are intended to be linked to, it is worth spending extra time getting the details right.
Your link in the appendix goes to the wrong place. Presumably you meant this: http://plato.stanford.edu/entries/decision-causal/
Newcomb's problem proves EDT only by cheating.
Before it presents you with the problem, Omega tests whether you subscribe to CDT or EDT, and puts the million in the box iff you subscribe to EDT. So you'll get more if you subscribe to EDT. So you'll be better off applying heuristics that you're arbitrarily rewarded for, but this doesn't say anything about normal situations (like kissing the sick baby).
This is part of a sequence titled, "Introduction to decision theory"
The previous post is "An introduction to decision theory"
In the previous post I introduced evidential and causal decision theories. The principal question that needs resolving with regards to these is whether using these decision theories leads to making rational decisions. The next two posts will show that both causal and evidential decision theories fail to do so and will try to set the scene so that it’s clear why so much focus is given on Less Wrong to developing new decision theories.
Newcomb’s Problem
Newcomb’s Problem asks us to imagine the following situation:
Omega, an unquestionably honest, all knowing agent with perfect powers of prediction, appears, along with two boxes. Omega tells you that it has placed a certain sum of money into each of the boxes. It has already placed the money and will not now change the amount. You are then asked whether you want to take just the money that is in the left hand box or whether you want to take the money in both boxes.
However, here’s where it becomes complicated. Using its perfect powers of prediction, Omega predicted whether you would take just the left box (called “one boxing”) or whether you would take both boxes (called “two boxing”). Either way, Omega put $1000 in the right hand box but filled the left hand box as follows:
If he predicted you would take only the left hand box, he put $1 000 000 in the left hand box.
If he predicted you would take both boxes, he put $0 in the left hand box.
Should you take just the left hand box or should you take both boxes?
An answer to Newcomb’s Problem
One argument goes as follows: By the time you are asked to choose what to do, the money is already in the boxes. Whatever decision you make, it won’t change what’s in the boxes. So the boxes can be in one of two states:
State 1: Omega predicted you would take both boxes, so the left hand box contains $0 and the right hand box contains $1000.
State 2: Omega predicted you would take only the left hand box, so the left hand box contains $1 000 000 and the right hand box contains $1000.
Whichever state the boxes are in, you get more money if you take both boxes than if you take one. In game theoretic terms, the strategy of taking both boxes strictly dominates the strategy of taking only one box. You can never lose by choosing both boxes.
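To lay the dominance argument out concretely, here is a minimal sketch in Python (the dollar amounts come from the problem statement above; the names STATE_1, STATE_2 and payoff are just mine for illustration):

```python
# Box contents in each possible state, written as (left box, right box).
STATE_1 = (0, 1000)          # Omega predicted you would take both boxes
STATE_2 = (1_000_000, 1000)  # Omega predicted you would take only the left box

def payoff(state, action):
    """Money received for a given state of the boxes and a given action."""
    left, right = state
    return left if action == "one-box" else left + right

for name, state in (("state 1", STATE_1), ("state 2", STATE_2)):
    print(name, "one-box:", payoff(state, "one-box"),
          "two-box:", payoff(state, "two-box"))
# state 1 one-box: 0        two-box: 1000
# state 2 one-box: 1000000  two-box: 1001000
```

Holding the state fixed, two boxing is always exactly $1000 better, which is all that strict dominance claims here.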
The only problem is, you do lose. If you take both boxes then they are in state 1 and you get only $1000. If you take only the left box then they are in state 2 and you get $1 000 000.
To many people, this may be enough to make it obvious that the rational decision is to take only the left box. If so, you might want to skip the next paragraph.
Taking only the left box didn’t seem rational to me for a long time. It seemed that the reasoning described above to justify taking both boxes was so powerful that the only rational decision was to take both boxes. I therefore saw Newcomb’s Problem as proof that it was sometimes beneficial to be irrational. I changed my mind when I realized that I’d been asking the wrong question. I had been asking which decision would give the best payoff at the time and saying it was rational to make that decision. Instead, I should have been asking which decision theory would lead to the greatest payoff. From that perspective, it is rational to use a decision theory that suggests you only take the left box because that is the decision theory that leads to the highest payoff. Taking only the left box leads to a higher payoff, and it’s also a rational decision if you ask, “What decision theory is it rational for me to use?” and then make your decision according to the theory that you have concluded it is rational to follow.
What follows will presume that a good decision theory should one box on Newcomb’s Problem.
Causal Decision Theory and Newcomb’s Problem
Remember that decision theory tells us to calculate the expected utility of an action by summing the utility of each possible outcome of that action multiplied by its probability. In Causal Decision Theory, this probability is defined causally (something that we haven’t formalised and won’t formalise in this introductory sequence but which we have at least some grasp of). So Causal Decision Theory will act as if the probability that the boxes are in state 1 or state 2 above is not influenced by the decision made to one or two box (so let’s say that the probability that the boxes are in state 1 is P and the probability that they’re in state 2 is Q regardless of your decision).
So if you undertake the action of choosing only the left box your expected utility will be equal to: (0 x P) + (1 000 000 x Q) = 1 000 000 x Q
And if you choose both boxes, the expected utility will be equal to: (1000 x P) + (1 001 000 x Q).
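As a small illustration of the two formulas above, a sketch of the causal calculation might look like this (the values tried for P are arbitrary placeholders, with Q = 1 − P; cdt_expected_utility is a name I’ve made up for this sketch):

```python
def cdt_expected_utility(action, p_state1):
    """Causal Decision Theory: the state probabilities are treated as
    independent of the action, so the same P and Q apply to both actions."""
    q_state2 = 1 - p_state1
    if action == "one-box":
        return 0 * p_state1 + 1_000_000 * q_state2
    return 1000 * p_state1 + 1_001_000 * q_state2

# Whatever value P takes, two boxing comes out exactly $1000 ahead.
for p in (0.1, 0.5, 0.9):
    print(p, cdt_expected_utility("one-box", p), cdt_expected_utility("two-box", p))
```

The $1000 gap is just the dominance argument again, expressed as expected utilities.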
Since P + Q = 1, the expected utility of two boxing is always exactly $1000 higher than the expected utility of one boxing. So Causal Decision Theory will lead to the decision to take both boxes and hence, if you accept that you should one box on Newcomb’s Problem, Causal Decision Theory is flawed.
Evidential Decision Theory and Newcomb’s Problem
Evidential Decision Theory, on the other hand, will take your decision to one box as evidence that Omega put the boxes in state 2, to give an expected utility of (1 x 1 000 000) + (0 x 0) = 1 000 000.
It will similarly take your decision to take both boxes as evidence that Omega put the boxes into state 1, to give an expected utility of (0 x (1 000 000 + 1000)) + (1 x (0 + 1000)) = 1000.
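A matching sketch of the evidential calculation, using the idealised conditional probabilities a perfect predictor gives (P(state 2 | one box) = 1 and P(state 1 | two box) = 1; edt_expected_utility is again just an illustrative name):

```python
def edt_expected_utility(action):
    """Evidential Decision Theory: condition the state probabilities on the
    action taken, since the action is evidence about Omega's prediction."""
    if action == "one-box":
        # One boxing is evidence that the boxes are in state 2.
        return 1 * 1_000_000 + 0 * 0
    # Two boxing is evidence that the boxes are in state 1.
    return 0 * (1_000_000 + 1000) + 1 * (0 + 1000)

print(edt_expected_utility("one-box"))  # 1000000
print(edt_expected_utility("two-box"))  # 1000
```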
As such, Evidential Decision Theory will suggest that you one box and hence it passes the test posed by Newcomb’s Problem. We will look at a more challenging scenario for Evidential Decision Theory in the next post. For now, we’re part way along the route of realising that there’s still a need to look for a decision theory that makes the logical decision in a wide range of situations.
Appendix 1: Important notes
While the consensus on Less Wrong is that one boxing on Newcomb’s Problem is the rational decision, my understanding is that this opinion is not necessarily held uniformly amongst philosophers (see, for example, the Stanford Encyclopedia of Philosophy’s article on Causal Decision Theory). I’d welcome corrections on this if I’m wrong, but otherwise it does seem important to acknowledge where the level of consensus on Less Wrong differs from that in the broader philosophical community.
For more details on this, see the results of the PhilPapers Survey where 61% of respondents who specialised in decision theory chose to two box and only 26% chose to one box (the rest were uncertain). Thanks to Unnamed for the link.
If Newcomb's Problem doesn't seem realistic enough to be worth considering then read the responses to this comment.
Appendix 2: Existing posts on Newcomb's Problem
Newcomb's Problem has been widely discussed on Less Wrong, generally by people with more knowledge on the subject than me (this post is included as part of the sequence because I want to make sure no-one is left behind and because it is framed in a slightly different way). Good previous posts include:
A post by Eliezer introducing the problem and discussing the issue of whether one boxing is irrational.
A link to Marion Ledwig's detailed thesis on the issue.
An exploration of the links between Newcomb's Problem and the prisoner's dilemma.
A post about formalising Newcomb's Problem.
And a Less Wrong wiki article on the problem with further links.