Follow-up to: Normative uncertainty in Newcomb's problem
Philosophers and atheists break for two-boxing; theists and Less Wrong break for one-boxing
Personally, I would one-box on Newcomb's Problem. Conditional on one-boxing for lawful reasons, one-boxing earns $1,000,000, while two-boxing for lawful reasons would deliver only $1,000. But this seems to be firmly a minority view in philosophy, and numerous heuristics about expert opinion suggest that I should re-examine the view.
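One way to make the payoff comparison concrete (a sketch under the assumption that the predictor is right with some probability `acc`, conditioning on the decision as evidence; the function names are mine, not standard terminology):

```python
# Expected payoffs in Newcomb's problem, treating one's decision as evidence
# about the predictor's prediction. `acc` is the predictor's assumed accuracy.

def ev_one_box(acc: float) -> float:
    # With probability `acc` the predictor foresaw one-boxing and filled
    # the opaque box with $1,000,000.
    return acc * 1_000_000

def ev_two_box(acc: float) -> float:
    # With probability `acc` the predictor foresaw two-boxing and left the
    # opaque box empty (leaving only the $1,000); otherwise the two-boxer
    # takes both the $1,000,000 and the $1,000.
    return acc * 1_000 + (1 - acc) * 1_001_000
```

On this bookkeeping, one-boxing comes out ahead whenever 2,000,000 × acc > 1,001,000, i.e., for any predictor accuracy above 50.05%. A causal decision theorist would of course reject this way of computing expected value, which is the dispute at issue.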
In the PhilPapers survey, philosophy undergraduates start off divided roughly evenly between one-boxing and two-boxing:
Newcomb's problem: one box or two boxes?
| Response | Count | Share |
| --- | --- | --- |
| Other | 142 / 217 | 65.4% |
| Accept or lean toward: one box | 40 / 217 | 18.4% |
| Accept or lean toward: two boxes | 35 / 217 | 16.1% |
But philosophy faculty, who have learned more (and are less likely to have no opinion) and have been subject to further selection, break in favor of two-boxing:
Newcomb's problem: one box or two boxes?
| Response | Count | Share |
| --- | --- | --- |
| Other | 441 / 931 | 47.4% |
| Accept or lean toward: two boxes | 292 / 931 | 31.4% |
| Accept or lean toward: one box | 198 / 931 | 21.3% |
Specialists in decision theory (who are also more atheistic, more compatibilist about free will, and more physicalist than faculty in general) are even more convinced:
Newcomb's problem: one box or two boxes?
| Response | Count | Share |
| --- | --- | --- |
| Accept or lean toward: two boxes | 19 / 31 | 61.3% |
| Accept or lean toward: one box | 8 / 31 | 25.8% |
| Other | 4 / 31 | 12.9% |
Looking at the correlates of answers about Newcomb's problem, two-boxers are more likely to endorse physicalism about consciousness, atheism about religion, and other positions generally popular around here (which are also usually, but not always, in the direction of majority philosophical opinion). Zooming in on one correlate: most theists with an opinion are one-boxers, while atheists break for two-boxing:
Two-boxing on Newcomb's problem correlates with atheism (correlation 0.125; 655 response pairs; p = 0.001).
Less Wrong breaks overwhelmingly for one-boxing in the 2012 survey:
NEWCOMB'S PROBLEM
One-box: 726, 61.4%
Two-box: 78, 6.6%
Not sure: 53, 4.5%
Don't understand: 86, 7.3%
No answer: 240, 20.3%
When I elicited LW confidence levels in a poll, a majority indicated 99%+ confidence in one-boxing, and 77% of respondents indicated 80%+ confidence.
What's going on?
I would like to understand what is driving this difference of opinion. My poll was a (weak) test of the hypothesis that Less Wrongers were more likely to account for uncertainty about decision theory: since on the standard Newcomb's problem one-boxers get $1,000,000, while two-boxers get $1,000, even a modest credence in the correct theory recommending one-boxing could justify the action of one-boxing.
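The stakes argument in that hypothesis can be made explicit. Under a simple expected-value treatment of uncertainty over decision theories (a sketch; the function name and exact bookkeeping are mine, not the poll's), the asymmetry in stakes means even a small credence tips the balance:

```python
# If the one-boxing theory is correct, one-boxing gains $999,000 over
# two-boxing ($1,000,000 vs $1,000). If the two-boxing theory is correct,
# one-boxing merely forgoes the extra $1,000 in the transparent box.

def one_box_favored(credence: float,
                    big: int = 1_000_000, small: int = 1_000) -> bool:
    """Does expected value under decision-theoretic uncertainty favor
    one-boxing, given `credence` that the one-boxing theory is correct?"""
    gain_if_one_boxing_correct = big - small   # $999,000 at stake
    loss_if_two_boxing_correct = small         # only $1,000 at stake
    return (credence * gain_if_one_boxing_correct
            > (1 - credence) * loss_if_two_boxing_correct)
```

On this accounting, the break-even credence is 1,000 / 1,000,000 = 0.1%: anything above that already recommends one-boxing, which is why even modest confidence in the one-boxing theory could justify the action.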
If new graduate students read the computer science literature on program equilibrium, including some local contributions like Robust Cooperation in the Prisoner's Dilemma and A Comparison of Decision Algorithms on Newcomblike Problems, I would guess they would tend to shift more towards one-boxing. Thinking about what sort of decision algorithms it is rational to program, or what decision algorithms would prosper over numerous one-shot Prisoner's Dilemmas with visible source code, could also shift intuitions. A number of philosophers I have spoken with have indicated that frameworks like the use of causal models with nodes for logical uncertainty are meaningful contributions to thinking about decision theory. However, I doubt that for those with opinions, the balance would swing from almost 3:1 for two-boxing to 9:1 for one-boxing, even concentrating on new decision theory graduate students.
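The visible-source setting can be illustrated with a toy example (a minimal "clique bot" sketch of my own, much weaker than the proof-based FairBot of the Robust Cooperation paper): each program receives its opponent's source code, and a pair of mutual source-checkers cooperates with copies of itself while remaining unexploitable by a defector.

```python
# One-shot Prisoner's Dilemma with visible source code.
# Payoffs are (player 1, player 2): mutual cooperation beats mutual
# defection, but unilateral defection pays best against a cooperator.
PAYOFFS = {("C", "C"): (2, 2), ("C", "D"): (0, 3),
           ("D", "C"): (3, 0), ("D", "D"): (1, 1)}

def clique_bot(my_src: str, opp_src: str) -> str:
    # Cooperate exactly when the opponent's source is a copy of mine.
    return "C" if opp_src == my_src else "D"

def defect_bot(my_src: str, opp_src: str) -> str:
    return "D"

def play(p1, p2):
    """One visible-source round; each player is (source_string, function)."""
    src1, f1 = p1
    src2, f2 = p2
    return PAYOFFS[(f1(src1, src2), f2(src2, src1))]

clique = ("clique_bot_source", clique_bot)
defector = ("defect_bot_source", defect_bot)

print(play(clique, clique))    # mutual cooperation: (2, 2)
print(play(clique, defector))  # clique_bot is not exploited: (1, 1)
```

Even this crude bot prospers in a population of copies of itself, which is the flavor of consideration that tends to pull intuitions toward one-boxing; the actual literature replaces the brittle source-equality check with proof search about the opponent's behavior.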
On the other hand, there may be an effect of unbalanced presentation to non-experts. Less Wrong is, on average, less philosophically sophisticated than professional philosophers. Since philosophical training is associated with a shift towards two-boxing, some of the difference in opinion could reflect a difference in training. Further, postings here on decision theory have almost all either argued for or assumed one-boxing as the correct response to Newcomb's problem. If academic decision theorists were making arguments for two-boxing here, or if there were less pro-one-boxing social pressure, Less Wrong opinion might shift towards two-boxing.
Less Wrongers, what's going on here? What are the relative causal roles of these and other factors in this divergence?
ETA: The SEP article on Causal Decision Theory.
To clarify: everyone should agree that the winning agent is the one with the giant heap of money on the table. The question is how we attribute parts of that winning to the decision rather than to other aspects of the agent (because this is the game the CDTers are playing, and you said you think they are playing the game wrong, not just playing the wrong game). CDTers attribute winning to the decision as follows: they attribute the winning that is caused by the decision. This may be wrong, and there may be room to demonstrate as much, but it seems unreasonable to me to describe it as "contorted" (it's actually quite a straightforward way to attribute winning to the decision), and I think that using such descriptions skews the debate in an unreasonable way. This is basically just a repetition of my previous point, so perhaps further reiteration is not of any use to either of us...
In terms of NP being "unfair", we need to be clear about what the CDTer means by this (using the word "unfair" makes it sound like the CDTer is just closing their eyes and crying). At the basic level, though, the CDTer simply means that the agent's winning in this case isn't entirely determined by the winning that can be attributed to the decision, and hence that the agent's winning is not a good guide to which decision wins. More specifically, the claim is that the agent's winning is determined in part by things that are correlated with the agent's decision but not attributable to it, so the agent's overall winning in this case is a bad guide to which decision wins. Obviously you would disagree with these claims, but this is different from claiming that CDTers think NP is unfair in some more everyday sense (where it would seem absurd to think that Omega is being unfair, since Omega cares only about what decision you are going to make).
I don't necessarily think the CDTers are right but I don't think the way you outline their views does justice to them.
So to summarise. On LW the story is often told as follows: CDTers don't care about winning (at least not in any natural sense), and they avoid the problems raised by NP by saying the scenario is unfair. This makes the CDTer sound not just wrong but so foolish that it's hard to understand why CDTers exist at all.
But expanded to show what the CDTer actually means, this becomes: CDTers agree that winning is what matters to rationality, but because they're interested in rational decisions, they are interested in what winning can be attributed to decisions. Specificall...