I have sympathy with both one-boxers and two-boxers in Newcomb's problem. Many people on Less Wrong, however, seem to be staunch and confident one-boxers, so I'm turning to you guys for help figuring out whether I should be a staunch one-boxer too. Below is an imaginary dialogue setting out my understanding of the arguments normally advanced on LW for one-boxing. I'm hoping to get help filling in the details and extending this argument so that I (and anyone else who is uncertain about the issue) can develop an understanding of the strongest arguments for one-boxing.
In that case: the two-boxer isn't just wrong, they're double-wrong. You can't just come up with some related-but-different function ("caused gain") to maximize. The problem is about maximizing the money you receive, not "caused gain".
For example, I've seen some two-boxers justify two-boxing as a moral stance: they're willing to pay $999,000 for the satisfaction of throwing the prediction back in the predictor's face, somehow. Fundamentally, they're making the same mistake: fighting the hypothetical by acting as if the payoffs were different from those stated in the problem.
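For concreteness, here's a minimal sketch of where that $999,000 figure comes from, assuming the standard payoffs (the opaque box holds $1,000,000 iff one-boxing is predicted; the transparent box always holds $1,000). The function and names here are just illustrative, not anything from the dialogue:

```python
# Standard Newcomb payoffs (an assumption; the dialogue doesn't restate them):
# the opaque box holds $1,000,000 iff the predictor expects one-boxing,
# the transparent box always holds $1,000.
OPAQUE, TRANSPARENT = 1_000_000, 1_000

def payoff(action, prediction):
    """Money received for a given action and the predictor's guess."""
    opaque_contents = OPAQUE if prediction == "one-box" else 0
    if action == "one-box":
        return opaque_contents
    return opaque_contents + TRANSPARENT  # two-boxing also takes the $1,000

# A correctly-predicted two-boxer gets $1,000; a correctly-predicted
# one-boxer gets $1,000,000 -- a $999,000 difference.
print(payoff("two-box", "two-box"))                                 # 1000
print(payoff("one-box", "one-box"))                                 # 1000000
print(payoff("one-box", "one-box") - payoff("two-box", "two-box"))  # 999000
```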
The two-boxer is trying to maximise money (utility). They are interested in the additional question of which bits of that money (utility) can be attributed to which things (decisions/agent types). "Caused gain" is a view about how we should attribute the gaining of money (utility) to different things.
So they agree that the problem is about maximising money (utility) and not "caused gain". But they are interested not just in which agents end up with the most money (utility) but also in which aspects of those agents are responsible for them receiving it.
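To make that distinction concrete, here is a minimal sketch, under my own assumptions, of one way to cash out "caused gain": the counterfactual difference an action makes while holding the predictor's guess (and hence the box contents) fixed. The payoff numbers and function names are mine, not the two-boxer's:

```python
OPAQUE, TRANSPARENT = 1_000_000, 1_000

def payoff(action, prediction):
    """Total money received (what the problem scores you on)."""
    opaque = OPAQUE if prediction == "one-box" else 0
    return opaque + (TRANSPARENT if action == "two-box" else 0)

def caused_gain(action, prediction):
    """Money attributable to the action itself: the counterfactual
    difference it makes with the prediction (and box contents) held fixed."""
    other = "one-box" if action == "two-box" else "two-box"
    return payoff(action, prediction) - payoff(other, prediction)

# Whatever the prediction, two-boxing 'causes' exactly $1,000 of extra gain:
assert caused_gain("two-box", "one-box") == 1_000
assert caused_gain("two-box", "two-box") == 1_000

# Yet against an accurate predictor, the one-boxing agent type ends up richer:
assert payoff("one-box", "one-box") == 1_000_000
assert payoff("two-box", "two-box") == 1_000
```

Both metrics agree on how much money each agent gets; they disagree about what that money is attributed to, which is exactly the distinction the two-boxer is drawing.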