I have sympathy with both one-boxers and two-boxers in Newcomb's problem. Many people on Less Wrong, however, seem to be staunch and confident one-boxers. So I'm turning to you guys to ask for help figuring out whether I should be a staunch one-boxer too. Below is an imaginary dialogue setting out my understanding of the arguments normally advanced on LW for one-boxing. I'm hoping to get help filling in the details and extending the argument, so that I (and anyone else who is uncertain about the issue) can develop an understanding of the strongest case for one-boxing.
But don't LW one-boxers think that decision ALGORITHMS are things that can just fall out of the sky uncaused?
As an empirical matter, I don't think humans are psychologically capable of time-consistent decisions in all cases. For instance, TDT implies that one should one-box even in a version of Newcomb's problem in which one can SEE the contents of the boxes. But would a human being really leave the other box behind if the boxes held things they REALLY valued (like the lives of close friends) and they could actually see those things? I think that would be hard for a human to do, even if ex ante they might wish to reprogram themselves to do so.
Probably not, and so they would likely never see the second box as anything but empty. Their loss.
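To make the ex ante point concrete, here is a minimal sketch in Python, under assumed parameters (the standard $1,000,000 / $1,000 payoffs and a hypothetical 99%-accurate predictor), of why an agent whose disposition is to one-box walks away richer on average, even though at the moment of choice the extra $1,000 is just sitting there:

```python
import random

# Assumed payoffs and predictor accuracy (standard Newcomb figures,
# with a hypothetical 99%-accurate predictor).
BIG = 1_000_000      # contents of box B if the predictor expects one-boxing
SMALL = 1_000        # contents of box A, always present
ACCURACY = 0.99      # probability the predictor correctly guesses the disposition

def play(one_boxer: bool) -> int:
    """Return the payoff for an agent with a fixed disposition."""
    # The predictor fills box B according to its (usually correct)
    # prediction of the agent's disposition, before the agent chooses.
    predicted_one_box = one_boxer if random.random() < ACCURACY else not one_boxer
    box_b = BIG if predicted_one_box else 0
    return box_b if one_boxer else box_b + SMALL

def average(one_boxer: bool, trials: int = 100_000) -> float:
    return sum(play(one_boxer) for _ in range(trials)) / trials

if __name__ == "__main__":
    print("one-boxers average:", average(True))   # roughly 990,000
    print("two-boxers average:", average(False))  # roughly 11,000
```

Nothing in this sketch settles the causal-vs-evidential dispute, of course; it only illustrates the sense in which the one-boxing disposition is the one an agent would want to have ex ante.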