I have sympathy with both one-boxers and two-boxers in Newcomb's problem. By contrast, many people on Less Wrong seem to be staunch and confident one-boxers. So I'm turning to you guys to ask for help figuring out whether I should be a staunch one-boxer too. Below is an imaginary dialogue setting out my understanding of the arguments normally advanced on LW for one-boxing. I was hoping to get help filling in the details and extending these arguments so that I (and anyone else who is uncertain about the issue) can develop an understanding of the strongest case for one-boxing.
The optimal thing would be to have Omega think that you will one-box, but then actually two-box. You'd love to play Omega for a fool, but the problem explicitly tells you that you can't, and that Omega can somehow predict you.
Omega's predictions are extremely accurate. If you've set your decision algorithm in such a state that Omega predicts you will one-box, you will be unable to do anything but one-box: your neurons are set in place, causal lines have already ensured your decision, and free will doesn't exist in the sense that would let you change your decision after the fact.
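To make that concrete, here is a toy sketch (my own framing, not part of the problem statement) in which Omega "predicts" simply by running the same deterministic decision procedure the agent runs; the function names and payoffs are purely illustrative:

```python
# A minimal sketch, assuming Omega predicts by running the agent's own
# deterministic decision procedure. Nothing here is canonical.

def my_decision() -> str:
    # Whatever reasoning happens here, the output is a fixed function of
    # the agent's "source code" -- there is nothing left to vary on.
    return "one-box"

def omega_prediction() -> str:
    # Omega runs the same procedure, so its prediction matches whatever
    # my_decision() actually returns.
    return my_decision()

def payoff(prediction: str, action: str) -> int:
    # Standard Newcomb payoffs: $1,000,000 in box B iff Omega predicted
    # one-boxing; the transparent box always holds $1,000.
    big = 1_000_000 if prediction == "one-box" else 0
    small = 1_000 if action == "two-box" else 0
    return big + small

print(payoff(omega_prediction(), my_decision()))  # 1000000
```

On this framing there is no separate "change your decision after the fact" step: the action and the prediction are two calls to the same function.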
In the strictest sense, that requires breaking the light-speed limit on information. Otherwise I'm going to bring in a cosmic ray detector and two-box iff the time between the second and third detections is less than the time between the first and second.
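For concreteness, here is a rough sketch of that cosmic-ray strategy; the detector interface is entirely hypothetical, and I've faked it with random exponential inter-arrival times just to show the decision rule:

```python
# Illustrative sketch only: next_detection_time() is a hypothetical stand-in
# for a real cosmic-ray detector, modelled as Poisson arrivals with a mean
# gap of 10 seconds.
import random

def next_detection_time(last: float) -> float:
    return last + random.expovariate(1 / 10)

t1 = next_detection_time(0.0)
t2 = next_detection_time(t1)
t3 = next_detection_time(t2)

# Two-box iff the gap between the 2nd and 3rd detections is shorter than
# the gap between the 1st and 2nd; otherwise one-box.
action = "two-box" if (t3 - t2) < (t2 - t1) else "one-box"
print(action)
```

The point of the strategy is that the choice is delegated to a physical event that hasn't happened yet when Omega fills the boxes, so predicting it would seem to require information about the future.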