Okay. As a first point, it's worth noting that the two-boxer would agree that you should submit one-boxing code, because they agree that one-boxing is the rational agent type. However, they would deny that one-boxing is the rational decision. So I agree that this is a good intuition pump, but it is not one that anyone denies.
But you go further: you follow this claim up by saying that we should treat Newcomb's problem as a case where causation itself is weird (side note: Huw Price presents an argument of this sort, defending a particular view of causation in these cases). However, I don't feel any "intuition pump" force here; I don't see why I should just intuitively find these claims plausible.
> it's worth noting that the two-boxer would agree that you should submit one-boxing code because they agree that one-boxing is the rational agent type.
Running one-boxing code is analogous to showing Omega your decision algorithm and then deciding to one-box. If you think you should run code that one-boxes, then by analogy you should decide to one-box.
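The "submit code" framing can be made concrete with a small sketch. This assumes the standard payoffs from the literature and a perfect predictor; the `payoff` helper is hypothetical, introduced only for illustration. Under those assumptions, agents whose code one-boxes end up richer:

```python
# Minimal sketch of Newcomb's problem payoffs, assuming Omega is a
# perfect predictor who inspects the agent's code before filling the boxes.

def payoff(agent_code: str) -> int:
    """Return the payout for an agent running `agent_code`.

    Omega reads the code in advance: the opaque box contains $1,000,000
    only if the code one-boxes; the transparent box always holds $1,000.
    """
    opaque = 1_000_000 if agent_code == "one-box" else 0
    transparent = 1_000
    if agent_code == "one-box":
        return opaque            # take only the opaque box
    return opaque + transparent  # take both boxes

print(payoff("one-box"))  # 1000000
print(payoff("two-box"))  # 1000
```

Note that nothing here settles the dispute: the two-boxer agrees with these numbers and simply insists that, once the boxes are filled, taking both dominates.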
I have sympathy with both one-boxers and two-boxers in Newcomb's problem. Many people on Less Wrong, however, seem to be staunch and confident one-boxers. So I'm turning to you guys to ask for help figuring out whether I should be a staunch one-boxer too. Below is an imaginary dialogue setting out my understanding of the arguments normally advanced on LW for one-boxing, and I was hoping to get help filling in the details and extending this argument so that I (and anyone else who is uncertain about the issue) can develop an understanding of the strongest arguments for one-boxing.