I have sympathy with both one-boxers and two-boxers in Newcomb's problem. Many people on Less Wrong, by contrast, seem to be staunch and confident one-boxers. So I'm turning to you guys to ask for help figuring out whether I should be a staunch one-boxer too. Below is an imaginary dialogue setting out my understanding of the arguments normally advanced on LW for one-boxing. I was hoping to get help filling in the details and extending the argument, so that I (and anyone else who is uncertain about the issue) can develop an understanding of the strongest arguments for one-boxing.
As the argument goes, you can't control your past selves, but the experiment doesn't require you to. The only self you're controlling is the one deciding whether to one-box (equivalently, whether to be a one-boxer).
See, that is the self that past Omega is paying attention to in order to figure out how much money to put in the box. That's right: past Omega is watching current you to figure out whether or not to kill your daughter / put money in the box. It doesn't matter how he does it; all that matters is whether or not your current self decides to one-box.
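To make the force of that concrete, here is a minimal expected-value sketch in Python. It assumes the standard stipulations (roughly $1,000,000 in the opaque box, $1,000 in the transparent one, and a near-perfect predictor); none of these numbers come from the dialogue itself, and the accuracy parameter is just illustrative.

```python
# If the decision Omega tracks just is your current decision, then the
# expected value of each policy follows directly from the predictor's accuracy.
# The $1,000,000 / $1,000 payoffs are the standard Newcomb stipulations.

def expected_payoff(one_box: bool, accuracy: float) -> float:
    """Expected dollars when Omega predicts your actual choice with `accuracy`."""
    million, thousand = 1_000_000, 1_000
    if one_box:
        # The opaque box holds the million iff Omega correctly predicted one-boxing.
        return accuracy * million
    # Two-boxers always pocket the visible $1,000, and get the million only
    # when Omega wrongly predicted they would one-box.
    return thousand + (1 - accuracy) * million

for accuracy in (0.99, 0.9, 0.6):
    print(f"accuracy={accuracy}: one-box={expected_payoff(True, accuracy):,.0f}, "
          f"two-box={expected_payoff(False, accuracy):,.0f}")
# accuracy=0.99: one-box=990,000, two-box=11,000
```

Even at 60% accuracy, one-boxing comes out ahead on this accounting; the dispute is over whether this is the right way to account.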
To follow a thought experiment I found enlightening here: how is it that past Omega knows whether or not you're a one-boxer? In any simulation he could run of your brain, couldn't the simulated you just notice that it's a simulation, in which case Omega wouldn't get the correct result? But, as we know, he does get the result right almost all of the time. So the simulation must be seamless: when the simulated you looks outside, it sees a bird on a tree; if it uses the bathroom, the toilet might clog. Any giveaway would let the selfish you game the setup, one-boxing in the simulation while two-boxing in real life.
The point? How do you know that current you isn't the simulation past Omega is using to figure out whether to kill your daughter? Are philosophical claims about the irreducibility of intentionality really enough to justify taking that risk?
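A toy model may make the structure of that worry clearer. Below, Omega fills the box by literally running your decision procedure once as the "simulation" and once "for real"; since neither call receives any information distinguishing the two, whatever policy you implement is also the policy Omega predicts. The one-perfect-simulation setup and the function names are my own illustrative assumptions, not anything specified in the problem.

```python
# Omega as a simulator: your decision procedure is run twice and cannot tell
# which run it is in, so the prediction always matches the actual choice.
# This is a deliberately simplified model (one perfect simulation, no noise).

def omega_runs(decide):
    """`decide` is your decision procedure; it returns 'one-box' or 'two-box'."""
    predicted = decide()   # the "simulation" -- for all this call knows, it is the real thing
    opaque_box = 1_000_000 if predicted == 'one-box' else 0
    actual = decide()      # the "real" decision -- same procedure, same answer
    return opaque_box if actual == 'one-box' else opaque_box + 1_000

print(omega_runs(lambda: 'one-box'))   # 1000000
print(omega_runs(lambda: 'two-box'))   # 1000
```

In this model the only policy that walks away with the million is the one that one-boxes unconditionally, precisely because no run of the procedure can tell whether it is the instance being watched.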