Here's an edited version of a puzzle from the book "Chuck Klosterman IV" by Chuck Klosterman.
It is 1933. Somehow you find yourself in a position where you can effortlessly steal Adolf Hitler's wallet. The theft will not affect his rise to power, the nature of WW2, or the Holocaust. There is no important identification in the wallet, but the act will cost Hitler forty dollars and completely ruin his evening. You don't need the money. The odds that you will be caught committing the crime are negligible. Do you do it?
When should you punish someone for a crime they will commit in the future? Discuss.
OK, it's like CM: right now (before Omega shows up) I want to be the kind of person who always one-boxes, even if the box turns out to be empty, so that I'll never face an empty box. That is the rational and correct choice now.
This is not, however, the same as saying that the rational choice for someone staring at an empty box B is to one-box. That scenario will never materialise if you don't screw up; but if you take as a hypothesis that you do find yourself in it (because, say, you weren't rational before meeting Omega but became perfectly rational afterwards), then the rational answer within that scenario is to two-box. Yes, that means you screwed up by not wanting it sincerely enough, but it's the question itself that assumes you've already screwed up.
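The policy/act asymmetry can be made concrete with a small payoff sketch. This is my own illustration, assuming the standard Newcomb payoffs ($1,000 in box A, $1,000,000 in box B iff Omega predicts one-boxing) and a perfect predictor:

```python
# Payoffs assumed: box A always holds $1,000; Omega fills box B with
# $1,000,000 iff it predicts one-boxing, and its prediction is perfect.

def policy_payoff(one_box: bool) -> int:
    """Payoff of committing to a policy BEFORE Omega predicts."""
    b = 1_000_000 if one_box else 0  # the perfect predictor tracks the policy
    return b if one_box else 1_000 + b

def act_payoff(one_box: bool, b_content: int) -> int:
    """Payoff of an act AFTER B's content is already fixed."""
    return b_content if one_box else 1_000 + b_content

print(policy_payoff(True))    # commit to one-boxing: 1000000
print(policy_payoff(False))   # commit to two-boxing: 1000
print(act_payoff(True, 0))    # one-box a box you know is empty: 0
print(act_payoff(False, 0))   # two-box that same empty box: 1000
```

At the policy level, one-boxing dominates; conditional on B already being empty, two-boxing dominates. Both claims are true at once, which is the whole point.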
Translating this to the pre-punishment scenario: assuming a sufficient severity of average pre-punishment, a rational person will never want to become a criminal, so a rational person will never be pre-punished in the first place. But if Normal_Anomaly asks, "Assume that you have been pre-punished; should you then commit crimes?", the answer is "Yes, but note that your hypothesis can only be true if I had not been perfectly rational in the past."
(Separate observation: Omega puts a million dollars in B iff you will one-box. Omega then reads my strategy: "if box B is opaque, or transparent and full, I one-box; if it is transparent and empty, I two-box". If box B is opaque, this forces Omega to put the money there. But if B is transparent, Omega will be right no matter what. Are we authorised to assume that Omega will flip a coin in this scenario, or should we just say that the problem isn't well-posed for a transparent box? I lean towards the latter. If the box is transparent and your choice is conditional on its contents, you've effectively turned Omega's predictive ability against itself: you'll one-box iff Omega puts the million there, and Omega puts the million there iff you'll one-box. Loop.)
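The underdetermination can be checked mechanically. A minimal sketch (my own model, not from the post): Omega must pick fill-or-not, the agent's strategy is "one-box iff the transparent B is full", and we ask which of Omega's options make its prediction come out right.

```python
# Transparent-box Newcomb consistency check (hypothetical model).
# The agent's conditional strategy: one-box iff box B is visibly full.

def agent_one_boxes(box_full: bool) -> bool:
    """The conditional strategy described above."""
    return box_full

def omega_prediction_consistent(fill: bool) -> bool:
    # Omega fills B iff it predicts one-boxing, so its choice is
    # self-consistent iff filling matches the agent's resulting act.
    return fill == agent_one_boxes(box_full=fill)

# Both of Omega's options turn out self-consistent: fill -> the agent
# one-boxes (prediction right); leave empty -> the agent two-boxes
# (prediction right again). Nothing in the setup picks between them.
print([fill for fill in (True, False) if omega_prediction_consistent(fill)])
# -> [True, False]
```

Since both options satisfy Omega's rule, the problem statement alone doesn't determine Omega's behaviour, which is exactly the sense in which the transparent variant is underspecified.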
Transparent Newcomb is well-posed but, I admit, underspecified. So add this rule: