Newcomb's Problem is effectively a problem about pre-commitment. Everyone agrees that if you have the opportunity to pre-commit before Omega makes its prediction, then you ought to. The only question is what you ought to do if you either failed to do this or were never given the opportunity. LW-style decision theories like TDT or UDT say that you should act as though you had pre-committed, while Causal Decision Theory says that it's too late.
Formal pre-commitments include things like rewriting your code, signing a legally binding contract, or providing assets as security. If set up correctly, they ensure that a rational agent actually keeps their end of the bargain. Of course, an irrational agent may still break the bargain anyway.
Effective pre-commitment describes any situation where an agent must (in the logical sense) necessarily perform an action in the future, even if there is no formal pre-commitment. If libertarian free will existed, then no one would ever be effectively pre-committed, but if the universe is deterministic, then we are effectively pre-committed to any choice that we make (quantum mechanics effectively pre-commits us to particular probability distributions rather than individual choices, but for simplicity we will ignore this here and assume straightforward determinism). This follows directly from the definition of determinism (I discussed the philosophical consequences of determinism in a previous post).
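To make this more concrete, here is a minimal sketch of effective pre-commitment under determinism. This is my own illustration, not part of the problem statement; the only things taken from the standard setup are the $1,000,000 and $1,000 boxes. The point is that Omega can predict by simply running the agent's own decision procedure, so the agent is already committed to whatever that procedure outputs, whether or not it knows this yet:

```python
# A deterministic agent's decision procedure: its output is fixed by its "code",
# so the agent is effectively pre-committed to that output.
def agent_decision():
    return "one-box"

# Omega predicts by running the agent's own decision procedure.
def omega_fills_boxes(predict):
    prediction = predict()
    opaque = 1_000_000 if prediction == "one-box" else 0  # full only if one-boxing is predicted
    transparent = 1_000
    return opaque, transparent

opaque, transparent = omega_fills_boxes(agent_decision)
choice = agent_decision()  # the agent "chooses", but the output was fixed all along
payoff = opaque if choice == "one-box" else opaque + transparent
print(choice, payoff)  # one-box 1000000
```

Changing agent_decision to return "two-box" changes Omega's prediction as well, which is exactly why there is no way to get ahead of a perfect predictor.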
One reason why this concept seems so weird is that there's absolutely no need for an agent that's effectively pre-committed to know that it is pre-committed until the exact moment when it locks in its decision. From the agent's perspective, it magically turns out to have been pre-committed to whatever action it chooses; in truth, though, the agent was always pre-committed to this action, just without knowing it.
Much of the confusion about pre-commitment comes from whether we should be looking at formal or effective pre-commitment. Perfect predictors only care about effective pre-commitment; for them, formalities are unnecessary and possibly misleading. Human-level agents, however, tend to care much more about formal pre-commitments. Some people, like detectives or poker players, may be very good at reading others, but they're still nothing compared to a perfect predictor, and most people aren't even that good. So in everyday life, when we want certainty, we rely on formal pre-commitments.
However, Newcomb's Problem explicitly specifies a perfect predictor, so we shouldn't be thinking in terms of human-level predictors. In fact, I'd say that some of the emphasis on formal pre-commitment comes from anthropomorphizing perfect predictors: it's really hard for us to accept that anyone or anything could actually be that good and that there's no way to get ahead of it.
In closing, differentiating the two kinds of pre-commitment really clarifies these discussions. We may not be able to go back into the past and pre-commit to a certain course of action, but we can act on the basis that it would have been good to pre-commit to it, and be assured that we will discover we were effectively pre-committed to it all along.
Not sure why people find Newcomb's problem so complicated; it is pretty trivial: you one-box, you win; you two-box, you lose. It doesn't matter when you feel you made the decision, either; what matters is the decision itself. The confusion arises when people try to fight the hypothetical by assuming that an impossible world, one where they can fool a perfect predictor, has a non-zero probability of becoming the actual world.
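For concreteness, a quick expected-value check (my own illustration, using the standard $1,000,000 / $1,000 payoffs and a predictor that is right with probability p; the perfect predictor is just the p = 1 case):

```python
# Expected payoffs with the standard Newcomb amounts and a predictor of accuracy p.
def expected_payoff(choice, p):
    if choice == "one-box":
        # The opaque box holds $1,000,000 exactly when the prediction was right.
        return p * 1_000_000
    # Two-boxing: you always get the $1,000, plus $1,000,000 if the prediction was wrong.
    return p * 1_000 + (1 - p) * 1_001_000

for p in (1.0, 0.9, 0.5005):
    print(p, expected_payoff("one-box", p), expected_payoff("two-box", p))
# p = 1.0:    one-boxing gets $1,000,000, two-boxing gets $1,000
# p = 0.5005: the two strategies break even; above that, one-boxing wins
```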
No, it cannot. What you are doing in a self-consistent model is something else. As jessicata and I discussed elsewhere on this site, what we observe is a macrostate, and there are many microstates corresponding to the same macrostate. A "different past" means a state of the world in a different microstate than the actual past, while in the same macrostate as the actual past. So there is no such thing as a counterfactual: the "would have been" just means a different microstate. In that sense it is no different from a state observed in the present or the future.