So, just a small observation about Newcomb's problem:
It does matter to me who the predictor is.
If it is a substantially magical Omega that predicts without fail, I will one-box - gambling that my decision might in fact cause a million to be in that box somehow (via simulation, via time travel, via some handwavy science-fictiony quantum-mechanical stuff where the box content is entangled with me, even via quantum murder, as in quantum suicide - it does not matter how). I don't need to change anything about myself - I will win, unless I was wrong about how the predictions are done and Omega failed.
If it is a human psychologist, or equivalent - well, in that case I should make up some rationalization here for one-boxing that looks like I truly believe it. I'm not going to do that, because I see the utility of writing a better post here as larger than the utility of winning a future Newcomb's game show that is exceedingly unlikely to happen.
The situation with a fairly accurate human psychologist is drastically different.
The psychologist may put nothing into box B because you did well on a particular subset of a test you took decades ago - or because you did poorly; he can base the prediction on your relative grades on particular problems back in elementary school. One thing he isn't doing is replicating the non-trivial, complicated computation you do in your head (assuming that computation isn't a mere rationalization fitted to arrive at an otherwise preset conclusion). He may have been correct with the previous 100 subjects through a combination of sheer luck and the unwillingness of those 100 participants to actually think about it on the spot - they solved it via cached thoughts and memes, so a mere lookup of their personal history sufficed (they might have complex after-the-fact rationalizations of the decision, but those are irrelevant). You can't 'win' this in advance by adjusting a Newcomb-specific strategy; you would have to adjust your normal life. E.g. I might have to change the content of this post to win a future Newcomb's paradox. Even that may not work if the prediction is based on events that happened to you and shaped the way you think.
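For concreteness, here is the standard expected-value comparison against a predictor of accuracy p. The payoffs are the conventional ones, not figures stated above: $1,000,000 in box B if one-boxing is predicted, and an assumed $1,000 always in the transparent box A. A minimal sketch:

```python
# Expected value of each choice against a predictor with accuracy p,
# using the conventional Newcomb payoffs (assumed, not from the post):
# $1,000,000 in box B if the predictor expects one-boxing,
# $1,000 always in the transparent box A.
def ev_one_box(p):
    # Predictor is right with probability p, so box B is full that often.
    return p * 1_000_000

def ev_two_box(p):
    # Predictor is wrong with probability 1 - p, so box B is full that
    # often; the $1,000 from box A is collected either way.
    return (1 - p) * 1_000_000 + 1_000

# One-boxing pulls ahead once p * 1e6 > (1 - p) * 1e6 + 1e3,
# i.e. once p > 0.5005 - barely better than a coin flip.
for p in (0.5, 0.5005, 0.9, 1.0):
    print(p, ev_one_box(p), ev_two_box(p))
```

On this naive expected-value reading, even a modestly reliable psychologist makes one-boxing the better bet - which is exactly why it matters, per the above, that his accuracy may come from your history rather than from anything you can adjust on the spot.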
The point I'm making is not about Omega's trustworthiness, but about my beliefs.
If Omega is trustworthy AND I'm confident that Omega is trustworthy, then I will one-box. The reason I will one-box is that it follows from what Omega has said that one-boxing is the right thing to do, and I believe that Omega is trustworthy. It feels completely bizarre to one-box, but that's because it's completely bizarre for me to believe that Omega is trustworthy; if I have already assumed the latter, then one-boxing follows naturally. It follows just as naturally with a transparent box, or with a box full of pit vipers, or with a revolver which, I'm assured, will net me a million dollars if fired at my head. If I'm confident that Omega's claims are true, I one-box (or fire the revolver, or whatever).
If Omega is not trustworthy AND I'm confident that Omega is trustworthy, then I will still one-box. It's just that in that far-more-ordinary scenario, doing so is a mistake.
I cannot imagine a mechanism whereby I become confident that Omega is trustworthy, but if the setup of the thought experiment presumes that I am confident, then what follows is that I one-box.
No precommitment is required. All I have to "precommit" to is acting on the basis of what I believe to be true at the time. If that includes crazy-seeming beliefs about Omega, then the result will be crazy-seeming decisions. If those crazy-seeming beliefs are true, then the result will be crazy-seeming correct decisions.