So, just a small observation about Newcomb's problem:
It does matter to me who the predictor is.
If it is a substantially magical Omega, one that predicts without fail, I will one-box and gamble that my decision might in fact cause a million to be in that box somehow (via simulation, via time travel, via some handwavy science-fictiony quantum-mechanical stuff where the box contents are entangled with me, even via quantum murder, like quantum suicide; it does not matter). I don't need to change anything about myself. I will win, unless I was wrong about how the predictions are done and Omega failed.
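For concreteness, here is a minimal sketch of the standard expected-value comparison, assuming the usual $1,000,000 / $1,000 payoffs (I'm supplying the conventional numbers; the problem statement above doesn't fix them) and a predictor that is right with probability p:

```python
# Expected payoffs in Newcomb's problem for a predictor of accuracy p.
# The $1,000,000 / $1,000 payoffs are the conventional ones, assumed here.

def ev_one_box(p):
    # With probability p the predictor foresaw one-boxing and filled box B.
    return p * 1_000_000

def ev_two_box(p):
    # With probability (1 - p) the predictor wrongly expected one-boxing,
    # so box B is full; the $1,000 in box A is collected either way.
    return (1 - p) * 1_000_000 + 1_000

for p in (1.0, 0.999, 0.5005, 0.5):
    print(f"p={p}: one-box={ev_one_box(p):>12,.0f}  two-box={ev_two_box(p):>12,.0f}")
```

On these numbers one-boxing pulls ahead for any accuracy above p = 0.5005, but that only holds if p really describes the predictor's accuracy on *your* decision procedure, which is exactly what the human-psychologist case below calls into question.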
If it is a human psychologist, or equivalent, then in that case I should make up some rationalization for one-boxing here that looks like I truly believe it. I'm not going to do that, because I see the utility of writing a better post here as larger than the utility of winning a future Newcomb's game show that is exceedingly unlikely to happen.
The situation with a fairly accurate human psychologist is drastically different.
The psychologist may put nothing into box B because you did well on a particular subset of a test you took decades ago, or because you did poorly; he can decide based on your relative grades on particular problems back in elementary school. One thing he isn't doing is replicating the non-trivial, complicated computation you do in your head (assuming that computation isn't a mere rationalization fitted to arrive at an otherwise preset conclusion). He may have been correct with the previous 100 subjects through a combination of sheer luck and the unwillingness of those 100 participants to actually think about it on the spot rather than decide via cached thoughts and memes, which requires a mere lookup of their personal history (they might have complex after-the-fact rationalizations of the decision, but those are irrelevant). You can't make yourself 'win' this in advance by adjusting some Newcomb-paradox-specific strategy; you would have to adjust your normal life. E.g., I may have to change the content of this post to win a future Newcomb's paradox. Even that may not work if the prediction is based on events that happened to you and which shaped the way you think.
This looks like evidential decision theory, which gives the wrong answer in the Smoking Lesion problem.
(Here's a slightly less mind-killing variant: let's say that regularly taking aspirin is correlated with risk of a heart attack, but not because it causes them; in fact, aspirin (in this hypothetical) is good for anyone's heart. Instead, there's an additional risk factor for heart attacks, which also causes discomfort beneath the threshold of full consciousness. People with this risk factor end up being more likely to take aspirin regularly, though they're not able to pinpoint why, and the effect is large enough that the correlation points the "wrong" way. Now if you know all of this and are wondering whether to take aspirin regularly, the calculation you did above would tell you not to take it!)
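To make that divergence concrete, here's a minimal sketch with made-up numbers (the prevalence of the risk factor, its pull toward aspirin-taking, and aspirin's protective effect are all hypothetical, chosen only so the correlation points the "wrong" way):

```python
# Hypothetical numbers for the aspirin variant of the Smoking Lesion.
# R = hidden risk factor. All probabilities are invented for illustration.
P_R = 0.2                                  # prevalence of the risk factor
P_ASP = {True: 0.8, False: 0.2}            # P(takes aspirin | R), P(takes aspirin | not R)
P_HA = {                                   # P(heart attack | R, aspirin)
    (True, True): 0.48, (True, False): 0.60,    # aspirin helps, even with R
    (False, True): 0.08, (False, False): 0.10,  # ... and without R
}

def p_R_given(asp):
    """Bayes: treating your own choice as evidence about R (the EDT move)."""
    num = P_R * (P_ASP[True] if asp else 1 - P_ASP[True])
    den = num + (1 - P_R) * (P_ASP[False] if asp else 1 - P_ASP[False])
    return num / den

def edt_risk(asp):
    """Evidential: P(heart attack | aspirin-taking observed)."""
    r = p_R_given(asp)
    return r * P_HA[(True, asp)] + (1 - r) * P_HA[(False, asp)]

def cdt_risk(asp):
    """Causal: P(heart attack | do(aspirin)); intervening doesn't change R."""
    return P_R * P_HA[(True, asp)] + (1 - P_R) * P_HA[(False, asp)]

print(f"EDT: take={edt_risk(True):.3f}  abstain={edt_risk(False):.3f}")  # 0.280 vs 0.129
print(f"CDT: take={cdt_risk(True):.3f}  abstain={cdt_risk(False):.3f}")  # 0.160 vs 0.200
```

With these numbers, conditioning on the choice makes aspirin look harmful (0.28 vs. ~0.13) even though intervening to take it lowers the risk (0.16 vs. 0.20), which is exactly the sense in which the evidential calculation goes wrong here.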
We can get down to a discussion of evidential vs. causal decision theory if you want, certainly, but I think that's a bit off topic.
I have a couple of reactions to your point. My initial reaction is that evidential decision theory is superior in the case of Omega because nothing is known about em. Since Omega is a black box, the only thing that can really be done is to gather evidence and respond to it.
But more generally, I think your example is somewhat strawman-ish. Just like in the smoking problem, there is other evidence suggesting that aspirin has the oppo...