So, just a small observation about Newcomb's problem:
It does matter to me who the predictor is.
If it is a substantially magical Omega that predicts without fail, I will one-box, gambling that my decision might in fact cause a million to be in that box somehow (via simulation, via time travel, via some hand-wavy science-fictiony quantum-mechanical stuff where the box contents are entangled with me, even via quantum murder, like quantum suicide; it does not matter how). I don't need to change anything about myself: I will win, unless I was wrong about how the predictions are done and Omega failed.
If it is a human psychologist, or equivalent, then in that case I should make up some rationalization for one-boxing here that looks like I truly believe it. I'm not going to do that, because I see the utility of writing a better post here as larger than the utility of winning a future Newcomb's game show that is exceedingly unlikely to happen.
The situation with a fairly accurate human psychologist is drastically different.
The psychologist may put nothing into box B because you did well on a particular subset of a test you took decades ago, or nothing because you did poorly; he can decide based on your relative grades on particular problems back in elementary school. One thing he isn't doing is replicating the non-trivial, complicated computation that you do in your head (assuming that computation isn't a mere rationalization fitted to an otherwise preset conclusion). He may have been correct with the previous 100 subjects through a combination of sheer luck and the unwillingness of those 100 participants to actually think about the problem on the spot, rather than settle it via cached thoughts and memes, which requires a mere lookup of their personal history (they might have complex after-the-fact rationalizations of that decision, but those are irrelevant). You can't make yourself 'win' this in advance by adjusting a Newcomb-specific strategy; you would have to adjust your normal life. E.g. I may have to change the content of this post to win a future Newcomb's paradox. Even that may not work if the prediction is based on events that happened to you and that shaped the way you think.
I always find it sad to see a thread downvoted with no comments or explanations, so I'm going to attempt to give my thoughts.
Newcomb's problem seems absurdly easy to me, at least in the way it was presented by Eliezer, which is not necessarily the universal formulation. The way he expressed it, you observe Omega predicting correctly n times. (You could even add inaccurate observations if you wanted to consider the possibility that Omega is accurate, say, 90% of the time. We will do this in later steps, and call the number of inaccurate observations m.) If one box contains A dollars (or C dollars, if Omega predicted you would two-box) and the other box contains B dollars, you can arrive at a pretty easy formulation of whether you should one-box or two-box. I almost wrote a MATLAB program to do it for arbitrary inputs, which I was going to make into a post, but about halfway through I concluded that most people wouldn't find it very interesting.
First you arrive at a probability that Omega will predict you correctly, assuming that you are no different from anyone else with whom Omega has played the game. To do this, you consider the accuracy, p, over a range of values from 0 to 1: 1.0 means Omega is perfectly accurate, 0.0 would mean ey is always wrong. The probability of obtaining the results you observed (n accurate predictions by Omega and m inaccurate ones), given any particular accuracy p, is then p^n (1-p)^m. This gives us a distribution that represents how probable each accuracy is for Omega. We will call this distribution D(p).
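For concreteness, here is a minimal sketch of that step (in Python rather than MATLAB, and with names that are just my own illustrative choices):

```python
import numpy as np

def D(p, n, m):
    """Unnormalized weight for a candidate accuracy p, given n correct
    predictions and m incorrect ones observed: p^n * (1 - p)^m."""
    return p**n * (1 - p)**m

# Example: evaluate D(p) on a grid of accuracies after observing 100 correct
# predictions and 0 incorrect ones.
p_grid = np.linspace(0.0, 1.0, 1001)
weights = D(p_grid, n=100, m=0)
```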
We then need to consider our two alternatives and select the one that maximizes expected utility. The utilities of two-boxing and one-boxing are:
U(two box) = p(Omega is wrong) × (value of box B + value of box A) + p(Omega is right) × (value of box B + lesser value Omega puts in A)
U(one box) = p(Omega is wrong) × (lesser value Omega puts in A) + p(Omega is right) × (value of box A)
(Remember that we are considering the possibility that instead of replacing A with 0 dollars, Omega puts some value C dollars in A. What really matters is the difference between these two values, together with the guaranteed B dollars that a two-boxer also collects.)
With the variables we used, p(Omega is right) is the probability that ey has a certain accuracy, our distribution D(p), times that accuracy, p; p(Omega is wrong) is one minus this. The value of box A is obviously A, the value of box B is B, and the lesser value that Omega puts in box A is C.
So our expected utilities are then a function of p as follows:
U(two box) = (1 - D(p)p)(B + A) + D(p)p(B + C) = [1 - p^n (1-p)^m p](B + A) + p^n (1-p)^m p (B + C)
U(one box) = (1 - D(p)p)C + D(p)p A = [1 - p^n (1-p)^m p] C + p^n (1-p)^m p A
All that needs to be done is then to integrate the expected utilities over p from zero to one. Whichever value is greater is the correct choice.
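A rough sketch of what that program might look like (in Python rather than MATLAB; the function name and the example payoff values of $1,000,000 / $1,000 / $0 are my own illustrative choices):

```python
import numpy as np

def compare_choices(n, m, A=1_000_000, B=1_000, C=0, grid=10_001):
    """Integrate U(one box) and U(two box) over the accuracy p from 0 to 1,
    using D(p) = p^n (1-p)^m and p(Omega is right) = D(p) * p as above."""
    p = np.linspace(0.0, 1.0, grid)
    D = p**n * (1 - p)**m
    p_right = D * p          # p(Omega is right) at accuracy p
    p_wrong = 1 - p_right    # p(Omega is wrong)

    u_two = p_wrong * (B + A) + p_right * (B + C)   # take both boxes
    u_one = p_wrong * C + p_right * A               # take only box A

    # Approximate the integrals over [0, 1] by averaging over the uniform grid.
    return u_one.mean(), u_two.mean()

u_one, u_two = compare_choices(n=100, m=0)
print("one box" if u_one > u_two else "two box")
```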
Note that this analysis has a number of (fairly obvious and somewhat trivial) assumptions. One, the probability of Omega being right is the same for one-boxers and two-boxers. Two, one's utility function in money is linear (although compensating for that would not be very difficult). Three, Omega has no more or less information about you than about anyone else about whom ey made this prediction.
If the goal is a simple analysis, why not this:
Let average_one_box_value = the average value received by people who chose one box.
Let average_two_box_value = the average value received by people who chose two boxes.
If average_one_box_value > average_two_box_value, then pick one box, else pick two.
As a bonus, this eliminates the need to assume that the probability of Omega being right is the same for one-boxers and two-boxers.
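In the same spirit, a minimal sketch of that rule (Python; the argument names are just placeholders for whatever record of past plays you have):

```python
def choose(one_box_payouts, two_box_payouts):
    """Pick whichever choice has paid out more on average for previous players."""
    average_one_box_value = sum(one_box_payouts) / len(one_box_payouts)
    average_two_box_value = sum(two_box_payouts) / len(two_box_payouts)
    return "one box" if average_one_box_value > average_two_box_value else "two boxes"
```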
[Edit - just plain wrong, see Misha's comment below] Minor quibble: It's also not necessary to assume linear utility for dollars, just continuous. That is, more money is always better. However, I'm pretty sure that's true in your example as well.