So, just a small observation about Newcomb's problem:

It does matter to me who the predictor is.

If it is a substantially magical Omega that predicts without fail, I will one-box and gamble that my decision might in fact cause a million to be in that box somehow (via simulation, via time travel, via some hand-wavy science-fictional quantum mechanics where the box contents are entangled with me, even via quantum murder, like quantum suicide; it does not matter). I don't need to change anything about myself. I will win, unless I was wrong about how the predictions are done and Omega failed.

If it is a human psychologist, or equivalent - well, in that case I should make up some rationalization here for one-boxing that looks like I truly believe it. I'm not going to do that, because I see the utility of writing a better post here as larger than the utility of winning a future Newcomb's game show that is exceedingly unlikely to happen.

The situation with a fairly accurate human psychologist is drastically different.

The psychologist may put nothing into box B because you did well on a particular subset of a test you took decades ago, or put nothing because you did poorly. He can base it on your relative grades on particular problems back in elementary school. One thing he isn't doing is replicating the non-trivial, complicated computation that you do in your head (assuming that computation isn't a mere rationalization fitted to an otherwise preset conclusion). He may have been correct with the previous 100 subjects through a combination of sheer luck and the unwillingness of those 100 participants to actually think about the problem on the spot, rather than settle it with cached thoughts and memes, which requires a mere lookup of their personal history (they might have complex after-the-fact rationalizations of that decision, but those are irrelevant). You can't make yourself 'win' this in advance by adjusting a Newcomb-specific strategy; you would have to adjust your normal life. E.g. I may have to change the content of this post to win a future Newcomb's paradox. Even that may not work if the prediction is based on events that happened to you and shaped the way you think.


If the psychologist were predicting you based on a simple algorithm that only took your test scores as inputs, or something like that, you would be totally right.

But it starts to look a lot like Newcomb's problem if the psychologist is predicting you using an algorithm similar to the one you use to make the decision - in that case you should one-box.

But it starts to look a lot like Newcomb's problem if the psychologist is predicting you using an algorithm similar to the one you use to make the decision - in that case you should one-box.

Not necessarily; it's hard to say what you should actually do. Maybe the psychologist is gullible enough that you can succeed in getting both boxes non-empty.

So you put a probability on that and do an expected utility calculation.

(It's hard to say how to put a probability on that.)

The situation with a fairly accurate human psychologist is drastically different.

I'm not sure if there is a way of easily pinpointing the problem with your reasoning, but the TDT paper is probably thorough enough to resolve it. See also Manfred's comment: if the psychologist is "one level higher than you", your reasoning could already be taken into account, and depending on how you reason, you could receive a different reward.

If it is a substantially magical Omega that predicts without fail, I will one-box

I'm not so sure about myself.

Imagine a slight variation on the problem:

Suppose that Omega tells us his method of prediction: 2% of the time, he flips a coin. 98% of the time, he uses a time machine. (His method of randomness is not predictable by us.)

He then makes both boxes transparent.

In theory, it should be the same. If it was right to one-box in the original, you should still one-box in the transparent version. But it feels different.

I'm sure that if box B had $1,000,000 I'd be able to resist temptation and one-box.

But if box B is empty, would I really take just the empty box? I'm not sure.

As with most thought experiments of this sort, a lot depends on how I imagine the setup. If I imagine it as "someone tells me that they have a time machine, etc.", then I assume there is a significant probability that they are in one form or another lying to me, and I take the money I can see. If instead I imagine it as "I somehow come to believe with very high confidence that Omega has a time machine, is telling the truth, etc.", then I suspect I one-box on a transparent empty box as well.

This feels crazy, certainly, but the craziness is embedded in the premise and I'm working out the consequences of that craziness.

I was assuming that Omega is a trustworthy agent.

If we're going to question Omega at all, why not question whether he's actually going to make a prediction, or whether there will be two boxes and not three, or how we know Omega is actually as good at predicting as is claimed? I think the principle of the Least Convenient Possible World applies: assume an honesty for Omega that is as inconvenient as possible for your argument.

If any condition makes one boxing seem both crazy and correct, then there's more to be discovered about our reasoning process.

I'm guessing that it's the precommitment part of the problem that seems crazy. Suppose that to precommit to one-boxing, you gave Omega your word that you would one-box. Then when faced with the empty transparent box, you can make the less crazy-seeming decision that not breaking your word is worth more than $1,000.

That seems rational to me: the cost of giving up your right to make a different decision in the future, even knowing there's a 2% chance it will turn out worse, is less than the value of the 98% chance of affecting Omega's behavior. It's similar to giving your word to the driver in Parfit's Hitchhiker problem.
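For concreteness, here is a rough sketch of that trade-off (the arithmetic is mine, not part of the original comment), assuming the usual $1,000/$1,000,000 amounts and that the coin-flipped prediction is uncorrelated with your actual choice:

```python
# Rough sketch (mine, not the commenter's) of the expected values in the
# 98%-time-machine / 2%-coin-flip variant, assuming the usual $1,000 / $1,000,000
# amounts and that the coin-flipped prediction is independent of your choice.
P_TIME_MACHINE = 0.98   # prediction guaranteed correct
P_COIN = 0.02           # prediction right only half the time
SMALL, BIG = 1_000, 1_000_000

# If you have committed to one-boxing:
# time machine -> box B is full; coin flip -> full with probability 0.5.
ev_one_box = P_TIME_MACHINE * BIG + P_COIN * 0.5 * BIG

# If you two-box:
# time machine -> box B is empty, you get $1,000; coin flip -> full half the time.
ev_two_box = P_TIME_MACHINE * SMALL + P_COIN * (0.5 * (BIG + SMALL) + 0.5 * SMALL)

print(ev_one_box, ev_two_box)   # 990000.0  11000.0
```

On these assumptions, the committed one-boxer expects about $990,000 and the two-boxer about $11,000, which is the sense in which the 2% downside is cheap.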

The point I'm making is not about Omega's trustworthiness, but about my beliefs.

If Omega is trustworthy AND I'm confident that Omega is trustworthy, then I will one-box. The reason I will one-box is that it follows from what Omega has said that one-boxing is the right thing to do, and I believe that Omega is trustworthy. It feels completely bizarre to one-box, but that's because it's completely bizarre for me to believe that Omega is trustworthy; if I have already assumed the latter, then one-boxing follows naturally. It follows just as naturally with a transparent box, or with a box full of pit vipers, or with a revolver which I'm assured that, fired at my head, will net me a million dollars. If I'm confident that Omega's claims are true, I one-box (or fire the revolver, or whatever).

If Omega is not trustworthy AND I'm confident that Omega is trustworthy, then I will still one-box. It's just that in that far-more-ordinary scenario, doing so is a mistake.

I cannot imagine a mechanism whereby I become confident that Omega is trustworthy, but if the setup of the thought experiment presumes that I am confident, then what follows is that I one-box.

No precommitment is required. All I have to "precommit" to is to acting on the basis of what I believe to be true at the time. If that includes crazy-seeming beliefs about Omega, then the result will be crazy-seeming decisions. If those crazy-seeming beliefs are true, then the result will be crazy-seeming correct decisions.

I like the rec.puzzles answer:

... In this case, the hidden assumption is that P(predict X | do X) is near unity, given that P(do X | predict X) is near unity. If this is so, then the one-box strategy is best; if not, then the two-box strategy is best.

I think knowing who is doing the predicting, or how the predicting is done, affects the important probability: how likely it is that my choice affects the prediction. I'd have to agree, knowing who the predictor is matters.

Upvoted because: more like this.

(This is a disclaimer: as far as I can tell...) The difference between the two is that for Omega it is stipulated that Omega is always right, but for the psychologist your evidence is that they got the last 100 right (or that they say so?).

I'm gonna ignore the case where you don't know they actually got 100 right.

Getting 100 in a row implies that they are more than "fairly accurate". Otherwise, maybe they kept making predictions until they happened to get 100 in a row and then called you in, or they got really lucky.

Assuming, for convenience, that these 100 are the only 100 (and assuming you know this), they probably have a reliable way to predict your decision.

This may be because they're wizards exercising mind control, or just psychologists using priming, or hypnotists and so on: that is, their prediction may be aided by deliberately influencing your decision.

For now I'm gonna model this like they don't exert any influence.

Whether they do this by being Omega or by memes and stalking, their methods were very accurate for the 100 other people and so probably are for you.

If they predict two-boxing, you get $1,000 if you two-box and $0 if you one-box. If they predict one-boxing, you get $1,001,000 if you two-box and $1,000,000 if you one-box.

If your Newcomb's strategy can influence their prediction by more than 1/1000, being a one-boxer dominates being a two-boxer. The fact that they were right 100 times in a row before you means your Newcomb's strategy probably has some influence. This isn't a given: maybe their algorithm breaks down because you've thought of Newcomb's problem, or you are relevantly different from the previous 100 in a way that will break the prediction.
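A quick check of that 1/1000 figure, writing q1 and q2 (notation mine, not the commenter's) for the probability that the psychologist predicts one-boxing given that you actually one-box or two-box, respectively:

```latex
\[
\text{EV(one-box)} = q_1 \cdot 1{,}000{,}000,
\qquad
\text{EV(two-box)} = q_2 \cdot 1{,}001{,}000 + (1 - q_2)\cdot 1{,}000
                   = q_2 \cdot 1{,}000{,}000 + 1{,}000,
\]
\[
\text{so one-boxing wins exactly when } q_1 - q_2 > \frac{1{,}000}{1{,}000{,}000} = \frac{1}{1000}.
\]
```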

OK, so you probably agree with all of that anyway; I was a little confused. But if there were a significant number of one-boxers among the previous 100, the psychologist correctly predicted their strategy, and they probably did not try to deliberately signal that they were one-boxers well before encountering the problem.

So simply being a one-boxer probably reliably signals to the psychologist that you are a one-boxer. And deliberately signalling that you are makes you different from the people it worked on; you might throw the psychologist off.

Bringing back the possibility that they are right because they influence people: you should still one-box (if you can) and probably make a big show of precommitting to it, because the psychologist might, with probability > 0.001, prefer being right to saving the $999,000.

The signalling is probably only a bad idea if the psychologist is genuinely predicting, but with an algorithm that is easily thrown off.

I always find it sad to see a thread downvoted with no comments or explanations, so I'm going to attempt to give my thoughts.

Newcomb's problem seems absurdly easy to me. At least in the way it was presented by Eliezer, which is not necessarily a universal formulation. The way he expressed it, you observe Omega predicting correctly n times. (You could even add inaccurate observations if you wanted to consider the possibility that Omega is accurate, say, 90% of the time. We will do this in later steps, and call the number of inaccurate observations m.) If one box contains A dollars (or C dollars, if Omega predicted you would two-box) and the other box contains B dollars, you can arrive at a pretty easy formulation of whether you should one-box or two-box. I almost wrote a MATLAB program to do it for arbitrary inputs that I was going to make into a post, but I figured most people wouldn't find it very interesting, which was my conclusion after I got about halfway done with it.

First you arrive at a probability that Omega will predict you correctly, assuming that you are no different from anyone else with whom Omega has played the game. To do this, you consider the accuracy, p, over a range of values from 0 to 1, where 1.0 means Omega is perfectly accurate and 0.0 would mean ey is always wrong. The probability of obtaining the results you observed (n accurate predictions by Omega and m inaccurate ones), given any probability p of em being accurate, is then p^n (1-p)^m. This gives us a distribution that represents the probability that Omega has a certain accuracy. We will call this distribution D(p).
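(As an aside, up to a normalizing constant this D(p) is the standard posterior over Omega's accuracy under a uniform prior, a Beta distribution; that identity is mine to point out, not part of the original comment:)

```latex
\[
D(p) \;\propto\; p^{n}(1-p)^{m}
\quad\Longrightarrow\quad
p \mid \text{observations} \;\sim\; \mathrm{Beta}(n+1,\; m+1)
\quad \text{(uniform prior on } p\text{)}.
\]
```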

We then need to consider our two alternatives and select the one that maximizes expected utility. The utility of two boxing is:

U(two box) = p(Omega is wrong) (Value of box B + Value of Box A) + p(Omega is right) (Value of box B + Lesser value Omega puts in A)

U(one box) = p(Omega is wrong) (Value of box B + Lesser value Omega puts in A) + p(Omega is right) (Value of box B + Value of box A)

(Remember that we are considering the possibility that instead of replacing A with 0 dollars, Omega puts some value C dollars in A. All that really matters is the difference in these two values, though.)

With the variables we used, p(Omega is right) is the probability that ey has a certain accuracy, our distribution D(p), times that accuracy, p. p(Omega is wrong) is one minus this. The value of box A is obviously A, the value of box B is B, and the lesser value that Omega puts in box A is C.

So our expected utilities are then a function of p as follows:

U(two box) = (1 - D(p)p)(B+A) + D(p)p(B+C) = [1 - p^n (1-p)^m p] (B+A) + p^n (1-p)^m p (B+C)

U(one box) = (1 - D(p)p)(B+C) + D(p)p(B+A) = [1 - p^n (1-p)^m p] (B+C) + p^n (1-p)^m p (B+A)

All that needs to be done is then to integrate the expected utilities over p from zero to one. Whichever value is greater is the correct choice.
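Here is a minimal sketch of that calculation in Python (standing in for the MATLAB program mentioned above). The values of A, B, C, n, m are example inputs of mine, and I normalize D(p) so that it integrates to one, a step not spelled out above:

```python
import numpy as np

# Sketch of the calculation described above (Python standing in for the MATLAB
# program mentioned earlier). A, B, C, n, m are example inputs, not from the comment,
# and D(p) is normalized here so that it integrates to one.
A, B, C = 1_000_000, 1_000, 0   # box A if one-boxing predicted, box B, box A if two-boxing predicted
n, m = 100, 0                   # observed correct / incorrect predictions by Omega

p = np.linspace(0.0, 1.0, 10_001)
dp = p[1] - p[0]
D = p**n * (1 - p)**m
D = D / (D.sum() * dp)          # normalized D(p)

p_right = D * p                 # p(Omega is right) at each accuracy p, as defined above
p_wrong = 1 - p_right           # p(Omega is wrong)

u_two_box = p_wrong * (B + A) + p_right * (B + C)
u_one_box = p_wrong * (B + C) + p_right * (B + A)

# Integrate the expected utilities over p from zero to one and compare.
U_two = (u_two_box * dp).sum()  # about 10,800 for these inputs
U_one = (u_one_box * dp).sum()  # about 991,200 for these inputs
print("one-box" if U_one > U_two else "two-box")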


Note that this analysis has a number of (fairly obvious and somewhat trivial) assumptions. One, the probability of Omega being right is constant over both one-boxers and two-boxers. Two, one's utility function in money is linear (although compensating for that would not be very difficult). Three, Omega has no more or less information about you than about anyone else about whom ey made this prediction.

This looks like evidential decision theory, which gives the wrong answer in the Smoking Lesion problem.

(Here's a slightly less mind-killing variant: let's say that regularly taking aspirin is correlated with risk of a heart attack, but not because it causes them; in fact, aspirin (in this hypothetical) is good for anyone's heart. Instead, there's an additional risk factor for heart attacks, which also causes discomfort beneath the threshold of full consciousness. People with this risk factor end up being more likely to take aspirin regularly, though they're not able to pinpoint why, and the effect is large enough that the correlation points the "wrong" way. Now if you know all of this and are wondering whether to take aspirin regularly, the calculation you did above would tell you not to take it!)
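A quick numerical sketch of that story (the specific numbers are hypothetical and mine, not the commenter's): a hidden risk factor both raises heart-attack risk and makes people likelier to take aspirin, while the intervention of taking aspirin lowers risk for everyone.

```python
# Hypothetical numbers (mine, not the commenter's) illustrating the aspirin story:
# a hidden risk factor R both raises heart-attack risk and makes people take aspirin,
# while aspirin itself lowers risk for everyone.
p_R = 0.2                                   # prevalence of the risk factor
p_aspirin = {True: 0.9, False: 0.2}         # P(takes aspirin | R)
p_ha = {                                    # P(heart attack | R, aspirin)
    (True, True): 0.32,  (True, False): 0.40,
    (False, True): 0.08, (False, False): 0.10,
}

def p_r(r):
    return p_R if r else 1 - p_R

# Observational (evidential) comparison: condition on actually taking aspirin.
def p_ha_given_aspirin(a):
    num = sum(p_r(r) * (p_aspirin[r] if a else 1 - p_aspirin[r]) * p_ha[(r, a)]
              for r in (True, False))
    den = sum(p_r(r) * (p_aspirin[r] if a else 1 - p_aspirin[r])
              for r in (True, False))
    return num / den

# Causal comparison: intervene on aspirin, leaving the risk factor alone.
def p_ha_do_aspirin(a):
    return sum(p_r(r) * p_ha[(r, a)] for r in (True, False))

print(p_ha_given_aspirin(True), p_ha_given_aspirin(False))  # ~0.21 vs ~0.11: correlation points the "wrong" way
print(p_ha_do_aspirin(True), p_ha_do_aspirin(False))        # 0.128 vs 0.160: aspirin actually helps
```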

We can get down to a discussion of evidential vs. causal decision theory if you want, certainly, but I think that's a bit off topic.

I have a couple of reactions to your point. My initial reaction is that evidential decision theory is superior in the case of Omega because nothing is known about em. Since Omega is a black box, the only thing that can really be done is gather evidence and respond to it.

But more generally, I think your example is somewhat strawman-ish. Just like in the smoking problem, there is other evidence suggesting that aspirin has the opposite effect. Saying that evidential decision theory has to ignore this is pretty unfair to it. Furthermore, you know that you can't really rely on the evidence you have (that aspirin is correlated with heart attacks), because you know you don't have a random sample. Moreover, the evidence that aspirin is actually good for your heart was supposedly generated with some kind of statistical controls. It's the same reason I mentioned in my analysis the assumption that Omega knows nothing more or less about you than ey did about any of the other people. The second you don't have a representative sample, all of your statistics can be thrown out the window.

If the goal is a simple analysis, why not this:

Let average_one_box_value = the average value received by people who chose one box.
Let average_two_box_value = the average value received by people who chose two boxes.

If average_one_box_value > average_two_box_value, then pick one box, else pick two.
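A minimal sketch of that rule (the data format and names are my own invention, not the commenter's):

```python
# Minimal sketch of the rule above, assuming we have a record of what each
# previous player chose and received (variable names and data format are mine).
def choose(previous_results):
    """previous_results: list of (choice, payout) pairs, with choice in {"one", "two"}."""
    def average(which):
        payouts = [payout for choice, payout in previous_results if choice == which]
        return sum(payouts) / len(payouts)
    return "one box" if average("one") > average("two") else "two boxes"

# Example history: 50 one-boxers who got $1,000,000 each, 50 two-boxers who got $1,000 each.
history = [("one", 1_000_000)] * 50 + [("two", 1_000)] * 50
print(choose(history))  # "one box"
```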

As a bonus, this eliminates the need to assume Omega being right is constant over both one boxers and two boxers.

[Edit - just plain wrong, see Misha's comment below] Minor quibble: it's also not necessary to assume linear utility for dollars, just continuous. That is, more money is always better. However, I'm pretty sure that's true in your example as well.


It is definitely necessary to assume linear utility for dollars. For example: suppose your (marginal) utility function for money is U($0) = 0, U($1000) = 1, U($1000000) = 2 (where $1000 and $1000000 are the amounts of money that could be in the two boxes, respectively). Furthermore, suppose Omega always correctly predicts two-boxers, so they always get $1000. However, Omega is very pessimistic about one-boxers, so only 0.2% of them get $1000000, and the average one-box value ends up being $2000.

It is then not correct to say that you should one-box. For you, the expected utility of two-boxing is exactly 1, but the expected utility of one-boxing is 0.2% x 2 = 0.004, and so one-boxing is a really stupid strategy even though the expected monetary gain is twice as high.

Edit: of course, there's an obvious fix: compute the average utility received by people, according to your utility function, and optimize over that.
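To make that concrete, here is a tiny sketch checking those numbers and the suggested fix (the population of 1000 hypothetical players is my own choice; the utilities are the ones from the comment):

```python
# Checking the counterexample above, and the suggested fix of averaging utility
# rather than dollars (utilities from the comment; population numbers are mine).
utility = {0: 0, 1_000: 1, 1_000_000: 2}

# 1000 hypothetical one-boxers, of whom 0.2% get the million; two-boxers always get $1,000.
one_box_payouts = [1_000_000] * 2 + [0] * 998
two_box_payouts = [1_000] * 1000

avg_money = lambda xs: sum(xs) / len(xs)
avg_util = lambda xs: sum(utility[x] for x in xs) / len(xs)

print(avg_money(one_box_payouts), avg_money(two_box_payouts))  # 2000.0 vs 1000.0
print(avg_util(one_box_payouts), avg_util(two_box_payouts))    # 0.004 vs 1.0
```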

It's been a while, but isn't that essentially what I did?

That was my goal: the same but less verbose, and without needing to factor out probabilities that are later factored back in.

My question was unclear, let me try again; (Why) is it necessary to go through all the work to arrive at a probability that Omega will predict you correctly?

[Edit question: is there any way to do strike-through text in Markdown? Or embed HTML tags?]


Upvoted for daring to revisit a rather controversial issue.