Psychohistorian3 comments on Newcomb's Problem and Regret of Rationality - Less Wrong

Post author: Eliezer_Yudkowsky 31 January 2008 07:36PM

Comment author: Psychohistorian3 31 January 2008 11:08:06PM 22 points

This dilemma seems like it can be reduced to:

1. If you take both boxes, you will get $1000.
2. If you only take box B, you will get $1M.

Which is a rather easy decision.
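For reference, here is the full payoff matrix behind that reduction, assuming the standard amounts ($1000 in box A, which you always get if you take both boxes; $1M in box B iff Omega predicts one-boxing):

                        Predicted: one-box    Predicted: two-box
    Take both boxes         $1,001,000             $1,000
    Take only box B         $1,000,000                 $0

Given a predictor assumed to be correct, the off-diagonal cells never occur, which is what collapses the problem to the two lines above.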

There's a seemingly impossible but vital premise, namely, that your action was already known before you acted. Even if this is completely impossible, it's a premise, so there's no point arguing it.

Another way of thinking of it is that, when someone says, "The boxes are already there, so your decision cannot affect what's in them," he is wrong. It has been assumed that your decision does affect what's in them, so the fact that you cannot imagine how that is possible is wholly irrelevant.

In short, I don't understand how this is controversial when the decider has all the information that was provided.

Comment author: Kenny 02 February 2013 07:27:16PM 1 point

Actually, we don't know that our decision affects the contents of Box B. In fact, we're told that it contains a million dollars if and only if Omega predicts we will only take Box B.

It is possible that we could pick Box B even though Omega predicted we would take both boxes. Omega has only been observed to have predicted correctly 100 times. And if we are sufficiently doubtful whether Omega would predict that we would take only Box B, it would be rational to take both boxes.

Only if we're sufficiently confident in Omega's predictive accuracy can we confidently one-box and rationally expect Box B to contain a million dollars.

Comment author: someonewrongonthenet 19 June 2013 06:38:29PM 1 point

You're saying that we live in a universe where Newcomb's problem is impossible because the future doesn't affect the past. I'll rephrase this problem in such a way that it seems plausible in our universe:

I've got really nice scanning software. I scan your brain down to the molecule, and make a virtual representation of it on a computer. I run virtual-you in my software, and give virtual-you Newcomb's problem. Virtual-you answers, and I arrange my boxes according to that answer.

I come back to real-you. You've got no idea what's going on. I explain the scenario to you and I give you Newcomb's problem. How do you answer?
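A minimal sketch of this setup in Python (the function name and payoffs are illustrative stand-ins, not anything from the original problem):

```python
# Toy model of the scanning scenario: the same decision procedure runs
# twice, once as "virtual-you" (to fill the boxes), once as "real-you".
A, B = 1_000, 1_000_000  # standard Newcomb amounts

def decision_procedure():
    # Stand-in for the scanned brain; returns "one-box" or "two-box".
    return "one-box"

# Step 1: the scanner runs virtual-you and arranges the boxes accordingly.
prediction = decision_procedure()
box_b = B if prediction == "one-box" else 0

# Step 2: real-you faces the boxes. Being molecule-identical to the scan,
# real-you runs the same procedure and necessarily makes the same choice.
choice = decision_procedure()
winnings = box_b + (A if choice == "two-box" else 0)
print(winnings)  # 1000000 for a one-boxer, 1000 for a two-boxer
```

Whatever `decision_procedure` returns, both runs return it, so a two-boxer always finds box B empty.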

This particular instance of the problem does have an obvious, relatively uncomplicated solution: you have no way of knowing whether you are playing the part of the simulation or the part of the flesh-and-blood version. Since you know that both will act identically, one-boxing is the superior option.

If for any reason you suspect that the Predictor can reach a sufficient level of accuracy to justify one-boxing, you one-box. It doesn't matter what sort of universe you are in.

Comment author: answer 19 June 2013 06:53:19PM 2 points

Not that I disagree with the one-boxing conclusion, but this formulation requires physically reducible free will (which has recently been brought back into discussion). It would also require knowing the position and momentum of a lot of particles to arbitrary precision, which is provably impossible by the uncertainty principle.

Comment author: someonewrongonthenet 19 June 2013 07:56:30PM 4 points

We don't need a perfect simulation for the purposes of this problem in the abstract - we just need a situation such that the problem-solver assigns better-than-chance predicting power to the Predictor, and a sufficiently high utility differential between winning and losing.

The "perfect whole brain simulation" is an extreme case which keeps things intuitively clear. I'd argue that any form of simulation which performs better than chance follows the same logic.

The only way to escape the conclusion via simulation is if you know something that Omega doesn't - for example, you might have some secret external factor modify your "source code" and alter your decision after Omega has finished examining you. Beating Omega essentially means that you need to keep your brain-state in such a form that Omega can't deduce that you'll two-box.

As Psychohistorian3 pointed out, the predictive power you've assigned to Omega is built into the problem. Your estimate of the probability that you will succeed in deception, via the aforementioned method or any other, is fixed by the problem.

In the real world, you are free to assign whatever probability you want to your ability to deceive Omega's predictive mechanisms, which is why this problem is counter intuitive.

Comment author: Eliezer_Yudkowsky 19 June 2013 08:29:38PM 5 points

Also: You can't simultaneously claim that any rational being ought to two-box, this being the obvious and overdetermined answer, and also claim that it's impossible for anyone to figure out that you're going to two-box.

Comment author: answer 19 June 2013 08:32:58PM 2 points

Right, any predictor with at least a 50.05% accuracy is worth one-boxing upon (well, maybe a higher percentage for those with concave utility functions in money). A predictor with sufficiently high accuracy that it's worth one-boxing isn't unrealistic or counterintuitive at all in itself, but it seems (to me at least) that many people reach the right answer for the wrong reason: the "you don't know whether you're real or a simulation" argument. Realistically, while backwards causality isn't feasible, neither is precise mind duplication. The decision to one-box can be rationally reached without those reasons: you choose to be the kind of person to (predictably) one-box, and as a consequence of that, you actually do one-box.
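For concreteness, here is where that 50.05% figure comes from, as a minimal sketch assuming the standard payoffs and a risk-neutral (linear-in-money) agent:

```python
# Expected value of each choice as a function of predictor accuracy p.
A, B = 1_000, 1_000_000  # box A; box B is filled if predicted to one-box

def ev_one_box(p):
    # Box B is full iff the predictor correctly foresaw one-boxing.
    return p * B

def ev_two_box(p):
    # You always get box A; box B is full only if the predictor erred.
    return A + (1 - p) * B

# One-boxing wins once p*B > A + (1-p)*B, i.e. p > (A + B) / (2*B).
print((A + B) / (2 * B))  # 0.5005, i.e. 50.05%
```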

Comment author: someonewrongonthenet 19 June 2013 08:48:37PM 2 points

Oh, that's fair. I was thinking of "you don't know whether you're real or a simulation" as an intuitive way to prove the case for all "conscious" simulations. It doesn't have to be perfect - you could just as easily be an inaccurate simulation, with no way to know that you are a simulation and no way to know that you are inaccurate with respect to an original.

I was trying to get people to generalize downwards from the extreme intuitive example: even with decreasing accuracy, as the simulation becomes so rough as to lose "consciousness" and "personhood", the argument keeps holding.

Comment author: answer 19 June 2013 09:01:43PM 3 points

Yeah, the argument would hold just as much with an inaccurate simulation as with an accurate one. The point I was trying to make wasn't so much that the simulation isn't going to be accurate enough, but that a simulation argument shouldn't be a prerequisite to one-boxing. If the experiment were performed with human predictors (let's say a psychologist who predicts correctly 75% of the time), one-boxing would still be rational despite knowing you're not a simulation. I think LW relies on computationalism as a substitute for actually being reflectively consistent in problems such as these.

Comment author: someonewrongonthenet 19 June 2013 10:09:07PM 2 points

The trouble with real-world examples is that we start introducing knowledge into the problem that we wouldn't ideally have. The psychologist's 75% success rate doesn't necessarily apply to you - in the real world you can make a different estimate than the one that is given. If you're an actor or a poker player, you'll have a much different estimate of how things are going to work out.

Psychologists are just messier versions of brain scanners - the fundamental premise is that they are trying to access your source code.

And what's more - suppose the predictions weren't made by accessing your source code? The direction of causality does matter. If Omega can predict the future, the causal lines flow backwards from your choice to Omega's past move. If Omega is scanning your brain, the causal lines go from your brain-state to Omega's decision. If there are no causal lines between your brain/actions and Omega's choice, you always two-box.

Real-world example: what if I replaced your psychologist with a sociologist, who predicted you with above-chance accuracy using only your demographic factors? In this scenario, you ought to two-box - if you disagree, let me know and I can explain myself.

In the real world, you don't know to what extent your psychologist is using sociology (or some other factor outside your control). People can't always articulate why, but their intuition (correctly) begins to make them deviate from the stated success-rate estimate as more of these real-world variables get introduced.

Comment author: answer 19 June 2013 10:29:17PM 1 point

True, the 75% would merely be past history (and I am in fact a poker player). Indeed, if the factors used were entirely or mostly factors beyond my control (and I knew this), I would two-box. However, two-boxing is not necessarily optimal just because you don't know the mechanics of the predictor's methods. In the limited predictor problem, the predictor doesn't use simulations/scanners of any sort but instead uses logic, and yet one-boxers still win.

Comment author: someonewrongonthenet 19 June 2013 10:36:38PM 2 points

Agreed. To add on to this:

predictor doesn't use simulations/scanners of any sort but instead uses logic, and yet one-boxers still win.

It's worth pointing out that Newcomb's problem always takes the form of Simpson's paradox. The one-boxers beat the two-boxers as a whole, but among agents predicted to one-box, the two-boxers win, and among agents predicted to two-box, the two-boxers win.

The only reason to one-box is when your actions (which include both the final decision and the thoughts leading up to it) affect Omega's prediction. The general rule is: "Try to make Omega think you're one-boxing, but two-box whenever possible." It's just that in Newcomb's problem proper, fulfilling the first imperative requires actually one-boxing. (A sketch of the paradox structure follows below.)
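A minimal sketch of that Simpson's-paradox structure, assuming the standard payoffs and a hypothetical 90%-accurate predictor:

```python
A, B = 1_000, 1_000_000  # standard Newcomb amounts
ACC = 0.9                # illustrative predictor accuracy

def payoff(choice, prediction):
    # Box B is filled iff Omega predicted one-boxing; box A is yours
    # whenever you take both boxes.
    box_b = B if prediction == "one-box" else 0
    box_a = A if choice == "two-box" else 0
    return box_a + box_b

# Within each prediction group, two-boxing dominates by exactly $1000:
for prediction in ("one-box", "two-box"):
    print(prediction, payoff("one-box", prediction), payoff("two-box", prediction))

# But predictions track choices with probability ACC, so the overall
# expected payoff favors one-boxing (900,000 vs 101,000 here):
for choice in ("one-box", "two-box"):
    other = "two-box" if choice == "one-box" else "one-box"
    print(choice, ACC * payoff(choice, choice) + (1 - ACC) * payoff(choice, other))
```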

Comment author: Decius 19 June 2013 09:24:20PM 1 point

any predictor with at least a 50.05% accuracy is worth one-boxing upon

Assuming that you have no information other than the base rate, and that it's equally likely to be wrong either way.

Comment author: Nornagest 19 June 2013 07:19:33PM 1 point

Another way of thinking of it is that, when someone says, "The boxes are already there, so your decision cannot affect what's in them," he is wrong. It has been assumed that your decision does affect what's in them, so the fact that you cannot imagine how that is possible is wholly irrelevant.

Your decision doesn't affect what's in the boxes, but your decision procedure does, and that already exists when the question's being assigned. It may or may not be possible to derive your decision from the decision procedure you're using in the general case -- I haven't actually done the reduction, but at first glance it looks cognate to some problems that I know are undecidable -- but it's clearly possible in some cases, and it's at least not completely absurd to imagine an Omega with a very high success rate.
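One way to make that concrete: the decision procedure is an artifact that exists, and can be inspected, before the boxes are filled. A hedged sketch (deriving an arbitrary program's output is undecidable in general, per the halting problem, but easy for simple cases):

```python
import inspect

def my_decision_procedure():
    # A trivially analyzable agent: always one-boxes.
    return "one-box"

# The procedure exists as source code before any prediction is made, so
# no backwards causation is needed - only access to this artifact.
print(inspect.getsource(my_decision_procedure))

# For a procedure this simple, Omega can just run it (or read it). In
# general, deciding what an arbitrary program returns is undecidable,
# which is why "very high success rate" rather than perfection is the
# realistic assumption.
prediction = my_decision_procedure()
print(prediction)
```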

As best I can tell, most of the confusion here comes from a conception of free will that decouples the decision from the procedure leading to it.

Comment author: TheOtherDave 19 June 2013 07:49:38PM 1 point

most of the confusion here comes from a conception of free will that decouples the decision from the procedure leading to it.

Yeah, agreed. I often describe this as Newcomb's problem being more about what kind of person I am than it is about what decision I make, but I like your phrasing better.