Psychohistorian comments on Confusion about Newcomb is confusion about counterfactuals - Less Wrong

Post author: AnnaSalamon 25 August 2009 08:01PM




Comment author: Psychohistorian 25 August 2009 10:41:33PM 5 points

It's an objection to Newcomb's problem specifically, not to causal decision theory generally. My position may be a bit too complex for a comment, but here's the gist.

Newcomb's assumes that deciding A will result in universe X, and deciding B will result in universe Y. It uses the black box of Omega's prediction process to forbid us from calling the connection causal, thus preventing CDT from working, but it requires that our decision be causal, because if it weren't there would be no reason not to two-box. Thus, it assumes causation but prohibits us from calling it causation. If we actually understood how our choosing to pick up the opaque box would result in it being empty, the problem would be entirely trivial. Thus, Newcomb's disproves CDT by assuming causation-that-is-not-causation, and such an assumption does not seem to actually prove anything about the world.

The smoking lesion problem has the same flaw in reverse. It requires EDT to assume that Susan's choice is relevant to whether she gets cancer, but it also assumes that Susan's choice is not relevant to her getting cancer. This linguistic doublethink is all that makes the problem difficult.

In Newcomb's, a full understanding of how Omega's prediction works should make the problem trivial, because that understanding could be incorporated into CDT. If we don't assume that the prediction does work, the problem doesn't work; there's no reason not to use CDT if Omega can't predict systematically. In the Smoking Lesion, a proper understanding of the co-correlate (the common cause) that actually does produce cancer would make the problem tractable in EDT, since it would be obvious that her chance of getting cancer is independent of her choice to smoke. If we don't assume that such a co-correlate exists, the problem doesn't work; EDT says Susan shouldn't smoke, which basically makes sense if the correlation has a meaningful chance of being causal. This is what I mean by calling it a linguistic problem: language allows us to express these examples with no apparent contradiction, but the contradiction is there if we break it down far enough.

Comment author: orthonormal 25 August 2009 11:13:03PM 1 point

What if we ran a contest of decision theories on Newcomb's problem in a similar fashion to Axelrod's test of iterated PD programs? I (as Omega) would ask you to submit an explicit deterministic program X that's going to face a gauntlet of simple decision theory problems (including some Newcomb-like problems), and the payoffs it earns will be yours at the end.

In this case, I don't think you'd care (for programming purposes) whether I analyze X mathematically to figure out whether it 1- or 2-boxes, or whether I run simulations of X to see what it does, or anything else, so long as you have confidence that I will accurately predict X's choices (and play honestly as Omega). And I'm confident that if the payoffs are large enough to matter to you, you will not submit a CDT program or any 2-boxing program.
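The contest orthonormal describes can be sketched in a few lines of Python. This is purely illustrative: the function names, the payoff amounts ($1,000,000 / $1,000), and the assumption that Omega predicts simply by running the submitted program in simulation are all mine, not part of the original proposal.

```python
def run_newcomb(program):
    """Omega simulates `program` to predict its choice, fills the
    boxes accordingly, then plays the real round against it."""
    prediction = program()          # Omega's simulation of the program
    opaque = 1_000_000 if prediction == "one-box" else 0
    transparent = 1_000
    choice = program()              # the real (deterministic) choice
    return opaque if choice == "one-box" else opaque + transparent

def one_boxer():
    return "one-box"

def two_boxer():                    # what a CDT-style program submits
    return "two-box"

print(run_newcomb(one_boxer))       # 1000000
print(run_newcomb(two_boxer))       # 1000
```

Because a submitted program is deterministic, the simulated run and the real run cannot diverge, so any two-boxing program reliably earns less.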

So it seems to me that the 'linguistic confusion' you face might have more to do with the way your current (actual, horrendously complicated) decision process feels from inside than with an inherent contradiction in Newcomb's Problem.

Comment author: Psychohistorian 26 August 2009 01:17:51AM 1 point

"going to face a gauntlet of simple decision theory problems (including some Newcomb-like problems)"

This is the issue. I suspect that Newcomb-like problems aren't meaningfully possible. Once you "explain" the problem to a machine, its choice actually causes the box to be full or empty. Omega's prediction functions as causation-without-being-causal, which makes some sense to our minds, but does not seem like something a machine would understand. In other words, the reason CDT does not work for a machine is that the inputs are wrong, not the algorithm. A machine that interpreted the information correctly would understand its actions as causal even if it didn't know how they were, because it's a key assumption of the problem that they are functionally causal. If the program does not have that key assumption available to it, it should rationally two-box, so it's totally unsurprising that prohibiting it from "understanding" the causal power of its decision results in it making the wrong decision.

Your counterexample is also problematic because I understand your prediction mechanism; I know how you will analyze my program, though there's some small chance you'll read the code wrong and come to the wrong conclusion, much like there's some chance Omega gets it wrong. Thus, there's a directly apparent causal connection between the program's decision to one-box and you putting the money in that box. CDT thus appears to work, since "program one-boxes" directly causes one-boxing to be the correct strategy. In order to make CDT not work, you'd need to arbitrarily prevent the program from incorporating this fact. And, if I were really, really smart (and if I cared enough), I'd design a program that you would predict would one-box, but actually two-boxed when you put it to the test. That is the winningest strategy possible (if it is actually possible); the only reason we never consider it with Omega is because it's assumed it wouldn't work.

Comment author: byrnema 26 August 2009 03:17:31AM 4 points

At this moment, I agree with Psychohistorian that the apparent conundrum is a result of forcing a distinction about causality when there really isn't one.

On the one hand, we say that the contents of the boxes are not directly, causally related to our choice to one box or two box. (We assert this, I suppose, because of the separation in time between the events, where the boxes are filled before we make our choice.)

On the other hand, we say that Omega can predict with great accuracy what we choose. This implies two things: our decision algorithm for making the choice is pre-written and deterministic, and Omega has access to our decision-making algorithm.

Omega bases the contents of the box on the output of our decision making algorithm (that he simulates at time (t-y)) so the contents of the box are directly, causally related to the output of our decision algorithm.

It seems wrong to say that the contents of the box are not causally related to the output of our decision algorithm at time t (i.e., our choice), but are causally related to the output of the decision algorithm at time (t-y), even though the decision algorithm is deterministic and hasn't changed.

In a deterministic system in which information isn't lost as time progresses, the time separation between events (positive or negative) makes no difference to the causality: "a causes b" if b depends on a (even if b happens before a). For example, afternoon rain will cause me to bring my umbrella in the morning, in an information-complete system.
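byrnema's timing point can be made concrete with a small sketch. The names and payoff amount are illustrative assumptions; the only load-bearing premise, taken from the comment above, is that the decision algorithm is deterministic and unchanged between (t-y) and t.

```python
def decision_algorithm():
    # Joe's pre-written, deterministic decision procedure
    return "one-box"

# Omega's simulation at time (t - y)
prediction_at_t_minus_y = decision_algorithm()
opaque_box = 1_000_000 if prediction_at_t_minus_y == "one-box" else 0

# The actual choice at time t: same algorithm, so necessarily same output
choice_at_t = decision_algorithm()

# The box contents depend on the algorithm's output, and that output is
# identical at both times, so the contents track "the choice" in the
# dependence sense regardless of temporal order.
assert prediction_at_t_minus_y == choice_at_t
print(opaque_box)  # 1000000
```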

Later edit: This represents the location in {comment space}-time where (I think) I've understood the solution to Newcomb's problem, in the context of the substantial clues found here on LW. I had another comment in this thread explaining my solution that I've deleted. I don't want to distract from Anna's sequence (and I predict the usual philosophical differences) but I've kept my deleted comment in case there are more substantial differences.

I would say that the ambiguity/double think about causality is actually the feature of Newcomb's problem that helps us reduce what causality is.

Comment author: AnnaSalamon 26 August 2009 08:17:32PM 1 point

"This represents the location in {comment space}-time where (I think) I've understood the solution to Newcomb's problem, in the context of the substantial clues found here on LW. I had another comment in this thread explaining my solution that I've deleted. I don't want to distract from Anna's sequence"

I'd say go ahead and distract. I'd love to see your solution.

Comment author: byrnema 27 August 2009 11:58:59AM 0 points

How about if I send you my solution as a message? You can let me know if I'm on the right track or not...

Comment author: AndyWood 28 August 2009 02:45:28AM 5 points

Of all the comments in this block, byrnema's seems the most on-track, having the most ingredients of the solution, in my view. A few points:

I prefer to suppose that Omega has a powerful, detailed model of the local world, or whatever parts of the universe are ultimately factors in Joe's decision. It isn't just the contents of Joe's brain. Omega's track record is strong evidence that his model takes enough into account.

I do not see any backwards-in-time causality in this problem at all. That Joe's state causes both Omega's prediction and Joe's choice is not the same as the choice causing the prediction.

In fact, that's what seems wrong to me about most of the other comments right here. People keep talking about the choice causing something, but the problem says nothing about this at all. Joe's choice doesn't need to cause anything. Instead, Joe's choice and Omega's (prediction->money-hiding) have common causes.

The way I see it, the sleight-of-hand in this problem occurs when we ask what Joe "should" do. I think focusing on Joe's choice leads people to imagine that the choice is free in the sense of being unconnected to Omega's prediction (since the prediction has already happened). But it is not unconnected, because our choices are not un-caused. Neither are they connected backwards-in-time. Omega's actions and Joe's choice are connected because they share common causes.

EDIT: To make this a bit more concrete: Make this a question of what you "should" do if you meet Omega someday. Consider that your decision might be highly influenced by all the musings on the blog, or on Eliezer's or another poster's arguments. If these arguments convince you that you should one-box, then they also cause Omega to predict that you'll one-box. If these arguments fail to convince you, then that circumstance also causes Omega to predict you will two-box.
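The common-cause structure in this edit can be sketched as follows. The variable `convinced_by_arguments` and the function names are hypothetical stand-ins: the single upstream state (whether the arguments convinced you) determines both your choice and Omega's prediction, with no backward-in-time link.

```python
def decide(joe_state):
    # Joe's deterministic decision procedure
    return "one-box" if joe_state["convinced_by_arguments"] else "two-box"

def omega_predict(joe_state):
    # Omega models the same upstream state; no backward causation needed
    return decide(joe_state)

for convinced in (True, False):
    state = {"convinced_by_arguments": convinced}
    # Choice and prediction always agree because they share a common cause
    assert decide(state) == omega_predict(state)
print("prediction matches choice in every case")
```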

You've got to resist thinking of the machinery of human decision-making as primary or transcendent. See Thou Art Physics.