
V_V comments on Smoking lesion as a counterexample to CDT - Less Wrong Discussion

6 Post author: Stuart_Armstrong 26 October 2012 12:08PM


Comment author: V_V 26 October 2012 01:57:07PM *  8 points [-]

It's questionable whether the smoking lesion problem is a valid counterexample to EDT in the first place. It can be argued that the problem is underspecified, and it requires additional assumptions for EDT to determine an outcome:

A reasonable assumption is that the rare gene affects smoking only through its action on Susan's preferences: "Susan has the genetic lesion" and "Susan smokes" are conditionally independent events given "Susan likes to smoke". Since the agent is assumed to know their own preferences, the decision to smoke given that Susan likes to smoke doesn't increase the probability that she has the genetic lesion, hence EDT correctly chooses "smoke".
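A minimal sketch of this first case (all probabilities and utilities are hypothetical, chosen only to illustrate the structure): because the decision is screened off from the lesion by the preference, conditioning on the act doesn't move the lesion probability, and EDT smokes.

```python
# EDT on the smoking lesion, assuming the lesion influences smoking
# only via the preference. Given that Susan likes to smoke, the
# decision itself carries no further evidence about the lesion.

P_LESION_GIVEN_LIKES = 0.2    # hypothetical number
P_CANCER_GIVEN_LESION = 0.9   # hypothetical number
P_CANCER_NO_LESION = 0.01     # hypothetical number
U_SMOKE = 10                  # utility of smoking itself
U_CANCER = -1000              # disutility of cancer

def expected_utility(smoke: bool) -> float:
    # Conditional independence: P(lesion | likes, smoke) = P(lesion | likes),
    # so the cancer probability is the same whether or not Susan smokes.
    p_lesion = P_LESION_GIVEN_LIKES
    p_cancer = (p_lesion * P_CANCER_GIVEN_LESION
                + (1 - p_lesion) * P_CANCER_NO_LESION)
    return (U_SMOKE if smoke else 0) + p_cancer * U_CANCER

decision = "smoke" if expected_utility(True) > expected_utility(False) else "don't smoke"
print(decision)  # -> smoke
```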

But consider a different set of assumptions: an evil Omega examines Susan's embryo before she is even born, determines whether she will smoke, and, if she will, puts in her DNA an otherwise rare genetic lesion that will likely give her cancer but causes no other detectable effect.

Please note that this is not a variation of the smoking lesion problem, it's merely a specification which is still perfectly consistent with the original formulation: the genetic lesion is positively correlated both with smoking and cancer.

What decision does EDT choose in this case? It chooses "Don't smoke", and arguably correctly so, since with these assumptions the problem is essentially a rephrasing of Newcomb's problem, where "Smoke" = "Two-box" and "Don't smoke" = "One-box".
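The same toy calculation under the evil-Omega specification (again with hypothetical numbers) shows why EDT flips: here the lesion tracks the decision itself, so conditioning on the act does shift the lesion probability.

```python
# EDT under the evil-Omega specification: Omega implants the lesion
# iff it predicts smoking (assume a perfect predictor), so the act
# is evidence about the lesion.

P_CANCER_GIVEN_LESION = 0.9   # hypothetical number
P_CANCER_NO_LESION = 0.01     # hypothetical number
U_SMOKE = 10
U_CANCER = -1000

def expected_utility(smoke: bool) -> float:
    # Perfect prediction: P(lesion | smoke) = 1, P(lesion | don't smoke) = 0.
    p_lesion = 1.0 if smoke else 0.0
    p_cancer = (p_lesion * P_CANCER_GIVEN_LESION
                + (1 - p_lesion) * P_CANCER_NO_LESION)
    return (U_SMOKE if smoke else 0) + p_cancer * U_CANCER

decision = "smoke" if expected_utility(True) > expected_utility(False) else "don't smoke"
print(decision)  # -> don't smoke
```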

Comment author: Stuart_Armstrong 26 October 2012 02:06:55PM 2 points [-]

It's questionable whether the smoking lesion problem is a valid counterexample to EDT in the first place. It can be argued that the problem is underspecified, and it requires additional assumptions for EDT to determine an outcome:

I agree.

Comment author: Alejandro1 26 October 2012 06:54:26PM *  0 points [-]

I agree with this analysis. The most interesting case is a third variation, in which there is no evil Omega, but the organic genetic lesion causes not only a preference for smoking but also weakness in resisting that preference, a propensity for rationalizing oneself into smoking, etc. We can assume this happens in such a way that "Susan actively chooses to smoke" is still new positive evidence to a third-party observer that Susan has the lesion, over and above the previous evidence provided by knowledge of Susan's preferences (conscious reasonings, etc.) before she actively makes the choice. I think in this case Susan should treat the case as a Newcomb problem and choose not to smoke, but it is less intuitive without an Omega calling the shots.

Comment author: Khoth 26 October 2012 06:58:31PM 0 points [-]

In that case she should still smoke. There's no causal arrow going from "choosing to smoke" to "getting cancer".

Comment author: Alejandro1 26 October 2012 07:01:03PM 0 points [-]

There is no causal arrow in Newcomb from choosing two boxes to the second one being empty.

Comment author: Vaniver 27 October 2012 01:16:43AM 0 points [-]

Functionally, there is; it's called "Omega is a perfect predictor."

Comment author: Alejandro1 27 October 2012 02:07:30AM 0 points [-]

See my reply to Khoth. You can call this a functional causal arrow if you want, but you can reanalyze it as a standard causal arrow from your original state to both your decision and (through Omega) the money. The same thing happens in my version of the smoking problem.

Comment author: Vaniver 27 October 2012 05:29:33AM *  0 points [-]

Suppose I'm a one-boxer, Omega looks at me, and is sure that I'm a one-boxer. But then, after Omega fills the boxes, Mentok comes by, takes control of me, and forces me to two-box. Is there a million dollars in the second box?

Comment author: Alejandro1 27 October 2012 06:18:27PM 2 points [-]

Er… yes? Assuming Omega could not foresee Mentok coming in and changing the situation? No, if he could foresee this, but then the relevant original state includes both me and Mentok. I'm not sure I see the point.

Let's take a step back: what are we discussing? I claimed that my version of the smoking problem, in which the gene is correlated with your decision to smoke (not just with your preference for it), is like the Newcomb problem, and that if you are a one-boxer in the latter you should not smoke in the former. My argument for this was that both cases are isomorphic in that there is an earlier causal node causing, through separate channels, both your decision and the payoff. What is the problem with this viewpoint?

Comment author: Vaniver 27 October 2012 07:42:26PM 0 points [-]

Er… yes? Assuming Omega could not foresee Mentok coming in and changing the situation? No, if he could foresee this, but then the relevant original state includes both me and Mentok. I'm not sure I see the point.

Then Omega is not a perfect predictor, and thus there's a contradiction in the problem statement.

My argument for this was that both cases are isomorphic in that there is an earlier causal node causing, through separate channels, both your decision and the payoff.

The strength of the connection between the causal nodes makes a big difference in practice. If the smoking gene doesn't merely make you more likely to smoke, but makes it absolutely certain that you will smoke, why represent those as separate nodes?

Comment author: Alejandro1 27 October 2012 08:03:32PM 1 point [-]

I am sorry, I cannot understand what you are getting at in either of your paragraphs.

In the first one, are you arguing that the original Newcomb problem is contradictory? The problem assumes that Omega can predict your behavior. Presumably this is not done magically but by knowing your initial state and running some sort of simulation. Here the initial state is defined as everything that affects your choice (otherwise Omega wouldn't be accurate), so if there is a Mentok, his initial state is included as well. I fail to see any contradiction.

In the second one, I agree with "The strength of the connection between the causal nodes makes a big difference in practice." but fail to see the relevance (I would say we are assuming in these problems that the connection is very strong in both Newcomb and Smoking), and cannot parse at all your reasoning in the last sentence. Could you elaborate?

Comment author: Khoth 26 October 2012 07:13:35PM 0 points [-]

Maybe I worded it badly. What I meant was, in Newcomb's problem, Omega studies you to determine the decision you will make, and puts stuff in the boxes based on that. In the lesion problem, there's no mechanism by which the decision you make affects what genes you have.

Comment author: Alejandro1 26 October 2012 07:22:20PM 0 points [-]

Omega makes the prediction by looking at your state before setting the boxes. Let us call P the property of your state that is critical for his decision. It may be the whole microscopic state of your brain and environment, or it might be some higher-level property like "firm belief that one-boxing is the correct choice". In any case, there must be such a P, and it is from P that the causal arrow to the money in the box goes, not from your decision. Both your decision and the money in the box are correlated with P. Likewise, in my version of the smoking problem both your decision to smoke and cancer are correlated with the genetic lesion. So I think my version of the problem is isomorphic to Newcomb.
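This causal structure can be sketched as a toy model (payoff amounts are the standard Newcomb values; the two-valued "disposition" standing in for the property P is a deliberate simplification): P determines the decision, and P, via Omega, determines the box contents, so decision and money are correlated without any causal arrow between them.

```python
# Toy Newcomb model: the property P of your prior state causes both
# your decision and (through Omega's prediction) the box contents.

payoffs = {}
for disposition in ("one-boxer", "two-boxer"):
    decision = disposition                    # P determines the decision
    box_full = (disposition == "one-boxer")   # Omega reads P and fills the opaque box
    payoffs[disposition] = ((1_000_000 if box_full else 0)
                            + (1_000 if decision == "two-boxer" else 0))

print(payoffs)  # one-boxers end up with $1,000,000; two-boxers with $1,000
```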