
ike comments on Newcomb versus dust specks - Less Wrong Discussion

-1 Post author: ike 12 May 2016 03:02AM


Comment author: ike 13 May 2016 07:33:27PM *  0 points [-]

I may be misunderstanding something, but isn't the standard LW position on smoking to smoke even if the gene's correlation to smoking and cancer is 1?

As long as the predictor doesn't cause anything but merely informs, it's equivalent to the gene. One-boxing is correct because your choice causes the money to be there; smoking is correct because your choice doesn't cause cancer.
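The distinction ike draws here can be made concrete with a toy expected-utility calculation. This is only a sketch: the probabilities and utilities are illustrative numbers of my own, not anything specified in the thread.

```python
# Toy smoking-lesion model. All numbers are illustrative assumptions.
# A hidden lesion causes both a taste for smoking and cancer;
# smoking itself has no causal effect on cancer.

P_LESION = 0.5                 # prior probability of the lesion
P_CANCER_GIVEN_LESION = 0.9
P_CANCER_NO_LESION = 0.01
P_SMOKE_GIVEN_LESION = 0.95    # the lesion makes smoking likely
P_SMOKE_NO_LESION = 0.05

U_SMOKE = 100                  # enjoyment of smoking
U_CANCER = -10_000             # disutility of cancer

def cdt_value(smoke: bool) -> float:
    """CDT: intervene on 'smoke'; the lesion (and hence the cancer
    risk) is causally upstream and unaffected, so cancer probability
    uses the prior over lesions either way."""
    p_cancer = (P_LESION * P_CANCER_GIVEN_LESION
                + (1 - P_LESION) * P_CANCER_NO_LESION)
    return (U_SMOKE if smoke else 0) + p_cancer * U_CANCER

def edt_value(smoke: bool) -> float:
    """EDT: condition on the act; smoking is evidence of the lesion."""
    p_smoke = (P_LESION * P_SMOKE_GIVEN_LESION
               + (1 - P_LESION) * P_SMOKE_NO_LESION)
    p_act = p_smoke if smoke else 1 - p_smoke
    p_lesion_given_act = (P_LESION * (P_SMOKE_GIVEN_LESION if smoke
                                      else 1 - P_SMOKE_GIVEN_LESION)) / p_act
    p_cancer = (p_lesion_given_act * P_CANCER_GIVEN_LESION
                + (1 - p_lesion_given_act) * P_CANCER_NO_LESION)
    return (U_SMOKE if smoke else 0) + p_cancer * U_CANCER

# CDT prefers smoking (cancer risk is the same either way, and you
# get the enjoyment); EDT prefers abstaining, because smoking is
# bad news about the lesion.
print(cdt_value(True) > cdt_value(False))   # True
print(edt_value(True) > edt_value(False))   # False
```

The divergence exists only because the act is correlated with, but does not cause, the bad outcome; if smoking did cause cancer, both calculations would agree.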

Comment author: OrphanWilde 16 May 2016 04:03:10PM 0 points [-]

I may be misunderstanding something, but isn't the standard LW position on smoking to smoke even if the gene's correlation to smoking and cancer is 1?

If the mutual correlation to both is 1, you will smoke if and only if you have the gene, and you will have the gene if and only if you smoke - in which case you shouldn't smoke. At the point at which the gene is a perfect predictor, if you have a genetic test and you don't have the gene, and then smoke - then the genetic test produced a false negative. Perfect predictors necessarily make a mess of causality.

Comment author: ike 16 May 2016 05:09:27PM 0 points [-]

you will smoke if and only if you have the gene, and you will have the gene if and only if you smoke - in which case you shouldn't smoke

This implicitly assumes EDT.

At the point at which the gene is a perfect predictor, if you have a genetic test and you don't have the gene, and then smoke

But that's not what CDT counterfactuals do. You cut off previous nodes. As the choice to smoke doesn't causally affect the gene, smoking doesn't counterfactually contradict the prediction. If you would actually smoke, then yes, but counterfactuals don't imply there's any chance of it happening in reality.

Comment author: OrphanWilde 16 May 2016 06:30:43PM 2 points [-]

This implicitly assumes EDT.

No it doesn't. It assumes a "perfect predictor" is what it is. I don't give a damn about evidence - we're specifying properties of a universe here.

But that's not what CDT counterfactuals do.

CDT assumes causality makes sense in the universe. Your hypotheticals don't take place in a universe with the kind of causality causal decision theory depends upon.

You cut off previous nodes. As the choice to smoke doesn't causally affect the gene, smoking doesn't counterfactually contradict the prediction.

In the case of a perfect predictor, yes, smoking specifies which gene you have. You don't get to say "Everybody who smokes has this gene" as a property of the universe, and then pretend to be an exception to a property of the universe because you have a bizarre and magical agency that gets to bypass properties of the universe. You're a part of the universe; if the universe has a law (which it does, in our hypotheticals), the law applies to you, too.

We have a perfect predictor. We do something the perfect predictor predicted we wouldn't. There is a contradiction there, in case you didn't notice; either it's not, in fact, the perfect predictor we specified, or we didn't do the thing. One or the other. And our hypothetical universe is constructed such that the perfect predictor is a perfect predictor; therefore, we don't get to violate its predictions.

Comment author: ike 16 May 2016 06:41:31PM 0 points [-]

No it doesn't. It assumes a "perfect predictor" is what it is. I don't give a damn about evidence - we're specifying properties of a universe here.

You said "you shouldn't smoke", which is a decision-theoretical claim, not a specification. It's consistent with EDT, but not CDT.

You don't get to say "Everybody who smokes has this gene" as a property of the universe, and then pretend to be an exception to a property of the universe because you have a bizarre and magical agency that gets to bypass properties of the universe.

In other words, you're denying the exact thing that CDT asserts.

There is a contradiction there

Which is what a counterfactual is.

Whatever your theory is, it is denying core claims that CDT makes, so you're denying CDT (and implicitly assuming EDT as the method for making decisions; your arguments literally map directly onto EDT arguments).

Comment author: OrphanWilde 16 May 2016 07:20:13PM 2 points [-]

You said "you shouldn't smoke", which is a decision-theoretical claim, not a specification. It's consistent with EDT, but not CDT.

No, it isn't; it's a statement about the universe: if you smoke, you'll get lesions. It's written into the specification of the universe; what decision theory you use doesn't change the characteristics of the universe.

In other words, you're denying the exact thing that CDT asserts.

No. You don't get to specify a universe without the kind of causality that the kind of CDT we use in our universe depends on, and then claim that this says something significant about decision theory. Causality in our hypothetical works differently.

Which is what a counterfactual is.

No it isn't.

Whatever your theory is, it is denying core claims that CDT makes, so you're denying CDT (and implicitly assuming EDT as the method for making decisions; your arguments literally map directly onto EDT arguments).

No it isn't. In terms of CDT, we can say that smoking causes the gene; this isn't wrong, because, according to the universe, anybody who smokes has the gene; if they didn't, they do now, because the correlation is guaranteed by the laws of the universe. No matter how much work you prepared to ensure you didn't have the gene in advance of smoking, the law of the universe says you have it now. No matter how many tests you ran, they were all wrong.

It may seem unintuitive and bizarre, because our own universe doesn't behave this way - but when you find yourself in an alien universe, stomping your foot and insisting that the laws of physics should behave the way you're used to them behaving is a fast way to die. Once you introduce a perfect predictor, the universe must bend to ensure the predictions work out.

Comment author: ike 16 May 2016 08:18:36PM 0 points [-]

You don't get to specify a universe without the kind of causality that the kind of CDT we use in our universe depends on, and then claim that this says something significant about decision theory.

What kind of causality is this, given that you assert that the correct thing to do in the smoking lesion problem is to refrain from smoking, while the smoking lesion is one of the standard cases where CDT says to smoke?

"A causes B, therefore B causes A" is a fallacy no matter what arguments you put forward.

In terms of CDT, we can say that smoking causes the gene

CDT asserts the opposite, and so if you claim this then you disagree with CDT.

You don't understand what counterfactuals are.

Comment author: OrphanWilde 16 May 2016 08:59:28PM 1 point [-]

What kind of causality is this, given that you assert that the correct thing to do in the smoking lesion problem is to refrain from smoking, while the smoking lesion is one of the standard cases where CDT says to smoke?

Recursive causality.

"A causes B, therefore B causes A" is a fallacy no matter what arguments you put forward.

Perfect mutual correlation means both that A->B and that B->A.

CDT asserts the opposite, and so if you claim this then you disagree with CDT.

No it doesn't.

You don't understand what counterfactuals are.

A counterfactual is a state of existence which is not true of the universe. It is not a contradiction.

Comment author: entirelyuseless 16 May 2016 04:23:36PM 0 points [-]

"If you have a genetic test and you don't have the gene, and then smoke - then the genetic test produced a false negative."

If Omega makes the mistake of telling someone else that he predicted you will one-box, and that person tells you, and you then take both boxes knowing that the million is already there, then Omega's prediction was wrong.

Omega can be a perfect predictor, but he cannot tell you his prediction, at least not if you work the way normal humans do. Likewise, a gene could be a perfect predictor, but not if you know about it, at least not if you work the way normal humans do.

Comment author: OrphanWilde 16 May 2016 06:14:01PM 0 points [-]

Trial problem:

Omega appears before you and gives you a pencil. He tells you that, in universes in which you break this pencil in half in the next twenty seconds, the universe ends immediately. Not as a result of your breaking the pencil - it's pure coincidence that in all universes in which you break the pencil the universe ends, and in all universes in which you don't, it doesn't.

Do you break the pencil in half? It's not like you're changing anything by doing so, after all; some set of universes will end, some set won't, and you aren't going to change that.

You're just deciding which set of universes you happen to occupy. Which implies something.

Comment author: entirelyuseless 16 May 2016 07:54:44PM 0 points [-]

I don't break the pencil. But I already pointed out in Newcomb and in the Smoking Lesion that I don't care if I can change anything or not. So I don't care here either.

Comment author: entirelyuseless 14 May 2016 12:42:45AM -2 points [-]

We've had this discussion before. When you one-box, your choice does not cause the money. The money is already there or it is not. Causality does not go backwards in time.

In other words, Newcomb and the smoking lesion are identical in logical form.

Comment author: ArisKatsaris 14 May 2016 02:37:17PM 0 points [-]

When you one-box, your choice does not cause the money.

Your decision algorithm will cause the choice. The prediction of that choice, by someone who knows your decision algorithm, will have caused the money.

If you want the money you should therefore be a decision algorithm that makes the choice whose prediction will cause the money.
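This point can be sketched in a few lines of code (a toy model of my own; the function names and payoffs-as-dollars are just for illustration). Because the predictor fills the boxes by running the agent's own decision procedure, which algorithm you are determines both the prediction and the payout:

```python
# Toy Newcomb setup: the predictor simulates the agent's own decision
# procedure to fill the boxes, so the *prediction* of the choice is
# what causes the money to be present.

def run_newcomb(agent):
    """agent() returns 'one-box' or 'two-box'."""
    prediction = agent()          # a perfect predictor runs the agent
    box_b = 1_000_000 if prediction == 'one-box' else 0
    box_a = 1_000                 # box A is always filled
    choice = agent()              # the actual choice, same algorithm
    return box_b if choice == 'one-box' else box_a + box_b

print(run_newcomb(lambda: 'one-box'))   # 1000000
print(run_newcomb(lambda: 'two-box'))   # 1000
```

In this model there is no backwards causation anywhere: the money is determined before the choice, yet the one-boxing algorithm reliably walks away richer, which is the sense in which "being" the right algorithm pays.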

Comment author: entirelyuseless 14 May 2016 03:20:59PM -1 points [-]

You cannot make yourself into a certain decision algorithm, just as you cannot make yourself have or not have a lesion.

Comment author: ArisKatsaris 15 May 2016 11:56:42AM 1 point [-]

You cannot make yourself into a certain decision algorithm

What, is this some sort of objection where you believe that determinism means we don't make 'real' choices?

You could be convinced by my words and make yourself into a person who chooses to one-box. Or you could refuse to be convinced and remain a person who chooses to two-box.

Granted, by being "convinced" or "not convinced" it means that you're already the decision algorithm that would make that choice. So what? Whether you'll be convinced or not still affects your decision algorithm from then on.

Comment author: entirelyuseless 15 May 2016 02:18:29PM -1 points [-]

No, I don't believe that determinism means we don't make real choices. But it is also true, as you note yourself, that if I am convinced by your words, then I was already the kind of person who would be convinced, and I did not make myself into that sort of person. And likewise for the opposite case.

But I am consistent: I believe we make real choices even if Omega predicts our actions, and I also believe we make real choices even if a lesion causes them. The people arguing against my position are saying we don't make real choices in the second case, so they are the ones raising the determinism objection.

Comment author: ArisKatsaris 17 May 2016 07:29:52PM 0 points [-]

Okay, can you just state clearly whether you one-box or two-box, and whether you smoke or not-smoke in the smoking lesion problem, so that I understand what your position is, before trying to understand why it is?

Comment author: entirelyuseless 18 May 2016 02:07:06PM 0 points [-]

I take the one box in Newcomb, and I do not smoke in the smoking lesion.

My position is that they are the same problem. The million is already there or it is not, and the lesion is there or it is not. I cannot change that in either case. But I still make a real decision, one that will be correlated with the outcome, and I choose the winning one.

Comment author: Pimgd 19 May 2016 07:57:06AM *  0 points [-]

I can't even begin to model myself as "liking" smoking - it gives off a disgusting smell that clings to everything, and even being near second-hand smoke makes for uncomfortable breathing. If I try to model myself as someone who likes smoking, I no longer see myself in the model, because I've been altered beyond recognition.

Add to that that it seems to be a problem without a correct answer. "Yes" seems to be the preferred option: there is no statement that you prefer smoking without cancer over smoking with cancer, so "you prefer to smoke" plus "some cancer-related stuff that you may or may not have an opinion about" adds up to "go smoke already". But that isn't straightforwardly the correct answer, because if you take another worldview and look at the problem, to smoke is to admit that you have this genetic flaw and thus you have cancer. I have massive problems when it comes to understanding this sort of thing.

This question seems to have the same thing going on - pick one! A) "everyone is tortured" or B) "everyone gets a dust speck". But wait, there's some numbers going on in the background where there's either a lot of clones of you or only one of you. And if everyone gets tortured then there's only one of you. Here it is left unsaid that torture is far far far worse than the dust speck for a single individual, but the issue remains: I see "Do a really really really bad thing" or "Do a meh thing" and then some fancy attempts to trip up various logic systems - What about the logic that, hey, A is always worse than B? ... I guess you could fix this by there being OTHER people present, so that it's a "you get tortured" vs "you and everyone else (3^^^3) get a dust speck"... but then there'd be loopholes in the region of "yes, but my preferences prefer a world where there are people other than me, so I'll take torture if that means I get to exist in such a world".

As for one-box/two-box, I'd open B up, and if it was empty I'd take the contents of A home. If it contained the cash, well, I dunno. I guess I'd leave the 1000 behind, if the whole "if you take both then B is empty" idea was true. Maybe it's false. Maybe it's true! Regardless of that, I just got a million bucks, and an extra $1000, well, that's not all that much after receiving a whole million. (Yes, you could do stuff with that money, like buying malaria nets or something, but I am not an optimal rational agent, my thinking capacity is limited, and I'd rather bank the $1m than get tripped up by $1000 because I got greedy). ... weirdly enough, if you change the numbers so that A contained $1000 and B contained $1001, I'd open up B first... and then regardless of seeing the money, I'd take A home too.

Feel free to point out the holes in my thinking - I'd prefer examples that are not too "out there" because my answers tend to not be based on the numbers but on all the circumstances around it - that $1m would see me work on what I'd want to work on for the rest of my life, and that $1000 would reduce the time I'd need to spend working for doing what I wanna do by about a month (or 3 weeks).

Comment author: gjm 19 May 2016 02:47:15PM -1 points [-]

I can't even begin to model myself as "liking" smoking

Then for the "smoking lesion" problem to be any use to you, you need to perform a sort of mental translation in which it isn't about smoking but about some other (perhaps imaginary) activity that you do enjoy but is associated with harmful outcomes. Maybe it's eating chocolate and the harmful outcome is diabetes. Maybe it's having lots of sex and the harmful outcome is syphilis. Maybe it's spending all your time sitting alone and reading and the harmful outcome is heart disease. The important thing is to keep the structure of the thing the same: doing X is associated with bad outcome Y, it turns out (perhaps surprisingly) that this is not because X causes Y but because some other thing causes both X and Y, you find yourself very much wanting to do X, so now what do you do?

Comment author: Pimgd 19 May 2016 09:01:21AM *  0 points [-]

I went looking around on Wikipedia and found Kavka's toxin puzzle, which seems to be about "you can get a billion dollars if you intend to drink this poison (which will hurt a lot for a whole day, similar to the worst torture imaginable, but otherwise leave no lasting effects) tomorrow evening, but I'll pay you tonight"... but there I don't get the paradox either - what's stopping you from creating a sub-agent (informing a friend) with the task of convincing you not to drink AFTER you've gotten the money? ... Possibly by force. Possibly by relying on saying things in a manner such that you don't know that he knows he has to do this. Possibly with a whole lot of actors. Like scheduling a text ("I am perfectly fine, there is nothing wrong with me") to parents and friends, to be sent tomorrow morning.

Of course, this relies on my ability to raise the probability of intervention, but that seems like an easier challenge than engaging in willful doublethink... or you'd perhaps add various chemicals to your food the next day - I know I can be committed to an idea ("I will do this task tonight"), come home, eat dinner, and then be totally uncommitted ("that task can wait, I will play games first").

... A billion is a lot of money, perhaps I'd drink the poison and then have a hired person drug me to a coma, to be awoken the next day? You could hire a lot of medical staff with that kind of money.

Yet I get the feeling that all these "creative" solutions are not really allowed. Why is that?

Comment author: Pimgd 19 May 2016 08:25:15AM 0 points [-]

I get the feeling maybe this ought to be two comments, one on the main thread and one here. But they're too entangled.

Comment author: Lumifer 18 May 2016 02:30:27PM 0 points [-]

But I still make a real decision

Leaving Newcomb aside for the moment, in the smoking lesion case your decision is predetermined and you have no choice in the matter. I don't see how that counts as "a real decision".

Comment author: ArisKatsaris 18 May 2016 08:10:50PM 1 point [-]

"your decision is predetermined and you have no choice in the matter."

Is LW now populated by the sort of people who haven't even heard of compatibilism and of the idea that determinism not only doesn't contradict having a choice, but is actually fundamental to the process of decision-making? You can only "choose" if your values and personality can determine the outcome.

Comment author: entirelyuseless 18 May 2016 02:48:28PM 0 points [-]

I agree that this is what most people think, but it is a mistake.

I don't agree to leave Newcomb aside in considering this, because my position is that they are the same problem. If I have no choice in the smoking lesion, I have no choice in Newcomb.

Consider the Newcomb case.

I exist, and my brain and body are in a certain condition. I did not put them in that condition. I cannot make them not have been in that condition.

Omega looks at me. Using the condition of my brain and body -- conditions over which I have no control whatsoever -- he determines whether I am going to choose one box or two boxes. He has 100% accuracy, and this implies that the situation is completely determined by the condition of my brain and body.

In other words, "the condition of my brain and body" functions exactly like the lesion. It completely "predetermines" the outcome. If I have no choice in the lesion case, I have no choice in Newcomb.

Nonetheless, I say I have a choice in Newcomb, because the condition of my brain and body imply that I will engage in a certain process of reasoning, considering the alternatives of one boxing and two boxing, and choose one of them.

Likewise, I have a choice in the lesion case, because the lesion implies that I will engage in a certain process of reasoning, considering the alternatives of smoking and not smoking, and choose one of them.

In both cases, the outcome is predetermined. In both cases, the outcome is the result of a choice that results from a process of thought.

Comment author: ike 14 May 2016 12:47:51AM 0 points [-]

I'm referring to TDT, which disagrees.

Comment author: entirelyuseless 14 May 2016 02:35:11PM *  -1 points [-]

Eliezer disagrees, but no formal decision theory disagrees, because the two situations are formally identical.

Comment author: ike 14 May 2016 05:24:29PM 0 points [-]

They're formally identical only if you consider the choice to not counterfactually affect the outcome. Asserting that counterfactuals don't go backwards in time makes the choice not affect it, but that's just question begging.

It hasn't been formalized because we don't know how to deal with logical uncertainty fully yet.

Comment author: entirelyuseless 14 May 2016 09:25:49PM *  0 points [-]

If I have the 100% version of the lesion, it is true to say, "If I had decided not to smoke, I would not have had the lesion," because that is the only way I could have decided not to smoke, in the same way that in Newcomb it is true to say, "If I had picked one-box, I would have been a one-boxer," because that is the only way I could have picked one box.

Comment author: ike 14 May 2016 09:54:27PM 0 points [-]

In one there's counterfactual dependence and in the other there isn't. If your model doesn't take counterfactuals into account, then you can't even tell the difference between the smoking lesion and the case where smoking really does cause cancer.

Comment author: entirelyuseless 15 May 2016 01:49:14AM *  0 points [-]

Exactly. There is no difference; either way you should not smoke.

Also, what do you mean by saying that there is "counterfactual dependence" in one case and not in the other? Do you disagree with my previous comment? Do you think that I would have had the lesion no matter what I decided, in a situation where having the lesion has a 100% chance of causing smoking?

Comment author: ike 15 May 2016 02:15:20AM -1 points [-]

So you're not just arguing with Eliezer, you're arguing with the entirety of causal decision theory.

I strongly suspect you don't understand causal decision theory at this point, or counterfactuals as used by it. If this is the case, see https://en.wikipedia.org/wiki/Causal_decision_theory, or http://lesswrong.com/lw/164/timeless_decision_theory_and_metacircular/, or https://wiki.lesswrong.com/wiki/Causal_Decision_Theory

Those links explain it better than I can quickly, but I'll try anyway: counterfactuals ask "if you reached into the universe from outside and changed A, what would happen?" Only things caused by A change, not things merely correlated with A.
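That "reach in from outside" picture is essentially graph surgery on a causal model, and can be sketched on the lesion example (a minimal illustration of my own; the variable names and the 50% lesion rate are assumptions, not anything from the thread):

```python
# Minimal causal-surgery sketch for the lesion graph:
#   lesion -> smoke, lesion -> cancer.
# Intervening on 'smoke' (do(smoke=x)) severs the lesion -> smoke
# arrow, so the lesion's distribution - and hence cancer - is
# untouched by the intervention.

import random

def sample_world(do_smoke=None):
    lesion = random.random() < 0.5
    if do_smoke is None:
        smoke = lesion       # observationally, smoking tracks the lesion
    else:
        smoke = do_smoke     # surgery: ignore the lesion's influence
    cancer = lesion          # cancer depends only on the lesion
    return smoke, cancer

random.seed(0)
# Under intervention, the cancer rate is (statistically) the same
# whether we force smoking or force abstaining: the observed
# correlation was never causal.
forced_smoke = [sample_world(do_smoke=True)[1] for _ in range(10_000)]
forced_abstain = [sample_world(do_smoke=False)[1] for _ in range(10_000)]
print(abs(sum(forced_smoke) - sum(forced_abstain)) < 1000)  # True
```

Observationally, smoking and cancer are perfectly correlated in this model; under the do-intervention the correlation vanishes, which is exactly the asymmetry the CDT counterfactual exploits.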

Comment author: entirelyuseless 15 May 2016 02:18:57AM *  0 points [-]

I understand causal decision theory, and yes, I disagree with it. That should be obvious since I am in favor of both one-boxing and not smoking.

(Also, if you reach inside and change your decision in Newcomb, that will not change what is in the box any more than changing your decision will change whether you have a lesion.)