OrphanWilde comments on Newcomb versus dust specks - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
For the case that dust specks aren't additive, assuming we treat copies of me as distinct entities with distinct moral weight, 3^^^3 copies of me is either a net negative - as a result of 3^^^3 lives not worth living - or a net positive - as a result of an additional 3^^^3 lives worth living. The point of the dust speck is that it has only a negligible effect; the weight of the dust speck moral issue is completely subsumed by the weight of the duplicate people issue.
If we don't treat them as distinct moral entities, well, the duplication and the dust speck don't enter into it.
I don't think your conceptual problem sufficiently isolates whatever moral quandary you're trying to express; there's just too much going on here.
If you smoke in the smoking lesions scenario, then you shouldn't choose your action here based on how many people would exist, because they would exist anyway. (At least in the first of three cases.)
Either you misunderstand the smoking lesions scenario and the importance of the difference between a correlation and a perfect predictor, or you're just trolling the board by throwing every decision-theory edge case you can think of into a single convoluted mess.
I may be misunderstanding something, but isn't the standard LW position on smoking to smoke even if the gene's correlation to smoking and cancer is 1?
As long as the predictor doesn't cause anything but merely informs, they're equivalent to the gene. The reason why one-boxing is correct is because your choice causes the money, while the reason smoking is correct is because your choice doesn't cause cancer.
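To make the asymmetry concrete, here's a minimal sketch of the conditional expected values at stake (the payoff numbers are illustrative, not from the thread; it assumes a perfect predictor in Newcomb and a purely correlational gene in the lesion case):

```python
# Toy conditional-value comparison for Newcomb vs. the smoking lesion.
# Illustrative payoffs: $1,000,000 opaque box / $1,000 transparent box,
# and an arbitrary utility of smoking vs. cost of cancer.

# Newcomb: a perfect predictor fills the opaque box with $1,000,000
# iff it predicts you one-box; the transparent box always holds $1,000.
def newcomb_value(one_box):
    opaque = 1_000_000 if one_box else 0  # prediction matches your actual choice
    transparent = 0 if one_box else 1_000
    return opaque + transparent

# Smoking lesion: the gene, not the choice, causes cancer. With the gene
# perfectly correlated with smoking, conditioning on "I smoke" implies the
# gene (and so cancer), even though smoking itself causes nothing.
def lesion_value(smoke, utility_smoking=1_000, cost_cancer=1_000_000):
    has_gene = smoke  # correlation of 1: smokers have the gene
    return (utility_smoking if smoke else 0) - (cost_cancer if has_gene else 0)

print(newcomb_value(one_box=True), newcomb_value(one_box=False))  # 1000000 1000
print(lesion_value(smoke=False), lesion_value(smoke=True))        # 0 -999000
```

The numbers come out the same shape in both cases, which is exactly why the disagreement in this thread is about whether the conditional correlation or the causal structure is what should drive the decision.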
If the gene's correlation with both is 1, you will smoke if and only if you have the gene, and you will have the gene if and only if you smoke - in which case you shouldn't smoke. At the point at which the gene is a perfect predictor, if you have a genetic test and you don't have the gene, and then smoke - then the genetic test produced a false negative. Perfect predictors necessarily make a mess of causality.
This implicitly assumes EDT.
But that's not what CDT counterfactuals do. You cut off previous nodes. As the choice to smoke doesn't causally affect the gene, smoking doesn't counterfactually contradict the prediction. If you would actually smoke, then yes, but counterfactuals don't imply there's any chance of it happening in reality.
No it doesn't. It assumes a "perfect predictor" is what it is. I don't give a damn about evidence - we're specifying properties of a universe here.
CDT assumes causality makes sense in the universe. Your hypotheticals don't take place in a universe with the kind of causality causal decision theory depends upon.
In the case of a perfect predictor, yes, smoking specifies which gene you have. You don't get to say "Everybody who smokes has this gene" as a property of the universe, and then pretend to be an exception to a property of the universe because you have a bizarre and magical agency that gets to bypass properties of the universe. You're a part of the universe; if the universe has a law (which it does, in our hypotheticals), the law applies to you, too.
We have a perfect predictor. We do something the perfect predictor predicted we wouldn't. There is a contradiction there, in case you didn't notice; either it's not, in fact, the perfect predictor we specified, or we didn't do the thing. One or the other. And our hypothetical universe is constructed such that the perfect predictor is a perfect predictor; therefore, we don't get to violate its predictions.
You said "you shouldn't smoke", which is a decision-theoretical claim, not a specification. It's consistent with EDT, but not CDT.
In other words, you're denying the exact thing that CDT asserts.
Which is what a counterfactual is.
Whatever your theory is, it is denying core claims that CDT makes, so you're denying CDT (and implicitly assuming EDT as the method for making decisions; your arguments literally map directly onto EDT arguments).
No it isn't, it's a statement about the universe: If you smoke, you'll get lesions. It's written into the specification of the universe; what decision theory you use doesn't change the characteristics of the universe.
No. You don't get to specify a universe without the kind of causality that the kind of CDT we use in our universe depends on, and then claim that this says something significant about decision theory. Causality in our hypothetical works differently.
No it isn't.
No it isn't. In terms of CDT, we can say that smoking causes the gene; this isn't wrong, because, according to the universe, anybody who smokes has the gene; if they didn't, they do now, because the correlation is guaranteed by the laws of the universe. No matter how much work you prepared to ensure you didn't have the gene in advance of smoking, the law of the universe says you have it now. No matter how many tests you ran, they were all wrong.
It may seem unintuitive and bizarre, because our own universe doesn't behave this way - but when you find yourself in an alien universe, stomping your foot and insisting that the laws of physics should behave the way you're used to them behaving is a fast way to die. Once you introduce a perfect predictor, the universe must bend to ensure the predictions work out.
What kind of causality is this, given that you assert that the correct thing to do in smoking lesions is refrain from smoking, and smoking lesions is one of the standard things where CDT says to smoke?
"A causes B, therefore B causes A" is a fallacy no matter what arguments you put forward.
CDT asserts the opposite, and so if you claim this then you disagree with CDT.
You don't understand what counterfactuals are.
"If you have a genetic test and you don't have the gene, and then smoke - then the genetic test produced a false negative."
If Omega makes the mistake of telling someone else that he predicted that you will one-box, and that person tells you, so you then take both boxes, knowing that the million is already there, then Omega's prediction was wrong.
Omega can be a perfect predictor, but he cannot tell you his prediction, at least not if you work the way normal humans do. Likewise, a gene could be a perfect predictor, but not if you know about it, at least not if you work the way normal humans do.
Trial problem:
Omega appears before you, and gives you a pencil. He tells you that, in universes in which you break this pencil in half in the next twenty seconds, the universe ends immediately. Not as a result of your breaking the pencil - it's pure coincidence that in all universes in which you break the pencil, the universe ends, and in all universes in which you don't, it doesn't.
Do you break the pencil in half? It's not like you're changing anything by doing so, after all; some set of universes will end, some set won't, and you aren't going to change that.
You're just deciding which set of universes you happen to occupy. Which implies something.
I don't break the pencil. But I already pointed out in Newcomb and in the Smoking Lesion that I don't care if I can change anything or not. So I don't care here either.
We've had this discussion before. When you one-box, your choice does not cause the money. The money is already there or it is not. Causality does not go backwards in time.
In other words, Newcomb and the smoking lesion are identical in logical form.
Your decision algorithm will cause the choice. The prediction of that choice, by someone knowing your decision algorithm, will have caused the money.
If you want the money you should therefore be a decision algorithm that makes the choice whose prediction will cause the money.
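A toy way to see "be the algorithm whose predicted choice causes the money" (a hypothetical setup for illustration, not anyone's formal decision theory): model the perfect predictor as something that simply runs your decision algorithm before filling the boxes.

```python
# Omega predicts by running the agent's decision algorithm itself,
# then fills the opaque box based on that prediction.
def omega_fill(agent):
    prediction = agent()  # a perfect predictor: just run the algorithm
    return 1_000_000 if prediction == "one-box" else 0

def payout(agent):
    opaque = omega_fill(agent)  # the boxes are filled before you choose...
    choice = agent()            # ...but the same algorithm produces the choice
    return opaque + (1_000 if choice == "two-box" else 0)

one_boxer = lambda: "one-box"
two_boxer = lambda: "two-box"

print(payout(one_boxer))  # 1000000
print(payout(two_boxer))  # 1000
```

Nothing here causes anything backwards in time: the prediction causes the money, the algorithm causes both the prediction and the choice, and the algorithm that ends up richer is the one-boxing one.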
You cannot make yourself into a certain decision algorithm, just as you cannot make yourself have or not have a lesion.
What, is this some sort of objection where you believe that determinism means we don't make "real" choices?
You could be convinced by my words and make yourself into a person who chooses to one-box. Or you could refuse to be convinced and remain a person who chooses to two-box.
Granted, by being "convinced" or "not convinced" it means that you're already the decision algorithm that would make that choice. So what? Whether you'll be convinced or not still affects your decision algorithm from then on.
No, I don't believe that determinism means we don't make real choices. But it is also true, as you note yourself, that if I am convinced by your words, then I was already the kind of person who would be convinced, and I did not make myself into that sort of person. And likewise for the opposite case.
But I am consistent: I believe we make real choices even if Omega predicts our actions, and I also believe we make real choices even if a lesion causes them. The people arguing against my position are saying we don't make real choices in the second case, so they are the ones raising the determinism objection.
Okay, can you just state clearly whether you one-box or two-box, and whether you smoke or not-smoke in the smoking lesion problem, so that I understand what your position is, before trying to understand why it is?
I'm referring to TDT, which disagrees.
Eliezer disagrees, but no formal decision theory disagrees, because the two situations are formally identical.
They're formally identical only if you consider the choice to not counterfactually affect the outcome. Asserting that counterfactuals don't go backwards in time makes the choice not affect it, but that's just question begging.
It hasn't been formalized because we don't know how to deal with logical uncertainty fully yet.
If I have the 100% version of the lesion, it is true to say, "If I had decided not to smoke, I would not have had the lesion," because that is the only way I could have decided not to smoke, in the same way that in Newcomb it is true to say, "If I had picked one-box, I would have been a one-boxer," because that is the only way I could have picked one box.
In one there's counterfactual dependence and in the other there isn't. If your model doesn't take into account counterfactuals then you can't even tell the difference between smoking lesions and the case where smoking really does cause cancer.