entirelyuseless comments on Newcomb versus dust specks - Less Wrong

-1 Post author: ike 12 May 2016 03:02AM


Comment author: entirelyuseless 14 May 2016 02:35:11PM -1 points

Eliezer disagrees, but no formal decision theory does, because the two situations are formally identical.

Comment author: ike 14 May 2016 05:24:29PM 0 points

They're formally identical only if you take the choice not to counterfactually affect the outcome. Asserting that counterfactuals don't go backwards in time makes the choice not affect it, but that's just begging the question.

It hasn't been formalized because we don't know how to deal with logical uncertainty fully yet.

Comment author: entirelyuseless 14 May 2016 09:25:49PM 0 points

If I have the 100% version of the lesion, it is true to say, "If I had decided not to smoke, I would not have had the lesion," because that is the only way I could have decided not to smoke, in the same way that in Newcomb it is true to say, "If I had picked one-box, I would have been a one-boxer," because that is the only way I could have picked one box.

Comment author: ike 14 May 2016 09:54:27PM 0 points

In one there's counterfactual dependence and in the other there isn't. If your model doesn't take counterfactuals into account, then you can't even tell the difference between the smoking lesion and the case where smoking really does cause cancer.

Comment author: entirelyuseless 15 May 2016 01:49:14AM 0 points

Exactly. There is no difference; either way you should not smoke.

Also, what do you mean by saying that there is "counterfactual dependence" in one case and not in the other? Do you disagree with my previous comment? Do you think that I would have had the lesion no matter what I decided, in a situation where having the lesion has a 100% chance of causing smoking?

Comment author: ike 15 May 2016 02:15:20AM -1 points

So you're not just arguing with Eliezer, you're arguing with the entirety of causal decision theory.

I strongly suspect you don't understand causal decision theory at this point, or counterfactuals as used by it. If this is the case, see https://en.wikipedia.org/wiki/Causal_decision_theory, or http://lesswrong.com/lw/164/timeless_decision_theory_and_metacircular/, or https://wiki.lesswrong.com/wiki/Causal_Decision_Theory

Those links explain it better than I can quickly, but I'll try anyway: counterfactuals ask "if you reached into the universe from outside and changed A, what would happen?" Only things caused by A change, not things merely correlated with A.
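That "reach in from outside" picture can be made concrete with a toy simulation (a minimal sketch; the 50% lesion rate and the deterministic links are illustrative assumptions, not anything stated in the thread):

```python
import random

def lesion_world(do_smoke=None):
    # Smoking Lesion: a lesion causes both smoking and cancer;
    # smoking itself has no causal arrow into cancer.
    lesion = random.random() < 0.5
    smoke = lesion if do_smoke is None else do_smoke  # "reach in" and set smoking
    cancer = lesion
    return smoke, cancer

def direct_world(do_smoke=None):
    # Alternative world: smoking directly causes cancer.
    smoke = (random.random() < 0.5) if do_smoke is None else do_smoke
    cancer = smoke
    return smoke, cancer

def p_cancer(world, do_smoke, n=50_000):
    # Estimate the probability of cancer under the intervention do(smoke=do_smoke).
    return sum(world(do_smoke)[1] for _ in range(n)) / n

random.seed(0)
# Intervening on smoking leaves cancer untouched in the lesion world (both ~0.5)...
print(p_cancer(lesion_world, True), p_cancer(lesion_world, False))
# ...but fully determines it when smoking really causes cancer (1.0 vs 0.0).
print(p_cancer(direct_world, True), p_cancer(direct_world, False))
```

Only things downstream of the intervened variable move; things merely correlated with it (the lesion) stay put.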

Comment author: entirelyuseless 15 May 2016 02:18:57AM 0 points

I understand causal decision theory, and yes, I disagree with it. That should be obvious since I am in favor of both one-boxing and not smoking.

(Also, if you reach inside and change your decision in Newcomb, that will not change what is in the box any more than changing your decision will change whether you have the lesion.)

Comment author: ike 15 May 2016 02:29:02AM 0 points

So why did you ask me what I meant about counterfactuals? If you take the TDT assumption that identical copies of you counterfactually affect each other, then Newcomb has counterfactual dependence and the Lesion doesn't.

I'm not sure of your point here.

Comment author: entirelyuseless 15 May 2016 02:51:39AM 0 points

I don't think there is any difference even with that assumption. Newcomb and the Lesion are entirely equivalent. Modify it to the situation from the previous discussion of this topic. The Lesion case works like this: the lesion causes people to take two boxes, and the absence of the lesion causes people to take one box. The other parts are the same, except that Omega just checks whether you have the lesion in order to make his prediction. Then we have the two cases:

  1. Regular Newcomb. I am a certain kind of algorithm, either one that is going to one-box, or one that is going to two-box.

  2. Lesion Newcomb. I either have the lesion and am going to take both boxes, or I don't and am going to take only one.

  3. Regular Newcomb. Omega checks my algorithm and decides whether to put the million.

  4. Lesion Newcomb. Omega checks the lesion and decides whether to put the million.

  5. Regular Newcomb. I decide whether to take one or two boxes.

  6. Lesion Newcomb. I decide whether to take one or two boxes.

  7. Regular Newcomb. If I decided to take one box, it turns out that I had the one-boxing algorithm, that Omega predicted it, and I get the million. If I decided to take both boxes, the opposite occurs.

  8. Lesion Newcomb. If I decided to take one box, it turns out that I did not have the lesion, Omega saw I did not, and I get the million. If I decided to take both boxes, it turns out that I had the lesion etc.

This is a simple case of substituting terms. The cases are identical.

Comment author: ike 15 May 2016 03:05:51AM 0 points

Well, it depends on what procedure Omega uses: you can't change the procedure and assert that the same result obtains! If Omega predicts you by simulating you, that creates a causal dependence, but not if the prediction goes by your genes or the like. Your comparison doesn't account for that causal relationship.
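The two procedures can be sketched side by side (the payoffs are the standard Newcomb ones; the function and agent names are purely illustrative):

```python
def payoff(predicted_one_box, choice):
    # Standard Newcomb payoffs: $1M in the opaque box iff Omega predicted
    # one-boxing, plus a visible $1K for taking the second box.
    return (1_000_000 if predicted_one_box else 0) + (1_000 if choice == "two-box" else 0)

def predict_by_simulation(algorithm):
    # Omega runs a copy of the agent's decision procedure:
    # the prediction is causally downstream of the algorithm itself.
    return algorithm() == "one-box"

def predict_by_lesion(has_lesion):
    # Omega just reads the lesion, fixed before any deliberation;
    # swapping decision algorithms changes nothing here.
    return not has_lesion

one_boxer = lambda: "one-box"
two_boxer = lambda: "two-box"

# Under simulation, changing the algorithm changes the prediction with it:
print(payoff(predict_by_simulation(one_boxer), one_boxer()))  # 1000000
print(payoff(predict_by_simulation(two_boxer), two_boxer()))  # 1000

# Under the lesion, the prediction is fixed by the lesion alone:
print(payoff(predict_by_lesion(True), two_boxer()))   # 1000
print(payoff(predict_by_lesion(False), one_boxer()))  # 1000000
```

In the simulation version, which algorithm you are feeds into the prediction; in the lesion version, the prediction ignores the algorithm entirely.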