TobyBartels comments on Harry Potter and the Methods of Rationality discussion thread, part 5 - Less Wrong

6 Post author: NihilCredo 02 November 2010 06:57PM

Comment author: TobyBartels 18 November 2010 08:28:53AM 2 points

I would say that your first belief implies that what appears to be a decision in this case is in fact not a decision, but rather a working out of the inevitable consequences of an earlier state.

Do you ever make a decision that is not like this?

I think that the official Less Wrong answer to the problem of free will is that you do make a decision, since it is the consequence of your state, but only in the sense that a computer makes a decision, say when allocating certain CPU cycles to certain processes (the sort of decision that modern operating system kernels are designed to make, and one that my computer is not making very wisely at the moment, which is why I thought of it). Given the input, the decision is inevitable, but it's arguably still a decision.
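The kernel analogy can be made concrete with a toy sketch (the function and process names here are purely illustrative, not real kernel code): a deterministic scheduler "decides" which process runs next, and given the same input state the same choice always comes out.

```python
# Toy illustration: a deterministic "decision" procedure.
# Given the same input, the same choice is inevitable,
# yet it is still natural to call it a decision.

def pick_next_process(processes):
    """Choose the runnable process with the highest priority.

    `processes` is a list of (name, priority, runnable) tuples.
    Returns the chosen process name, or None if nothing is runnable.
    """
    runnable = [p for p in processes if p[2]]
    if not runnable:
        return None
    # The choice is fully determined by the input state.
    return max(runnable, key=lambda p: p[1])[0]

procs = [("editor", 3, True), ("compiler", 7, True), ("daemon", 5, False)]
print(pick_next_process(procs))  # always "compiler" for this input
```

Rerun it with the same input as many times as you like; the output never varies, which is exactly the compatibilist point.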

Comment author: TheOtherDave 18 November 2010 04:15:16PM 2 points

For what it's worth, I'm a compatibilist as well, although I don't think it's a particularly important question.

I'd merely meant to point out that if it's possible (as stipulated in this example) to predict accurately at T1 what I'm going to do at T2, then there's no new salient information added to the system after T1, so it's as reasonable to talk about Omega's behavior at T1 being determined by the state of the world at T1 as it is to talk about it as being determined retroactively by the state of the world at T2.

(Perplexed has since then pointed out that the second formulation is simpler in some sense, and therefore potentially useful, which I accept.)

That being said... as Perplexed articulates well here, it's hard to understand the purpose of decision theory in the first place from a compatibilist or determinist stance.

Comment author: David_Gerard 18 November 2010 05:26:02PM *  0 points

The second formulation is simpler, but then leads to absurdities such as counterfactual mugging. This is a failure of the theory.

If you don't think so, try a counterfactual mugging on everyday people, and then try it at a LessWrong meeting. Which group do you think will be more likely to come out ahead, in this practical example?

As it says on the wiki:

If some particular ritual of cognition—even one that you have long cherished as "rational"—systematically gives poorer results relative to some alternative, it is not rational to cling to it.

Comment author: Yvain 18 November 2010 05:51:12PM *  7 points

If you don't think so, try a counterfactual mugging on everyday people, and then try it at a LessWrong meeting. Which group do you think will be more likely to come out ahead, in this practical example?

The Less Wrong meeting, of course. I'm no Omega, but I'm smart enough to predict that none of the regular people will take the deal, and most of the Less Wrongers will. That means I won't give any money to the everyday people, but after the coin flip I'll be handing out a whole bunch of suitcases with $10000 to the Less Wrongers (while also collecting a few hundred-dollar bills). The average person in the Less Wrong meeting will come out $4950 richer than the person on the street.
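The $4950 figure is just the expected value under the standard counterfactual-mugging payoffs (pay $100 on heads, receive $10000 on tails), assuming a fair coin:

```python
# Expected gain per person who pays, under the standard
# counterfactual-mugging payoffs: lose $100 on heads, gain $10000 on tails.
p_heads = 0.5
cost_if_heads = 100
payout_if_tails = 10000

expected_gain = (1 - p_heads) * payout_if_tails - p_heads * cost_if_heads
print(expected_gain)  # 4950.0
```

A refuser's expected gain is $0, so the payers come out $4950 ahead on average.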

If you mean I should do the second part, the part where I take the money, but not the first part, then it's no longer a counterfactual mugging. Then it's just me lying to people in a particularly weird way. The Less Wrongers might do worse on the completely unrelated problem of whether they believe weird lies, but I don't see much evidence for this.

Comment author: ata 18 November 2010 05:50:34PM *  3 points

If you don't think so, try a counterfactual mugging on everyday people, and then try it at a LessWrong meeting. Which group do you think will be more likely to come out ahead, in this practical example?

You can't "try a counterfactual mugging" unless you are Omega (or some other entity with a lot of money to throw away and some unusually and systematically accurate way of predicting people's behaviour under counterfactual interventions).

And if you are... then those who are inclined to pay in a counterfactual mugging will win more from it on average. That's the whole point. If you accept the premises of the problem (Omega is honest and flipped a fair coin, etc.), paying really is the winningest thing to do.

Comment author: WrongBot 18 November 2010 05:53:16PM 0 points

The counterfactual mugging requires that the deal be offered by an entity that is known to be both perfectly honest and a perfect predictor. If Omega tries to counterfactually mug you, you should pay him. If I try to counterfactually mug you, paying up would be significantly less wise.

A sufficiently good decision theory should get both of those cases right.

Comment author: Bongo 01 December 2010 07:29:00AM *  3 points

No.

The entity doesn't have to be perfect.