
aspera comments on Mysterious Answers to Mysterious Questions - Less Wrong

71 points | Post author: Eliezer_Yudkowsky | 25 August 2007 10:27PM


Comment author: aspera 09 October 2012 11:07:45PM 12 points

My mother's husband professes to believe that our actions have no influence on the way we die. For example: "if you're meant to die in a plane crash and avoid flying, then a plane will end up crashing into you!"

After I explained how I would expect that belief to constrain experience (for example, how it would affect plane crash statistics), and pointed out that he demonstrated his own disbelief every time he went to see a doctor, he told me that you "just can't apply numbers to this" and "Well, you shouldn't tempt fate."

My question to the LW community is this: How do you avoid kicking people in the nuts all of the time?

Comment author: shminux 09 October 2012 11:20:52PM 1 point

Pick your battles. Most people happily hold contradictory beliefs. More accurately, their professed beliefs don't always match their aliefs. You are probably just as affected as the rest of us, so start by noticing this in yourself.

Comment author: TheOtherDave 09 October 2012 11:29:27PM 2 points

How do you avoid kicking people in the nuts all of the time?

(grin) Mostly, by remembering that there are lots of decent people in the world who don't think very clearly.

Comment author: aspera 10 October 2012 12:16:07AM 3 points

I jest, but the sense of the question is serious. I really do want to teach the people I'm close to how to get started on rationality, and I recognize that I'm not perfect at it either. Is there a serious conversation somewhere on LW about being an aspiring rationalist living in an irrational world? Best practices, coping mechanisms, which battles to pick, etc?

Comment author: [deleted] 10 October 2012 09:24:31AM 0 points

"if you're meant to die in a plane crash and avoid flying, then a plane will end up crashing into you!"

I often say stuff like that, but I don't mean it literally. When someone says “What if you do X and Y happens?” and I think Y is ridiculously unlikely (P(Y|X) < 1e-6), I sarcastically reply “What if I don't do X, but Z happens?” where Z is obviously even more ridiculous (P(Z|~X) < 1e-12, e.g. “a meteorite falls onto my head and kills me”).

Comment author: MugaSofer 10 October 2012 01:09:38PM 0 points

Strictly speaking, if you somehow knew in advance (time travel?) that you would "die in a plane crash", then avoiding flying would indeed, presumably, result in a plane crash occurring as you walk down the street.

If you know in advance that your attempt will fail, you don't need to try very hard. If you don't know, then it is reasonable to avoid dangerous situations.

Comment author: wedrifid 10 October 2012 01:37:28PM * 3 points

If you know in advance that your attempt will fail, you don't need to try very hard.

I actually don't believe this is true for most mechanisms of "mysterious future knowledge", including most (philosophical) forms of time travel that don't allow change. Unless I had some specific details about the mechanism of prediction that changed the situation, I would go ahead and try very hard despite knowing it is futile. I know this is a total waste... it's as if I am just leaving $10,000 on the ground or something! (i.e. I assert that Newcomblike reasoning applies.)

Comment author: MugaSofer 16 October 2012 03:14:59PM 0 points

I don't understand this.

In Newcomb's problem, Omega uses its superintelligence to know what you will do. Since you know you cannot two-box successfully, you should one-box.

If Omega didn't know what you would do with a fair degree of accuracy, two-boxing would work, obviously.

Comment author: wedrifid 16 October 2012 11:48:38PM * 2 points

In Newcomb's problem, Omega uses its superintelligence to know what you will do. Since you know you cannot two-box successfully, you should one-box.

In this case you are trying (futilely) so that you, very crudely speaking, are less likely to be in the futile situation in the first place.

If Omega didn't know what you would do with a fair degree of accuracy, two-boxing would work, obviously.

Yes, then it wouldn't be Newcomb's Problem. The important feature of the problem isn't boxes with arbitrary amounts of money in them; it is the interaction with a powerful predictor whose prediction has already been made and acted upon. See, in particular, the Transparent Newcomb's Problem (where you can outright see how much money is there). That makes the situation seem even more like this one.

Even closer would be the Transparent Newcomb's Problem combined with an Omega that is only 99% accurate. You find yourself looking at an empty 'big' box. What do you do? I'm saying you still one-box the empty box. That makes it far less likely that you will be in a situation where you see an empty box at all.
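To put rough numbers on that, here is a minimal sketch, assuming the standard Newcomb payoffs of $1,000,000 and $1,000 (the thread only gives the 99% figure) and the simplification that Omega fills the big box exactly when it predicts you would take only the big box even after seeing it empty:

```python
# Minimal sketch of the ex-ante comparison, under the assumptions stated above
# (standard payoffs, 99% accuracy, and Omega filling the big box exactly when
# it predicts you would take only the big box even if it is visibly empty).

ACCURACY = 0.99                  # predictor accuracy mentioned in the thread
BIG, SMALL = 1_000_000, 1_000    # standard Newcomb payoffs (assumed, not from the thread)

def expected_payoff(one_boxes_even_when_empty: bool) -> float:
    """Average payoff of a policy over Omega's 1% prediction error."""
    if one_boxes_even_when_empty:
        # 99%: Omega predicts you correctly, the big box is full, you take it.
        #  1%: Omega mispredicts, the big box is empty, you walk away with nothing.
        return ACCURACY * BIG + (1 - ACCURACY) * 0
    else:
        # 99%: Omega predicts you correctly, the big box is empty, you take both.
        #  1%: Omega mispredicts, the big box is full, you take both.
        return ACCURACY * SMALL + (1 - ACCURACY) * (BIG + SMALL)

print(expected_payoff(True))   # ~990,000 for the agent who one-boxes even the empty box
print(expected_payoff(False))  # ~11,000 for the agent who takes both boxes
```

On those assumptions, the agent who one-boxes even the visibly empty box almost never sees an empty box at all, which is the sense in which the apparently futile choice pays off in advance.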

Comment author: MugaSofer 17 October 2012 04:35:05PM 0 points

Being a person who avoids plane crashes makes it less likely that you will be told "you will die in a plane crash", yes.

But probability is subjective: once you have the information that you will die in a plane crash, your subjective estimate of this should vastly increase, regardless of the precautions you take.

Comment author: wedrifid 17 October 2012 10:42:58PM 1 point

But probability is subjective: once you have the information that you will die in a plane crash, your subjective estimate of this should vastly increase, regardless of the precautions you take.

Absolutely. And I'm saying that you update that probability, perform a (naive) expected utility calculation that says "don't bother trying to prevent plane crashes", and then go ahead and try to avoid plane crashes anyway. Because in this kind of situation maximising expected utility is actually a mistake.

(To those who consider this claim bizarre without seeing the context: note that we are talking about situations such as time loops.)

Comment author: MugaSofer 18 October 2012 08:09:16AM 0 points

Because in this kind of situation maximising expected utility is actually a mistake.

So ... I should do things that result in less expected utility ... why?

Comment author: wedrifid 18 October 2012 09:28:20AM 0 points

So ... I should do things that result in less expected utility ... why?

I am happy to continue the conversation if you are interested; I am trying to unpack just where your intuitions diverge from mine. I'd like to know what you would choose in Newcomb's Problem with transparent boxes and an imperfect predictor, once you notice that the large box is empty. I take the empty large box, a choice that doesn't maximise my expected utility and in fact gives me nothing, the worst possible outcome of that game. What do you do?

Comment author: MugaSofer 18 October 2012 12:36:02PM * 0 points

Oh, so you pay counterfactual muggers?

All is explained.

Comment author: Strange7 18 October 2012 01:51:24PM 0 points

Two boxes, sitting there on the ground, unguarded, no traps, nobody else has a legal claim to the contents? Seriously? You can have the empty one if you'd like; I'll take the one with the money. If you ask nicely I might even give you half.

I don't understand what you're gaining from this "rationality" that won't let you accept a free lunch when an insane godlike being drops it in your lap.

Comment author: Strange7 18 October 2012 04:27:12PM 2 points

In the specific "infallible oracle says you're going to die in a plane crash" scenario, you might live considerably longer by giving the cosmos fewer opportunities to throw plane crashes at you.

Comment author: MugaSofer 19 October 2012 09:01:43AM 0 points

I was assuming a time was given. wedrifid was claiming that you should avoid plane-crash-causing actions even if you know that the crash will occur regardless.

Comment author: shminux 17 October 2012 12:29:49AM -1 points

Since you know you cannot two-box successfully, you should one-box.

Not if you mistakenly believe, as CDTers do, in human free will in a predictable (by Omega) universe.

Comment author: wedrifid 17 October 2012 12:42:31AM * 1 point

Not if you mistakenly believe, as CDTers do, in human free will in a predictable (by Omega) universe.

"Free will" isn't incompatible with a predictable (by Omega) universe. I also doubt that all CDTers believe the same thing about human free will in said universe.

Comment author: aspera 10 October 2012 03:57:11PM 1 point

I think this is the kind of causal loop he has in mind. But a key feature of the hypothesis is that you can't predict what's meant to happen. In that case, the hypothesis fits any outcome equally well, so it's perfectly uninformative.

Comment author: MugaSofer 16 October 2012 03:10:04PM 0 points

That was exactly my point. If he could make such a prediction, he would be correct. Since he can't...

Comment author: Eliezer_Yudkowsky 10 October 2012 06:46:56PM 10 points

Think of them as 3-year-olds who won't grow up until after the Singularity. Would you kick a 3-year-old who made a mistake?

Comment author: Strange7 18 October 2012 03:36:28PM 2 points

Simply consider how likely it is that kicking them in the nuts will actually improve the situation.