aspera comments on Mysterious Answers to Mysterious Questions - Less Wrong
Comments (147)
My mother's husband professes to believe that our actions have no influence over the way in which we die, but that "if you're meant to die in a plane crash and avoid flying, then a plane will end up crashing into you!" for example.
After explaining how I would expect that belief to constrain experience (like how it would affect plane crash statistics), as well as showing that he himself was demonstrating his unbelief every time he went to see a doctor, he told me that you "just can't apply numbers to this," and "Well, you shouldn't tempt fate."
My question to the LW community is this: How do you avoid kicking people in the nuts all of the time?
Pick your battles. Most people happily hold contradictory beliefs. More accurately, their professed beliefs don't always match their aliefs. You are probably just as affected as the rest of us, so start by noticing this in yourself.
(grin) Mostly, by remembering that there are lots of decent people in the world who don't think very clearly.
I jest, but the sense of the question is serious. I really do want to teach the people I'm close to how to get started on rationality, and I recognize that I'm not perfect at it either. Is there a serious conversation somewhere on LW about being an aspiring rationalist living in an irrational world? Best practices, coping mechanisms, which battles to pick, etc?
I often say stuff like that, but I don't mean it literally. When someone says “What if you do X and Y happens?” and I think Y is ridiculously unlikely (P(Y|X) < 1e-6), I sarcastically reply “What if I don't do X, but Z happens?” where Z is obviously even more ridiculous (P(Z|~X) < 1e-12, e.g. “a meteorite falls onto my head and kills me”).
Strictly speaking, if you somehow knew in advance (time travel?) that you would "die in a plane crash", then avoiding flying would indeed, presumably, result in a plane crash occurring as you walk down the street.
If you know your attempt will fail in advance, you don't need to try very hard. If you don't, then it is reasonable to avoid dangerous situations.
I actually don't believe this is true, for most mechanisms of "mysterious future knowledge", including most (philosophical) forms of time travel that don't allow change. Unless I had some specific details about the mechanism of prediction that changed the situation, I would go ahead and try very hard despite knowing it is futile. I know this is a total waste... it's as if I am just leaving $10,000 on the ground or something! (i.e., I assert that Newcomblike reasoning applies.)
I don't understand this.
In Newcomb's problem, Omega knows what you will do using their superintelligence. Since you know you cannot two-box successfully, you should one-box.
If Omega didn't know what you would do with a fair degree of accuracy, two-boxing would work, obviously.
In this case, you try (futilely) so that, very crudely speaking, you are less likely to end up in the futile situation in the first place.
Yes, then it wouldn't be Newcomb's Problem. The important feature of the problem isn't boxes with arbitrary amounts of money in them. It is about interacting with a powerful predictor whose prediction has already been made and acted upon. See, in particular, the Transparent Newcomb's Problem (where you can outright see how much money is there). That makes the situation seem even more like this one.
Even closer would be the Transparent Newcomb's Problem combined with an Omega that is only 99% accurate. You find yourself looking at an empty 'big' box. What do you do? I'm saying you still one box the empty box. That makes it far less likely that you will be in a situation where you see an empty box at all.
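As rough arithmetic for why the committed one-boxing policy comes out ahead against a 99%-accurate predictor, here is a short sketch. It assumes the standard payoffs ($1,000,000 in the big box, $1,000 in the small one) and models the predictor as correctly guessing your fixed policy 99% of the time; these numbers are assumptions for illustration, not details from the thread.

```python
# Expected winnings in Newcomb's problem with an imperfect predictor.
# Assumed setup: big box holds $1,000,000 iff Omega predicts one-boxing;
# small box always holds $1,000; Omega's predictions are 99% accurate.

BIG, SMALL = 1_000_000, 1_000
ACCURACY = 0.99  # probability Omega correctly predicts your policy

def expected_payoff(one_boxer: bool) -> float:
    """Expected winnings for a committed one-boxer or two-boxer."""
    if one_boxer:
        # The big box is filled whenever Omega correctly predicts one-boxing.
        return ACCURACY * BIG
    # A two-boxer always gets the small box, plus the big box on the
    # rare runs where Omega wrongly predicted one-boxing.
    return SMALL + (1 - ACCURACY) * BIG

print(expected_payoff(True))   # committed one-boxer
print(expected_payoff(False))  # committed two-boxer
```

The committed one-boxer expects about $990,000; the committed two-boxer about $11,000. The individual empty-box encounter looks like a pure loss, but the policy that one-boxes even then is what makes the empty box rare in the first place.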
Being a person who avoids plane crashes makes it less likely that you will be told "you will die in a plane crash", yes.
But probability is subjective - once you have the information that you will die in a plane crash, your subjective estimate of this should vastly increase, regardless of the precautions you take.
Absolutely. And I'm saying that you update that probability, perform a (naive) expected utility calculation that says "don't bother trying to prevent plane crashes", then go ahead and try to avoid plane crashes anyway. Because in this kind of situation, maximising expected utility is actually a mistake.
(To those who consider this claim to be bizarre without seeing context, note that we are talking situations such as within time-loops.)
So ... I should do things that result in less expected utility ... why?
I am happy to continue the conversation if you are interested. I am trying to unpack just where your intuitions diverge from mine. I'd like to know what your choice would be when faced with Newcomb's Problem with transparent boxes and an imperfect predictor, when you notice that the large box is empty. I take the empty large box, which isn't a choice that maximises my expected utility and in fact gives me nothing, the worst possible outcome of that game. What do you do?
Oh, so you pay counterfactual muggers?
All is explained.
Two boxes, sitting there on the ground, unguarded, no traps, nobody else has a legal claim to the contents? Seriously? You can have the empty one if you'd like, I'll take the one with the money. If you ask nicely I might even give you half.
I don't understand what you're gaining from this "rationality" that won't let you accept a free lunch when an insane godlike being drops it in your lap.
In the specific "infallible oracle says you're going to die in a plane crash" scenario, you might live considerably longer by giving the cosmos fewer opportunities to throw plane crashes at you.
I was assuming a time was given. wedrifid was claiming that you should avoid plane-crash causing actions even if you know that the crash will occur regardless.
Not if you mistakenly believe, as CDTers do, in human free will in a predictable (by Omega) universe.
"Free will" isn't incompatible with a predictable (by Omega) universe. I also doubt that all CDTers believe the same thing about human free will in said universe.
I think this is the kind of causal loop he has in mind. But a key feature of the hypothesis is that you can't predict what's meant to happen. In that case, he's equally good at predicting any outcome, so it's a perfectly uninformative hypothesis.
That was exactly my point. If he could make such a prediction, he would be correct. Since he can't...
Think of them as 3-year-olds who won't grow up until after the Singularity. Would you kick a 3-year-old who made a mistake?
Simply consider how likely it is that kicking them in the nuts will actually improve the situation.