wedrifid comments on Mysterious Answers to Mysterious Questions - Less Wrong

Post author: Eliezer_Yudkowsky 25 August 2007 10:27PM

Comment author: wedrifid 10 October 2012 01:37:28PM *  3 points [-]

If you know your attempt will fail in advance, you don't need to try very hard.

I actually don't believe this is true, for most mechanisms of "mysterious future knowledge", including most (philosophical) forms of time travel that don't allow change. Unless I had some specific details about the mechanism of prediction that changed the situation, I would go ahead and try very hard despite knowing it is futile. I know this is a total waste... it's as if I am just leaving $10,000 on the ground or something! (i.e. I assert that newcomblike reasoning applies.)

Comment author: MugaSofer 16 October 2012 03:14:59PM 0 points [-]

I don't understand this.

In Newcomb's problem, Omega uses its superintelligence to know what you will do. Since you know you cannot two-box successfully, you should one-box.

If Omega didn't know what you would do with a fair degree of accuracy, two-boxing would work, obviously.

Comment author: wedrifid 16 October 2012 11:48:38PM *  2 points [-]

In Newcomb's problem, Omega uses its superintelligence to know what you will do. Since you know you cannot two-box successfully, you should one-box.

In this case you are trying (futilely) so that, very crudely speaking, you are less likely to be in the futile situation in the first place.

If Omega didn't know what you would do with a fair degree of accuracy, two-boxing would work, obviously.

Yes, then it wouldn't be Newcomb's Problem. The important feature in the problem isn't boxes with arbitrary amounts of money in them. It is about interacting with a powerful predictor whose prediction has already been made and acted upon. See, in particular, the Transparent Newcomb's Problem (where you can outright see how much money is there). That makes the situation seem even more like this one.

Even closer would be the Transparent Newcomb's Problem combined with an Omega that is only 99% accurate. You find yourself looking at an empty 'big' box. What do you do? I'm saying you still one-box, taking the empty box. That makes it far less likely that you will be in a situation where you see an empty box at all.
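A rough way to make this concrete is a minimal expected-value sketch, under one possible formalization: the standard $1,000/$1,000,000 stakes, and the rule that Omega (right 99% of the time about your policy) fills the big box only if it predicts you would one-box even upon seeing it empty. Those modelling choices are illustrative assumptions, not something specified in the thread.

    # Minimal sketch of the Transparent Newcomb's Problem with a 99% accurate
    # predictor. Assumption (for illustration only): Omega fills the big box
    # iff it predicts you would one-box even after seeing the big box empty.

    BIG, SMALL = 1_000_000, 1_000   # standard Newcomb stakes
    ACCURACY = 0.99                 # how often Omega predicts your policy correctly

    def expected_value(one_box_if_full: bool, one_box_if_empty: bool) -> float:
        """Expected payoff of a policy: what you do on seeing a full / empty big box."""
        p_full = ACCURACY if one_box_if_empty else 1 - ACCURACY
        payoff_full = BIG if one_box_if_full else BIG + SMALL
        payoff_empty = 0 if one_box_if_empty else SMALL
        return p_full * payoff_full + (1 - p_full) * payoff_empty

    print(expected_value(True, True))    # always one-box:          990,000
    print(expected_value(True, False))   # two-box only when empty:  10,990
    print(expected_value(False, False))  # always two-box:           11,000

Under this toy model the policy of one-boxing even at an empty box comes out far ahead, which is the sense in which taking the empty box "makes it far less likely that you will be in a situation where you see an empty box at all".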

Comment author: MugaSofer 17 October 2012 04:35:05PM 0 points [-]

Being a person who avoids plane crashes makes it less likely that you will be told "you will die in a plane crash", yes.

But probability is subjective: once you have the information that you will die in a plane crash, your subjective estimate of this should vastly increase, regardless of the precautions you take.

Comment author: wedrifid 17 October 2012 10:42:58PM 1 point [-]

But probability is subjective: once you have the information that you will die in a plane crash, your subjective estimate of this should vastly increase, regardless of the precautions you take.

Absolutely. And I'm saying that you update that probability, perform a (naive) expected utility calculation that says "don't bother trying to prevent plane crashes", and then go ahead and try to avoid plane crashes anyway. Because in this kind of situation maximising expected utility is actually a mistake.

(To those who consider this claim bizarre without seeing the context, note that we are talking about situations such as those inside time loops.)

Comment author: MugaSofer 18 October 2012 08:09:16AM 0 points [-]

Because in this kind of situation maximising expected utility is actually a mistake.

So ... I should do things that result in less expected utility ... why?

Comment author: wedrifid 18 October 2012 09:28:20AM 0 points [-]

So ... I should do things that result in less expected utility ... why?

I am happy to continue the conversation if you are interested; I am trying to unpack just where your intuitions diverge from mine. I'd like to know what your choice would be when faced with Newcomb's Problem with transparent boxes and an imperfect predictor, upon noticing that the large box is empty. I take the empty large box, which is not a choice that maximises my expected utility and in fact gives me nothing, the worst possible outcome of that game. What do you do?

Comment author: MugaSofer 18 October 2012 12:36:02PM *  0 points [-]

Oh, so you pay counterfactual muggers?

All is explained.

Comment author: ArisKatsaris 18 October 2012 01:20:17PM 2 points [-]

The counterfactual mugging isn't that strange if you think of it as a form of entrance fee for a positive-expected-utility bet: a bet you happened to lose in this instance, but it is good to have the decision theory that allows you to enter it in the abstract.

The problem is that people aren't that good at understanding that your specific decision isn't separate from your decision theory in a specific context ... DecisionTheory(Context) = Decision. For your decision theory to be a winning decision theory in general, you may eventually have to accept some individual 'losing' decisions: that's the price to pay for having a winning decision theory overall.
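As a rough illustration of this "entrance fee" arithmetic (the $100 fee and $10,000 payout are the usual illustrative stakes for the counterfactual mugging, not figures from this thread):

    # Sketch of the counterfactual mugging as a positive-expected-utility bet.
    # Omega flips a fair coin: on tails it asks you for $100; on heads it pays
    # you $10,000 only if it predicts you are the kind of agent who pays on tails.

    FEE, PAYOUT, P_HEADS = 100, 10_000, 0.5

    def expected_value(pays_when_asked: bool) -> float:
        """Ex ante value of a policy, i.e. DecisionTheory(Context) evaluated
        before the coin is flipped."""
        heads_branch = PAYOUT if pays_when_asked else 0
        tails_branch = -FEE if pays_when_asked else 0
        return P_HEADS * heads_branch + (1 - P_HEADS) * tails_branch

    print(expected_value(True))   # 4950.0: the paying policy wins ex ante
    print(expected_value(False))  # 0.0: even though the payer 'loses' $100 on tails

The paying policy is worth $4,950 up front, even though in the tails branch you actually find yourself in it just costs you $100.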

Comment author: MugaSofer 19 October 2012 09:09:05AM 0 points [-]

I doubt that a decision theory that simply refuses to update on certain forms of evidence can win consistently.

Comment author: wedrifid 19 October 2012 01:51:33AM *  0 points [-]

Oh, so you pay counterfactual muggers?

If the coin therein is defined as a quantum one then yes, without hesitation. If it is a logical coin then things get complicated.

All is explained.

This is more ambiguous than you realize. Sure, the dismissive part came through, but it doesn't quite give your answer. I.e. not all people would give the same response to the counterfactual mugging as to Transparent Probabilistic Newcomb's, and you may notice that even I had to provide multiple caveats to give my own answer there, despite for the most part making the same kind of decision.

Let's just assume your answer is "Two-box!". In that case I wonder whether the problem is that you just outright two-box on pure Newcomb's Problem or whether you revert to CDT intuitions when the details get complicated. Assuming you win at Newcomb's Problem but two-box on the variant, I suppose that would indicate the problem is one of:

  • Being able to see the money, rather than merely being aware of it through abstract thought, switched you into a CDT-based 'near mode' thought pattern.
  • Changing the problem from a simplified "assume a spherical cow of uniform density" problem to one that actually allows uncertainty changes things for you. (It does for some.)
  • You want to be the kind of person who two-boxes when unlucky even though this means that you may actually not have been unlucky at all but instead have manufactured your own undesirable circumstance. (Even more people stumble here, assuming they get this far.)

The most generous assumption would be that your problem comes at the final option; that one is actually damn confusing. However, I note that your previous comments about always updating on the free money available and then following expected utility maximisation are only really compatible with outright two-boxing on simple Newcomb's Problem. In that case all the extra discussion here is kind of redundant!

I think we need a nice simple visual taxonomy of where people fall regarding decision-theoretic bullet-biting. It would save so much time in discussions like this. Then when a new situation comes up (like this one, dealing with time-travelling prophets) we could skip straight to, for example, "Oh, you're a Newcomb's One-Boxer but a Transparent Two-Boxer. To be consistent with that kind of implied decision algorithm, then yes, you would not bother with flight-risk avoidance."
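A toy sketch of what such a taxonomy might look like as a lookup table (the entries are my own illustrative guesses at how the positions line up, not anything established in the thread):

    # Hypothetical taxonomy: (choice on Newcomb's, choice on Transparent
    # Newcomb's when the big box is empty) -> implied answer to the
    # time-travelling-prophet question discussed above.

    taxonomy = {
        ("one-box", "one-box"): "try hard anyway, even 'knowing' the attempt is futile",
        ("one-box", "two-box"): "don't bother with flight-risk avoidance",
        ("two-box", "two-box"): "don't bother; straightforward CDT play",
    }

    print(taxonomy[("one-box", "two-box")])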

Comment author: Strange7 18 October 2012 01:51:24PM 0 points [-]

Two boxes, sitting there on the ground, unguarded, no traps, nobody else has a legal claim to the contents? Seriously? You can have the empty one if you'd like, I'll take the one with the money. If you ask nicely I might even give you half.

I don't understand what you're gaining from this "rationality" that won't let you accept a free lunch when an insane godlike being drops it in your lap.

Comment author: thomblake 18 October 2012 02:54:50PM 0 points [-]

I don't understand what you're gaining from this "rationality" that won't let you accept a free lunch when an insane godlike being drops it in your lap.

A million dollars.

Comment author: Strange7 18 October 2012 03:11:35PM 0 points [-]

No, you're not. You're getting an empty box, and hoping that by doing so you'll convince Omega to put a million dollars in the next box, or in a box presented to you in some alternate universe.

Comment author: Strange7 18 October 2012 04:27:12PM 2 points [-]

In the specific "infallible oracle says you're going to die in a plane crash" scenario, you might live considerably longer by giving the cosmos fewer opportunities to throw plane crashes at you.

Comment author: MugaSofer 19 October 2012 09:01:43AM 0 points [-]

I was assuming a time was given. wedrifid was claiming that you should avoid plane-crash-causing actions even if you know that the crash will occur regardless.

Comment author: CCC 19 October 2012 09:20:15AM 0 points [-]

If you know the time, then that becomes even easier to deal with: there's no particular need to avoid plane-crash opportunities that do not take place at that time. In fact, it then becomes possible to try to avoid it by other means, for example by faking your own plane-crash-related demise and leaving the fake evidence there for the time traveller to find.

If you know the time of your death in advance, then the means become important only at or near that time.

Comment author: wedrifid 19 October 2012 10:26:35AM 2 points [-]

Let's take this a step further. (And for this reply I will neglect all acausal timey-wimey manipulation considerations.)

If you know the time of your death, you have the chance to exploit your temporary immortality. Play Russian roulette for cash. Contrive extreme scenarios that will result in either significant gain or certain death. The details of ensuring that it is hard to be seriously injured without outright dying will take some arranging, but there is a powerful "fixed point in time and space" to be exploited.
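A quick back-of-the-envelope version of the Russian roulette exploit (the prize money and the dollar value placed on one's life are illustrative assumptions, not figures from the thread):

    # Expected value of one round of six-chamber Russian roulette, with and
    # without a reliable guarantee that your death happens later by other means.

    PRIZE = 1_000               # assumed payout for surviving a round
    VALUE_OF_LIFE = 10_000_000  # assumed dollar value placed on one's life
    P_CHAMBER = 1 / 6           # chance the loaded chamber comes up

    def expected_value(p_death: float) -> float:
        return (1 - p_death) * PRIZE - p_death * VALUE_OF_LIFE

    print(expected_value(P_CHAMBER))  # about -1,665,833: normally a terrible bet
    print(expected_value(0.0))        # +1,000 per round under 'temporary immortality'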

Comment author: MugaSofer 19 October 2012 10:27:16AM 0 points [-]

... True.

But you could still be injured by a plane crash or other mishap at another time, at standard probabilities.

And you should still charter your own plane to avoid collateral damage.

Comment author: wedrifid 19 October 2012 10:06:41AM 0 points [-]

Yes, you are correct. Or at least it is true that I am not trying to make a "manipulate time of death" point. Let's say we have been given a reliably predicted and literal "half life" that we know has already incorporated all our future actions.

Comment author: MugaSofer 19 October 2012 10:28:58AM 0 points [-]

OK.

So the odds of my receiving that message are the same as the odds of my death by plane, but having received it I can freely act to increase the odds of my plane-related death without repercussions. I think.

Comment author: shminux 17 October 2012 12:29:49AM -1 points [-]

Since you know you cannot two-box successfully, you should one-box.

Not if you mistakenly believe, as CDTers do, in human free will in a predictable (by Omega) universe.

Comment author: wedrifid 17 October 2012 12:42:31AM *  1 point [-]

Not if you mistakenly believe, as CDTers do, in human free will in a predictable (by Omega) universe.

"Free will" isn't incompatible with a predictable (by Omega) universe. I also doubt that all CDTers believe the same thing about human free will in said universe.