by [anonymous]

Related to: Some of the discussion going on here

In the LW version of Pascal's Mugging, a mugger threatens to simulate and torture people unless you hand over your wallet. Here, the problem is decision-theoretic: as long as you precommit to ignore all threats of blackmail and only accept positive-sum trades, the problem disappears.

However, in Nick Bostrom's version of the problem, the mugger claims to have magic powers and will give Pascal an enormous reward the following day if Pascal gives his money to the mugger. Because the utility promised by the mugger is so large, it outweighs Pascal's low credence that the mugger is telling the truth, making the expected value of the deal positive. From Bostrom's essay:

Pascal: Gee . . . OK, don’t take this personally, but my credence that you have these magic powers whereof you speak is about one in a quadrillion.
Mugger: Wow, you are pretty confident in your own ability to tell a liar from an honest man! But no matter. Let me also ask you, what’s your probability that I not only have magic powers but that I will also use them to deliver on any promise – however extravagantly generous it may seem – that I might make to you tonight?
Pascal: Well, if you really were an Operator from the Seventh Dimension as you assert, then I suppose it’s not such a stretch to suppose that you might also be right in this additional claim. So, I’d say one in 10 quadrillion.
Mugger: Good. Now we will do some maths. Let us say that the 10 livres that you have in your wallet are worth to you the equivalent of one happy day. Let’s call this quantity of good 1 Util. So I ask you to give up 1 Util. In return, I could promise to perform the magic tomorrow that will give you an extra 10 quadrillion happy days, i.e. 10 quadrillion Utils. Since you say there is a 1 in 10 quadrillion probability that I will fulfil my promise, this would be a fair deal. The expected Utility for you would be zero. But I feel generous this evening, and I will make you a better deal: If you hand me your wallet, I will perform magic that will give you an extra 1,000 quadrillion happy days of life.
Pascal: I admit I see no flaw in your mathematics.

As a result, says Bostrom, there is nothing rationally preventing Pascal from taking the mugger's offer, even though it seems intuitively unwise. Unlike the LW version, in this version the problem is epistemic and cannot be solved as easily.

Peter Baumann suggests that this isn't really a problem, because Pascal's probability that the mugger is honest should scale with the amount of utility he is being promised. However, as we see in the excerpt above, this isn't always the case: the mugger uses the same mechanism (in this case, magic) to procure any amount of utility, and so our credence is based on the probability that the mugger has access to this mechanism, not on the amount of utility he promises to deliver. As a result, I believe Baumann's solution fails.

So, my question is this: is it possible to defuse Bostrom's formulation of Pascal's Mugging? That is, can we solve Pascal's Mugging as an epistemic problem?


Loss aversion?

If there is a one in ten quadrillion chance of the mugger being honest, you are far more likely to have all your money taken from you by charlatans and then starve to death while homeless (thus losing your ability to even make the deal with an honest mugger!) than you are to meet an honest one before that.

Incentives?

If you give this mugger 10 bucks, then ANYONE IN THE WORLD can walk up to you and take 10 bucks from you as many times as they want using the same logic.

Even if you have all the money in the world, you are still 94.08% likely to run out of it before getting rewarded, given a one in a quadrillion chance of winning for every $10 entered into this lottery.
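A quick back-of-the-envelope check (a Python sketch; the bankroll figure is my guess at what reproduces the quoted percentage, since the commenter doesn't say what "all the money in the world" total they used):

```python
import math

P_WIN = 1e-15   # one-in-a-quadrillion chance of the reward per $10 handed over
STAKE = 10.0    # dollars lost per attempt

def prob_never_win(bankroll: float) -> float:
    """Chance of exhausting the bankroll without a single win."""
    n_plays = bankroll / STAKE
    # (1 - p)^n, computed in log-space to avoid floating-point trouble
    return math.exp(n_plays * math.log1p(-P_WIN))

# A bankroll of roughly $6.1e14 reproduces the commenter's 94.08% figure:
print(f"{prob_never_win(6.1e14):.2%}")  # -> 94.08%
```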

If you give this mugger 10 bucks, then ANYONE IN THE WORLD can walk up to you and take 10 bucks from you as many times as they want using the same logic.

Mhm, which leads to debate about cryonics. Is it in the reference class of 'magic' or 'speculative investment'?

Anyone in the world can walk up to you and say "I'll give you one hundred quadrillion utiles."

A cryonics organization also shows you that there are others who trust them, demonstrates some scientific feasibility for the process they're carrying out, and demonstrates that it is serious enough for at least one life insurance carrier to deal with it.

These seem like very different situations to me.

If they promise you a post-Singularity awakening, it's magic, otherwise it's speculative investment.

Why wait for the mugger to make his stupendous offer? Maybe he's going to give you this stupendous blessing anyway -- can you put a sufficiently low probability on that? Don't you have to give all your money to the next person you meet? But wait! Maybe instead he intends to inflict unbounded negative utility if you do that -- what must you do to be saved from that fate? Maybe the next rock you see is a superintelligent, superpowerful alien who, for its superunintelligible reasons, requires you to -- well, you get the idea.

The difference between this and the standard Mugger scenario is that by making his offer, the mugger promotes to attention the hypothesis that he presents. However, for the usual Bayesian reasons, this must at the same time promote many other unlikely hypotheses, such as the mugger being an evil tempter. I don't see any reason to suppose that the mugger's claim promotes any of these hypotheses sufficiently to distinguish the two scenarios. If you're vulnerable to Pascal's Mugger, you've already been mugged by your own decision theory.

If your decision theory has you walking through the world obsessed with tiny possibilities of vast utility fluctuations, like a placid-seeming vacuum state seething with colossal energies, then your decision theory is wrong. I propose the following constraint on utility-based rational decision theories:

The Anti-Mugging Axiom: For events E and current knowledge X, let P(E|X) = probability of E given X, U(E|X) = utility of E given X. For every state of knowledge X, P(E|X) U(E|X) is bounded over all events E.

The quantifiers here are deliberately chosen. For each X there must be an upper bound, but no bound is placed on the amount of probability-weighted utility that one might discover.
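To make that quantifier order explicit (my transcription into symbols, using the axiom's own notation):

```latex
% As stated: the bound B_X may depend on the state of knowledge X.
\forall X \;\exists B_X \;\forall E : \quad P(E \mid X)\, U(E \mid X) \le B_X

% Deliberately not required: a single bound uniform across all X.
\exists B \;\forall X \;\forall E : \quad P(E \mid X)\, U(E \mid X) \le B
```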

Well, it's been two-and-a-quarter years since that post, but I'll comment anyway.

Isn't the anti-mugging axiom inadequate as stated? Basically, you're saying the expected utility is bounded, but bounded by what? If the bound is, for example, equivalent to 20 happy years of life, you're going to get mugged until you can barely keep from starving. If it's less than 20 happy years of life, you probably won't bother saving for retirement (assuming I'm interpreting this correctly).

Another way of looking at it: let's say the bound is b; then U(E|X) < b/P(E|X) ∀X, ∀E. So an event you're sure will happen can have utility at most b, but an event you're much less confident about can have a vastly higher maximum utility. This seems unintuitive (though that is less of an issue than the one stated above).

Perhaps a stronger version is necessary. How about this: P(E|X) U(E|X) should tend to zero as U(E|X) tends to infinity. Or to put that with more mathematical clarity:

For any sequence of hypothetical events E_i, i=0, 1, ..., if the sequence of utilities U(E_i|X) tends to infinity then the sequence of expectations P(E_i|X) U(E_i|X) must tend to zero.

Or perhaps an even stronger "uniform" version: For every e > 0 there exists a utility u such that for every event E with U(E|X) > u, its expected utility P(E|X) U(E|X) is less than e.
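Side by side in symbols (again my transcription, for a fixed state of knowledge X):

```latex
% Sequential version: along any sequence of events whose utilities blow up,
% the expectations must vanish.
U(E_i \mid X) \to \infty \;\Longrightarrow\; P(E_i \mid X)\, U(E_i \mid X) \to 0

% Uniform version:
\forall \varepsilon > 0 \;\; \exists u \;\; \forall E : \quad
U(E \mid X) > u \;\Rightarrow\; P(E \mid X)\, U(E \mid X) < \varepsilon
```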

I called this an axiom, but it would be more accurate to call it a principle, something that any purported decision theory should satisfy as a theorem.

Hm, to be honest, I can't quite wrap my head around the first version. Specifically, we're choosing any sequence of events whatsoever, then if the utilities of the sequence tend to infinity (presumably equivalent to "increase without bound", or maybe "increase monotonically without bound"?), then the expected utilities have to tend to zero? I feel like there's not enough description of the early parts of the sequence. E.g. if it starts off as "going for a walk in nice weather, reading a mediocre book, kissing someone you like, inheriting a lot of money from a relative you don't know or care about as you expected to do, accomplishing something really impressive...", are we supposed to reduce probabilities on this part too? And if not, then do we start when we're threatened with 3^^^3 disutilons, or only if it's 3^^^^3 or more, or something?

I don't think the second version works without further restrictions either, although I'm not entirely sure. E.g. choose u = (3^^^^3)^2/e; then clearly u is monotonically decreasing in e, so by the time we get to e = 3^^^^3 we get (approximately) that "an event with utility around 3^^^^3 can have expected utility at most 3^^^^3", with no further restrictions (since all previous e-u pairs have higher u's, and therefore do not apply to this particular event), so that doesn't actually help us any.

Anyway, it took me something like 20 minutes to decide on that, which mostly suggests that it's been too long since I did actual math. I think the most reasonable and simple solution is to just have a bounded utility function (with the main question of interest being what sort of bound is best). There are definitely some alternative, more complicated, solutions, but we'd have to figure out in what (if any) ways they are actually superior.

Here's another variation on the theme. Pascal's Reformed Mugger comes to you and offers you, one time only, any amount of utility you ask for, but asks nothing in return.

If you believe him enough that u*P(you'll get u by asking for it) is unbounded, how much do you ask for?

Do you also have to consider -u*P(you'll get -u by asking for u)?

This is similar to the formulation I gave here, but I don't think your version works. You could construct a series of different sets of knowledge X(n) that differ only in that they have different numbers n plugged in, and a bounding function B(n) such that

for all n P(E|X(n))U(E|X(n)) < B(n), but
lim[n->inf] P(E|X(n))U(E|X(n)) = inf

Basically, the mugger gets around your bound by crafting a state of knowledge X for you.

I'm pretty sure the formulation given in my linked comment also protects against Pascal's Reformed Mugger.

Basically, the mugger gets around your bound by crafting a state of knowledge X for you.

This is giving too much power to the hypothetical mugger. If he can make me believe (I should have called X prior belief rather than prior knowledge) anything he chooses, then I don't have anything. My entire state of mind is what it is only at his whim. Did you intend something less than this?

One could strengthen the axiom by requiring a bound on P(E|X) U(E|X) uniform in both E and X. However, if utility is unbounded, this implies that there is an amount so great that I can never believe it is attainable, even if it is. A decision theory that a priori rules out belief in something that could be true is also flawed.

He doesn't get to make you believe anything he chooses; making you believe statements of the form "The mugger said X(n)" is entirely sufficient.

There would have to be statements X(n) such that the maximum over E of P(E|The mugger said X(n)) U(E|The mugger said X(n)) is unbounded in n. I don't see why there should be, even if the maximum over E of P(E|X) U(E|X) is unbounded in X.

There would have to be statements X(n) such that the maximum over E of P(E|The mugger said X(n)) U(E|The mugger said X(n)) is unbounded in n.

Yes, and that is precisely what I said causes vulnerability to Pascal's Mugging and should therefore be forbidden. Does your version of the anti-mugging axiom ensure that no such X exists, and can you prove it mathematically?

It does not ensure that no such X exists, but I think this scenario is outside the scope of your suggestion, which is expressed in terms of P(X) and U(X), rather than conditional probabilities and utilities.

What do you think of the other potential defect in a decision theory resulting from too strong an anti-mugging axiom: the inability to believe in the possibility of a sufficiently large amount of utility, regardless of any evidence?

Oh, so that's where the confusion is coming from; the probabilities and utilities in my formulation are conditional, I just chose the notation poorly. Since X is a function of type number=>evidence-set, P(X(n)) means the probability of something (which I never assigned a variable name) given X(n), and U(X(n)) is the utility of that same thing given X(n). Giving that something a name, as in your notation, these would be P(E|X) and U(E|X).

Being unable to believe in sufficiently large amounts of utility regardless of any evidence would be very bad; we need to be careful not to phrase our anti-mugging defenses in ways that would do that. This is a problem with globally bounded utility functions, for example. I'm pretty sure that requiring all parameterized statements to produce expected utility that does not diverge to infinity as the parameter increases, does not cause any such problems.

It seems to me that the heart of Pascal's mugging is the question of whether the universe is much bigger than I think it is. If there's a good chance of that, then I should prioritize exploration over exploitation (if I have an unbounded utility function). An agent that claims to be able to exploit the big universe doesn't really make the problem worse.


Given a few additional assumptions, such as the life of the agent being indefinite, I would agree. However, that doesn't give you a reason to reject the mugger's offer (I don't think), it just gives you an incentive to explore.

In the dialog you give, Pascal assigns a probability that the mugger will fulfill his promise without hearing what that promise is, then fails to update it when the promise is revealed. But after hearing the number "1000 quadrillion", Pascal would then be justified in updating his probability to something less than 1 in 1000 quadrillion.

Other known defenses against Pascal's mugging are bounded utility functions, and rounding probabilities below some noise floor to zero. Another strategy that might be less likely to carry adverse side effects would be to combine a sub-linear utility function with a prior that assigns statements involving a number N probability at most 1/N (and the Occamian prior does indeed do this).
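A toy comparison of that last combination (the numbers, the square-root utility, and the 1/N prior cap are all my illustrative choices):

```python
import math

def naive_expected_utility(n: float) -> float:
    """Linear utility with a fixed credence: the mugger wins by raising n."""
    prior = 1e-16                 # Pascal's fixed credence in the magic
    return prior * n              # diverges as the promise grows

def defended_expected_utility(n: float) -> float:
    """Sub-linear utility plus a prior capped at 1/N for claims naming N."""
    prior = min(1e-16, 1.0 / n)
    utility = math.sqrt(n)        # sub-linear in promised utils
    return prior * utility        # -> 0 as the promise grows

for n in (1e15, 1e18, 1e30):
    print(f"n={n:.0e}: naive={naive_expected_utility(n):.3g}, "
          f"defended={defended_expected_utility(n):.3g}")
```

Under the defended combination, inflating the promise actively hurts the mugger's case.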


In the dialog you give, Pascal assigns a probability that the mugger will fulfill his promise without hearing what that promise is, then fails to update it when the promise is revealed. But after hearing the number "1000 quadrillion", Pascal would then be justified in updating his probability to something less than 1 in 1000 quadrillion.

I think this might be it, but I'm not sure. Here is the key piece of the puzzle:

Mugger: Wow, you are pretty confident in your own ability to tell a liar from an honest man! But no matter. Let me also ask you, what’s your probability that I not only have magic powers but that I will also use them to deliver on any promise – however extravagantly generous it may seem – that I might make to you tonight?

Pascal: Well, if you really were an Operator from the Seventh Dimension as you assert, then I suppose it’s not such a stretch to suppose that you might also be right in this additional claim. So, I’d say one in 10 quadrillion.

This is why Pascal doesn't update based on the number 1000 quadrillion: he has already stated his probability for "mugger has magic powers" x "given that the mugger has magic, he will deliver on any promise", and the reciprocal of this probability is less than the utility the mugger claims he will deliver, so the expected value comes out positive. So I suppose we could say that he isn't justified in skipping the update, but I don't know how well-supported that claim would be.

Other known defenses against Pascal's mugging are bounded utility functions, and rounding probabilities below some noise floor to zero. Another strategy that might be less likely to carry adverse side effects would be to combine a sub-linear utility function with a prior that assigns statements involving a number N probability at most 1/N (and the Occamian prior does indeed do this).

Yes, those would definitely work; that's why Bostrom is careful to exclude them in his essay. What I was looking for was more of a fully general solution.

Suppose you say that the probability that the mugger has magical powers, and will deliver on any promise he makes, is 1 in 10^30. But then, instead of promising you quadrillions of days of extra life, the mugger promises to do an easy card trick. What's your estimate of the probability that he'll deliver? (It should be much closer to 0.8 than to 10^-30).

That's because the statement "the mugger will deliver on any promise he makes" carries with it an implied probability distribution over possible promises. If he promises to do a card trick, the probability that he delivers on it is very high; if he promises to deliver quadrillions of years of life, it's very low. When you made your initial probability estimate, you didn't know which promise he was going to make. After he reveals the details, you have new information, so you have to update your probability. And if that new information includes an astronomically large number, then your new probability estimate ought to be infinitesimally small in a way that cancels out that astronomically large number.
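A toy numerical version of that update (the 0.8 base rate and the 1/N falloff are my illustrative assumptions):

```python
def p_deliver(promised_utils: float) -> float:
    """Credence that the mugger delivers, conditional on the promise made."""
    p_card_trick = 0.8                    # an easy, mundane promise
    # Assumed prior: credence in delivering N utils falls off like 1/N.
    return min(p_card_trick, p_card_trick / promised_utils)

for promise in (1.0, 1e16, 1e18):         # card trick ... 1000 quadrillion days
    p = p_deliver(promise)
    print(f"promise={promise:.0e}  P(deliver)={p:.1e}  E[utils]={p * promise:.2f}")
```

The expected utility is pinned at 0.8 no matter how large the promise, which is exactly the cancellation described above.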

And if that new information includes an astronomically large number, then your new probability estimate ought to be infinitesimally small in a way that cancels out that astronomically large number.

Er, can you prove that? It doesn't seem at all obvious to me that magic power improbability and magic power utility are directly proportional. Any given computation's optimization power isn't bounded in one-to-one correspondence by its Kolmogorov complexity as far as I can see, because that computation can still reach into other computations and flip sign bits that cause extremely widespread effects without being very complex itself. If you think there's even a small chance that you're in a computation susceptible to intervention by probable but powerful computations like that, then it's not obvious that the improbability and potential utility cancel out.

Goddammit Less Wrong the above is a brilliant counterargument and no one realizes it. I hate all of you.

Sorry for not responding earlier; I had to think about this a bit. Whether the presence of astronomically large numbers can make you vulnerable to Pascal's Mugging seems to be a property of the interaction between the method you use to assign probabilities from evidence, and your utility function. Call the probability-assignment method P(X), which takes a statement X and returns a probability; and the utility function U(X), which assigns a utility to something (such as the decision to pay the mugger) based on the assumption that X is true.

P and U are vulnerable to Pascal's Mugging if and only if you can construct sets of evidence X(n), which differ only by a single number n, such that for any utility value u, there exists n such that P(X(n))U(X(n)) > u.

Now, I really don't know of any reason apart from Pascal's Mugging why utility function-predictor pairs should have this property. But being vulnerable to Pascal's Mugging is such a serious flaw, I'm tempted to say that it's just a necessary requirement for mental stability, so any utility function and predictor which don't guarantee this when they're combined should be considered incompatible.
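A minimal sketch of a vulnerable pair in that sense (my own toy construction): a predictor that penalizes a claim only through the description length of the number it names, combined with linear utility, diverges, because compact notation lets n outrun its own description.

```python
def P(description: str) -> float:
    """Toy predictor: probability shrinks with DESCRIPTION length, not with n."""
    return 2.0 ** (-8 * len(description))   # ~1 byte of improbability per character

def U(n: float) -> float:
    """Linear utility in the number of promised utils."""
    return float(n)

# X(n) differs only in the number named; expected utility grows without bound.
for desc, n in [("10^6", 1e6), ("10^15", 1e15), ("10^100", 1e100)]:
    print(desc, P(desc) * U(n))
```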


But being vulnerable to Pascal's Mugging is such a serious flaw, I'm tempted to say that it's just a necessary requirement for mental stability, so any utility function and predictor which don't guarantee this when they're combined should be considered incompatible.

Is the wording of this correct? Did you mean to say that vulnerability to Pascal's mugging is a necessary requirement for mental stability or the opposite?

No, I meant to say that immunity to Pascal's mugging is required.

I'm interpreting your stance as "the probability that your hypothesis matches the evidence is bounded by the utility it would give you if your hypothesis matched the evidence." Reductio ad absurdum: I am conscious. Tautologically true. Being conscious is to me worth a ton of utility. I should therefore disbelieve a tautology.

u is the integer returned by U for an input X? Just wanted to make sure; I'm crafting my response.

Edit: actually, I have no idea what determines u here, 'cuz if u is the int returned by U then your inequality is tautological. No?

Hmm, apparently that wasn't as clearly expressed as I thought. Let's try that again. I said that a predictor P and utility function U are vulnerable to Pascal's mugging if

exists function X of type number => evidence-set
such that X(a) differs from X(b) only in that one number appearing literally, and
forall u exists n such that P(X(n))U(X(n)) > u

The last line is the delta-epsilon definition for limits diverging to infinity. It could be equivalently written as

lim[n->inf] P(X(n))U(X(n)) = inf

If that limit diverges to infinity, then you could scale the probability down arbitrarily far and the mugger will just give you a bigger n. But if it doesn't diverge that way, then there's a maximum amount of expected utility the mugger can offer you just by increasing n, and the only way to get around it would be to offer more evidence that wasn't in X(n).

(Note that while the limit can't diverge to infinity, it is not required to converge. For example, the Pebblesorter utility function, U(n pebbles) = if(isprime(n)) 1 else 0, does not converge when combined with the null predictor P(X)=0.5.)
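A concrete rendering of that example (the trial-division helper is mine):

```python
def isprime(n: int) -> bool:
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

P_NULL = 0.5  # the null predictor: every statement gets probability 1/2

# Expected utility of "n pebbles" oscillates between 0.0 and 0.5 forever:
# bounded (so no mugging), but never convergent.
print([P_NULL * isprime(n) for n in range(2, 12)])
# -> [0.5, 0.5, 0.0, 0.5, 0.0, 0.5, 0.0, 0.0, 0.0, 0.5]
```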

(The reductio you gave in the other reply does not apply, because the high-utility statement you gave is not parameterized, so it can't diverge.)


That's because the statement "the mugger will deliver on any promise he makes" carries with it an implied probability distribution over possible promises.

Agreed, but that's not the whole picture. Let's break this down a slightly different way: we know that p(mugger has magic) is a very small number, and as you point out, p(mugger will deliver on any promise) is a distribution, not a number. But we aren't just dealing with p(mugger will deliver on any promise); we are dealing with the conditional probability p(mugger will deliver on any promise|mugger has magic) times p(mugger has magic). Though this might be a distribution based on what exactly the mugger is promising, it is still different from p(mugger will deliver on any promise), and it might still allow for a Pascal's Mugging.

This is why the card trick example doesn't work: p(mugger performs card trick) is indeed very high, but what we are really dealing with is p(mugger performs card trick|mugger has magic) times p(mugger has magic), so our probability that he does a card trick using actual magic would be extremely low.

One of the probabilities that's being used here is actually the probability that something is possible, and I think that needs to be taken into account.

Let's structure this as a game, and allow you infinite tries to win the prize. You've set the probability of winning at some insanely small fraction of a percent, but that doesn't matter because you have infinite time. Given enough plays, you should win the prize, right?

Not necessarily. Let's say that if the game is winnable at all, your chance of winning is 99.99%, so the real uncertainty is whether the game is winnable in the first place. In that situation, you could play the game infinitely and still lose. In the same way, you could take up this offer infinite times and never win 1,000 quadrillion happy days.

Because of this, if one of the probabilities is that something has a low chance of existing, you shouldn't play the game.
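In symbols (my formalization of the point): with unlimited plays, the per-play odds wash out, but the possibility question does not.

```latex
P(\text{ever win})
  = P(\text{winnable}) \cdot
    \underbrace{P(\text{ever win} \mid \text{winnable})}_{\to\, 1 \text{ as plays} \to \infty}
  \;\le\; P(\text{winnable})
```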

Peter Baumann suggests that this isn't really a problem because Pascal's probability that the mugger is honest should scale with the amount of utility he is being promised.

If you have a nonzero probability that the mugger can produce arbitrary amounts of utility, the mugger just has to offer you enough to outweigh the smallness of this probability, which is fixed. So this defense doesn't work.

Edit: I guess you already said this.


Right, that was pretty much my counter-argument against his argument.

The counter-counter argument is then that you should indeed assign a zero probability to anyone's ability to produce arbitrary amounts of utility.

Yes, I know it is rhetorically claimed that 0 and 1 are not probabilities. I suggest that this example refutes that claim. You must assign zero probability to such things, otherwise you get money-pumped, and lose.


Well, as someone else suggested, you could just ignore all probabilities below a certain noise floor. You don't necessarily have to assign 0 probability to those things, you could just make it a heuristic to ignore them.

All that does is adopt a different decision theory but not call it that, sidestepping the requirement to formalise and justify it. It's a patch, not a solution, like solving FAI by saying we can just keep the AI in a box.