Comment author: common_law 25 July 2014 06:02:34PM 1 point [-]

A critical mistake in the lead analysis is the false assumption that where there is a causal relation between two variables, they will be correlated. This ignores that causes often cancel out. (Of course, not perfectly, but enough to make raw correlation a generally poor guide to causality.)

I think you have a fundamentally mistaken epistemology, gwern: you don't see that correlations only support causality when they are predicted by a causal theory.

Comment author: common_law 10 July 2014 08:16:08PM 4 points [-]

“how else could this correlation happen if there’s no causal connection between A & B‽”

The main way to correct for this bias toward seeing causation where there is only correlation follows from this introspection: be more imaginative about how it could happen (other than by direct causation).

[The causation bias (does it have a name?) seems to express the availability bias. So, the corrective is to increase the availability of the other possibilities.]

Comment author: Luke_A_Somers 28 May 2014 04:16:44PM 1 point [-]

It seems to me that you've pretty much solved the literal interpretation of the hypothetical, putting into words what everyone was already thinking.

The more relevant one is where you generated the probability and utility estimates yourself and are trying to figure out what to do about it.

Comment author: common_law 31 May 2014 11:29:28PM 0 points [-]

I intended nothing more than to solve the literal interpretation. This isn't my beaten path, and I don't intend to write more on the subject beyond speculating about why an essentially trivial problem of "literal interpretation" has resisted articulation.

Comment author: Strilanc 28 May 2014 03:02:09AM 15 points [-]

"Solving" Pascal's Mugging involves giving an explicit reasoning system and showing that it makes the right decision.

It's not enough to just say "your confidence has to go down more than their claimed reward goes up". That part is obvious. The hard part is coming up with actual explicit rules that do that. Particularly ones that don't fall apart in other situations (e.g. the decision system "always do nothing" can't be pascal-mugged, but has serious problems).

Another thing not addressed here is that the mugger may be a hypothetical. For example, if the AI generates hypotheses where the universe affects 3^^^^3 people then all decisions will be dominated by these hypotheses because their outcomes outweigh their prior by absurd margins. How do you detect these bad hypotheses? How do you penalize them without excluding them? Should you exclude them?

Please give a more concrete situation with actual numbers and algorithms.

Comment author: common_law 31 May 2014 06:26:54PM 1 point [-]

I think you'll find the argument is clear without any formalization if you recognize that it is NOT the usual claim that confidence goes down. Rather, it's that the confidence falls below its contrary.

In philH's terms, you're engaging in pattern matching rather than taking the argument on its own terms.

Comment author: DanielLC 28 May 2014 01:09:27AM 2 points [-]

What you present is the basic fallacy of Pascal's Mugging: treating the probabilities of B and of C as independent of the fact that a threat of a given magnitude is made.

The prior probability of X is 2^-(K-complexity of X). There are more possible universes where they carry out smaller threats, so the K-complexity is lower. What I showed is that, even if there were only a single possible universe where the threat was carried out, that universe is still simple enough (its K-complexity small enough) that it's worth paying the threatener.

No commenters have engaged the argument!

You gave a vague argument. Rather than giving a vague counterargument along the same lines, I just ran the math directly. You can argue that P(C|E) decreases all you want, but since I found that the actual value is still too high, it clearly doesn't decrease fast enough.

If you want the vague counterargument, it's simple: The probability that it's a lie approaches unity. It just doesn't approach it fast enough. It's a heck of a lot less likely that someone who threatens 3^^^3 lives is telling the truth than someone who's threatening one. It's just not 3^^^3 times less likely.

Comment author: common_law 31 May 2014 06:24:58PM *  0 points [-]

What you're ignoring is the comparison probability. See philH's comment.

Comment author: philh 27 May 2014 06:12:53PM 2 points [-]

So I was pattern matching this as an argument for "why the probability decreases more than we previously acknowledged, as the threat increases", but that isn't what you're going for. Attempting to summarize it in my own words:

There are three relevant events: (A) the threat will not happen; (B) not giving in to blackmail will trigger the threat; (C) giving in to blackmail will trigger the threat (or worse). As the threat increases, P(B) and P(C) both decrease, but P(C) begins to dominate P(B).

Is this an accurate summary?

Comment author: common_law 27 May 2014 06:34:08PM *  0 points [-]

It's accurate. But it's crucial, of course, to see why P(C) comes to dominate P(B), and I think this is what most commenters have missed. (But maybe I'm wrong about that; maybe it's because of pattern matching.) As the threat increases, P(C) comes to dominate P(B) because the threat, when large enough, is evidence against the threatened event occurring.

Comment author: Slider 27 May 2014 04:56:57PM 0 points [-]

This solution seems to implicitly assume that all real needs are pretty moderate; that is, that the only plausible reason to state a meganumber-class high utility is to beat someone else's number. A matrix lord could have a different sense of time in their "home dimension", and for all I know, what I do might be the culmination of a bet involving the creation and destruction of whole classes of universes.

Also, why should what the mugger says have anything to do with how big a threat the conversation poses? If someone came up to me and said "give me $5 or I will do everything in my power to screw with you", without saying how much screwage would happen or what their power level is, it wouldn't seem that problematic. Why would the threat be more potent for stating that information? It seems stating it should limit the threat it poses. It would appear to me that what people say only has relevance within the limits of how I have established their agenthood, i.e. it maxes out. If I were really on the lookout for freaky eldritch gods, I should worry that the sky would fall on me; i.e., there is no need for the sky (or the mighty) to state any threat for them to pose one.

Comment author: common_law 27 May 2014 05:34:50PM *  0 points [-]

That is, [it is assumed that] the only plausible reason to state a meganumber-class high utility is to beat someone else's number.

It's the only reason that doesn't cancel out, because it's the only one about which we have any knowledge. The higher the number, the more likely it is that the mugger is playing the "pick the highest number" game. You can imagine scenarios in which picking the highest number has some unknown significance, but they cancel out, in the same way that Pascal's God is canceled by the possibility of contrary gods.

Also, why should what the mugger says have anything to do with how big a threat the conversation poses?

Same question (formally) as why should failure to confirm a theory be evidence against it.

Comment author: DanielLC 27 May 2014 04:22:48AM 1 point [-]

Short version:

Even just considering the a priori probability of them honestly threatening you before you even take into account that they threatened you, it's still enough to get Pascal mugged. The probability that they're lying increases, but not fast enough.

Long version, with math:

Consider three possibilities:

A: Nobody makes the threat

B: Someone dishonestly makes the threat

C: Someone honestly makes the threat

Note that A, B, and C form a partition. That is to say exactly one of them is true. Technically it's possible that more than one person makes the threat, and only one of them is honest, so B and C happen, so let's say that the time and place of the threat are specified as well.

C is highly unlikely. If you tried to write a program simulating that, it would be long. It would still fit within the confines of this universe. Let's be hyper-conservative, and call it K-complexity of a googol.

There is also a piece of evidence E, consisting of someone making the threat. E is consistent with B and C, but not A. This means:

P(A&E) = 0, P(B&E) = P(B), P(C&E) = P(C)

Calculating from here:

P(C|E) = P(C&E)/P(E)

= P(C)/P(E)

≥ P(C)

= 2^-googol

So the probability of the threatener being honest is at least 2^-googol. Since the disutility of not paying him if he is honest is more than 2^googol times the utility of paying him, it's worthwhile to pay.
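The arithmetic above can be checked in log space, since both 2^-googol and the threatened disutility overflow ordinary floats. A minimal sketch of the comparison, where the specific disutility of 2^(googol+10) utils and the 5-util cost of paying are my assumptions for illustration, not the comment's:

```python
import math

# Toy check of the bound above, done in log2 space because the
# quantities involved overflow floating point.
GOOGOL = 10**100

log2_p_honest = -GOOGOL        # P(C|E) >= 2^-googol, per the derivation
log2_disutility = GOOGOL + 10  # assumed: the threat is worth 2^(googol+10) utils

# log2 of the expected loss from refusing to pay: log2(P * harm)
log2_expected_loss = log2_p_honest + log2_disutility  # = 10, i.e. 2^10 utils

cost_of_paying = 5  # assumed: paying costs 5 utils
print(log2_expected_loss > math.log2(cost_of_paying))  # True: pay up
```

The point is that the huge negative exponent on the probability and the huge positive exponent on the harm cancel exactly, leaving an ordinary-sized expected loss.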

Comment author: common_law 27 May 2014 05:14:48PM 2 points [-]

What you present is the basic fallacy of Pascal's Mugging: treating the probabilities of B and of C as independent of the fact that a threat of a given magnitude is made.

Your formalism, in other words, doesn't model the argument. The basic point is that Pascal's Mugging can be solved by the same logic that succeeds against Pascal's Wager. Pascal concluded that believing in god A was instrumentally rational only by ignoring that there might, with equal consequences, be a god B instead who hated people who worshiped god A.

Pascal's Mugging ignores that giving to the mugger might make the threatened calamity more likely if you accede than if you don't. The inflection point is where the mugger's making the claim becomes evidence against it rather than for it.

No commenters have engaged the argument!

Comment author: solipsist 27 May 2014 04:07:53AM 4 points [-]

This article would benefit from working through a concrete example.

If you become super-exponentially more skeptical as the mugger invokes super-exponentially higher utilities, how do you react if the mugger tears the sky asunder?

Comment author: common_law 27 May 2014 04:51:54PM 0 points [-]

You become less skeptical, but that doesn't affect the issue presented, which concerns only the evidential force of the claim itself.

If someone tears the sky asunder, you will be more inclined to believe the threat. But after a point of increasing threat, increasing it further should decrease your expectation.

Pascal's Mugging Solved

0 common_law 27 May 2014 03:28AM

Since Pascal’s Mugging is well known on LW, I won’t describe it at length. Suffice it to say that a mugger tries to blackmail you by threatening enormous harm via a completely mysterious mechanism. If the harm is great enough, a sufficiently large threat eventually dominates doubts about the mechanism.

I have a reasonably simple solution to Pascal’s Mugging. In four steps, here it is:

  1. The greater the harm, the more likely the mugger is trying to pick a greater threat than any competitor picks (we’ll call that maximizing).
  2. As the amount of harm threatened gets larger, the probability that the mugger is maximizing approaches unity.
  3. As the probability that the mugger is engaged in maximizing approaches unity, the likelihood that the mugger’s claim is true approaches zero.
  4. The probability that a contrary claim is true—that contributing to the mugger will cause the feared calamity—exceeds the probability that the mugger’s claim is true when the probability that the mugger is maximizing increases sufficiently.
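The four steps above can be given a numeric illustration. Everything in this sketch is my own assumption, not the post's: the functional form chosen for P(maximizing), the base rate on the mugger's claim, and the small constant probability for the contrary claim. The only feature that matters is the crossover:

```python
import math

def p_maximizing(H):
    # Step 2 (assumed form): approaches 1 as the claimed harm H grows.
    return 1 - 1 / math.log10(H)

def p_claim_true(H):
    # Step 3 (assumed form): shrinks toward 0 as P(maximizing) approaches 1.
    return (1 - p_maximizing(H)) * 1e-6

def p_contrary_true(H):
    # Step 4 (assumed): paying might itself cause the calamity, with a
    # small probability that does not depend on the claimed harm.
    return 1e-9

for H in (10**3, 10**100, 10**10000):
    print(p_claim_true(H) > p_contrary_true(H))
# Prints True, True, False: once H is large enough, the contrary claim
# dominates and paying the mugger looks worse than refusing.
```

Under these assumptions the expected value of paying flips sign at the crossover, which is the post's step 4 made concrete.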

Pascal’s Mugging induces us to look at the likelihood of the claim in abstraction from the fact that the claim is made. The paradox can be solved by breaking the probability that the mugger’s claim is true into two parts: the probability of the claim itself (its simplicity) and the probability that the mugger is truthful. Even if the probability of magical harm doesn’t decrease when the amount of harm increases, the probability that the mugger is truthful decreases continuously as the amount of harm predicted increases.

Solving the paradox in Pascal’s Mugging depends on recognizing that, if the logic were sound, it would engage muggers in a game where they try to pick the highest practicable number to represent the amount of harm. But this means that the higher the number, the more likely they are to be playing this game (undermining the logic believed sound).

But solving Pascal’s Mugging also depends on recognizing that the evidence that the mugger is maximizing can lower the probability below that of the same harm when no mugger has claimed it. It involves recognizing that, when it is almost certain that the claim is motivated by something unrelated to the claim’s truth, the claim can become less believable than if it hadn’t been expressed. The mugger’s maximizing motivation is evidence against his claim.

If someone presents you with a number representing the amount of threatened harm (3^3^3..., continued for as long as a printer allowed to run for, say, a decade can print), you should think this outcome less probable than if no one had presented you with the tome. While people are generally more likely to be telling the truth than lying, if you are sufficiently sure they are lying, their testimony counts against their claim.
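That last point can be put as a two-line Bayes computation. All numbers here are my assumptions for illustration; the key one is that a maximizing mugger asserts the claim almost regardless of its truth, while an honest warner speaks only sometimes:

```python
# Bayes sketch: when an assertion is made mostly for reasons unrelated
# to its truth, hearing it can leave the claim less credible than silence.
p_harm = 1e-6              # assumed prior that the calamity occurs
p_claim_if_harm = 0.5      # assumed: an honest warner speaks half the time
p_claim_if_no_harm = 0.9   # assumed: a maximizing mugger speaks regardless

p_claim = p_claim_if_harm * p_harm + p_claim_if_no_harm * (1 - p_harm)
posterior = p_claim_if_harm * p_harm / p_claim
print(posterior < p_harm)  # True: the testimony lowered the probability
```

Whenever the assertion is likelier from a speaker whose threat is empty than from one whose threat is real, Bayes' rule makes the assertion itself negative evidence.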

The proof is the same as the proof of the (also counter-intuitive) proposition that failure to find (some definite amount of) evidence for a theory constitutes negative evidence. The mugger has elicited your search for evidence, but because of the mugger’s clear interest in falsehood, you find that evidence wanting.
