# khafra comments on On accepting an argument if you have limited computational power. - Less Wrong

22 points, 11 January 2012 05:07PM



Comment author: 12 January 2012 01:59:44PM * 3 points

The point of Pascal's Mugging is the tension between a tiny probability of a really big harm and a high probability of a very small harm. If your mugger makes testable predictions about his power to carry out the really big harm, that turns the tiny probability into a reasonably large probability, and makes it a non-Pascalian hostage situation.
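A minimal sketch of that update, with purely illustrative numbers (the one-in-10^12 prior and the fraud's one-in-10^15 fake-rate below are made up):

```python
from fractions import Fraction

def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule: P(claim | evidence)."""
    num = prior * p_evidence_if_true
    return num / (num + (1 - prior) * p_evidence_if_false)

# Made-up numbers: the mugger's claim starts at a one-in-10^12 prior,
# and he performs a feat a fraud could fake only one time in 10^15.
p = posterior(Fraction(1, 10**12), Fraction(1, 1), Fraction(1, 10**15))
print(float(p))  # ~0.999: no longer Pascalian, just a hostage situation
```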

Comment author: 12 January 2012 05:36:33PM 0 points

When confronted with the highly speculative claims so beloved by philosophers, string theorists, and certain AI apologists, my battle cry is "testable predictions!". If one argues in favor of a model that predicts a tiny probability of a really big harm, they had better provide a testable justification of that model. In the case of Pascal's mugging, I have suggested a simple way to test whether the model should be taken seriously. Such a test would have to be constructed specifically for each individual model, of course. If all you say is "I can't prove anything, but if I'm right, it'll be really bad", I yawn and move on.

Comment author: 12 January 2012 06:17:35PM 2 points

> If all you say is "I can't prove anything, but if I'm right, it'll be really bad", I yawn and move on.

This is the normal response, even here at LW; I think there's a popular misperception that LW doctrine is to give Pascal's Mugger the money. The point of the exercise is to examine the thought processes behind that intuitive, obviously correct "no" when it appears, on the surface, to be the lower-expected-utility option. After all, we don't want to build an AI that can be victimized by Pascalian muggers.

One popular option is the one you picked: simply ignore probabilities below a certain threshold, whatever the payoff. Another is to discount by the algorithmic complexity, or by the "measure," of the hostages. Yet another is to observe that, if 3^^^^3 people exist, a random person's (your) chances of being able to affect all the rest in a life-and-death way have to be scaled by 1/3^^^^3. Yet another is that, in a world where things like this happen, a dollar has near-infinite utility. Komponisto suggested that the Kolmogorov complexity of 3^^^^3 deaths, or units of disutility, is much higher than that of the number 3^^^^3 itself; so any such problem is inherently broken.
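As a rough sketch, here is how the first and third of those options change the expected-loss arithmetic (1e15 stands in for the payoff, since 3^^^^3 won't fit in any numeric type, and the 1e-10 figures are illustrative):

```python
def naive(p, victims):
    # Unmodified expected loss, in lives: pays the mugger for big enough claims.
    return p * victims

def thresholded(p, victims, floor=1e-10):
    # Option 1: ignore any probability below a fixed floor, whatever the payoff.
    return 0.0 if p < floor else p * victims

def leverage_penalized(p, victims):
    # Option 3: among `victims` people, your prior odds of being the single
    # person whose choice decides all their fates scale as 1/victims,
    # which exactly cancels the size of the threatened payoff.
    return (p / victims) * victims  # ~p, independent of the payoff

print(naive(1e-10, 1e15))               # 100000.0 expected deaths: pay up
print(thresholded(1e-11, 1e15))         # 0.0: below the floor, ignore
print(leverage_penalized(1e-10, 1e15))  # ~1e-10: ignore
```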

Of course, if you're not planning to build an optimizing agent, your "yawn and move on" response is fine. That's what the problem is about, not signing up for cryonics or donating to SI or whatever (the proponents of the last two argue for relatively large probabilities of extremely large utilities).

Comment author: 12 January 2012 06:39:32PM 1 point

To possibly expand on khafra's point, Pascal's Mugging is basically a computational problem. An expected utility maximizer, given a reward on the order of 3^^^3, would need a probability of less than 1/3^^^3 to discount it. But given limited computational resources and limited knowledge, there isn't an obvious algorithm that both calculates expected utility properly in normal cases, and actually arrives at a probability of 1/3^^^3 in the case of Pascal's Mugging (one might need more information than is in the universe to justify a probability that small).
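A toy version of that arithmetic in exact fractions (3^^^3 itself cannot even be written down, so 10^100 stands in for it, and the $2,500-per-life opportunity cost is a made-up figure):

```python
from fractions import Fraction

def should_pay(p_threat, victims, dollars=5, lives_per_dollar=Fraction(1, 2500)):
    """Naive expected-utility comparison, measured in expected lives lost."""
    loss_if_refuse = p_threat * victims
    loss_if_pay = dollars * lives_per_dollar  # what the $5 could have done
    return loss_if_refuse > loss_if_pay

victims = 10**100  # stand-in for 3^^^3
print(should_pay(Fraction(1, 10**50), victims))   # True: "negligible" p still pays
print(should_pay(Fraction(1, 10**104), victims))  # False: only p far below 1/victims refuses
```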

Comment author: 12 January 2012 07:31:32PM 0 points

In the "least convenient possible world" one would reduce the number of people to something that does not stretch the computational limits. I believe that my argument still holds in that case.

Comment author: 12 January 2012 07:51:17PM 0 points

I'm confused. Is your invocation of LCPW supposed to indicate something that would solve this particular decision theory problem? If so, can you provide an algorithm that will successfully maximize expected utility in general and not fail in the case of problems like Pascal's Mugging?

Comment author: 12 January 2012 09:46:15PM 0 points

Since 3^^^3 is infeasible, suppose the mugger claims to simulate and kill "only" a quadrillion humans. The number is still large enough to overload one's utility if you assign any credence to the claim. I am no expert in decision theory, but regardless of the exact claim, if the dude refuses to credibly simulate even an amoeba, your decision is simple: ignore and move on. Please feel free to provide an example of Pascal's mugging where this approach (extraordinary claims require extraordinary evidence) fails.

Comment author: 12 January 2012 11:00:36PM * 2 points

Pascal's mugging only works if after some point your estimated prior for someone's ability to cause utilitarian losses of size n decreases more slowly than n increases; otherwise, claims of extravagant consequences make the mugging less likely to succeed as they grow more extravagant. "Magic powers from outside the Matrix" fill that role in the canonical presentation, since while the probability of that sort of magic existing is undoubtedly extremely small we don't have any good indirect ways of estimating its probability relative to its utilitarian implications, and we can't calculate it directly for the reasons thomblake gave a few comments up.
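That first sentence can be made concrete with a toy power-law prior: if P(someone can cause a loss of size n) falls off as n^(-alpha), the expected loss n * P(n) grows without bound for alpha < 1 and shrinks for alpha > 1 (all constants here are illustrative):

```python
def expected_loss(n, alpha, c=1e-6):
    # Toy prior: P(mugger can cause a loss of size n) = c * n**(-alpha)
    return n * (c * n**(-alpha))

for n in (1e6, 1e12, 1e18):
    print(n, expected_loss(n, 0.9), expected_loss(n, 1.1))
# alpha=0.9: expected loss grows with n   -> escalating the threat pays off
# alpha=1.1: expected loss shrinks with n -> bigger claims backfire
```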

A quadrillion humans, however, don't fit the bill. We can arrive at a reasonable estimate for what it'd take to run that kind of simulation, and we can certainly calculate probabilities that small by fairly conventional means: there's a constant factor here that I have no idea how to estimate, but 1 * 10^-15 is only about eight sigma from the mean on a standard normal distribution if I got some back-of-the-envelope math right. I'd feel quite comfortable rejecting a mugging of that form as having too little expected damage to be worth my time.
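That back-of-the-envelope figure checks out; the one-tailed normal area beyond eight sigma can be computed from the complementary error function:

```python
import math

def upper_tail(z):
    # P(Z > z) for a standard normal: erfc(z / sqrt(2)) / 2
    return math.erfc(z / math.sqrt(2)) / 2

print(upper_tail(8))  # ~6.2e-16, i.e. on the order of 10^-15
```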

Comment author: 13 January 2012 02:55:31AM 0 points

I must be missing something. To me, a large number that does not require more processing power/complexity than the universe can provide is still large enough. TBH, even 10^15 looks too large for me to care; either the mugger can provide reasonable evidence or not, and that's all that matters.

Comment author: 13 January 2012 03:27:45AM * 1 point

If the mugger can provide reasonable evidence of his claims, it's not a decision-theoretically interesting problem; instead it becomes a straightforward, if exotic, threat. If the claim's modest enough that we can compute its probability by standard means, it becomes perfectly normal uncreditable rambling and stops being interesting from the other direction. It's only interesting because of the particular interaction between our means of updating probability values and a threat so fantastically huge that the expected loss attached to it can't be updated into neutral or negative territory by observation.

Comment author: 13 January 2012 04:26:09AM 0 points

I guess that makes some philosophical sense. Not connected to any real-life decision making, though.

Comment author: 12 January 2012 10:02:29PM 0 points

You can assign credence to the claim and still assign little enough that a quadrillion humans won't overload it. I think the claim to be able to simulate a quadrillion humans is a lot more probable than the claim to be able to simulate 3^^^3 (you'd need technology that almost certainly doesn't exist, but not outside-the-Matrix powers), but I'd still rate it as being so improbable as to only account for a tiny fraction of an expected death.

Comment author: 12 January 2012 10:22:11PM 0 points

I'm settling for just one quadrillion to avoid dealing with the contingency of "3^^^3 is impossible because complexity". The requirement of testability is not affected by the contingency.

Comment author: 13 January 2012 02:58:25AM 0 points

If you assign the threat a probability of, say, 10^-20, the mugger is extorting considerably more dead children from you than you should expect to die if you don't comply.
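The arithmetic behind that claim, using a hypothetical ~$2,500-per-life figure for what the $5 could otherwise do:

```python
p_threat = 1e-20
victims = 1e15                         # a quadrillion simulated humans
deaths_if_refuse = p_threat * victims  # 1e-05 expected deaths
deaths_if_pay = 5 / 2500               # 2e-03: lives the $5 could have saved
print(deaths_if_refuse < deaths_if_pay)  # True: paying costs more lives
```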

Comment author: 13 January 2012 04:24:24AM * -3 points

I don't assign a positive probability until I see some evidence. Not in this case, anyway.

Comment author: 12 January 2012 06:20:34PM 0 points

And in the least convenient worlds?

Comment author: 12 January 2012 07:21:57PM 0 points

> So I recommend: limit yourself to responses of the form "I completely reject the entire basis of your argument" or "I accept the basis of your argument, but it doesn't apply to the real world because of contingent fact X." If you just say "Yeah, well, contingent fact X!" and walk away, you've left yourself too much wiggle room.

Which contingent fact X do you mean?

Comment author: 12 January 2012 07:29:26PM 0 points

That you can demand testing in many real-world scenarios; it's a heuristic, not one that's always usable.

Or do you have a principled decision theory in mind, in which testing is a key modification to the expected-value equations and which defuses the mugging?

Comment author: 12 January 2012 09:52:44PM 0 points

> That you can demand testing in many real-world scenarios; it's a heuristic, not one that's always usable.

As a natural scientist, I would refuse to accept untestable models. Feel free to point out where this fails in any scenario that matters.

Comment author: 12 January 2012 10:43:34PM 0 points

<insert standard anti-logical positivism argument like 'how do you test unreproducible events like "human history"?'>

Comment author: 12 January 2012 10:00:14PM 0 points

How do you determine if the model is testable? What if there is in principle a test, but it has unacceptable consequences in at least one reasonably probable model?

Comment author: 12 January 2012 10:27:52PM 0 points

For the particular scenario described in Pascal's mugging, I provided a reasonable way to test it. If the mugger wants to dicker about the ways of testing it, I might decide to listen. It is up to the mugger to provide a satisfactory test. Hand-waving and threats are not tests. You are saying that there are models where testing is infeasible or too dangerous to try. Name one.

Comment author: 13 January 2012 06:56:09PM 1 point

That such models exist is trivial: take model A and add a single difference B, where exercising the difference is bad. For instance:

Model A: the universe is a simulation.

Model B: the universe is a simulation with a bug that will crash the system, destroying the universe, if X, but is otherwise identical to model A.

Models that would deserve to be raised to the level of our attention in the first place, however, will take more thought.

Comment author: 13 January 2012 07:59:41PM 0 points

By all means, apply more thought. Until then, I'm happy to stick by my testability assertion.

Comment author: 12 January 2012 05:32:50PM 0 points

Conversely, the inability of a putative Pascal's Mugger to make such predictions ought to apply a significant penalty to the plausibility of its claim. And simply increasing the threatened disutility won't necessarily help, since the more powerful the entity claims to be, the greater should be the implausibility of its inability to make testable predictions.