
RichardKennaway comments on Pascal's Mugging as an epistemic problem - Less Wrong Discussion

3 points · [deleted] · 04 October 2010 05:52PM




Comment author: RichardKennaway · 06 October 2010 08:27:29AM · 1 point

Why wait for the mugger to make his stupendous offer? Maybe he's going to give you this stupendous blessing anyway -- can you put a sufficiently low probability on that? Don't you have to give all your money to the next person you meet? But wait! Maybe instead he intends to inflict unbounded negative utility if you do that -- what must you do to be saved from that fate? Maybe the next rock you see is a superintelligent, superpowerful alien who, for its superunintelligible reasons, requires you to -- well, you get the idea.

The difference between this and the standard Mugger scenario is that by making his offer, the mugger promotes to attention the hypothesis that he presents. However, for the usual Bayesian reasons, this must at the same time promote many other unlikely hypotheses, such as the mugger being an evil tempter. I don't see any reason to suppose that the mugger's claim promotes any of these hypotheses sufficiently to distinguish the two scenarios. If you're vulnerable to Pascal's Mugger, you've already been mugged by your own decision theory.

If your decision theory has you walking through the world obsessed with tiny possibilities of vast utility fluctuations, like a placid-seeming vacuum state seething with colossal energies, then your decision theory is wrong. I propose the following constraint on utility-based rational decision theories:

The Anti-Mugging Axiom: For events E and current knowledge X, let P(E|X) be the probability of E given X and U(E|X) the utility of E given X. For every state of knowledge X, the product P(E|X)·U(E|X) is bounded over all events E.

The quantifiers here are deliberately chosen. For each X there must be an upper bound, but no single bound is required to hold across all states of knowledge, so no limit is placed on the amount of probability-weighted utility one might yet discover.
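A toy numerical sketch may make the constraint concrete. The two priors below are my own illustrative choices, not anything from the comment: a credence in "I actually receive utility u" that decays like 1/u² keeps the product u·P(u) bounded and satisfies the axiom, while one that decays only like 1/u violates it, since the product stays constant no matter how large the claimed payoff.

```python
# Toy illustration of the Anti-Mugging Axiom. Both priors are
# hypothetical choices for this sketch, not from the original comment.

def p_bounded(u):
    # Credence that a claimed payoff u is actually delivered,
    # decaying like 1/u^2: the product u * p_bounded(u) = 1/u <= 1,
    # so the axiom is satisfied.
    return 1.0 / (u * u)

def p_unbounded(u):
    # Credence decaying only like 1/u: the product u * p_unbounded(u)
    # equals 0.5 for every u, so ever-larger claims never wash out
    # and the axiom is violated.
    return 0.5 / u

for u in (10, 10**3, 10**6, 10**9):
    print(f"u={u:>10}: bounded product={u * p_bounded(u):.1e}, "
          f"violating product={u * p_unbounded(u):.2f}")
```

The agent with the 1/u² credence can hear any offer without its expected utility blowing up; the agent with the 1/u credence is already "mugged by its own decision theory" before the mugger opens his mouth.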

Comment author: RichardKennaway · 06 October 2010 10:51:06AM · 1 point

Here's another variation on the theme. Pascal's Reformed Mugger comes to you and offers you, one time only, any amount of utility you ask for, but asks nothing in return.

If you believe him enough that u·P(you'll get u by asking for it) is unbounded as u grows, how much do you ask for?

Do you also have to consider −u·P(you'll get −u by asking for u)?
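Under the Anti-Mugging Axiom the Reformed Mugger's question has a definite answer. A sketch with the same kind of illustrative prior as before (my choice, not the comment's): if credence in "asking for u yields u" decays faster than 1/u, the expected gain shrinks as the ask grows, and subtracting the downside term −u·P(asking yields −u) only lowers it further, so no astronomical ask ever dominates a modest one.

```python
# Sketch of the Reformed Mugger decision under hypothetical priors
# (illustrative choices, not from the original comment).

def p_get(u):
    # Credence that asking for u actually yields u; decays like 1/u^2,
    # as the Anti-Mugging Axiom permits.
    return 1.0 / (u * u)

def p_backfire(u):
    # Credence that asking for u instead yields -u (the downside term
    # from the final question); same decay rate, smaller constant.
    return 0.5 / (u * u)

def expected_value(u):
    # Net probability-weighted payoff of asking for u:
    # u * p_get(u) - u * p_backfire(u) = 0.5 / u, a decreasing function.
    return u * p_get(u) - u * p_backfire(u)

for u in (2, 10, 100, 10**6):
    print(f"ask u={u:>8}: expected value={expected_value(u):.2e}")
```

With these priors the expected value falls off like 0.5/u, so the rational ask is small; only a credence function violating the axiom would make "ask for unbounded utility" come out on top.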