komponisto comments on A Thought on Pascal's Mugging - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
As was pointed out in the other subthread, you are assuming the conclusion you wish to prove here, viz. that the utility function is (necessarily) bounded.
Fine, I was slightly sloppy in my original proof (not only in the way you pointed out, but also in keeping track of signs). Here is a rigorous version:
Suppose that there is nothing so bad that you would pay $5 to stop it from happening with probability 10^(-100). Let X be a state of the universe. Then u(-$5) < 10^(-100) u(X), so u(X) > 10^(100) u(-$5). Since u(X) > 10^(100) u(-$5) for all X, u is bounded below.
Similarly, suppose that there is nothing so good that you would pay $5 to have a 10^(-100) chance of it happening. Then u($5) > 10^(-100) u(X) for all X, so u(X) < 10^(100) u($5), hence u is also bounded above.
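The two-sided argument above can be sketched numerically. This is a toy sketch, not anything from the thread itself: the normalization u(-$5) = -1 and the decision rule are assumptions chosen to make the inequality concrete.

```python
# Toy sketch of the boundedness-below argument.
# Assumptions (not from the original thread): utilities are real numbers,
# and paying $5 has utility u_pay = -1 (an arbitrary normalization).
P = 1e-100      # probability of the bad event X
u_pay = -1.0    # utility of paying $5

def would_pay_to_prevent(u_X):
    """Pay $5 iff the sure loss beats the expected disutility of X."""
    return u_pay > P * u_X

# If there is NO X for which we would pay, then for every X:
#   u_pay <= P * u(X)  =>  u(X) >= u_pay / P,
# i.e. u is bounded below (here by roughly -1e100).
lower_bound = u_pay / P

# A utility above the bound does not trigger payment; one below it does.
print(would_pay_to_prevent(-1e99))   # above the bound: refuse to pay
print(would_pay_to_prevent(-1e101))  # below the bound: would pay
```

The bounded-above half of the argument is the mirror image: refusing every $5-for-a-10^(-100)-chance gamble forces u(X) < 10^(100) u($5) for all X.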
Now I've given proofs that u is bounded both above and below, without looking at argmax u or argmin u (which incidentally probably don't exist even if u is bounded; it is much more likely that u asymptotes out).
My proof is still not entirely rigorous, for instance u(-$5) and u($5) will in general depend on my current level of income / savings. If you really want me to, I can write everything out completely rigorously, but I've been trying to avoid it because I find that diving into unnecessary levels of rigor only obscures the underlying intuition (and I say this as someone who studies math).
Again, why assume this?
Your question has two possible meanings to me, so I'll try to answer both.
Meaning 1: Why is this a reasonable assumption in the context of the current debate?
Answer: Because if there were something that bad, then you would get Pascal's-mugged in my hypothetical situation. What I have shown is that either you would give Pascal $5 in that scenario, or your utility function is bounded.
Meaning 2: Why is this a reasonable assumption in general?
Answer: Because things that occur with probability 10^(-100) don't actually happen. Actually, 10^(-100) might be a bit high, but certainly things that occur with probability 10^(-10^(100)) don't actually happen.
You seem not to have understood the post. The worse something is, the more difficult it is for the mugger to make the threat credible. There may be things that are so bad that I (or my hypothetical AI) would pay $5 not to raise their probability to 10^(-100), but such things have prior probabilities that are lower than 10^(-100), and a mugger uttering the threat will not be sufficient evidence to raise the probability to 10^(-100).
We don't need to declare 10^(-100) equal to 0. 10^(-100) is small enough already.
I have to admit that I did find the original post somewhat confusing. However, let me try to make sure that I understood it. I would summarize your idea as saying that we should have u(X) = O(1/p(X)), where u is the utility function and p is our posterior estimate of X. Is that correct? Or do you want p to be the prior estimate? Or am I completely wrong?
Yes, p should be the prior estimate. The point being that the posterior estimate is not too different from the prior estimate in the "typical" mugging scenario (i.e. someone says "give me $5 or I'll create 3^^^^3 units of disutility" without specifying how in enough detail).
So, backing up, let me put forth my biggest objections to your idea, as I see it. I will try to stick to only arguing about this point until we can reach a consensus.
I do not believe there is anything so bad that you would trade $5 to prevent it from happening with probability 10^(-500). If there is, please let me know. If not, then this is a statement that is independent of your original priors, and which implies (as noted before) that your utility function is bounded.
I concede that the condition u(X) = O(1/p(X)) implies that one would be immune to the classical version of the Pascal's mugging problem. What I am trying to say now is that it fails to be immune to other variants of Pascal's mugging that would still be undesirable. While a good decision theory should certainly be immune to [the classical] Pascal's mugging, a failure to be immune to other mugging variants still raises issues.
My claim (which I supported with math above) is that the only way to be immune to all variants of Pascal's mugging is to have a bounded utility function.
My stronger claim, in case you agree with all of the above but think it is irrelevant, is that all humans have a bounded utility function. But let's avoid arguing about this point until we've resolved all of the issues in the preceding paragraphs.
I think that this is plausible. In the vaguer language of 0., we could wonder if "any utility function that approximates the preferences of a human being is bounded." The partner of this claim, that events with probability 10^(-500) can't happen, is also plausible. For instance, they would both follow from any kind of ultrafinitism. But however plausible we find it, none of us yet know whether it's the case, so it's valuable to consider alternatives.
Write X for a terrible thing (if you prefer the philanthropy version, a wonderful thing) that has probability 10^(-500). To pay $5 to prevent X means, by revealed preference, that |U(X)| > 5 * 10^(500). Part of komponisto's proposal is that, for a certain kind of utility function, this would imply that X is very complicated -- too complicated for him to write down. So he couldn't prove to you (not in this medium!) that so-and-so's utility function can take values this high by describing an example of something that terrible. It doesn't follow that U(X) is always small -- especially not if we remain agnostic about ultrafinitism.
Okay, thanks. So it is the prior, not the posterior, which makes more sense (as the posterior will be in general changing while the utility function remains constant).
My objection to this is that, even though you do deal with the "typical" mugging scenario, you run into issues in other scenarios. For instance, suppose that your prior for X is 10^(-1000), and your utility for X is 10^(750), which I believe fits your requirements. Now suppose that I manage to argue your posterior up to 10^(-500). Either you can get mugged (for huge amounts of money) in this circumstance, or your utility on X is actually smaller than 10^(500).
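The arithmetic in this scenario is easy to check by working in base-10 exponents, since the raw numbers underflow ordinary floats. The specific figures (prior 10^(-1000), utility 10^(750), posterior 10^(-500)) are the hypothetical ones above.

```python
# Exponent bookkeeping for the variant mugging scenario.
# All values are base-10 exponents of the hypothetical numbers above.
log10_prior     = -1000   # prior probability of X: 10^(-1000)
log10_utility   = 750     # utility of X: 10^(750)
log10_posterior = -500    # posterior after being argued up: 10^(-500)

# The proposed condition u(X) = O(1/p(X)) holds for the PRIOR:
# 10^750 < 1/10^(-1000) = 10^1000.
assert log10_utility < -log10_prior

# But the expected utility of the threat at the argued-up posterior is
# 10^(-500) * 10^(750) = 10^(250), which dwarfs the utility of $5.
log10_expected = log10_posterior + log10_utility
print(log10_expected)  # 250
```

So the O(1/p) condition on the prior does not by itself cap the posterior expected utility, which is the gap the comment is pointing at.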
Getting "mugged" in such a scenario doesn't seem particularly objectionable when you consider the amount of work involved in raising the probability by a factor of 10^(500).
It would be money well earned, it seems to me.
I don't see how this is relevant. It doesn't change the fact that you wouldn't actually be willing [I don't think?] to make such a trade.
The mugger also doesn't have to do all the work of raising your probability by a factor of 10^(500), the universe can do most (or all) of it. Remember, your priors are fixed once and for all at the beginning of time.
In the grand scheme of things, 10^(500) isn't all that much. It's just 1661 bits.
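The bit count is a one-line conversion, since log2(10^500) = 500 * log2(10):

```python
import math

# Number of bits of evidence needed for a likelihood ratio of 10^500.
bits = 500 * math.log2(10)
print(round(bits))  # 1661
```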