wedrifid comments on A Thought on Pascal's Mugging - Less Wrong
Comments (159)
The way around Pascal's mugging is to have a bounded utility function. Even if you are a paperclip-maximizer, your utility function is not the number of paperclips in the universe; it is some bounded function that is monotonic in the number of paperclips but asymptotes out. You are only linear in paperclips over small numbers of paperclips. This is not due to exponential discounting, but because "utility" doesn't mean anything other than the function whose expected value we are maximizing. It has an unfortunate namespace collision with the other "utility", which is some intuitive quantification of our preferences, probably closer to a description of the trades we would be willing to make. If you are unwilling to be mugged by Pascal's mugger, then it simply follows as a mathematical fact that your utility is bounded by something on the order of the reciprocal of the probability at which you become un-muggable.
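The asymptoting shape can be sketched in a few lines of Python. The particular saturating function n/(n+k), the cap of 1, and the use of a small sure paperclip gain as a stand-in for keeping $5 are all my own choices for illustration, not anything from the comment:

```python
from fractions import Fraction

# A toy bounded paperclip utility: monotonic in the paperclip count n,
# roughly linear for n << k, but asymptoting to 1 as n -> infinity.
def u(n, k=10**6):
    """Saturating utility n/(n+k), exact via Fraction."""
    return Fraction(n, n + k)

# Near-linear over small numbers of paperclips:
assert abs(u(10) / u(1) - 10) < Fraction(1, 100)

# Bounded: even an astronomical payoff is worth strictly less than 1.
assert u(10**100) < 1

# The mugging: probability 1e-100 of 10**100 paperclips.
p = Fraction(1, 10**100)
expected_gain = p * u(10**100)          # at most 1e-100
utility_of_keeping_5_dollars = u(100)   # stand-in: a small sure gain

# Because u is bounded, the mugger's offer can never beat a small sure gain.
assert expected_gain < utility_of_keeping_5_dollars
```

The same comparison with an unbounded (say, linear) utility would flip: 10^(-100) times 10^(100) paperclips would dominate any finite sure gain, which is exactly the mugging.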
For more of a description, see my post here, which originally got downvoted to oblivion because it argued from a position of ignorance of the VNM utility theorem. The post has since been fixed, and while it is not super-detailed, it lays out an argument for why Pascal's mugging is resolved once we stop trying to make our utility functions look intuitive.
Incidentally, Pascal's mugging does lay out a good argument for why we need to be careful about an AGI's utility function; if we make it unbounded, then we can get weird behavior indeed.
EDIT: Of course, perhaps I am still wrong somehow and there are unresolvable subtleties that I am missing. But I, at least, am simply unwilling to care about events occurring with probability 10^(-100), regardless of how bad they are.
Way around? If my utility function suggests that being mugged by Pascal is the best thing for me to do then I'll be delighted to do it.
Utility functions determine our decisions, not the reverse!
A utility function shouldn't suggest anything. It is simply an abstract mathematical function that is guaranteed to exist by the VNM utility theorem. If you're letting an unintuitive mathematical theorem tell you to do things that you don't want to do, then something is wrong.
Again, the problem is there is a namespace collision between the utility function guaranteed by VNM, which we are maximizing the expected value of, and the utility function that we intuitively associate with our preferences, which we (probably) aren't maximizing the expected value of. VNM just says that if you have consistent preferences, then there is some function whose expected value you are maximizing. It doesn't say that this function has anything to do with the degree to which you want various things to happen.
I seem to be having a lot of trouble getting this point across, so let me try to put it another way: Ignore Kolmogorov complexity, priors, etc. for a moment, and if you can, forget about your utility function and just ask yourself what you would want. Now imagine the worst possible thing that could happen (you can even suppose that both time and space are potentially infinite, so infinitely many people being tortured for infinite extents of time is fine). Let us call this thing X. Suppose that you have somehow calculated that, with probability 10^(-100), the mugger will cause X to happen if you don't pay him $5. Would you pay him? If you would pay him, then why?
I am actually quite interested in the answer to this question, because I am having trouble diagnosing the precise source of my disagreement on this issue. And even though I said to forget about utility functions, if you really think that is the answer to the "why" question, feel free to use them in your argument. As I said, at this point I am most interested in determining why we disagree, because previous discussions with other people suggest that there is some hidden inferential distance afoot.
As an aside, if you wouldn't pay him then the definition of utility implies that u($5) > 10^(-100) u(X), which implies that u(X), and therefore the entire utility function, is bounded.
As was pointed out in the other subthread, you are assuming the conclusion you wish to prove here, viz. that the utility function is (necessarily) bounded.
Fine, I was slightly sloppy in my original proof (not only in the way you pointed out, but also in keeping track of signs). Here is a rigorous version:
Suppose that there is nothing so bad that you would pay $5 to stop it from happening with probability 10^(-100). Let X be a state of the universe. Then u(-$5) < 10^(-100) u(X), so u(X) > 10^(100) u(-$5). Since u(X) > 10^(100) u(-$5) for all X, u is bounded below.
Similarly, suppose that there is nothing so good that you would pay $5 to have a 10^(-100) chance of it happening. Then u($5) > 10^(-100) u(X) for all X, so u(X) < 10^(100) u($5), hence u is also bounded above.
Now I've given proofs that u is bounded both above and below, without looking at argmax u or argmin u (which incidentally probably don't exist even if u is bounded; it is much more likely that u asymptotes out).
My proof is still not entirely rigorous, for instance u(-$5) and u($5) will in general depend on my current level of income / savings. If you really want me to, I can write everything out completely rigorously, but I've been trying to avoid it because I find that diving into unnecessary levels of rigor only obscures the underlying intuition (and I say this as someone who studies math).
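The algebra in the two refusal conditions can be sanity-checked numerically. The stake values u(-$5) = -1 and u($5) = +1 below are made-up normalizations of mine, not anything from the proof:

```python
# Numeric sanity check of the two bounds from the refusal conditions.
p = 1e-100
u_minus_5, u_plus_5 = -1.0, 1.0  # assumed utilities of losing/keeping $5

# Refusing to pay $5 to avoid a p-chance of X: u(-$5) < p * u(X),
# which rearranges to the lower bound u(X) > u(-$5) / p.
lower_bound = u_minus_5 / p   # -1e100
# Refusing to pay $5 for a p-chance of X: u($5) > p * u(X),
# which rearranges to the upper bound u(X) < u($5) / p.
upper_bound = u_plus_5 / p    # +1e100

def consistent_with_refusals(u_of_X):
    """True iff u(X) satisfies both refusal conditions."""
    return u_minus_5 < p * u_of_X and u_plus_5 > p * u_of_X

# Every utility value consistent with both refusals lies in a fixed interval:
for u_of_X in (-1e99, 0.0, 1e99):
    assert consistent_with_refusals(u_of_X)
    assert lower_bound < u_of_X < upper_bound

# A value outside the interval violates one of the refusal conditions:
assert not consistent_with_refusals(1e101)
```

This is only the rearrangement of the inequalities, of course; the substantive premise is that the refusal conditions hold for every X.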
Again, why assume this?
Your question has two possible meanings to me, so I'll try to answer both.
Meaning 1: Why is this a reasonable assumption in the context of the current debate?
Answer: Because if there was something that bad, then you get Pascal's mugged in my hypothetical situation. What I have shown is that either you would give Pascal $5 in that scenario, or your utility function is bounded.
Meaning 2: Why is this a reasonable assumption in general?
Answer: Because things that occur with probability 10^(-100) don't actually happen. Actually, 10^(-100) might be a bit high, but certainly things that occur with probability 10^(-10^(100)) don't actually happen.
You seem not to have understood the post. The worse something is, the more difficult it is for the mugger to make the threat credible. There may be things that are so bad that I (or my hypothetical AI) would pay $5 not to raise their probability to 10^(-100), but such things have prior probabilities that are lower than 10^(-100), and a mugger uttering the threat will not be sufficient evidence to raise the probability to 10^(-100).
We don't need to declare 10^(-100) equal to 0. 10^(-100) is small enough already.
I have to admit that I did find the original post somewhat confusing. However, let me try to make sure that I understood it. I would summarize your idea as saying that we should have u(X) = O(1/p(X)), where u is the utility function and p is our posterior estimate of X. Is that correct? Or do you want p to be the prior estimate? Or am I completely wrong?
Yes, p should be the prior estimate. The point being that the posterior estimate is not too different from the prior estimate in the "typical" mugging scenario (i.e. someone says "give me $5 or I'll create 3^^^^3 units of disutility" without specifying how in enough detail).
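The u(X) = O(1/p(X)) idea can be illustrated with toy numbers. The specific 1/d prior below is my stand-in for whatever complexity-based prior the post actually uses:

```python
from fractions import Fraction

# Toy version of u(X) = O(1/p(X)): the prior probability of a threat
# falls at least as fast as its claimed disutility grows, so the
# expected impact p(X) * u(X) stays bounded.
C = Fraction(1)  # the constant hidden in the O(1/p)

def prior(claimed_disutility):
    """Toy prior: credence in a threat of size d is at most C/d."""
    return min(Fraction(1), C / claimed_disutility)

for claimed in (10**10, 10**100, 10**1000):
    expected_impact = prior(claimed) * claimed
    assert expected_impact <= C  # bounded, however extravagant the threat
```

The mugger's verbal threat then cannot move the posterior enough to matter: naming a bigger number raises the claimed disutility and lowers the prior in lockstep.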
This doesn't actually imply that the entire utility function is bounded. It is still possible that u(Y) is infinite, where Y is something that is valued positively.
As an aside we can now consider the possibility of Pascal's Samaritan.
Assume a utility function such that u(Y) is infinite (and neutral with respect to risk). Further assume that you predict that $5 would increase your chance of achieving Y by 1/3^^^3. A Pascal Samaritan can offer to pay you $5 for the opportunity to give you a 90% chance of sending the entire universe into the hell state X. Do you take the $5?
From my reply to komponisto (incidentally, both you and he seem to be making the same objections in parallel, which suggests that I'm not doing a very good job of explaining myself, sorry):
The meaning of a phrase, primarily. And slightly about the proper use of an abstract concept.
A utility function should be a representation of my values. If my values are such that paying a mugger is the best option then I am glad to pay a mugger.
If I were to pay him it would be because I happen to value not having a 10^(-100) chance of X happening more than I value $5.
My utility function quite likely is bounded. Not because that is a way around Pascal's mugging. Simply because that happens to be what the arbitrary value system represented by this particular bunch of atoms happens to be.
Hm...it sounds like we agree on far more than I thought, then.
What I am saying is that my utility function is bounded because it would be ridiculous to be Pascal's mugged, even in the hypothetical universe I created that disobeys komponisto's priors. Put another way, I am simply not willing to seriously consider events at probabilities of, say, 10^(-10^(100)), because such events don't happen. For this same reason, I have a hard time taking anyone seriously who claims to have an unbounded utility function, because they would then care about events that can't happen in a sense at least as strong as the sense that 1 is not equal to 2.
Would you object to anything in the above paragraph? Thanks for bearing with me on this, by the way.
P.S. Am I the only one who is always tempted to write "mugged by Pascal" before realizing that this is comically different from being "Pascal's mugged"?
As far as I know they do happen. To know that such a number cannot represent an altogether esoteric feature of the universe that can nevertheless be the legitimate subject of infinite value, I would need to know the smallest number that can be assigned to a quantum state.
(This objection is purely tangential. See below for significant disagreement.)
That isn't true. Someone can assign infinite utility to Australia winning the ashes if that is what they really want. I'd think them rather silly but that is just my subjective evaluation, nothing to do with maths.
I think you are conflating quantum probabilities with Bayesian probabilities here, but I'm not sure. Unless you think this point is worth discussing further I'll move on to your more significant disagreement.
Hm...I initially wrote a two-paragraph explanation of why you were wrong, then deleted it because I changed my mind. So, I think we are making progress!
I initially thought I accorded disdain to unbounded utility functions for the same reason that I accorded disdain to ridiculous priors. But the difference is that your priors affect your epistemic state, and in the case of beliefs there is only one right answer. On the other hand, there is nothing inherently wrong with being a paperclip maximizer.
I think the actual issue I'm having is that I suspect that most people who claim to have unbounded utility functions would have been unwilling to make the trades implied by this before reading about VNM utility / "Shut up and multiply". So my objection is not that unbounded utility functions are inherently wrong, but that they cannot possibly reflect the preferences of a human.
On this I believe we approximately agree.