RichardKennaway comments on St. Petersburg Mugging Implies You Have Bounded Utility - Less Wrong
Why should I not attach a probability of zero to the claim that you are able to grant unbounded utility?
Let GOD(N) be the claim that you are a god with the power to grant utility at least up to 2**N. Let P(GOD(N)) be the probability I assign to this. This is a nonincreasing function of N, since GOD(N+1) implies GOD(N).
If I assign a probability to GOD(N) of 4**(-N), then the mugging fails. Of course, this implies that I have assigned GOD(infinity), the conjunction of GOD(N) over all N, a probability of zero, popularly supposed to be a sin. But while I can appreciate the reason for not assigning zero to ordinary, finite claims about the world, such as the existence of an invisible dragon in your garage, I do not see a reason to avoid this zero.
If extraordinary claims demand extraordinary evidence, what do infinite claims require?
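The convergence behind this argument can be checked with a quick numerical sketch (the payoff 2^N and the prior 4^(-N) are taken from the comment above; nothing else is assumed):

```python
# If P(GOD(N)) = 4**-N and GOD(N) can grant utility up to 2**N, the
# expected payoff of paying the mugger is a convergent geometric series:
# sum over N >= 1 of 2**N * 4**-N = sum of 2**-N = 1.
# So the mugger's offer has bounded expected utility no matter how large
# an N is promised, and the mugging fails.
expected_payoff = sum(2**n * 4.0**-n for n in range(1, 200))
# partial sums approach 1
```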
Assigning zero probability to claims is bad because then one can't ever update to accept the claim no matter what evidence one has. Moreover, this doesn't seem to have much to do with "infinite claims" given that there are claims involving infinity that you would probably accept. For example, if we got what looked like a working Theory of Everything that implied that the universe is infinite, you'd probably assign a non-zero probability to the universe being infinite. You can't assign all hypotheses involving infinity zero probability if you want to be able to update to include them.
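The "can't ever update" point follows directly from the form of Bayes' theorem, as a one-line sketch shows (the numbers are arbitrary illustrations):

```python
# Why a zero prior is "sticky": Bayes' theorem multiplies by the prior, so
# P(H|E) = P(E|H) * P(H) / P(E) is zero whenever P(H) is zero, no matter
# how strongly the evidence E favours H.
def bayes_posterior(prior, likelihood, evidence_prob):
    return likelihood * prior / evidence_prob

assert bayes_posterior(0.0, 0.99, 0.01) == 0.0   # a zero prior never moves
assert bayes_posterior(1e-9, 0.99, 0.01) > 0.0   # any nonzero prior can grow
```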
I linked to the article expressing that view. It makes a valid point.
I am not saying anything about all claims involving infinity. I am addressing the particular claim in the original post.
Yes, assigning GOD(infinity) a probability of zero means that no finite amount of evidence will shift that. For this particular infinite claim I don't see a problem with that.
Thoroughgoing rejection of 0 and 1 as probabilities means that you have to assign positive probability to A & ~A. You also have to reject real-valued variables -- the probability of a randomly thrown dart hitting a particular number on the real line is zero. Unless you can actually do these things -- actually reconstruct probability theory in a way that makes P(A|B) and P(~A|B) sum to less than 1, and prohibit uncountable measure spaces -- then claiming that you should do them anyway is to make the real insight of Eliezer's article into an empty slogan.
So how do you determine which claims you are giving a prior probability of zero and which you don't?
This connects to a deep open problem- how do we assign probabilities to the chances that we've made a logical error or miscalculated. However, even if one is willing to assign zero probability to events that contain inherent logical contradictions, that's not at all the same as assigning zero probability to a claim about the empirical world.
If claims about the empirical world can have arbitrarily small probability, then a suitable infinite conjunction of such claims has probability zero, just as surely as P(A&~A) does.
For Pascal's Mugging scenarios it just seems a reasonable thing to do. Gigantic promises undermine their own credibility, converging to zero in the limit. I don't have a formally expressed rule, but if I was going to work on decision theory I'd look into the possibility of codifying that intuition as an axiom.
What if we came up with a well-evidenced theory of everything that implied GOD(infinity)?
It's not just contrived scenarios; see http://arxiv.org/abs/0712.4318. If utility is unbounded, infinitely many hypotheses can result in utility higher than N for any N.
How is this any different than saying "until you can actually make unbounded utility functions converge properly as discussed in Peter de Blanc's paper, using expected utility maximization is an empty slogan"?
I'm not convinced by expected utility maximization either, and I can see various possible ways around de Blanc's argument besides bounding utility, but those are entirely separate questions.
ETA: Also, if someone claims their utility function is bounded, does that mean they're attaching probability zero to it being unbounded? If they attach non-zero probability, they run into de Blanc's argument, and if they attach zero, they've just used zero as a probability. Or is having a probability distribution over what one's utility function actually is too self-referential? But if you can't do that, how can you model uncertainty about what your utility function is?
Do you reject the VNM axioms? I have my own quibbles with them - I don't like they way they just assume that probability exists and is a real number and I don't like axiom 3 because it rules out unbounded utility functions - but they do apply in some contexts.
Can you elaborate on these?
There is no good theory of this yet. One wild speculation is to model each possible utility function as a separate agent and have them come to an agreement. Unfortunately, there is no good theory of bargaining yet either.
Not with any great weight, it's just a matter of looking at each hypothesis and thinking up a way of making it fail.
Maybe utility isn't bounded below by a computable function (and a fortiori is not itself computable). That might be unfortunate for the would-be utility maximizer, but if that's the way it is, too bad.
Or -- this is a possibility that de Blanc himself mentions in the 2009 version -- maybe the environment should not be allowed to range over all computable functions. That seems quite a strong possibility to me. Known physical bounds on the density of information processing would appear to require it. Of course, those bounds apply equally to the utility function, which might open the way for a complexity-bounded version of the proof of bounded utility.
Good point, but I find it unlikely.
This requires assigning zero probability to the hypothesis that there is no limit on the density of information processing.
I don't see any reason to dispute Axioms 2 (transitivity) and 4 (independence of alternatives), although I know some people dispute Axiom 4.
For Axiom 3 (continuity), I don't have an argument against, but it feels a bit dodgy to me. The lack of inferential distance between the construction of lotteries and the conclusion of the theorem gives me the impression of begging the question. But that isn't my main problem with the axioms.
The sticking point for me is Axiom 1, the totality of the preference relation. Why should an ideal rational agent, whatever that is, have a preference -- even one of indifference -- between every possible pair of alternatives?
"An ideal rational agent, whatever that is." Does the concept of an ideal rational agent make sense, even as an idealisation? An ideal rational agent, as described by the VNM axioms, cannot change its utility function. It cannot change its ultimate priors. These are simply what they are and define that agent. It is logically omniscient and can compute anything computable in constant time. What is this concept useful for?
It's the small world/large world issue again. In small situations, such as industrial process control, that are readily posed as optimisation problems, the VNM axioms are trivially true. This is what gives them their plausibility. In large situations, constructing a universal utility function is as hard a problem as constructing a universal prior.
How would it act if asked to choose between two options that it does not have a preference between?
It can, it just would not want to, ceteris paribus.
It is a starting point (well, a middle point). I see no reason to change my utility function or my priors; I do not desire those almost by definition. Infinite computational ability is an approximation to be correct in the future, as is, IMO, VNM axiom 3. This is what we have so far and we are working on improving it.
Suppose I randomly pick a coin from all of Coinspace and flip it. What probability do you assign to the coin landing heads? Probably around 1/2.
Now suppose I do the same thing, but pick N coins and flip them all. The probability that they all come up heads is roughly 1/2^N.
Suppose I halt time to allow this experiment to continue as long as we want, then keep flipping coins randomly picked from Coinspace until I get a tail. What is the probability I will never get a tail? It should be the limit of 1/2^N as N goes to infinity, which is 0. Events with probability of 0 are allowed -- indeed, expected -- when you are dealing with infinite probability spaces such as this one.
It's also not true that we can't ever update if our prior probability for something is 0. It is just that we need infinite evidence, which is a scary way of saying that the probability of receiving said evidence is also 0. For instance, if you flip coins infinitely many times, and I observe all but the first 10 and never see "tails" (which has a probability of 0 of happening) then my belief that all the coins landed "heads" has gone up from 0 to 1/2^10 = 1/1024.
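The numbers in that update can be worked through exactly (the 10 unobserved flips are from the comment above; `Fraction` is used just to keep the arithmetic exact):

```python
from fractions import Fraction

def p_all_heads(n):
    """Probability that n independent fair coin flips all land heads."""
    return Fraction(1, 2) ** n

# The chance of never flipping tails is the limit of (1/2)**n as n grows,
# which is 0 -- a perfectly legal probability on this infinite space.

# Updating on a probability-0 observation: having watched every flip
# except the first 10 and seen only heads, the probability that *all*
# flips were heads is just the chance the 10 unseen flips were heads.
posterior = p_all_heads(10)   # 1/1024, up from a prior of 0
```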
There are only countably many hypotheses that one can consider. In the coin flip context, as you've constructed the probability space, there are uncountably many possible results. If one presumes that there's really a Turing computable (or even just explicitly definable in some axiomatic framework like ZFC) set of possibilities for the behavior of the coin, then there are only countably many, each with positive probability. Obviously, this in some respects makes the math much ickier, so for most purposes it is more helpful to assume that the coin is really random.
Note also that your updating took an infinite amount of evidence (since you observed all but the first 10 flips). So it is at least fair to say that if one assigns probability zero to something then one can't ever update in finite time, which is about as bad as not being able to update.
I introduced the concept of CoinSpace to make it clear that all the coinflips are independent of each other: if I were actually flipping a single coin I would assign it a nonzero (though very small) probability that it never lands "tails". Possibly I should have just said the independence assumption.
And yes, I agree that if we postulate a finite time condition, then P(X) = 0 means one can't ever update on X. However, in the context of this post, we don't have a finite time condition: God-TimFreeman explicitly needs to stop time in order to be able to flip the coin as many times as necessary. Once we have that, then we need to be able to assign probabilities of 0 to events that almost never happen.
The hypothesis that the universe is infinite is equivalent to the hypothesis that no matter how far you travel (in a straight line through space), it won't take you someplace you've been -- a claim you'd have to hold with infinite certainty for every distance. Convincing you that the universe is infinite should be roughly as hard as convincing you that there's zero probability that the universe is infinite, because they're both claims of infinite certainty in something. (I think.)
I'd like to be able to boil that down to "infinite claims require infinite evidence", but it seems to be not quite true.
The probability is roughly the probability of consistent combined failure of all the mental systems you can use to verify that (knowable?) actual infinities are impossible; similar to your probability that 2 + 2 = 3.
Even if you do assign zero probability, what makes you think that in this specific case zero times infinity should be thought of as zero?
Because otherwise you get mugged.
You don't literally multiply 0 by infinity, of course, you take the limit of (payoff of N)*probability(you actually get that payoff) as N goes to infinity. If that limit blows up, there's something wrong with either your probabilities or your utilities. Bounding the utility is one approach; bounding the probability is another.
Your priors are what they are, so yes, you can attach a prior probability of zero to me being a god. In practice, I highly recommend that choice.
I think the universal prior (a la Solomonoff induction) would give it positive probability, FWIW. A universe that has a GOD(infinity) seems to me describable by a shorter program than one that has GOD(N) for N large enough to actually be godlike. God simply stops time, reads the universe state (with some stub substituted for himself to avoid regression), writes a new one, then continues the new one.
I thought this, but now I'm not sure. Surely, if you were God, you would be able to instantly work out BB(n) for any n. This would make you uncomputable, which would indeed mean the Solomonoff prior assigns you being God a probability of zero.
There is quite a good argument that this treatment of uncomputables is a flaw rather than a feature of the Solomonoff prior, although right now it does seem to be working out quite conveniently for us.
I agree that the Solomonoff prior isn't going to give positive probability to me having any sort of halting oracle. Hmm, I'm not sure whether inferring someone's utility function is computable. I suppose that inferring the utility function for a brain of fixed complexity when arbitrarily large (but still finite) computational capacity can be brought to bear could give an arbitrarily close approximation, so the OP could be revised to fix that. It presently doesn't seem worth the effort though -- the added verbiage would obscure the main point without adding anything obviously useful.
By Rice's theorem, inferring utility functions is uncomputable in general, but it is probably possible to do for humans. If not, that would be quite a problem for FAI designers.
A bigger problem is your ability to hand out arbitrarily large amounts of utility. Suppose the universe can be simulated by an N-state Turing machine; this limits the number of possible states it can occupy to a finite (but probably very large) number. This in turn bounds the amount of utility you can offer me, since each state has finite utility and the maximum of a finite set of finite numbers is finite. (The reason why this doesn't automatically imply a bounded utility function is that we are uncertain of N.)
As a result of this:
P(you can offer me k utility) > 0 for any fixed k
but
P(you can offer me x utility for any x) = 0
To be honest, though, I'm not really comfortable with this, and I think Solomonoff needs to be fixed (I don't feel like I believe with certainty that the universe is computable). The real reason why you haven't seen any of my money is that I think the maths is bullshit, as I have mentioned elsewhere.
Thinking about it more, this isn't a serious problem for the dilemma. P(you can offer me k utility) goes to zero as k goes to infinity, but there's no reason to suppose it goes to zero faster than 1/k does.
This means you can still set up a similar dilemma, with the probability of you being able to offer me 2^n utility eventually becoming greater than (1/2)^n for sufficiently large n, satisfying the conditions for a St. Petersburg lottery.
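A sketch of why those conditions produce a divergent sum (the 2^n payoff and (1/2)^n probability bound are from the comment above; the prior is otherwise hypothetical):

```python
# Modified St. Petersburg setup: if P(being offered 2**n utility) is at
# least (1/2)**n for large n, then each term of the expected-utility sum
# is at least 2**n * (1/2)**n = 1, so the partial sums grow without bound.
def partial_expected_utility(n_terms):
    return sum(2**n * 0.5**n for n in range(1, n_terms + 1))

# Every term is exactly 1, so the partial sum after n terms is n:
# the expected utility of the lottery diverges.
```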
That's just Pascal's mugging, though; the problem that "the utility of a Turing machine can grow much faster than its prior probability shrinks".