Alsadius comments on An investment analogy for Pascal's Mugging - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
A slight error: the Kelly criterion assigns no utility. It simply maximizes your expected bankroll at any point in the non-immediate future. What you do with that bankroll and how much you care about it is unmentioned by the Kelly criterion.
It does not maximize your expected bankroll. It maximizes the expected value of the log of your bankroll. This is equivalent to saying that it is utilitarian with respect to the utility function of the log of your bankroll.
Ah, I see what you're getting at. Good catch.
Log is a monotonic function, what maximizes one maximizes the other.
Yes, but what maximizes the expected value of one does not necessarily maximize the expected value of the other. The average of zero and four is two. The average of log(zero) and log(four) is negative infinity. Ergo, if you have one dollar and you have the option of betting it so you get four dollars if you win and nothing if you lose, betting increases the expected value of your bankroll, but reduces the expected value of the log of your bankroll.
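A quick numerical sketch of this example in Python (hypothetical 50/50 bet: stake your whole one-dollar bankroll, get four dollars if you win and nothing if you lose):

```python
import math

# Hypothetical 50/50 bet: stake the whole $1 bankroll; win -> $4, lose -> $0.
p = 0.5
win, lose = 4.0, 0.0

# Expected bankroll: betting looks strictly better than holding.
ev_bet = p * win + (1 - p) * lose   # 0.5*4 + 0.5*0 = 2.0
ev_hold = 1.0                       # keep the dollar

# Expected log bankroll: the log(0) outcome drags the average to -inf.
def safe_log(x):
    return math.log(x) if x > 0 else float("-inf")

elog_bet = p * safe_log(win) + (1 - p) * safe_log(lose)  # -inf
elog_hold = safe_log(1.0)                                # 0.0
```

So betting raises E(bankroll) from 1 to 2 while dropping E(log bankroll) from 0 to negative infinity, exactly as described.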
A few comments on the remainder of this subthread.
Different people are talking about different things, without making the distinctions perfectly clear. Specifically, when Lumifer says "it maximizes your expected bankroll, and maximizing your expected log bankroll is the same" I believe he means "it maximizes your expected bankroll in the long run, and maximizing your expected log bankroll in the long run is the same because in the long run you almost always get approximately your expectation". Whereas DanielLC and I have been observing that on a single iteration the Kelly rule maximizes your expected log bankroll and does not maximize your expected bankroll. (But Lumifer hasn't been saying "in the long run" every time, and we haven't been saying "on a single bet" every time, hence a lot of what I think is pseudo-disagreement.)
It is simply not true in general that maximizing E(X) is the same as maximizing E(f(X)) when f is a monotone increasing function. It is true when X is not actually random, which is kinda what happens in the long run here.
It is simply not true that the Kelly rule "assumes logarithmic utility of money". If your utility happens to be proportional to log(money) then the Kelly rule maximizes your expected utility even for a single bet, but provided your dependence of utility on money is monotone increasing, in the long run your utility will almost always be maximized by following the Kelly rule.
It isn't clear to me that the only reason why maximizing E(X) and maximizing E(log(X)) are different is that "zero is special", even when we are considering what happens in the long run. Specifically, suppose your individual bets have some nasty distribution whose tails are too fat for the variance to be defined; then it needn't be true that your performance almost always looks like its expectation. Isn't there a counterexample lurking somewhere around here?
I think it's a distraction to talk about what the Kelly rule is "trying" to do. You can say that it almost always maximizes your bankroll in the long run. You can say that it maximizes your expected utility in the short run, if your utility's dependence on wealth is logarithmic. You can say that it gives you the warm fuzzy feeling of doing something known to be optimal. The Kelly rule doesn't care; it is what it is, whatever your goal is. And what it is is an injunction to choose each individual bet so as to maximize E(log(bankroll)).
The principle of maximizing E(log(bankroll)) can be applied in situations where the choice you have is something other than simply "how much shall I bet on this otherwise-fixed gamble?". I don't think it's a serious abuse of terminology to call this "the Kelly rule" since it's simply a generalization of Kelly to this broader class of situations.
[EDITED to fix formatting a bit.]
In particular it's possible for log(X) to have a well-defined variance but not X, and for E(log(X)) but not E(X) to be defined.
Re 1: looking back at the subthread, yes, I think that was the source of much confusion. I did mean maximization in the long run and for quite a while did not realize that you and DanielLC were talking about the maximization in a single iteration.
Re 4: The expectation operator is just a weighted sum (in the discrete case) or an integral (in the continuous case). I don't think it cares about the fatness of tails or whether some moments are defined or not.
Speaking generally, log(E(X)) is not the same thing as E(log(X)) (see Jensen's Inequality), but that's a different question. The question we have is: if you have some set of parameters theta that X is conditional on, does maximizing for E(X) lead you to different optimal thetas than maximizing for E(log(X))?
Re 6: Well, you have to be careful that the Kelly Rule's assumptions hold. It works as it works because capital growth is multiplicative, not additive, and because you expect to have many iterations of betting, for example.
The expectation operator doesn't care about fatness of tails (well, it kinda doesn't, but note that e.g. the expectation of a random variable with Cauchy distribution is undefined, precisely because of those very fat tails), but the theorem that says that in the long run your wealth is almost always close to its expectation may fail for fat-tail-related reasons.
In the present case where we're looking at long-run results only, the answer might be "no" (but -- see above -- I'm not sure it actually is). But in general, if you allow X to be any random variable rather than some kind of long-run average of well-behaved things, it is absolutely not in any way true that maximizing E(X) leads to the same parameter choices as maximizing E(log(X)).
If you want your choice to be optimal, sure. But all I'm saying is that using "the Kelly rule" to mean "making the choice that maximizes expected log bankroll" seems like a reasonable bit of terminology. Whether using the Kelly rule, in this sense, is a good idea in any given case will of course depend on all sorts of details.
Good point about Cauchy. If even the mean is undefined, all bets are off :-)
Can I get an example? Say X is a random positive real number. For which distribution will the parameters that maximize E(X) fail to maximize E(log(X))?
I don't know about that. The Kelly Rule means a specific strategy in a specific setting and diluting and fuzzifying that specificity doesn't seem useful.
That is exactly what the Kelly criterion provides examples of. Let p be the probability of winning some binary bet and k the multiple of your bet that is returned to you if you win. Given an initial bankroll of 1, let theta be the proportion of it you are going to bet. Let the distribution of your bankroll after the bet be X. With probability p, X is 1+theta(k-1), and with probability 1-p, X is 1-theta. theta is a parameter of this distribution. (So are p and k, but we are interested in maximising over theta for given p and k.)
If pk > 1 then theta = 1 maximises E(X), but theta = (pk-1)/(k-1) maximises E(log(X)).
The graphs of E(X) and E(log(X)) as functions of theta look nothing like each other. The first is a linear ascending gradient, and the second rises to a maximum and then plunges to -∞.
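A small Python sketch of these two curves, with assumed illustrative values p = 0.6 and k = 2 (so pk = 1.2 > 1 and the Kelly fraction (pk-1)/(k-1) comes out to 0.2):

```python
import math

# Binary bet from the comment above; p = 0.6 and k = 2.0 are assumed
# illustrative values, not anything from the original thread.
p, k = 0.6, 2.0

def e_x(theta):
    # E(X) = p(1 + theta(k-1)) + (1-p)(1 - theta) = 1 + theta(pk - 1):
    # linear and increasing in theta whenever pk > 1.
    return p * (1 + theta * (k - 1)) + (1 - p) * (1 - theta)

def e_log_x(theta):
    # E(log X); equals -inf at theta = 1, where losing wipes out the bankroll.
    win = 1 + theta * (k - 1)
    lose = 1 - theta
    log_lose = math.log(lose) if lose > 0 else float("-inf")
    return p * math.log(win) + (1 - p) * log_lose

thetas = [i / 1000 for i in range(1001)]      # grid over [0, 1]
best_for_ev = max(thetas, key=e_x)            # bets everything: theta = 1
best_for_elog = max(thetas, key=e_log_x)      # Kelly: (pk - 1)/(k - 1) = 0.2
```

The grid search lands on theta = 1 for E(X) and theta = 0.2 for E(log X), matching the two formulas above.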
Yep, I was wrong. Now I need to figure out why I thought I was right...
I may have gotten confused because log is monotonically increasing, e.g. the log likelihood is maximized at the same spot as the likelihood, so log E(X) is maximized at the same spot as E(X). But log and E do not commute (Jensen's inequality is not called Jensen's equality, after all).
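A toy numerical check of the non-commuting point, with X uniform on the assumed values {1, 4}: log(E(X)) = log(2.5) but E(log(X)) = log(2).

```python
import math

# Toy check that log and E do not commute: X uniform on {1, 4}.
xs = [1.0, 4.0]
log_of_mean = math.log(sum(xs) / len(xs))       # log(E(X)) = log(2.5)
mean_of_log = sum(map(math.log, xs)) / len(xs)  # E(log(X)) = log(2)
# Jensen's inequality for the concave log: E(log X) <= log(E(X)),
# with equality only when X is constant.
```

By contrast log(max) and max(log) do coincide, which is exactly the log-likelihood intuition that caused the mix-up.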
Sure. So, just to be clear, the situation is: We have real-valued random variable X depending on a single real-valued parameter t. And I claim it is possible (indeed, usual) that the choice of t that maximizes E(log X) is not the same as the choice of t that maximizes E(X).
My X will have two possible values for any given t, both with probability 1/2. They are t exp(t) and exp(-2t).
E(log X) = 1/2 (log(t exp(t)) + log(exp(-2t))) = 1/2 (log t + t - 2t) = 1/2 (log t - t). This is maximized at t=1. (It's also undefined for t<=0; I'll fix that in a moment.)
E(X) is obviously monotone increasing for large positive t, so it's "maximized at t=+oo". (It doesn't have an actual maximum; I'll fix that in a moment.)
OK, now let me fix those two parenthetical quibbles. I said X depends on t, but actually it turns out that t = 100.5 + 100 sin u, where u is an angle (i.e., varies mod 2pi). Now E(X) is maximized when sin u = 1, so for u = pi/2; and E(log X) is maximized when 100 sin u = -99.5, so for two values of u close to -pi/2. (Two local maxima, with equal values of E(log X).)
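A quick grid search in Python confirming this counterexample; the function definitions just transcribe the comment (two outcomes, each with probability 1/2), and the grid size is an arbitrary choice:

```python
import math

# t is parameterized by an angle u, as in the comment above.
def t_of(u):
    return 100.5 + 100 * math.sin(u)

def e_x(u):
    # E(X) = (t exp(t) + exp(-2t)) / 2, increasing in t over [0.5, 200.5]
    t = t_of(u)
    return 0.5 * (t * math.exp(t) + math.exp(-2 * t))

def e_log_x(u):
    # E(log X) = (log t - t) / 2, the simplified form derived above
    t = t_of(u)
    return 0.5 * (math.log(t) - t)

us = [-math.pi + 2 * math.pi * i / 20000 for i in range(20000)]
u_ev = max(us, key=e_x)        # near u = pi/2, where sin u = 1
u_elog = max(us, key=e_log_x)  # near one of the two points where t = 1
```

The two argmaxes land where the comment says they should: E(X) peaks at sin u = 1, while E(log X) peaks where t = 1, i.e. sin u = -0.995.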
Okay, I accept that I'm wrong and you're right. Now the interesting part is that my mathematical intuition is not that great, but this is a pretty big fail even for it. So in between googling for crow recipes, I think I need to poke around my own mind and figure out which wrong turn it happily took... I suspect I got confused about the expectation operator, but to confirm I'll need to drag my math intuition into the interrogation room and start asking it pointed questions.
Upvoted for public admission of error :-).
(In the unlikely event that I can help with the brain-fixing, e.g. by supplying more counterexamples to things, let me know.)