Kelly maximizes the expected growth rate, $\lim_{n\to\infty} \frac{1}{n} E[\log W_n]$.
I... think this is wrong? It's late and I should sleep so I'm not going to double check, but this sounds like you're saying that you can take two sequences, one has a higher value at every element but the other has a higher limit.
If something similar to what you wrote is correct, I think it will be that Kelly maximizes $\frac{1}{n} E[\log W_n]$ for every fixed $n$. That feels about right to me, but I'm not confident.
Something I've often wondered - if utility for money is logarithmic, AND maximizing expected growth means logarithmic betting in the underlying resource, should we actually be thinking $\log(\log(n))$? I think the answer is "no", because declining marginal utility is irrelevant to this - we still value more over less at all points.
No -- you should bet so as to maximize $E[U]$. If $U = \log(W)$, and you are wagering $W$, then bet Kelly, which optimizes $E[\log W]$. However, if for some reason you are directly wagering $\log(W)$ (which seems very unlikely), then the optimal bet is actually YOLO, not Kelly.
I think the key thing to note here is that "maximizing expected growth" looks the same whether the thing you're trying to grow is money or log-money or sqrt-money or what. It "just happens" that (at least in this framework) the way one maximizes expected growth is the same as the way one maximizes expected log-money.
I've recently written about this myself. My goal was partly to clarify this, though I don't know if I succeeded.
I think the post confuses things by motivating the Kelly bet as the thing that maximizes expected log-money, and also has other neat properties. To my mind, if you want to maximize expected log-money, you just... do the arithmetic to figure out what that means. It's not quite trivial, but it's stats-101 stuff. Doing the arithmetic that maximizes expected log-money doesn't seem any more interesting to me than doing the arithmetic that maximizes expected money or expected sqrt-money. Kelly certainly didn't introduce the criterion as "hey guys, here's a way to maximize expected log-money". (Admittedly, I don't much care about his framing either. The original paper is information-theoretic in a way that seems to be mostly forgotten about these days.)
To my mind, the important thing about the Kelly bet is the "almost certainly win more money than anyone using a different strategy, over a long enough time period" thing. (Which is the same as maximizing expected growth rate, when growth is exponential. If growth is linear you still might care if you're earning $2/day or $1/day, but the "growth rate" of both is 0 as defined here.) So I prefer to motivate the Kelly bet as being the thing that does that, and then say "and incidentally, turns out this also maximizes expected log-wealth, which is neat because...".
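FWIW, here's a quick simulation of that "almost certainly ends up ahead" property. All the parameters are just ones I picked for illustration: a 60-40 edge, 10,000 bets, and two arbitrary non-Kelly fractions for comparison. Every bettor sees the same sequence of wins and losses.

```python
import math
import random

# Simulate one shared sequence of bets at p = 0.6 and track log-wealth
# for three fixed bet fractions. The Kelly fraction for this edge is
# f = 2p - 1 = 0.2; 0.1 and 0.5 are arbitrary alternatives.
random.seed(1)
p, n = 0.6, 10_000
fractions = [0.1, 0.2, 0.5]

log_wealth = {f: 0.0 for f in fractions}   # log-wealth avoids overflow
for _ in range(n):
    win = random.random() < p
    for f in fractions:
        log_wealth[f] += math.log(1 + f if win else 1 - f)

for f in fractions:
    print(f, log_wealth[f] / n)            # realized per-bet growth rate
```

On a long enough run, the $f = 0.2$ bettor ends up with the highest realized growth rate on nearly every seed - which is the "almost surely ahead" property. This is an illustration, of course, not a proof.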
The Kelly criterion is an elegant, but often misunderstood, result in decision theory. To begin with, suppose you have some amount of some resource, which you would like to increase. (For example, the resource might be monetary wealth.) You are given the opportunity to make a series of identical bets. You determine some fraction $f$ of your wealth to wager; then, in each bet, you gain a fraction $f$ with probability $p$, and lose a fraction $f$ with probability $(1-p)$.[1]
In other words, suppose $W_n$ is your wealth after $n$ bets. We will define $Z_n = \log W_n$, and we will suppose for simplicity that $Z_0 = 0$. Then $Z_n = \sum_{t=1}^{n} R_t$, where the $R_t$ are independent copies of a random variable $R$ defined as:
$$R = \begin{cases} \log(1+f) & \text{with probability } p \\ \log(1-f) & \text{with probability } (1-p) \end{cases}$$

Now suppose that, for some reason, we want to maximize $E[Z_n]$. By linearity of expectation, $E[Z_n] = \sum_{t=1}^{n} E[R] = n\,E[R]$. Hence, we should simply maximize $E[R]$. This amounts to solving:
$$\begin{aligned} 0 &= \frac{\partial}{\partial f} E[R] \\ 0 &= \frac{\partial}{\partial f} \left[ p \log(1+f) + (1-p) \log(1-f) \right] \\ 0 &= \frac{p}{1+f} - \frac{1-p}{1-f} \\ 0 &= (1-f)\,p - (1-p)(1+f) \\ f &= 2p - 1 \end{aligned}$$

(The fourth line follows from the third by multiplying through by $(1+f)(1-f)$.) This, $f = 2p - 1$, is known as the Kelly bet. For example, it says that if you have a 60-40 edge, then you should bet $f = 2(0.6) - 1 = 0.2$, i.e., bet 20% of your current wealth on each bet.
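As a quick numerical sanity check, we can also maximize $E[R]$ by brute force. The edge $p = 0.6$ and the resolution of the search grid below are illustrative choices:

```python
import numpy as np

# Brute-force check that the Kelly bet maximizes
# E[R] = p*log(1+f) + (1-p)*log(1-f), for p = 0.6.
p = 0.6
fs = np.linspace(0.0, 0.99, 10_000)   # candidate bet fractions
expected_R = p * np.log(1 + fs) + (1 - p) * np.log(1 - fs)
f_best = fs[np.argmax(expected_R)]

print(f_best)   # agrees with the Kelly bet f = 2p - 1 = 0.2, up to grid resolution
```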
That all seems pretty reasonable. But why do we want to maximize $E[Z_n]$? If we were to simply maximize expected wealth, i.e., $E[W_n]$, then a straightforward calculation shows that we should not bet Kelly -- in fact, we should bet $f = 1$ ("YOLO"), wagering the entire bankroll on every bet. This seems extremely counterintuitive, since, after $n$ bets, our wealth would then be:
$$W_n = \begin{cases} 0 & \text{with probability } 1 - p^n \\ 2^n & \text{with probability } p^n \end{cases}$$

In other words, as $n$ grows large, we would almost surely go bankrupt! Nevertheless, this would be the way to maximize $E[W_n]$. Kelly, whatever its merits, does not maximize $E[W_n]$ -- not even in the long run. Especially not in the long run.
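A small simulation makes the contrast vivid. The parameters here ($p = 0.6$, $n = 20$ bets, and the trial count) are assumed for illustration: the expected wealth $E[W_n] = (2p)^n$ grows without bound, yet nearly every run ends at zero.

```python
import random

# YOLO betting (f = 1) at p = 0.6 over n = 20 bets: each win doubles
# wealth, each loss wipes out the entire bankroll.
random.seed(0)
p, n, trials = 0.6, 20, 100_000

bankrupt = 0
total = 0.0
for _ in range(trials):
    w = 1.0
    for _ in range(n):
        if random.random() < p:
            w *= 2        # win: wealth doubles
        else:
            w = 0.0       # loss: entire bankroll gone
            break
    if w == 0.0:
        bankrupt += 1
    total += w

print(bankrupt / trials)  # close to 1 - 0.6**20, i.e. about 0.99996
print(total / trials)     # noisy estimate of E[W_20] = 1.2**20 (about 38)
```

Note that the sample mean is dominated by a handful of runs that win all 20 bets, so it is a very high-variance estimate of $E[W_n]$ -- which is itself part of the point.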
We now come to the perennial debate: why does Kelly seem "obviously right", and YOLO "obviously wrong"? There are many answers usually offered to this question.
First, what we believe to be the correct answer:
In a certain sense, it is as simple as that. The von Neumann-Morgenstern utility theorem (vNM) tells us that we should be optimizing $E[U]$ for some utility function $U$. We know that the Kelly criterion always optimizes $E[Z_n] = E[\log W_n]$. Therefore, if the Kelly criterion is optimal, it is because $U = \log W_n$.
Now, there are many other answers to "why bet Kelly?" that initially seem plausible:
So, we claim, if Kelly is optimal then it is because our utility function is $U = \log W_n$. However, this is not the whole story. The utility function $U$ refers to the utility of wealth at the moment after the betting experiment, not the terminal utility of wealth in general. We can imagine that this experiment is just the preamble to a much longer game, in which $U_T$ is the ultimate terminal value of wealth (e.g., in number of lives saved), and we are investing over $T$ time steps where, in each step, we have the opportunity to place a bet with some statistical edge $p : (1-p)$. We can then use backward induction to determine the utility function that we should adopt for wealth at previous points in the game: $U_{T-1}, U_{T-2}, \ldots, U_0$. It is this final function, $U_0(W)$, that we should treat as our "utility function" in the preamble experiment.
Now, suppose we ultimately have something like this as our terminal utility function:
$$U_T(W) = \begin{cases} W & \text{if } W < C \\ C & \text{otherwise} \end{cases}$$

In other words, number-of-lives-saved is linear in money up to a certain point, then flat -- an exaggerated version of the phenomenon of diminishing returns. As it turns out, when we apply backward induction for reasonably large values of $T$ (e.g., $T = 100$) and a modest statistical edge (e.g., $p = 0.55$), we obtain a preamble utility function $U_0(W)$ that looks something like this (taking $C = 1$ for simplicity):
In general, this function "looks more like a logarithm" than the piecewise-linear function $U_T$, and falls off sharply as we approach zero. Clearly it is not actually a logarithm, as it is bounded above and below (and is, in fact, equal to $1$ for values $W \geq 1$). But, for a broad class of terminal utility functions $U_T$, the resulting function $U_0$ looks surprisingly logarithm-like.
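The backward induction described above is straightforward to sketch numerically. The discretization below -- a log-spaced wealth grid, linear interpolation between grid points, the particular grid bounds, and the range of allowed bet fractions -- is our own choice for illustration, not part of the derivation:

```python
import numpy as np

# Backward induction from U_T(W) = min(W, C) to U_0(W), with assumed
# discretization choices (grid, interpolation, allowed fractions).
p, C, T = 0.55, 1.0, 100
W = np.geomspace(1e-4, 1e2, 2000)   # log-spaced wealth grid
fs = np.linspace(0.0, 0.95, 96)     # candidate bet fractions
U = np.minimum(W, C)                # terminal utility U_T

for _ in range(T):
    # Expected next-step utility for each candidate f: wealth becomes
    # W(1+f) on a win and W(1-f) on a loss; the current utility curve
    # is evaluated by interpolation (np.interp clamps beyond the grid).
    candidates = [
        p * np.interp(W * (1 + f), W, U)
        + (1 - p) * np.interp(W * (1 - f), W, U)
        for f in fs
    ]
    U = np.max(candidates, axis=0)  # pick the best f at each wealth level

# U now approximates U_0: it equals C for W >= C and rises steeply
# from zero, qualitatively matching the shape described above.
```

The exact shape near zero is sensitive to the discretization (in particular the lower grid bound and the maximum allowed $f$), so this should be read as a qualitative sketch rather than a precise computation of $U_0$.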
In summary, the Kelly criterion is an elegant, and surprisingly simple, formula for optimizing $E[\log W]$. As a general strategy, optimizing $E[\log W]$ is appealing in a number of ways:
However, we should remember that the Kelly bet, ultimately, is only an approximation. The true optimal bet -- the one that actually maximizes expected utility $E[U_T]$ -- may be significantly different, in either direction.
Acknowledgements: We would like to thank davidad for many helpful comments on earlier drafts of this article.
Note that some definitions of the Kelly betting experiment are slightly more complicated, as they presume that one wins $bf$ with probability $p$ and loses $af$ with probability $(1-p)$. In this document, for simplicity, we take $a = b = 1$.
To show this, note that $\frac{1}{n} \sum_{t=1}^{n} R_t$ converges in distribution to the point mass $\delta(E[R])$ (by the law of large numbers), and hence $W_n^{1/n} = \exp\left(\frac{1}{n} \sum_{t=1}^{n} R_t\right)$ converges in distribution to $\delta(\exp E[R])$, whose expectation is maximized when we maximize $E[R]$.