
DanielLC comments on [Discussion] The Kelly criterion and consequences for decision making under uncertainty - Less Wrong Discussion

5 Post author: Metus 06 January 2013 02:14AM




Comment author: DanielLC 06 January 2013 06:02:00AM 1 point [-]

In order for that to be true, you have to define "in the long run" in such a way that basically begs the question.

If you define "in the long run" to mean the expected value after that many bets, the Kelly criterion is beaten by taking whatever bet has the highest expected value. For example, suppose you have a bet that has a 50% chance of losing everything and a 50% chance of quadrupling your investment. The Kelly criterion says not to take it, since losing everything has infinite disutility. If you don't take it, your expected value is what you started with. If you take it n times, you have a 2^(-n) chance of having 4^n times as much as you started with, which gives an expected value of 2^n.
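(The arithmetic here can be checked directly; a minimal sketch, not part of the original comment:)

```python
def all_in_expected_value(n, start=1.0):
    """Expected value after going all-in n times on a 50/50 bet that
    quadruples the stake or loses it entirely.

    You keep a nonzero bankroll only by winning all n bets
    (probability 2^-n), in which case you hold start * 4^n.
    """
    p_survive = 0.5 ** n
    return p_survive * start * 4 ** n

# Matches the 2^n figure in the comment:
print(all_in_expected_value(10))  # 1024.0, i.e. 2^10
```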

Comment author: Vaniver 06 January 2013 07:08:53PM *  3 points [-]

For example, suppose you have a bet that has a 50% chance of losing everything and a 50% chance of quadrupling your investment, the Kelly criterion says not to take it, since losing everything has infinite disutility.

A bet where you quadruple your investment has a b of 3, and p is .5. The Kelly criterion says you should bet (b*p-q)/b, which is (3*.5-.5)/3, which is one third of your bankroll every time. The expected value after n times is (4/3)^n.
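(Vaniver's numbers can be reproduced with the standard Kelly formula; a quick sketch, with function names of my own choosing:)

```python
def kelly_fraction(p, b):
    """Kelly bet fraction (b*p - q)/b for win probability p and net odds b."""
    q = 1 - p
    return (b * p - q) / b

f = kelly_fraction(0.5, 3)  # one third of the bankroll

# Expected growth factor per bet: win doubles (1 + 3*f), loss leaves 2/3 (1 - f).
growth = 0.5 * (1 + 3 * f) + 0.5 * (1 - f)  # 4/3, so expected value is (4/3)^n
```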

The assumption of the Kelly criterion is that you get to decide the scale of your investment, and that the investment scales with your bankroll.

If you take it n times, you have a 2^(-n) chance of having 4^n times as much as you started with, which gives an expected value of 2^n.

Indeed, but the probability that the Kelly bettor ends up ahead of that bettor is 1 - 2^(-n): the all-in bettor beats Kelly only by winning every single round.
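(This is easy to verify by simulation; a rough sketch assuming both bettors face the same sequence of coin flips:)

```python
import random

def kelly_beats_all_in(n, trials=100_000, seed=0):
    """Fraction of trials in which the Kelly bettor (betting 1/3 each round)
    ends ahead of the all-in bettor over n rounds of the 50/50, 3:1 bet."""
    rng = random.Random(seed)
    kelly_ahead = 0
    for _ in range(trials):
        kelly, all_in = 1.0, 1.0
        for _ in range(n):
            if rng.random() < 0.5:
                kelly *= 2.0         # won: bankroll * (1 + 3 * 1/3)
                all_in *= 4.0
            else:
                kelly *= 2.0 / 3.0   # lost the third that was staked
                all_in = 0.0         # lost everything
        if kelly > all_in:
            kelly_ahead += 1
    return kelly_ahead / trials

# For n = 5, this should come out near 1 - 2^-5 = 0.96875.
print(kelly_beats_all_in(5))
```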

Comment author: jsteinhardt 06 January 2013 06:07:03AM *  2 points [-]

I think "in the long run" is used in the same sense as for the law of large numbers. The reason we get a different result is that the results of a bet constrain the possible choices for future bets, and it basically turns out that bets are roughly multiplicative in nature. That is why you want to maximize something like log(x): if x is multiplicative, log(x) is additive and the law of large numbers applies. (That's not a proof, but it's the intuition.)
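(The additive-log intuition can be illustrated numerically; a hedged sketch using the Kelly bet from the thread, where the per-round wealth factors are 2 and 2/3 with equal probability:)

```python
import math
import random

# Multiplicative wealth means log-wealth is a sum of i.i.d. terms, so the
# law of large numbers applies to (1/n) * log(wealth).  For this bet the
# long-run growth rate converges to 0.5*log(2) + 0.5*log(2/3) = 0.5*log(4/3).
rng = random.Random(1)
n = 200_000
log_wealth = sum(
    math.log(2.0 if rng.random() < 0.5 else 2.0 / 3.0)
    for _ in range(n)
)
rate = log_wealth / n
target = 0.5 * math.log(4.0 / 3.0)
print(rate, target)  # the two should be close for large n
```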