Updated Introduction

I wrote this post roughly two years ago, and somehow it randomly got crossposted to LessWrong today. Despite being poorly written, I think it's neat enough that I've given it a polish and left it up. It goes through a needless amount of needlessly general algebra to prove* that:

  • A prediction market populated by Kelly bettors is isomorphic to a person with a bunch of hypotheses updating with Bayes' rule.
  • This property is maintained even in the Logical Inductor-type setting where agents don't know exactly what thing they're betting on.

For a more approachable discussion of the Kelly-Bayes connection, see this blog post by Buck Shlegeris.

[*] Well, the 'proof' of equilibrium behaviour in the most general case is kind of sketchy, but at any rate the most general case is too general and that proof works fine for the case of countably many possibilities.

The Kelly Criterion

The Kelly criterion for betting tells you how much to wager when someone offers you a bet. First introduced in this paper, it deals with the situation where someone is offering you a contract that pays you €1 if the event $A$ occurs (for concreteness, you can imagine $A$ as the event that the Republican candidate wins the 2020 US Presidential election) and €0 otherwise. They are selling it for €$q$, and your probability for $A$ is $p > q$ (this is equivalent to the more common formulation with odds, but it's easier for me to think about). As a result, you think that it's worth buying this contract. In fact, they will sell you a scaled-up contract of your choice: for any real number $\alpha > 0$, you can buy a contract that pays you €$\alpha$ if $A$ occurs for €$\alpha q$, just as if you could buy $\alpha$ copies of the original contract. The question you face is this: how much of your money should you spend on this scaled-up contract? The Kelly criterion gives you an answer: you should spend a fraction $\frac{p - q}{1 - q}$ of your money.

Why would you spend this exact amount? One reason would be if you were an expected utility maximiser, and your utility was the logarithm of your wealth. Note that the logarithm is important here to make you risk averse: if you simply wanted to maximise your expected wealth after the bet, you would bet all your money. To show that expected log-wealth maximisers use the Kelly criterion, note that if your initial wealth is $W$, you spend $x$ on the scaled contract, and $A$ occurs, you then have $W - x + \frac{x}{q}$, while if you bet that much and $A$ does not occur, your wealth is only $W - x$. The expected log-wealth maximiser therefore wants to maximise $$p \log\left(W - x + \frac{x}{q}\right) + (1 - p)\log(W - x).$$

The derivative of this with respect to $x$ is $$\frac{p\left(\frac{1}{q} - 1\right)}{W - x + \frac{x}{q}} - \frac{1 - p}{W - x}.$$ Setting this derivative to 0 and rearranging produces the stated formula: the optimal spend is $x = W\,\frac{p - q}{1 - q}$.
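
If you want to check this numerically, here's a quick Python sketch (the probability, price, and wealth are made-up values; the grid search just confirms that maximising expected log-wealth recovers the Kelly formula):

```python
# A minimal numerical check of the Kelly fraction (p - q) / (1 - q).
# p, q and wealth are made-up illustrative values.
import numpy as np

p = 0.6          # your probability that A occurs
q = 0.5          # price in euros of a contract paying 1 euro if A occurs
wealth = 100.0

kelly_spend = wealth * (p - q) / (1 - q)
print(kelly_spend)                          # 20.0: spend 20% of your wealth

# Grid-search the expected log-wealth over possible spends x.
xs = np.linspace(0.0, wealth - 1e-6, 100_001)
expected_log_wealth = p * np.log(wealth - xs + xs / q) + (1 - p) * np.log(wealth - xs)
print(xs[np.argmax(expected_log_wealth)])   # ~20.0, matching the Kelly spend
```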

The Wikipedia page as of 25 October 2016 gives another appealing fact about Kelly betting. Suppose that this contract-buying opportunity recurs again and again: that is, there are many events $A_1, A_2, A_3, \dots$ in a row that you think each independently have probability $p$, and after your contract about $A_n$ resolves, you can always spend €$\alpha q$ on a contract that will pay €$\alpha$ if $A_{n+1}$ happens. Suppose that you always spend a fraction $f$ of your wealth on these contracts, you make $N$ of these bets, and $K$ of them pay off. Then, your final wealth after the $N$ bets will be $$W_N = W_0\left(1 + f\,\frac{1-q}{q}\right)^K (1 - f)^{N - K}.$$ The derivative of this with respect to $f$ is $$W_N\left(\frac{K\,\frac{1-q}{q}}{1 + f\,\frac{1-q}{q}} - \frac{N - K}{1 - f}\right).$$

Setting this to 0 gives $f = \frac{K/N - q}{1 - q}$, and if $K/N \to p$ (which it should in the long run), this simplifies to $\frac{p - q}{1 - q}$, the Kelly criterion. This makes it look like Kelly betting maximises your total wealth after the $N$ runs, so why wouldn't an expected wealth maximiser use the Kelly criterion? Well, the rule of betting all your money every chance you have leaves you with nothing if $K < N$, but in the unlikely case that $K = N$, the rule works out so well that expected wealth maximisers think that it's worth the risk.
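
Here's a Python sketch of this repeated-betting story ($p$, $q$ and $N$ are made-up values): the growth-optimal fraction found by grid search matches $(K/N - q)/(1 - q)$, and betting almost everything is catastrophic in log terms.

```python
# Repeated betting: after N bets of which K pay off, wealth is
# W0 * (1 + f*(1-q)/q)**K * (1 - f)**(N-K). Work in logs to avoid overflow.
import numpy as np

rng = np.random.default_rng(0)
p, q, N = 0.6, 0.5, 10_000                    # made-up values
K = rng.binomial(N, p)                        # number of bets that pay off

def log_final_wealth(f, log_w0=0.0):
    return log_w0 + K * np.log(1 + f * (1 - q) / q) + (N - K) * np.log(1 - f)

fs = np.linspace(0.0, 0.999, 1000)
best_f = fs[np.argmax(log_final_wealth(fs))]
print(best_f, (K / N - q) / (1 - q))          # both close to (p - q)/(1 - q) = 0.2
print(log_final_wealth(best_f))               # healthy positive log-growth
print(log_final_wealth(0.999))                # "bet nearly everything": enormous loss
```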

Before I move on, I'd like to share one interesting fact about the Kelly criterion that gives a flavour of the later results. You might wonder what the expected utility of using the Kelly criterion is. Well, by simple substitution it's just $$p\log\left(W\,\frac{p}{q}\right) + (1-p)\log\left(W\,\frac{1-p}{1-q}\right).$$ Ignoring the utility $\log W$ that you already have, this is just $p\log\frac{p}{q} + (1-p)\log\frac{1-p}{1-q}$, the relative entropy between your probability and the price. Bam! Information theory!
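
A quick numerical check of this, again with made-up $p$ and $q$:

```python
# The expected log-growth of a Kelly bet equals the relative entropy
# between your probability p and the price q. Values are made up.
import numpy as np

p, q = 0.6, 0.5
f = (p - q) / (1 - q)
expected_log_growth = p * np.log(1 + f * (1 - q) / q) + (1 - p) * np.log(1 - f)
relative_entropy = p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))
print(expected_log_growth, relative_entropy)   # the two numbers agree
```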

Kelly bettors in prediction markets

Previously, we talked about the case where somebody was offering to sell you a contract at a fixed price, and all you could do was decide how much of it to buy. Instead, we can consider a market full of these contracts, where all of the participants are log wealth maximisers, and think about what the equilibrium price is. Our proof will be similar to the one found here.

Before diving into the math, let's clarify exactly what sort of situation we're imagining. First of all, there are going to be lots of contracts available that correspond to different outcomes in the same event, at least one of which will occur. For instance, the event could be "Who will win the Democratic nomination for president in 2020?", and the outcomes could be "Cory Booker", "Elizabeth Warren", "Martin O'Malley", and all the other candidates (once they are known - before then, the outcomes could be each member of a list of prominent Democrats and one other outcome corresponding to "someone else"). Alternatively, the event could be "What will the map of winners of each state in the 2020 presidential election be?", and the outcomes would be lists of the form "Republicans win Alabama, Democrats win Alaska, Democrats win Arizona, ...". This latter type of market actually forms a combinatorial prediction market -- by buying and short-selling bundles of contracts, you can make bets of the form "Republicans will win Georgia", "If Democrats win Ohio, then Republicans will win Florida", or "Republicans will win either North Dakota or South Dakota, but not both". Such markets are interesting for their own reasons, but we will not elaborate on them here.

We should also clarify our assumptions about the traders. The participants are log wealth maximisers who have different priors and don't think that the other participants know anything that they don't -- otherwise, the no-trade theorem could apply. We also assume that they are price takers, who decide to buy or sell contracts at whatever the equilibrium price is, without considering how their trades affect the equilibrium price.

Now that we know the market setup, we can derive the purchasing behaviour of the participants for a given market price. We will index market participants by $i$ and outcomes by $j$. We write $q_j$ for the market price of the contract that pays €1 if outcome $j$ occurs, $p_{i,j}$ for the probability that participant $i$ assigns to outcome $j$, and $W_i$ for the initial wealth of participant $i$.

First of all, without loss of generality, we can assume that participant $i$ spends all of their wealth on contracts. This is because spending money on contracts for every outcome in proportion to the prices guarantees a fixed payoff, just as if that money had been saved. We can therefore write the amount that participant $i$ spends on contracts for outcome $j$ as $x_{i,j}$, under the condition that $\sum_j x_{i,j} = W_i$. Then, if outcome $j$ occurs, their posterior wealth will be $\frac{x_{i,j}}{q_j}$. We can use the method of Lagrange multipliers to determine how much participant $i$ will bet on each outcome, by maximising $$\sum_j p_{i,j}\log\frac{x_{i,j}}{q_j} - \lambda\left(\sum_j x_{i,j} - W_i\right).$$ Taking partial derivatives, $\frac{\partial}{\partial x_{i,j}} = \frac{p_{i,j}}{x_{i,j}} - \lambda = 0$, so $x_{i,j} = \frac{p_{i,j}}{\lambda}$, and $W_i = \sum_j x_{i,j} = \frac{1}{\lambda}\sum_j p_{i,j} = \frac{1}{\lambda}$, so $\lambda = \frac{1}{W_i}$. Therefore, regardless of the market prices, participant $i$ will spend $W_i p_{i,j}$ on contracts for outcome $j$. You might notice that this looks different to the previous section -- this is because previously our bettor could only bet on one outcome, as opposed to betting on both the event and its complement.
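
Here's a numerical sanity check of this in Python (assuming scipy is available; the wealth, probabilities and prices are made-up values): the constrained maximiser ends up spending $W_i p_{i,j}$ on outcome $j$ no matter what the prices are.

```python
# A log-wealth maximiser who must spend all their wealth puts W * p_j
# on outcome j, regardless of the market prices q_j. Requires scipy.
import numpy as np
from scipy.optimize import minimize

W = 100.0
p = np.array([0.5, 0.3, 0.2])     # the trader's probabilities over three outcomes (made up)
q = np.array([0.2, 0.5, 0.3])     # market prices, deliberately different from p (made up)

def neg_expected_log_wealth(x):
    return -np.sum(p * np.log(x / q))

res = minimize(neg_expected_log_wealth, x0=np.full(3, W / 3),
               bounds=[(1e-9, W)] * 3,
               constraints={"type": "eq", "fun": lambda x: np.sum(x) - W})
print(res.x)                      # ~ [50, 30, 20] = W * p, independent of q
```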

Next, we can generalise to the case where the market participants save some amount of money, buy some contracts, and sell some others. This will be important for deriving the equilibrium market behaviour, since you can't have a market where everyone wants to buy contracts and nobody wants to sell them.

Suppose trader $i$ saves $s_i$ and spends $b_{i,j}$ on contracts for each outcome $j$. Here, we allow $b_{i,j}$ to be negative - this means that $i$ will sell another trader $|b_{i,j}|$ worth of contracts, and will supply that trader with $\frac{|b_{i,j}|}{q_j}$ if outcome $j$ occurs. We now demand that $s_i + \sum_j b_{i,j} = W_i$ for this to make sense. Now, if outcome $j$ occurs, trader $i$'s wealth will be $s_i + \frac{b_{i,j}}{q_j}$ -- if $b_{i,j} > 0$ then the trader makes money off their contracts in outcome $j$, and if $b_{i,j} < 0$ then the trader pays their dues to the holder of the contracts in outcome $j$ that they sold. We'd like this to be equal to $\frac{W_i p_{i,j}}{q_j}$, so that the trader's wealth is the same as if they had bet all their money, as in the previous paragraph. This happens if $s_i + \frac{b_{i,j}}{q_j} = \frac{W_i p_{i,j}}{q_j}$, i.e. $b_{i,j} = W_i p_{i,j} - s_i q_j$.

Now that we have the behaviour of each trader for a fixed market price, we can derive the equilibrium prices of the market. At equilibrium, supply should be equal to demand, meaning that there are as many contracts being bought as being sold: for all $j$, $\sum_i b_{i,j} = 0$. This implies that $\sum_i W_i p_{i,j} - q_j \sum_i s_i = 0$, or $q_j = \frac{\sum_i W_i p_{i,j}}{\sum_i s_i}$. It must also be the case that $\sum_j q_j = 1$, since otherwise the agents could arbitrage, putting pressure on the prices to satisfy $\sum_j q_j = 1$. This means that $\sum_i s_i = \sum_j \sum_i W_i p_{i,j} = \sum_i W_i$, implying that $\sum_i s_i = \sum_i W_i$ and $$q_j = \frac{\sum_i W_i p_{i,j}}{\sum_i W_i}.$$

Note the significance of this price: it's as if we have a Bayesian mixture where each trader $i$ corresponds to a hypothesis, our prior in hypothesis $i$ is $\frac{W_i}{\sum_{i'} W_{i'}}$, and the market price $q_j$ is the Bayesian mixture probability of outcome $j$. How much wealth does the participant/hypothesis have after we know the outcome? Exactly $\frac{W_i p_{i,j}}{q_j}$, proportional to the posterior probability of that hypothesis. Our market has done an excellent job of replicating a Bayesian mixture!
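
Here's a small Python sketch of the whole market (the wealths and probability tables are made-up values): the equilibrium prices come out as the wealth-weighted mixture of the traders' beliefs, and the post-resolution wealths match Bayes' rule with the wealths as the prior.

```python
# A market of Kelly bettors behaving like a Bayesian mixture. Numbers are made up.
import numpy as np

W = np.array([10.0, 30.0, 60.0])            # traders' wealths, i.e. prior weights
P = np.array([[0.7, 0.2, 0.1],              # trader 0's probabilities over outcomes
              [0.3, 0.4, 0.3],              # trader 1's
              [0.1, 0.1, 0.8]])             # trader 2's

q = W @ P / W.sum()                         # equilibrium prices q_j
print(q, q.sum())                           # a probability distribution

j = 2                                       # suppose outcome 2 occurs
posterior_wealth = W * P[:, j] / q[j]       # wealth of each trader after the bets
bayes_posterior = W * P[:, j] / (W * P[:, j]).sum()
print(posterior_wealth / posterior_wealth.sum())   # matches Bayes' rule...
print(bayes_posterior)
print(posterior_wealth.sum(), W.sum())      # ...and total wealth is conserved
```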

But is it general enough?

You might have thought that the above discussion was sufficiently general, but you'd be wrong. It only applies to markets with a countable number of possible outcomes. Suppose instead that we're watching someone throw a dart at a dartboard, and will be able to see the exact point where the dart will land. In general, we imagine that there's a set $S$ (the dartboard) of outcomes $x$ (points on the dartboard), and you have a probability distribution $P$ that assigns a probability $P(E)$ to any event $E \subseteq S$ (region of the dartboard). (More technically, $S$ will be a measurable set with a sigma-algebra $\mathcal{F}$ which all our subsets will belong to, $P$ will be a probability measure, and all functions mentioned will be measurable.)

First, let's imagine that there's just one agent with wealth $W$ and probability distribution $P$, betting against the house, which has probability distribution $Q$. This agent can buy some number $f(x)$ of contracts from the house that each pay €1 if $x$ occurs and €0 otherwise, for every $x \in S$ (similarly to the previous section, $f(x) < 0$ if the agent is selling these contracts to the house). The house charges the agent the expected value of their bets: $\mathbb{E}_Q[f]$. The question: what function $f$ should the agent choose to bet with?

Our agent is an expected log wealth maximiser, so they want to choose $f$ to maximise $\mathbb{E}_P[\log f]$. However, they are constrained to bet only as much money as they have (and, without loss of generality, exactly as much money as they have), so $\mathbb{E}_Q[f] = W$. Therefore, the problem is to optimise the Lagrangian $$\mathbb{E}_P[\log f] - \lambda\left(\mathbb{E}_Q[f] - W\right).$$

To make this easier to manipulate, we're going to want to make all of this an expectation with respect to $P$, using an object called the Radon-Nikodym derivative $\frac{dQ}{dP}$. Essentially, if we were thinking about $x$ as being a point on a dartboard, we could think of the probability density functions $p$ and $q$ of $P$ and $Q$, and it would be the case that $$\mathbb{E}_Q[f] = \int f(x)\,q(x)\,\mathrm{d}x = \int f(x)\,\frac{q(x)}{p(x)}\,p(x)\,\mathrm{d}x = \mathbb{E}_P\!\left[f\,\frac{q}{p}\right].$$ The Radon-Nikodym derivative $\frac{dQ}{dP}$ acts just like the factor $\frac{q(x)}{p(x)}$, and is always defined as long as whenever $P$ assigns some set probability 0, $Q$ does as well (otherwise, you should imagine that $p(x) = 0$ while $q(x) > 0$, so $\frac{q(x)}{p(x)}$ isn't defined). This lets us rewrite the Lagrangian as $$\mathbb{E}_P\!\left[\log f - \lambda\,f\,\frac{dQ}{dP}\right] + \lambda W.$$

We have one more trick up our sleeves to maximise this with respect to $f$. At a maximum, changing $f$ to $f + \delta g$ should only change the Lagrangian up to second order in $\delta$, for any small function $g$. So, $$\mathbb{E}_P\!\left[\log(f + \delta g) - \lambda\,(f + \delta g)\,\frac{dQ}{dP}\right] + \lambda W = \mathbb{E}_P\!\left[\log f - \lambda\,f\,\frac{dQ}{dP}\right] + \lambda W + \delta\,\mathbb{E}_P\!\left[g\left(\frac{1}{f} - \lambda\,\frac{dQ}{dP}\right)\right] + O(\delta^2).$$

We therefore require that $\mathbb{E}_P\!\left[g\left(\frac{1}{f} - \lambda\,\frac{dQ}{dP}\right)\right] = 0$ for all $g$. This can only happen when $\frac{1}{f} = \lambda\,\frac{dQ}{dP}$, and it's easy to check that we need $\lambda = \frac{1}{W}$. Therefore, the agent buys $f(x) = W\,\frac{dP}{dQ}(x)$ shares in outcome $x$, which you should be able to check is the same as in the case of countably many contracts.
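
Here's a discretised Python sketch of this, with a made-up "dartboard" (the unit interval) and made-up densities: the payout function $W\,\frac{dP}{dQ}$ spends exactly the agent's wealth, and any small perturbation that spends the same amount does worse.

```python
# Continuous Kelly betting on a discretised unit interval. Densities are made up.
import numpy as np

xs = np.linspace(0.001, 0.999, 999)
dx = xs[1] - xs[0]
p = 0.5 + xs                          # agent's density on [0, 1]
q = np.ones_like(xs)                  # house's density: uniform
W = 1.0

f = W * p / q                         # the claimed optimal payout function W * dP/dQ
print(np.sum(f * q * dx))             # E_Q[f] ~ W: the bet costs the whole wealth

def expected_log_wealth(payout):
    return np.sum(p * np.log(payout) * dx)

rng = np.random.default_rng(0)
for _ in range(5):
    g = rng.normal(size=xs.shape)
    g -= np.sum(g * q * dx) / np.sum(q * dx)     # make the perturbation cost nothing
    print(expected_log_wealth(f + 0.01 * g) <= expected_log_wealth(f))   # True
```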

Suppose we want to express the bet equivalently as our agent saving $s$ and buying $b(x)$ contracts for each outcome $x$. For this to be equivalent to the agent spending all their money, we need $s + b(x) = W\,\frac{dP}{dQ}(x)$ for all $x$, which is easily solved: $b(x) = W\,\frac{dP}{dQ}(x) - s$.

Now, suppose we're in a market with many agents, indexed by $i$. Each agent has wealth $W_i$, probability distribution $P_i$, and saves $s_i$. In response to a market probability distribution $Q$, they buy $b_i(x) = W_i\,\frac{dP_i}{dQ}(x) - s_i$ contracts for outcome $x$. What is the equilibrium market probability?

We would like to think of markets for each outcome and solve for equilibrium, but it could be that each agent assigns probability 0 to every individual outcome. For instance, if I'm throwing a dart at a dartboard, and your probability that I hit some region is proportional to the area of the region, then for any particular point your probability that I will hit that point is 0. If this is the case, then the equilibrium price for contracts on every outcome will be 0, which tells us nothing about how traders buy and sell these contracts. Instead, we'll imagine that there's a set of events $E_1, \dots, E_n$ that are mutually exclusive, with the property that one of them is sure to happen -- in the case of the dartboard, this would be a collection of regions that don't overlap and cover the whole dartboard. The agents will bundle all of their contracts for outcomes of the same event, and buy and sell those together. In this case, letting $\mathbb{1}_{E_k}$ be the function that is 1 if $x \in E_k$ and 0 otherwise, the condition for equilibrium is $$\sum_i \mathbb{E}_Q\!\left[b_i\,\mathbb{1}_{E_k}\right] = \sum_i\left(W_i\,P_i(E_k) - s_i\,Q(E_k)\right) = 0 \quad\text{for all } k.$$

To avoid arbitrage, it must be the case that $\sum_k Q(E_k) = 1$; summing the equilibrium condition over $k$ then gives $\sum_i s_i = \sum_i W_i$, therefore we require that for all $k$, $$Q(E_k) = \frac{\sum_i W_i\,P_i(E_k)}{\sum_i W_i}.$$ Now, in the limit of there being infinitely many infinitely small sets $E_k$, all sets are just unions of some of the sets $E_k$, and in general we will have $$Q(E) = \frac{\sum_i W_i\,P_i(E)}{\sum_i W_i}$$ for every event $E$. This is just like the discrete case: our market prices are exactly Bayes mixture probabilities, and as a result the wealth of each agent after the bets are paid will be proportional to their posterior credence in the mixture.
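
A quick numerical check of this equilibrium, again on a discretised unit interval with made-up densities: with $Q$ set to the wealth-weighted mixture and each agent saving $s_i = W_i$, the bundled demands for every event cancel out.

```python
# Continuous-market equilibrium: net demand for each bundled event is zero.
# Wealths, densities, and the partition are made-up illustrative choices.
import numpy as np

xs = np.linspace(0.0005, 0.9995, 1000)
dx = xs[1] - xs[0]
W = np.array([1.0, 3.0])                       # two agents' wealths
p = np.stack([2 * xs, 2 - 2 * xs])             # their densities on [0, 1]
q = W @ p / W.sum()                            # mixture density = equilibrium prices
s = W.copy()                                   # each agent saves s_i = W_i

b = W[:, None] * p / q - s[:, None]            # b_i(x) = W_i dP_i/dQ(x) - s_i
for lo, hi in [(0.0, 0.3), (0.3, 0.7), (0.7, 1.0)]:   # a partition of the dartboard
    mask = (xs >= lo) & (xs < hi)
    net_demand = np.sum(b[:, mask] * q[mask]) * dx
    print(lo, hi, net_demand)                  # ~0 for every event
```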

Finally, it's worth noting something interesting that's perhaps more obvious in this formalism than in others. Suppose we again have a single agent betting with the house on which outcome will occur, but instead of learning the outcome $x$, the house and agent only learn that the outcome was in some event $E$. In this case, the agent would have spent $W\,P(E)$ on contracts for outcomes in $E$, and should presumably be paid the house's posterior expected value of those contracts: $$\mathbb{E}_Q\!\left[W\,\frac{dP}{dQ}\,\Big|\,E\right] = \frac{1}{Q(E)}\int_E W\,\frac{dP}{dQ}\,\mathrm{d}Q = W\,\frac{P(E)}{Q(E)}.$$ Now, this is exactly what would have happened if the agent had been asked which of the events $E_1$ through $E_n$ would occur: the agent would bet $W\,P(E_k)$ on each event $E_k$, and in the case that event $E_k$ occurred, would be paid $W\,\frac{P(E_k)}{Q(E_k)}$. In the dartboard example, instead of learning which point the dart would land on, you only learned how many points the throw was worth and your bets only paid out accordingly, but it turned out that you bet optimally anyway, despite the payouts being different to what you thought they were. Therefore, Kelly betting has this nice property that you don't need to know exactly what you're betting on: as long as you know the space of possible outcomes, you'll bet optimally no matter what the question about the outcomes is.
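
To see this last point numerically, here's a Python sketch with made-up distributions over a dozen outcomes: the agent Kelly-bets on the fine-grained outcome, but is paid according to which cell of a partition occurred, and the payout is exactly what a direct Kelly bet on the partition would have given.

```python
# Kelly betting is insensitive to which coarse-graining of the outcome you learn.
# The distributions and the partition into events are made up.
import numpy as np

rng = np.random.default_rng(0)
W, n = 1.0, 12
P = rng.dirichlet(np.ones(n))                  # agent's distribution over outcomes
Q = rng.dirichlet(np.ones(n))                  # house's distribution over outcomes
events = [list(range(0, 4)), list(range(4, 9)), list(range(9, 12))]   # a partition

f = W * P / Q                                  # fine-grained Kelly bet: payout per outcome
for k, E in enumerate(events):
    spent_on_E = np.sum(f[E] * Q[E])                  # money spent inside E: W * P(E)
    paid_if_E = spent_on_E / Q[E].sum()               # house's posterior value: W * P(E)/Q(E)
    direct_bet_payout = W * P[E].sum() / Q[E].sum()   # Kelly-betting on the partition directly
    print(k, spent_on_E, paid_if_E, direct_bet_payout)   # the last two columns agree
```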

Comments

The Kelly criterion was recently discussed on LW here.

As I start reading this post, I find myself curious how you see the goal of this post in relation to that one -- is it an alternate introduction? does it cover different ground? is there anything you disagree with in that other post (and its follow-up, The Art of the Overbet)?

I actually wrote this post almost exactly two years ago, and have no idea why it just got cross-posted to LessWrong. I mainly like my post because it covers how the Kelly criterion sort-of applies to markets where you have to predict a bunch of things but don't know what you're actually going to learn. It's also a more theoretical take on the subject. [EDIT: oh, also it goes through the proof of equivalence between a market of Kelly bettors and Bayesian updating, which is kind of nice and an interesting parallel to logical induction]

I was confused about the relationship between Kelly betting, wealth maximising and having linear utility in money. In particular, I couldn't quite work out what was supposed to change in finding the extremal point if you have linear utility.

I now realise you are supposed to take the expectation over K of your wealth, and presumably (though I didn't fancy the algebra) this suggests betting all your money.