# Even Odds

(Cross-posted on my personal blog, which has LaTeX, and is easier to read.)

Let's say that you are at your local Less Wrong meetup and someone makes some strong claim and seems very sure of himself: "blah blah blah resurrected blah blah alicorn princess blah blah 99 percent sure." You think he is probably correct, you estimate a 67 percent chance, but you think he is way overconfident. "Wanna bet?" you ask.

"Sure," he responds, and you both check your wallets and have 25 dollars each. "Okay," he says, "now you pick some betting odds, and I'll choose which side I want to pick."

"That's crazy," you say. "I am going to pick the odds so that I cannot be taken advantage of, which means that I will be indifferent between which of the two options you pick, which means that I will expect to gain 0 dollars from this transaction. I won't take it. It is not fair!"

"Okay," he says, annoyed with you. "We will both write down the probability we think that I am correct, average them together, and that will determine the betting odds. We'll bet as much as we can of our 25 dollars with those odds."

"What do you mean by 'average'? I can think of at least four possibilities. Also, since I know your probability is high, I will just choose a high probability that is still less than it, to maximize the odds in my favor regardless of my actual belief. Your proposition is not strategy proof."

"Fine, what do you suggest?"

You take out some paper, solve some differential equations, and explain how the bet should go.

Satisfied with your math, you share your probability, he puts 13.28 on the table, and you put 2.72 on the table.

"Now what?" He asks.

A third meetup member quickly takes the 16 dollars from the table and answers, "You wait."

I will now derive a general algorithm for determining a bet from two probabilities and a maximum amount of money that people are willing to bet. This algorithm is both strategy proof and fair. The solution turns out to be simple, so if you like, you can skip to the last paragraph, and use it next time you want to make a friendly bet. If you want to try to derive the solution on your own, you might want to stop reading now.

First, we have to be clear about what we mean by strategy proof and fair. "Strategy proof" is clear: our algorithm should ensure that neither person believes they can increase their expected profit by lying about their probability. "Fair" is a little harder to define. There is more than one way to define "fair" in this context, but there is one way which I think is probably the best. When the players make the bet, each will expect to make some profit. They cannot both be correct, but each will believe their expected profit is positive. I claim the bet is fair if both players expect to make the same profit on average.

Now, let's formalize the problem:

Alice believes S is true with probability p. Bob believes S is false with probability q. Both players are willing to bet up to d dollars. Without loss of generality, assume p+q>1. Our betting algorithm will output a dollar amount, f(p,q), for Alice to put on the table and a dollar amount, g(p,q), for Bob to put on the table. Then if S is true, Alice gets all the money, and if S is false, Bob gets all the money.

From Alice's point of view, her expected profit is p(g(p,q))+(1-p)(-f(p,q)).

From Bob's point of view, his expected profit is q(f(p,q))+(1-q)(-g(p,q)).

Setting these two values equal, and simplifying, we get that (1+p-q)g(p,q)=(1+q-p)f(p,q), which is the condition that the betting algorithm is fair.

For convenience of notation, we will define h(p,q) by h(p,q)=g(p,q)/(1+q-p)=f(p,q)/(1+p-q).

Now, we want to look at what will happen if Alice lies about her probability. If instead of saying p, Alice were to say that her probability was r, then her expected profit would be p(g(r,q))+(1-p)(-f(r,q)), which equals p(1+q-r)h(r,q)+(1-p)(-(1+r-q)h(r,q))=(2p-1-r+q)h(r,q).

We want this value, as a function of r, to be maximized when r=p. Differentiating with respect to r gives -h+(2p-1-r+q)(dh/dr), and setting this to 0 at r=p yields -h+(-1+r+q)(dh/dr)=0.

Separation of variables gives us (1/h)dh=1/(-1+r+q)dr,

which integrates to ln(h)=C+ln(-1+r+q),

which exponentiates to h=e^C(-1+r+q), so at r=p, h=e^C(-1+p+q).

This gives the solution f(p,q)=e^C(-1+p+q)(1+p-q)=e^C(p^2-(1-q)^2) and g(p,q)=e^C(-1+p+q)(1+q-p)=e^C(q^2-(1-p)^2).

It is quick to verify that this solution is actually fair, and both players' expected profit is maximized by honest reporting of beliefs.
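As a quick numerical sanity check (my own sketch, with the scale e^C set to 1 and example probabilities p = 0.8, q = 0.6), we can confirm both properties: the two expected profits match, and Alice's expectation, as a function of her report r, peaks at her true belief p:

```python
# Stakes with the scale e^C set to 1
def f(p, q): return p**2 - (1 - q)**2   # Alice's stake
def g(p, q): return q**2 - (1 - p)**2   # Bob's stake

p, q = 0.8, 0.6
alice = p * g(p, q) - (1 - p) * f(p, q)   # Alice's expected profit
bob = q * f(p, q) - (1 - q) * g(p, q)     # Bob's expected profit
assert abs(alice - bob) < 1e-9            # fair: both expect the same

# Strategy proof: sweep Alice's reported probability r (keeping r + q > 1);
# her expected profit under her true belief p is maximized at r = p.
def alice_profit(r): return p * g(r, q) - (1 - p) * f(r, q)
best_r = max((i / 1000 for i in range(401, 1000)), key=alice_profit)
assert abs(best_r - p) < 1e-9
```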

The value of the constant multiplied out in front can be anything, and the most either player could ever have to put on the table is equal to this constant. Therefore, if both players are willing to bet up to d dollars, we should define e^C=d.

Alice and Bob are willing to bet up to d dollars, Alice thinks S is true with probability p, and Bob thinks S is false with probability q. Assuming p+q>1, Alice should put in d(p^2-(1-q)^2), while Bob should put in d(q^2-(1-p)^2). I suggest you use this algorithm next time you want to have a friendly wager (with a rational person), and I suggest you set d to 25 dollars and require both players to say an odd integer percent to ensure a whole number of cents.
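For convenience, here is the final rule as a small Python function (the $13.28 and $2.72 from the story come from p = 0.99, q = 1 - 0.67 = 0.33, d = 25):

```python
def even_odds_stakes(p, q, d=25.0):
    """Stakes for the even-odds bet.
    p: Alice's probability that S is true
    q: Bob's probability that S is false (assumes p + q > 1)
    d: the most either player is willing to bet
    Returns (alice_stake, bob_stake); the winner takes the whole pot."""
    assert p + q > 1, "otherwise, switch which side of the bet each player takes"
    return d * (p**2 - (1 - q)**2), d * (q**2 - (1 - p)**2)

# The story's numbers: he is 99% sure, you give his claim 67%.
alice, bob = even_odds_stakes(0.99, 0.33)
print(round(alice, 2), round(bob, 2))  # 13.28 2.72
```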

## Comments (45)

A less complex way of making bets on probabilities, which I learned at a finance internship, is for one player to "make a market" on the outcome of interest--that is, offer to buy or sell contracts whose value settles to, e.g., $1 if the outcome is true and $0 if the outcome is false. If you make a market with a "spread"--e.g. buy at .40, sell at .60--then you can safely name your probability, as the counterparty in this anecdote asks you to do, without giving up a large advantage. For example, at my internship one common thing was precommitting to trade on any market of a given spread.

This has the advantage of being easier to remember and allowing multiple parties, but the disadvantage of (probably, I think) being subject to gaming and market dynamics which distract from the probabilities (but are interesting in their own right).

I don't think it is too difficult to remember. You put in the square of the probability you think you're correct minus the square of the probability he thinks you are correct, all times 25. He uses the same algorithm.

A: 60% confidence, B: 30% confidence

My intent is to demonstrate that, while the above is probably incorrect, it is not an adequate explanation to remember and get the right result out of, because the calculations I specified above are my genuine interpretation of your statements. (This problem persists for every value of p and q, whether they total to above 1 or not.)

Somebody replied with an explanation of how I was basically omitting the relativization of 'you' when considering what values to use.

That is, B should bet according to his confidence that he is correct, which in my case would be 70%.

Since you know it's wrong, how about you try again to make sense of it?

http://lesswrong.com/lw/jgv/even_odds/ad6y

Neither probability should be <50%; you take the probability that your opinion is the right one, not whether the proposition is true or false.

In your example B would be betting against his beliefs, thus the negative result. The right calculation: A = 0.6, B = 0.7

Edit:

actually, it's sufficient that A and B sum to over 1. Since you can always negate the condition, the right calculation here is:

Also, apparently I can't use the retract button the way I wanted to use it.

I've generalized the Even Odds Algorithm to any number of people and any number of alternatives that they want to bet on. Here is the link: http://bywayofcontradiction.com/?p=118

A simpler generalization:

Assume there are n people. Let S_i be person i's score for the event in question, from your favorite proper scoring rule. Then let the total payment to person i be T_i = S_i - (1/(n-1)) * (sum of S_j over j ≠ i), i.e. the person's score minus the average score of everyone else. If T_i is negative, that's a payment that person has to make.

This scheme is always strategyproof and budget-balanced. If the scoring rule is bounded, then the payment is bounded. If the Bregman divergence associated with the scoring rule is symmetric (like it is with the quadratic scoring rule), then each person expects the same payment (fair by your definition).

Or in the language this comment uses, each player divides the square of their surprise equally among all the other players.
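A sketch of this payment scheme in Python, using the quadratic (Brier-style) scoring rule as the proper scoring rule (the function names are mine, not from the comment):

```python
def quadratic_score(prob, outcome):
    # Quadratic (Brier-style) proper score for a binary event: higher is better.
    return 1 - (outcome - prob) ** 2

def payments(probs, outcome):
    """T_i = S_i minus the average score of everyone else.
    probs: each person's probability that the event happens
    outcome: 1 if the event happened, 0 if not."""
    n = len(probs)
    scores = [quadratic_score(pr, outcome) for pr in probs]
    total = sum(scores)
    return [s - (total - s) / (n - 1) for s in scores]

pays = payments([0.9, 0.5, 0.2], outcome=1)
assert abs(sum(pays)) < 1e-12  # budget balanced: payments sum to zero
```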

A worked example:

Your friend says "It's totally gonna work this time". You say "uh, no it's not". She says "Okay, maybe not but like 90% yes". You say "Bet, it's only like 1 in 4," putting down $25 * (.5625-.0081) = $13.86. She puts down her $25 * (0.8281-.0625) = $19.14. She attempts, smug in her expected 0.91 * $13.86 - 0.09 * $19.14 = $10.89, while you wait arms folded for your expected 0.75 * $19.14 - 0.25 * $13.86 = $10.89.
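The arithmetic above can be reproduced with the post's formula (a quick sketch; the variable names are mine). Here d = $25, her probability is 0.91, and yours is 0.75 against:

```python
d, p, q = 25, 0.91, 0.75        # d in dollars; p = her belief, q = yours
her_stake = d * (p**2 - (1 - q)**2)
your_stake = d * (q**2 - (1 - p)**2)
print(round(your_stake, 2), round(her_stake, 2))       # 13.86 19.14
print(round(p * your_stake - (1 - p) * her_stake, 2))  # her expectation: 10.89
print(round(q * her_stake - (1 - q) * your_stake, 2))  # your expectation: 10.89
```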

Sweet! In practice I would allow any percent estimate, loser gets to keep fractional cents as a humorous consolation prize, and obviously always settle up over PayPal or successive games of zero-$EV high card to even up to whatever denominations the loser has on hand, because using pennies lowers both participants' $EV to negative infinity.

This doesn't punish estimates like 99.999% when the estimate should be more like 99%, unless you rescale and allow much higher than $25 bets.

Here is another attempt to present the same algorithm, with the goal of making it easier to memorize:

"Each puts in the square of their surprise, then swap."

To spell this out, I predict that some event will happen with probability 0.1, you say it is 0.25. When it happens, I am 0.9 surprised and you are only 0.75 surprised. So I put down (0.9)^2 * D, you put down (0.75)^2 * D, and we swap our piles of money. Since I was more surprised, I come out the loser on the deal.

"Square of the surprise" is a quantity commonly used to measure the failure rate of predictive agents in machine learning; it is also known as the Brier score. So we could describe this rule as "each bettor pays the other his or her Brier score." There was some discussion of the merits of various scoring systems in an earlier post of Coscott's.
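A quick check (my own sketch) that "each puts in the square of their surprise, then swap" gives exactly the same net transfers as the stake formula in the post, using the story's numbers:

```python
def net_swap(my_prob, their_prob, outcome, d=1.0):
    # my_prob, their_prob: probability each assigns to the event happening;
    # surprise = 1 minus the probability assigned to what actually happened.
    my_surprise = (1 - my_prob) if outcome else my_prob
    their_surprise = (1 - their_prob) if outcome else their_prob
    # I put down d * my_surprise^2, they put down d * their_surprise^2; swap.
    return d * their_surprise**2 - d * my_surprise**2

p, q = 0.99, 0.33                 # Alice: P(S) = p; Bob: P(S) = 1 - q
f = p**2 - (1 - q)**2             # Alice's even-odds stake (d = 1)
g = q**2 - (1 - p)**2             # Bob's even-odds stake
assert abs(net_swap(p, 1 - q, 1) - g) < 1e-9   # S true: Alice nets +g
assert abs(net_swap(p, 1 - q, 0) + f) < 1e-9   # S false: Alice nets -f
```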

I thought it was very interesting that my natural assumptions lead to a Brier score like system rather than Bayes score. I really don't think Bayesianists respect Brier score enough.

I thought it was interesting too. As far as I can tell, your result is special to the situation of two bettors and two events. The description I gave describes a betting method when there are more than two alternatives, and that method is strategy proof, but it is not fair, and I can't find a fair version of it.

I am really stumped about what to do when there are three people and a binary question. Naive approaches give no money to the person with the median opinion.

I wrote up an answer to this here: http://bywayofcontradiction.com/?p=118

You could just do all three pairwise bets. That will not be fair, since not everyone participates in all bets. The middle man might just be guaranteed to make money though. (for some probabilities)

Assume p=0.5, q=0.8. Alice should bet d( (1-0.8)^2 - 0.5^2 ) = -0.21d. Am I missing something here?

You are correct. There was a sign error that I didn't notice because the answer looked so similar to the one I knew was right from doing it before. Thanks! It is corrected now.

This is great, thank you!

I like this idea. As you say, it's not the only way to define it but it does seem like a very reasonable way. The two players have come upon a situation which seems profitable to both of them and they simply agree to "split the profit".

In order to give the players incentives to be honest, the algorithm seems to "use up" some of the total potential profit. For example, in the OP, the players are instructed to bet $2.72 and $13.28 when each was actually willing to bet up to $25. I think this also means that this method of coming up with bet amounts is not strategy proof if players are able to lie about their maximum bet amounts.

Yes. That is correct. It is only strategy proof if the max bet amount is fixed.

As for the used-up potential, that is common in mechanism design. It is very common to have to choose between Pareto optimality and strategy proofness.

A further problem with this mechanism: since the potential profit isn't actually used up before the bet is resolved (i.e., each player still has the money they didn't put on the table), the players can always make another bet with each other using their remaining money. But this knowledge changes the players' incentives when they're choosing the probabilities to output.

I'm guessing that in order to design a mechanism that is actually strategy proof, we'd need to make the cost more explicit, for example by having a third party collect some sort of fee based on the expressed probabilities.

From the OP:

The players had money left over. They only put in 16 of 50 dollars.

I thought the rapidity with which the third person was described to have taken the funds in escrow and declared, "You wait" indicated that she (and you) had seen the problem pointed out by Wei_Dai and moved to block it. (Silly me?)

Here I thought it was just a joke about someone stealing the stakes.

I wasn't trying to indicate anything with the third person. Sorry for the confusion. I just thought that it would be hard to end the story without describing where the money goes, since the prediction I was referencing was not going to be answered for a year.

What's wrong with just using this algorithm to establish ratios between bets, then scaling up to meet whichever limit is hit first?

In your example, it'd be scaled up to 5.12 against 25.

That is not strategy proof.

Oh, I see. You probably already understood that, but I'll write it up for anyone else who didn't initially grok the process (like me).

Intuitively, the original algorithm incentivises people to post their true estimates by scaling up the opponent's investment with your given odds, so that it doesn't pay to artificially lower your estimate. The possible wins will be much lower, disproportionately to your investment, if you underestimate your odds. Conversely, the possible losses will not be covered by increased wins if you overestimate your chances.

It does not work if you scale the bets. If A believes he wins the bet half the time, and B believes it will be 90%, with the assumption of B being honest and both players setting the limit at 1 (for ease of calculation):

With A declaring 50%, the investment ratios would be:

With the original amount calculation that gives the expected value of

Whereas with scaled up bets A puts in 0.43 while B gives 1:

With A declaring 20%, the numbers are:

While with scaled bets (B = 1, A = 0.18)

Note how E(A) goes down if A lies, but E'(A) went way up.

At first reading I got lost here -- why would I say that this is crazy or that the bet is not fair if I expect to gain (and lose) 0 dollars? Is the reason that my opponent would expect (according to their estimation of probabilities of events) to gain from the trade, and it's not fair for all the gains to be on one side?

That is exactly right.

Thanks!

This is indeed great. I've already set it up in a betting thread I run - I'll report back with how well it works.

Note, if p > 1-q, then switch the sense of the bet so you're working with 1-p and q instead. Or tolerate having negative wagers as being payouts instead of wagered amounts.

ETA: uh, yes, I meant p < 1-q

Excited to see your report.

You meant to say if p<1-q, and yes the two interpretations give the same results. The mechanism also results in no bet at all when the probabilities are the same. I did not have to make any assumptions on the probabilities, but I did for the sake of producing positive bets.

I have used it in two places. In both places, it has created some slight confusion.

In one place, two prominent users said they are dissatisfied with it and not going to use it again. This was because the bet ended up much smaller than they originally thought. It is not popular there.

I changed the rule so that if you declared a probability and a max bet, the scale is set so that the largest bet you could possibly end up making (if someone puts down the probability furthest from yours) equals your declared max bet, instead of using the max bet as the scale directly. That would have solved the problem... but by that point the damage was done.

In the other place, I got a dozen participants to give their probability distributions for an outcome, using a pool rule which is probably equivalent to the rule for pools given elsewhere in this thread, but for more obvious reasons (a round robin of 1-on-1 bets on just the observed outcome, versus all other participants). Apparently, the bandwagon people jumped onto was to try it rather than the reverse. They might end up being disappointed by the small size of the results too.

Ah, didn't catch that: "Without loss of generality, assume p+q>1."

I'm confused... if the prediction does not come true (which you estimated as being 33 percent likely), you only gain $2.72? And if the most probable outcome does come true, you lose $13.28?

Yep, I put the numbers in backwards (because of the sign mistake I made originally.) I fixed it now. Thanks for the correction.

What if Alice is willing to bet up to d dollars, but Bob is willing to bet up to e dollars?

For what it's worth, I don't care that much about fairness. If I have to give it up to make it strategy-proof against lying about the relative amounts of money, that's fine.

Take the minimum. If you require that both parties actually have the amount of money they say they are willing to bet, then taking the minimum and then running the algorithm is strategy proof and fair. The optimal strategy is to put in as much money as you can, then report honest preferences.

Yeah, but you might be able to find a better strategy. Perhaps you can get it so Alice and Bob both have a higher expected benefit. Perhaps you can make Alice only go down slightly, but Bob go up significantly.

I think I can prove that you cannot increase the expected benefit for both without knowing something about their probabilities ahead of time. You can sacrifice fairness, but I do not think Alice can go up much more than Bob goes down, and I do not want to.

I feel like if Bob has enough money, he ought to just be able to raise the stakes. In your example, if Alice is 50% sure, then she will never bet more than a quarter of her money, regardless of how certain and rich Bob is.

Even if I do have to let Bob go down a little, I still feel like it could be worth it. I don't know before-hand if I'm going to be Alice or Bob, so it's automatically fair. I want to maximize the sum of their expected profits.