I like this a lot. I would also like to hear a post-mortem from the winner in a lot of cases, although of course it's kind of silly to impose it. But I do sometimes see the winner and the loser agree that the bet turned out to be operationalized wrong -- that they didn't end up betting on the thing they thought they were betting on. I'd like to know whether the winner thinks they won the spirit of the bet, as well as the letter.
Curated.
This seems like a quite obvious idea in retrospect. I haven't yet thought through whether it's something you should always be doing when you're betting, but it certainly seems like a good tool to have in the rationalist-culture-toolkit.
Yeah, this seems great to me.
It does seem like, a fair bit of the time, people might just say "well, I got unlucky, but my models are the same, and, I dunno, I guess I slightly adjusted the weights of my model?" The more interesting thing is when you make a bet where a negative outcome should force a large update.
Interesting; it's similar to making a calculated bet in poker when the odds are in your favor but still losing. In that case, your decision was still correct, and so was the process you used to reach it. So there wouldn't be much to write about. Perhaps in this case the loser could write about why they think the winner actually made the wrong decision to continue playing the hand.
"The more interesting thing is when you make a bet where a negative outcome should force a large update."
I think that's what odds are for. If you're convinced (incorrectly) that something is very unlikely, you should be willing to give large odds. You can't really say "I thought this was 40% likely, and I happened to get it wrong" if you gave 5:1 odds initially.
(And on the other side, the person who took the bet should absolutely say they are making a small update towards the other model, because it's far weaker evidence for them.)
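To make the odds arithmetic concrete, here's a tiny sketch (the function name is mine, just for illustration):

```python
# Offering n:1 odds against an event implies you think its probability is
# about 1/(n+1), so the odds you quote pin down what you actually believed.

def implied_probability(odds_against: float) -> float:
    """Probability implied by giving odds_against:1 odds against an event."""
    return 1 / (odds_against + 1)

print(implied_probability(5))    # ~0.167 -- hard to square with a claimed 40%
print(implied_probability(1.5))  # 0.400 -- the odds an honest 40% belief supports
```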
Sorta, but you might have 50:50 odds with a very large spread in the two sides' stated probabilities (both people are very confident in their side) or with a very small spread. So it might be helpful to record that.
Suppose you and I have two different models, and my model is less wrong than yours. Say my model assigns a 40% probability to event X and yours assigns 60%; we disagree and bet, and event X happens. If I had an oracle over the true distribution of X, my write-up would consist of saying "this falls into the 40% of cases, as predicted by my model", which doesn't seem very useful. In the absence of an oracle, I would end up writing up praise for, and updating towards, your more wrong model, which is obviously not what we want.
This approach might lead to over-updating on single bets. You'd need to record your bets, and the odds on those bets, over time to see how well calibrated you were. If your calibration over time is poor, then you should be updating your model. Perhaps we can weaken the suggestion in the post to writing a post-mortem on why you may have been wrong. Then, reflecting over multiple bets across time, you could try to tease out common patterns and deficits in your model-making.
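As a sketch of what that record-keeping might look like (the bets and field names here are invented, and I'm using the Brier score as one possible calibration measure):

```python
# Log each bet's stated probability and outcome, then score calibration
# across the whole record instead of updating hard on any single bet.

bets = [
    {"p": 0.8, "won": True},   # stated probability and result of each bet
    {"p": 0.6, "won": False},
    {"p": 0.9, "won": True},
    {"p": 0.3, "won": True},
]

# Brier score: mean squared error between stated probability and outcome.
# 0 is perfect; always saying 50% scores 0.25.
brier = sum((b["p"] - (1.0 if b["won"] else 0.0)) ** 2 for b in bets) / len(bets)
print(round(brier, 3))  # 0.225 -- slightly better than always saying 50%
```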
"In the absence of an oracle, I would end up writing up praise for, and updating towards, your more wrong model, which is obviously not what we want."
Perhaps I'm missing something, but I think that's exactly what we want. It leads to eventual consistency / improved estimates of odds, which is all we can look for without oracles or in the presence of noise.
First, strength of priors will limit the size of the bettor's updates. Let's say we both used beta distributions, and had weak beliefs. Your prior was Beta(4,6), and mine was Beta(6,4). These get updated to Beta(5,6) and Beta(7,4). That sounds fine - you weren't very sure initially, and you still won't over-correct much. If the priors are stronger, say, Beta(12,18) and Beta(18,12), the updates are smaller as well, as they should be given our clearer world models and less willingness to abandon them due to weak evidence.
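Worked out in code, with the same numbers (posterior means after adding one observation of X):

```python
# The event adds one "success" to each Beta prior; the posterior mean
# moves less when the prior is stronger.

def beta_mean(a: float, b: float) -> float:
    return a / (a + b)

for a, b in [(4, 6), (6, 4), (12, 18), (18, 12)]:
    print(f"Beta({a},{b}): mean {beta_mean(a, b):.3f} -> {beta_mean(a + 1, b):.3f}")
# Beta(4,6):   0.400 -> 0.455   weak prior, modest shift
# Beta(6,4):   0.600 -> 0.636
# Beta(12,18): 0.400 -> 0.419   stronger prior, smaller shift
# Beta(18,12): 0.600 -> 0.613
```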
Second, we can look at the outside observer's ability to update. If the estimates are 40% vs. 60%, then unless there are very strong priors, I would assume neither side is interested in making huge bets or giving large odds - if the bet happens at all, given transaction costs, etc. This should implicitly limit the size of the update other people make from such bets.
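And here's the outside observer's update in the 40% vs. 60% scenario, assuming (my assumption, for illustration) they start indifferent between the two models:

```python
# Treat the two models as hypotheses and apply Bayes' rule after X occurs.

p_x_mine, p_x_yours = 0.4, 0.6  # each model's stated probability for event X
prior_mine = 0.5                # observer starts indifferent between models

posterior_mine = (p_x_mine * prior_mine) / (
    p_x_mine * prior_mine + p_x_yours * (1 - prior_mine)
)
print(posterior_mine)  # 0.4 -- a single 1.5x likelihood ratio only moves
                       # an even prior to 60/40, a deliberately small update
```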
Another idea on this: both sides could do pre-mortems, "if I lose, ...". They could look back at this when doing post-mortems. Obviously this increases the effort involved.
Yeah, pre-mortem is another name for pre-hindsight, and murphyjitsu is just the idea of alternating between making pre-mortems and fixing your plans to prevent whatever problem you envisioned in the pre-mortem.
Thinking about this makes me think people should record not just their bets, but the probabilities. If I think the probability is 1% and you think it's 99%, then one of us is going to make a fairly big update. If you think it's 60% and I think it's 50%, yeah, not so much. As a rough rule of thumb, anyway. (Obviously I could be super confident in a 1% estimate in a similar way to how you describe being super confident in a 40%.)
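A quick sketch of that rule of thumb, treating each person's stated probability as their likelihood for the observed outcome (my framing, not the parent comment's):

```python
# The ratio of the two stated probabilities bounds how big an update the
# loser owes from this one bet resolution.

def likelihood_ratio(p_winner: float, p_loser: float) -> float:
    return p_winner / p_loser

print(likelihood_ratio(0.99, 0.01))  # 99.0 -- someone owes a large update
print(likelihood_ratio(0.60, 0.50))  # 1.2  -- barely worth updating on
```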
But OTOH I think in many cases, by the time the bet is resolved, there will also be a lot of other relevant evidence bearing on the questions behind the bet. So the warranted update will actually be much larger than would be justified by just the one piece of information. In other words, if two Bayesians have different world-models and make a bet about something far in the future, by the time the actual bet is resolved they'll often have seen much more decisive evidence deciding between the two models (not necessarily in the same direction as the bet gets decided).
Still, yeah, I agree with your concern.
Like this idea. This approach can be seen in hedge funds as well. An analyst makes a bet on how/why a stock will perform a certain way and places a corresponding monetary position. The best analysts will take the extra step and conduct a post-mortem if they lose money OR if they make money but not for the reasons that they had previously outlined.
Betting money is a useful way to resolve disagreements. However, I recently made a bet with both a monetary component and the stipulation that the loser write at least 500 words to a group chat about why they were wrong. I like this idea for several reasons.
Furthermore, if the loser's write-up is anything short of honest praise for the winner's views, the write-up may provide hints at a continuing disagreement between the loser and winner which can lead to another bet.
This idea feels similar to Ben's Share Models, Not Beliefs. Bets focus only on disagreements about probabilities, not on the underlying reasons for those disagreements. Declaring a winner/loser conveys binary information about who was more correct, which is very little information. Post-mortems give a place for the models to be brought to light.
A group of people who engaged in betting-with-post-mortems together would generally be getting a lot more feedback on practical reasoning and where it can go wrong.