Depends significantly on where you live! I don't worry about hurricanes, floods, earthquakes, etc.
Among the things that remain is fire, and my government says the fire services get called to 6,000 domestic fires every year. Divided by a population of, say, 5 million households, that's a risk of 0.12 % per year. Maybe not all fires get fire services involvement, so we'll bump it up to 0.2 %.
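The back-of-envelope arithmetic, as a sketch (both figures are order-of-magnitude numbers, not official statistics):

```python
# Rough annual risk that a given household has a fire the services
# attend, from the figures above (callouts and household count are
# both rough, assumed numbers).
callouts_per_year = 6_000
households = 5_000_000

risk = callouts_per_year / households
print(f"{risk:.2%}")  # → 0.12%

# Not every fire involves the fire services, so round up generously:
adjusted_risk = 0.002  # 0.2 % per year
```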
You won't find actuarial tables, but they can often be constructed from official sources and/or press releases with some ingenuity. We'd do this for other risks too, like...
I agree -- sorry about the sloppy wording.
What I tried to say was that "if you act like someone who maximises compounding money, you also act like someone whose utility is logarithmic in money."
Your formula is only valid if utility = log($).
This is a synonym for "if money compounds and you want more of it at lower risk". So in a sense, yes, but it seems confusing to phrase it in terms of utility as if the choice was arbitrary and not determined by other constraints.
The insurance company does not have logarithmic discounting on wealth, it will not be using Kelly to allocate bets. From the perspective of the company, it is purely dependent on the direct profitability of the bet - premium minus expected payout and overheads.
Not true. Risk management is a huge part of many types of insurance, and that is about finding the appropriate exposure to a risk -- and this exposure is found through the Kelly criterion.
This matters less in some types of insurance (e.g. life, which has stable long-term rates and rare catastrophi...
Fundamentally we are taking the probability-weighted expectation of log-wealth under all possible outcomes from a single set of actions, and comparing this to all other sets of actions.
The way to work in uncompensated claims is to add another term for that outcome, with the probability that the claim is unpaid and the log of wealth corresponding to both paying that cost out of pocket and fighting the insurance company about it.
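The comparison described above can be sketched in a few lines. All the numbers here are made up for illustration (wealth, premium, loss size, and the probability the insurer refuses to pay):

```python
import math

# Illustrative numbers only: current wealth, premium, potential loss,
# probability of the loss, and probability the insurer denies the claim.
W = 25_000.0        # current wealth
premium = 300.0
loss = 20_000.0
p_loss = 0.01       # probability the insured event happens
p_unpaid = 0.05     # P(claim denied | event), fought at extra cost
legal_cost = 2_000.0

def e_log_wealth_insured():
    # Three outcomes: no event; event with the claim paid; event with
    # the claim denied (pay the loss out of pocket plus the cost of
    # fighting the insurance company about it).
    return (
        (1 - p_loss) * math.log(W - premium)
        + p_loss * (1 - p_unpaid) * math.log(W - premium)
        + p_loss * p_unpaid * math.log(W - premium - loss - legal_cost)
    )

def e_log_wealth_uninsured():
    return (1 - p_loss) * math.log(W) + p_loss * math.log(W - loss)

# Insure iff it yields the higher expected log-wealth.
print(e_log_wealth_insured() > e_log_wealth_uninsured())
```

At this (low) wealth relative to the loss, the insured branch wins; crank `W` up and the preference eventually flips.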
It is under no such assumption! If you have sufficient wealth you will leave something even if you die early, by virtue of already having the wealth.
If it's easier, think of it as the child guarding the parent's money and deciding whether to place a hedging bet on their parent's death or not -- using said parent's money. Using the same Kelly formula we'll find there is some parental wealth at which it pays more to let it compound instead of using it to pay for premia.
Even so, at some level of wealth you'll leave more behind by saving up the premium and having your children inherit the compound interest instead. That point is found through the Kelly criterion.
(The Kelly criterion does indeed correspond to a concave utility, but the insurance company is so wealthy that individual life insurance payouts sit on a nearly linear stretch of its utility curve, whereas for most individuals they do not.)
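The break-even wealth can be found numerically. A minimal sketch, with made-up premium, loss, and probability, ignoring claim denial for simplicity:

```python
import math

# Hypothetical numbers: at what wealth does self-insuring (letting the
# premium compound instead) beat buying cover? Sweep wealth levels and
# compare expected log-wealth with and without the policy.
premium, loss, p = 300.0, 20_000.0, 0.01

def prefers_insurance(w):
    insured = math.log(w - premium)  # the loss is covered either way
    uninsured = (1 - p) * math.log(w) + p * math.log(w - loss)
    return insured > uninsured

# First wealth level (to the nearest thousand) where cover stops paying:
threshold = next(w for w in range(21_000, 2_000_000, 1_000)
                 if not prefers_insurance(w))
print(threshold)
```

Below the threshold the policy raises expected log-wealth; above it, the premium is better left compounding for the heirs.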
I just wouldn't use the word "Kelly", I'd talk about "maximizing expected log money".
Ah, sure. Dear child has many names. Another common name for it is "the E log X strategy", but that tends not to be as recognisable to people.
you say "this is how to mathematically determine if you should buy insurance".
Ah, I see your point. That is true. I'd argue this isolated E log X approach is still better than vibes, but I'll think about ways to rephrase to not make such a strong claim.
what do you mean when you say this is what Kelly instructs?
Kelly allocations only require taking the actions that maximise expected log-wealth under the joint distribution of outcomes. It doesn't matter how many bets are used to construct that joint distribution, nor when during the period they were entered.
If you don't know at the start of the period which bets you will enter during the period, you have to make a forecast, as with anything unknown about the future. But this is not a problem within the Kelly optimisation, which assumes the joint distribution of ...
I'm confused by the calculator.
The probability should be given as 0.03 -- that might reduce your confusion!
Kelly is derived under a framework that assumes bets are offered one at a time.
If I understand your point correctly, I disagree. Kelly instructs us to choose the course of action that maximises log-wealth in period t+1 assuming a particular joint distribution of outcomes. This course of action can by all means be a complicated portfolio of simultaneous bets.
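To make the "portfolio of simultaneous bets" concrete, here is a sketch with two independent even-money bets (win probabilities are made up): Kelly only needs the joint distribution of the four outcomes, and we can grid-search the pair of stake fractions that maximises expected log-wealth over all of them at once.

```python
import itertools
import math

# Two simultaneous, independent even-money bets with different edges.
p_a, p_b = 0.55, 0.60  # assumed win probabilities, payout 1:1

def e_log(f_a, f_b):
    # Expected log-wealth over the joint distribution of both bets.
    total = 0.0
    for win_a, win_b in itertools.product([True, False], repeat=2):
        prob = (p_a if win_a else 1 - p_a) * (p_b if win_b else 1 - p_b)
        wealth = 1 + (f_a if win_a else -f_a) + (f_b if win_b else -f_b)
        total += prob * math.log(wealth)
    return total

# Grid-search stake fractions in 1 % steps.
grid = [i / 100 for i in range(0, 50)]
best = max(((f_a, f_b) for f_a in grid for f_b in grid),
           key=lambda fs: e_log(*fs))
print(best)
```

The optimisation is over the single course of action (the pair of fractions), not over bets one at a time.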
Of course, the insurance calculator does not offer you the interface to enter a periodfu...
In A World of Chance, Brenner, Brenner, and Brown look at this same question from a historic perspective, and (IIRC) conclude that gambling is about as damaging as alcohol, both for individuals and society. In other words, it should be legal (it gives the majority a relatively safe good time) but somewhat controlled (some cannot handle it and then it is very bad).
Do these more recent numbers corroborate that comparison to alcohol?
No, the effect size on bankruptcies is about 10x larger than expected. So while offline gambling may be comparable to alcohol, smartphone gambling is in a different category if we trust this research.
Oh, these are good objections. Thanks!
I'm inclined to 180 on the original statements there and instead argue that predictive modelling works because, as Pearl says, "no correlation without causation". Then an important step when basing decisions on predictive modelling is verifying that the intervention has not cut off the causal path we depended on for decision-making.
Do you think that would be closer to the truth?
The Demon King donned a mortal guise, bought shares in “The Demon King will attack the Frozen Fortress”, and then attacked the Frozen Fortress.
I'm curious: didn't the market work exactly as intended here? I mean, it helped them anticipate the Demon King’s next moves – it's not the market's fault that they couldn't convert foresight into operational superiority.
The King effectively sold good information on his battle plans; he voluntarily leaked military secrets against pay. The Citadel does not have to employ a spy network, because the King spies for them. This should be kind of a good deal, right?
However, I also frequently spend more time on close decisions. I think this can be good praxis. It is wasteful in the moment, but going into detail on close decisions is a great way to learn how to make better decisions. So for any decision where it would be valuable to improve your algorithm, if it is very close, you might want to overthink things for that reason.
In my experience, the more effective way to learn from close decisions is to just pick one alternative and then study the outcome and overthink the choice, rather than deliberate harder before c...
Thanks for taking the time to dive into this. I've spent the past few evenings iterating on a forecasting bot while doing embarrassingly little research myself[1], and it seems like I have stumbled into the same approach as Five Thirty Nine, and my bot has the exact same sort of problems. I'll write more later about why I think some of those problems are not as big as they may seem.
But your article also gave me some ideas that might lead to improvements. Thanks!
[1]: In this case, I prioritise the two weeks in the lab over the hour in the library. I'm doing it not to make a good forecasting bot but to learn the APIs involved.
That is, confounding could go both ways here; the effect could be greater than it appears, rather than less.
Absolutely, but if we assume the null hypothesis until proven otherwise, we will prefer to think of confounding as creating an effect that is not there, rather than subduing an even stronger one.
I'll reanalyse that way and post results, if I remember.
Yes, please do! I suspect (60 % confident maybe?) the effect will still be at least a standard error, but it would be nice to know.
...I made a script run in the background on my PC, something lik
Many of the existing answers seem to confuse model and reality.
In terms of practical prediction of reality, it would be a mistake to ever emit a 0 or 1, because there's always that one-in-a-billion chance that our information is wrong – however vivid it seems at the time. Even if you have secretly looked at the hidden coin and clearly seen that it landed on heads, 99.999 % is a more accurate forecast than 100 %. However unlikely, it could have landed on aardvarks and masqueraded as heads; that is a possibility. Or you confabulated the memory of seeing t...
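One way to see why the hedged forecast is better: under a logarithmic scoring rule, a "certain" forecast that turns out wrong is infinitely penalised, while hedging at 99.999 % costs almost nothing when you are right. A small sketch (the coin example and probabilities are illustrative):

```python
import math

# Log score (higher is better): the score is the log of the probability
# you assigned to what actually happened.
def log_score(p_heads: float, landed_heads: bool) -> float:
    p = p_heads if landed_heads else 1 - p_heads
    return math.log(p) if p > 0 else float("-inf")

# Hedging at 99.999 % costs almost nothing when the coin really is heads:
print(log_score(0.99999, True))   # ≈ -1e-05
# ...but a "certain" 100 % forecast is infinitely penalised in the
# one-in-a-billion world where the observation was wrong:
print(log_score(1.0, False))      # -inf
```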
This analysis suffers from a fairly clear confounder: since you are basing the data on which days you actually listened to music, there might be a common antecedent that both improves your mood and causes you to listen to music. As a silly example, maybe you love shopping for jeans, and clothing stores tend to play music, so your mood will, on average, be better on the days you hear music for this reason alone.
An intention-to-treat approach where you make the random booleans the explanatory variable would be better, as in less biased and suffer less from ...
If Q, then anything follows. (By the Principle of Explosion, a false statement implies anything.) For example, Q implies that I will win $1 billion.
I'm not sure even this is the case.
Maybe there's a more sophisticated version of this argument, but at this level, we only know the implication Q => $1 billion is true, not that the $1 billion is true. If Q is false, the implication being true says nothing about the $1 billion.
But more generally, I agree there's no meaningful difference. I'm in the de Finetti school of probability in that I think it only and always expresses our personal lack of knowledge of facts.
Thanks everyone. I had a great time!
The AI forecaster is able to consistently outperform the crowd forecast on a sufficiently large number of randomly selected questions on a high-quality forecasting platform
Seeing how the crowd forecast routinely performs at a superhuman level itself, isn't it an unfairly high bar to clear? Not invalidating the rest of your arguments – the methodological problems you point out are really bad – but before asking the question about superhuman performance it makes a lot of sense to fully agree on what superhuman performance really is.
(I also note that a ...
Sure! https://git.sr.ht/~kqr/insurance-calculator