The probability should be given as 0.03 -- that might reduce your confusion!
Aha! Yes, that explains a lot.
I'm now curious if there's any meaning to the result I got. Like, "how much should I pay to insure against an event that happens with 300% probability" is a wrong question. But if we take the Kelly formula and plug in 300% for the probability we get some answer, and I'm wondering if that answer has any meaning.
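For what it's worth, here's what the literal plug-in gives, assuming by "the Kelly formula" we mean the standard stake f* = p - (1 - p)/b for a bet at net odds b (a sketch, not anything from the calculator):

```python
def kelly_fraction(p: float, b: float) -> float:
    """Standard Kelly stake for a bet won with probability p at net odds b."""
    return p - (1 - p) / b

print(kelly_fraction(0.03, 1.0))  # -0.94: don't take this side of the bet
print(kelly_fraction(3.0, 1.0))   # 5.0: "stake 500% of your bankroll"
```

My tentative answer is that it doesn't mean much: with p > 1 the "lose" branch gets negative probability weight, and the formula's stationary point lands outside f in [0, 1], where the log-wealth objective isn't even defined.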
I disagree. Kelly instructs us to choose the course of action that maximises log-wealth in period t+1 assuming a particular joint distribution of outcomes. This course of action can by all means be a complicated portfolio of simultaneous bets.
But when simultaneous bets are possible, the way to maximize expected log wealth won't generally be "bet the same amounts you would have done if the bets had come one at a time" (that's not even well specified as written), so you won't be using the Kelly formula.
(You can argue that this is still, somehow, Kelly. But then I'd ask "what do you mean when you say this is what Kelly instructs? Is this different from simply maximizing expected log wealth? If not, why are we talking about Kelly at all instead of talking about expected log wealth?")
It's not just that "the insurance calculator does not offer you the interface" to handle simultaneous bets. You claim that there's a specific mathematical relationship we can use to determine if insurance is worth it; and then you write down a mathematical formula and say that insurance is worth it if the result is positive. But this is the wrong formula to use when bets are offered simultaneously, which in the case of insurance they are.
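To make that concrete, here's a toy sketch (my numbers, nothing from the post): two independent even-money bets, each won with probability 0.6, offered simultaneously. One-at-a-time Kelly says stake 20% on each; jointly maximizing expected log wealth says slightly less:

```python
import math

p = 0.6  # win probability of each independent even-money bet

def expected_log_growth(f: float) -> float:
    """E[log wealth growth] staking fraction f on each of two simultaneous bets."""
    return (p * p * math.log(1 + 2 * f)            # win both
            + (1 - p) ** 2 * math.log(1 - 2 * f))  # lose both
    # (one win, one loss leaves wealth unchanged, so that term is zero)

# crude grid search over the symmetric stake
best_f = max(range(1, 5000), key=lambda i: expected_log_growth(i / 10000)) / 10000
print(best_f)  # ~0.1923, a bit below the one-at-a-time Kelly stake of 0.2
```

(This is still "maximize expected log wealth"; the point is just that the per-bet Kelly stake stops being the answer once the bets overlap.)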
This is where reinsurance and other non-traditional instruments of risk trading enter the picture.
I don't think so? Like, in real-world insurance they're obviously important. (As I understand it, another important factor in some jurisdictions is "governments subsidize flood insurance.") But the point I was making, and stand behind, is
If one donates one's winnings then one's bets no longer compound and the expected profit is a better guide than expected log wealth -- we agree.
(This isn't a point I was trying to make and I tentatively disagree with it, but probably not worth going into.)
Whether or not to get insurance should have nothing to do with what makes one sleep – again, it is a mathematical decision with a correct answer.
I'm not sure how firmly your tongue was in your cheek, but I claim this is obviously wrong, and I can elaborate if you weren't kidding.
I'm confused by the calculator. I enter wealth 10,000; premium 5,000; probability 3; cost 2,500; and deductible 0. I think that means: I should pay $5,000 to get insurance. 97% of the time it doesn't pay out, and I'm down $5,000. 3% of the time a bad thing happens, and instead of paying $2,500 I pay $0, so net of the premium I'm still down $2,500 relative to not insuring. That's clearly not right. (According to Kelly, I should never put more than 3% of my net worth on a bet that pays out 3% of the time.) Not sure if the calculator is wrong or I misunderstand these numbers.
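For reference, here's the check I was trying to do by hand, as a sketch; I'm assuming the calculator compares expected log wealth with and without the insurance (the names here are mine):

```python
import math

def insurance_edge(wealth, premium, p, cost, deductible=0.0):
    """E[log wealth] with insurance minus without; positive means "insure"."""
    insured = ((1 - p) * math.log(wealth - premium)
               + p * math.log(wealth - premium - deductible))
    uninsured = (1 - p) * math.log(wealth) + p * math.log(wealth - cost)
    return insured - uninsured

print(insurance_edge(10_000, 5_000, 0.03, 2_500))  # ~ -0.68: clearly don't insure
print(insurance_edge(10_000, 5_000, 3.0, 2_500))   # ~ +0.17: positive, says "insure"
```

So if the calculator reads "probability 3" as 300% rather than 3%, the formula does come out positive here, which would explain the recommendation.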
Kelly is derived under a framework that assumes bets are offered one at a time. With insurance, some of my wealth is tied up for a period of time. That changes which bets I should accept. For small fractions of my net worth and small numbers of bets that's probably not a big deal, but I think it's at least worth acknowledging. (This is the only attempt I'm aware of to add simultaneous bets to the Kelly framework, and I haven't read it closely enough to understand it. But there might be others.)
There's a related practical problem that a significant fraction of my wealth is in pensions that I'm not allowed to access for 30+ years. That's going to affect what bets I can take, and what bets I ought to take.
The reason all this works is that the insurance company has way more money than we do. ...
I hadn't thought of it this way before, but it feels like a useful framing.
But I do note that there are theoretical reasons to expect flood insurance to be harder to get than fire insurance. If you get caught in a flood, your whole neighborhood probably does too; but if your house catches fire, it's likely just you and maybe a handful of others. I think you need to go outside the Kelly framework to explain this.
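A toy illustration of the mechanism from the insurer's side (entirely my own numbers, treating the insurer as an expected-log-wealth maximizer over its whole book): an insurer with $1M capital writes 1,000 policies, each with a 1% chance of a $1,000 claim. If claims are independent (fires), the book is nearly deterministic and a premium barely above the expected cost of $10 already raises expected log wealth. If all claims hit together (the whole neighborhood floods), the book is one big bet and the break-even premium is several times higher:

```python
import math

W, n, p, cost = 1_000_000, 1_000, 0.01, 1_000
pmf = [math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

def elog_independent(prem):
    # total claims ~ Binomial(n, p): near-deterministic by the law of large numbers
    return sum(q * math.log(W + n * prem - k * cost) for k, q in enumerate(pmf))

def elog_correlated(prem):
    # all n policies pay out together: the whole book is one big bet
    return (p * math.log(W + n * prem - n * cost)
            + (1 - p) * math.log(W + n * prem))

base = math.log(W)  # log wealth if the insurer writes nothing
for name, elog in [("independent", elog_independent), ("correlated", elog_correlated)]:
    prem = next(x / 10 for x in range(100, 1000) if elog(x / 10) > base)
    print(name, prem)  # independent: ~10.1; correlated: ~35
```

Notably, the difference only shows up once you model the insurer's simultaneous, correlated book, which the one-bet-at-a-time formula can't express.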
One of my hobby horses is that people misunderstand the justifications for Kelly, and my sense is that you do too (though I haven't read your more detailed article about it). But it's not really relevant to this article.
I think the thesis is not "honesty reduces predictability" but "certain formalities, which preclude honesty, increase predictability".
I kinda like this post, and I think it's pointing at something worth keeping in mind. But I don't think the thesis is very clear or very well argued, and I currently have it at -1 in the 2023 review.
Some concrete things.
Man, this review kinda feels like... I can imagine myself looking back at it two years later and being like "oh geez, that wasn't a serious attempt to actually engage with the post, it was just point scoring". I don't think that's what's happening; I think the feeling is just pattern matching on the structure or something? But I also think that if it were happening, it wouldn't necessarily feel like it to me now?
It also feels like I could improve it if I spent a few more hours on it and re-read the comments in more detail, and I do expect that's true.
In any case, I'm pretty sure both [the LW review process] and [Zack specifically] prefer me to publish it.
Ooh, I didn't see the read filter. (I think I'd have been more likely to if it were separated from the tabs. Maybe like: [Read] | [AI 200] [World Modeling 83] [Rationality 78] ...) With that off it's up to 392 nominated, though still neither of the ones mentioned. Quick review is now down to 193; my current guess is that's "posts that got through to this phase that haven't been reviewed yet"?
Screenshot with the filter off:
and some that only have one positive review:
Btw, I'm kinda confused by the current review page. A tooltip on advanced voting says
54 have received at least one Nomination Vote
Posts need at least 2 Nomination Votes to proceed to the Review Phase
And indeed there are 54 posts listed and they all have at least one positive vote. But I'm pretty sure this and this both had at least one (probably exactly one) positive vote at the end of the nomination phase and they aren't listed.
Guess: this is actually listing posts which had at least two positive votes at the end of the nomination phase; the posts with only one right now had two at the time?
...but since I started writing this comment, the 54 has gone up to 56, so there must be some way for posts to join it, but I don't have a guess what it would be.
And then the quick review tab lists 194 posts. I'm not sure what the criterion for being included on it is. It seems I can review and vote on each of them, where I can't do that for the two previous posts, so again there must be some criterion, but I don't have a guess what it is.
I think it's good that this post was written, shared to LessWrong, and got a bunch of karma. And (though I haven't fully re-read it) it seems like the author was careful to distinguish observation from inference and to include details in defense of Ziz when relevant. I appreciate that.
I don't think it's a good fit for the 2023 review. Unless Ziz gets back in the news, there's not much reason for someone in 2025 or later to be reading this.
If I were going to recommend it, I think the reason would be some combination of
But I don't think it stands out as a case study (it's not trying to answer questions like "how did this person become Ziz"), and I weakly guess it doesn't stand out as investigative journalism either. E.g. when I'm thinking on these axes, TracingWoodgrains on David Gerard feels like the kind of thing I'd recommend above this.
Which, to be clear, is not a slight on this post! I think it does what it set out to do very well, and what it set out to do is valuable; it's just not the kind of thing that I think the 2023 review is looking to reward.
Self review: I really like this post. Combined with the previous one (from 2022), it feels to me like "lots of people are confused about Kelly betting and linear/log utility of money, and this deconfuses the issue using arguments I hadn't seen before (and still haven't seen elsewhere)". It feels like small-but-real intellectual progress. It still feels right to me, and I still point people at this when I want to explain how I think about Kelly.
That's my inside view. I don't know how to square that with the relative lack of attention the post got, and it feels weird to be writing it given that fact, but oh well. There are various stories I could tell: maybe people were less confused than I thought; maybe my explanation is unclear; maybe I'm still wrong on the object level; maybe people just don't care very much; maybe it just happened not to get seen.
If I were writing this today, my guess is:
(At least in the UK, numbers starting 07700 900 are never assigned; Ofcom reserves that range for fiction. So I've memorized a fake phone number that looks real, which I sometimes give out with no risk of accidentally giving out a real phone number.)
Ah, my "what do you mean" may have been unclear. I think you took it as, like, "what is the thing that Kelly instructs?" But what I meant was more like "what do you mean when you say that Kelly instructs this?" Like, what is this "Kelly" and why do we care what it says?
That said, I do agree this is a broadly reasonable thing to be doing. I just wouldn't use the word "Kelly", I'd talk about "maximizing expected log money".
But it's not what you're doing in the post. In the post, you say "this is how to mathematically determine if you should buy insurance". But the formula you give assumes bets come one at a time, even though that doesn't describe insurance.