JacekLach

Another common case where insurance is positive EV for both parties is when the cost paid by you and the cost paid by the insurer, should the bad event happen, are significantly different.
For example, if you get extended phone / device insurance from the manufacturer, then when the device fails you would have to pay the retail price for a new one. The manufacturer, however, only needs to pay the production cost, which given typical margins can be a small fraction of the retail price. The manufacturer can thus set a premium that falls (in expectation) somewhere between those two prices, and you both benefit.
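To make this concrete, here's a toy calculation (all numbers invented for illustration): any premium between the two expected costs leaves both sides with positive EV.

```python
# Toy numbers, purely illustrative.
p_fail = 0.05            # chance the device fails during the coverage period
retail_price = 1000      # what you would pay for a replacement
production_cost = 300    # what the manufacturer pays to replace it
premium = 40             # sits between the two expected costs below

your_expected_loss = p_fail * retail_price          # 50.0
insurer_expected_payout = p_fail * production_cost  # 15.0

print(f"Customer EV of buying:  {your_expected_loss - premium:+.2f}")       # +10.00
print(f"Insurer EV of selling:  {premium - insurer_expected_payout:+.2f}")  # +25.00
```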
I don't think your case for how insurance companies make money (Appendix B) makes sense. The insurance company does not have logarithmic utility in wealth, so it will not be using Kelly to allocate bets. From the company's perspective, the decision rests purely on the direct profitability of the bet: premium minus expected payout and overheads.
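To spell out the contrast, here is a minimal sketch with invented numbers (my own toy model, not anything from the post): a risk-neutral insurer only checks premium minus expected payout, while a buyer maximising expected log-wealth (Kelly-style) can still rationally pay an actuarially unfair premium.

```python
import math

p_loss = 0.01       # probability of the bad event (toy number)
loss = 90_000       # size of the loss
premium = 1_200     # actuarially unfair: the fair price would be 0.01 * 90_000 = 900
wealth = 100_000    # the buyer's bankroll

# Risk-neutral insurer: sells iff the premium exceeds the expected payout.
print(f"Insurer expected profit: {premium - p_loss * loss:+.2f}")  # +300.00

# Log-utility buyer: buys iff expected log-wealth is higher with insurance.
eu_insured = math.log(wealth - premium)
eu_uninsured = (1 - p_loss) * math.log(wealth) + p_loss * math.log(wealth - loss)
print(f"Buyer prefers insurance: {eu_insured > eu_uninsured}")  # True
```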
Separately, the claim that there is no alternative to Kelly is very weak. I guess you mean there is no formalised, mathematical alternative? Otherwise, I propose a very simple one: buy insurance if the cost of insurance is lower than the disutility of worrying about the bad outcomes it covers. This is the 'vibes... (read more)
Yes, this post was very useful to me as advice to reverse. I think it's possible now that one of the biggest problems with how I'm living my life today is optimising too hard for slack.
Low-confidence comment disclaimer: while I've had the concept pretty much nailed down before, I never previously thought about slack as something you might have too much of. After reading this post I realised that some people do not have enough slack in their lives, implying you can choose to have less or more slack, implying it's possible to have too much slack.
I don't have an abstract 'this is what too much slack looks like' clearly defined right now,
TBH I strongly disagree with the OP's suggestion that 95% reliability is low / bad, at least read literally. I personally fail verbal 'soft commitments' ("I expect this will be done by end of week") at way more than a 5% rate; probably more like 20-30%. Part of it is being in a business where hidden complexity strikes at any time and estimating is hard; part of it is cultural communication norms.
If you ignore soft commitments, then the easy way to improve reliability is to make fewer hard commitments. Instead of "I'll definitely be there at 9 am sharp", say "I'll do my best to be there at 9 am". Manage expectations.... (read more)
I don't think the goal of the OP's proposal is to learn any particular skill. To me it mostly looks like an attempt to build a tightly-knit group so that each member can use the others as external motivators and as close friends with whom to discuss life plans and ideas in a depth not really possible between modern colleagues and friends. I.e. the goal is not learning a skill; it's building a mutual support group that actually works.
You're looking at content, not status (as implied by 'knocking someone down a peg'). My immediate reaction to the top-level comment was: "well, they have some good points, but damn are they embarrassing themselves with this language". Possibly shaped by me being generally sceptical about the ideas in the OP.
Insofar as the bet is about the form of the post, rather than the content, I think Duncan's pretty safe.
Do you have examples of systems that reach this kind of reliability internally?
Most high-9 systems work by combining lots of low-9 components and relying on them not all failing at the same time. E.g. if you have 10 systems that are each 95% reliable and fail completely independently, and you only need one of them to work, the chance that all of them fail is 0.05^10 ≈ 10^-13, giving you about thirteen nines of reliability (99.99999999999%).
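A quick sanity check of that arithmetic (assuming perfectly independent failures, which real systems never quite achieve):

```python
# Probability that at least one of n independent components works.
p_component = 0.95
n = 10

p_all_fail = (1 - p_component) ** n   # 0.05 ** 10 ≈ 9.8e-14
availability = 1 - p_all_fail

print(f"P(all fail):  {p_all_fail:.2e}")         # ~9.77e-14
print(f"Availability: {availability:.15f}")      # 0.999999999999902, ~thirteen nines
```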
Expecting a person to be 99% reliable is ridiculous. That's like two sick days per year, ignoring every other possible cause of failing to deliver on a task. Instead you should build systems and organisations that have slack, so that one person failing at a particular point in time doesn't make the project/org fail.
The initial argument that convinced you to not eat meat seems very strange to me:
Her: why won’t u eat rabbits?
Me: because i had them as pets. i know them too well. they’re like people to me.
This reads to me as: I don't think eating rabbits is immoral, but I have an aesthetic aversion to them because of emotional attachment, rather than moral consideration. Is that not the right reading?
Her: i will get you a pet chicken
Me: …
Me: omg i’m a vegetarian now :-/
So, you've now extended your emotional attachment towards rabbits to all animals? Or just the possibly-pettable ones? But firstly, why do you think that's a good thing?
I guess as an instrumental tactic for "I want to become a vegetarian but can't seem to stick to it", 'imagine your favourite pet, but they're ' might work. But it's surprising that this worked without that initial impetus.
FWIW (a year later) I read the statistic the same way you initially did, but didn't do the comparison. Sorry! Thanks for doing the maths below and in the edit.
What's the vulnerable state you're referring to here? Staying on site?