Many people appreciated my Open Asteroid Impact startup/website/launch/joke/satire from last year. People here might also enjoy my self-exegesis of OAI, where I tried my best to unpack every Easter egg or inside-joke you might've spotted, and then some.
Another interesting subtlety the post discusses is that while the intro sets up "We live in the safest era in human history, yet we're more terrified of death than ever before," there's a plausible case for causality in the other direction. That is, it's possible that because we live in a safe era, we err more on the side of avoiding death.
(btw this post refreshed on me like 5 times while I was writing this comment, so it took a lot more effort to type out than I'm accustomed to; unclear if it's a client-side issue or a problem with LW).
I do think there's something real to it. I agree that less laissez-faire childrearing practices probably directly resulted in a lower rate of accidental childhood deaths. The main thesis of the post is that people care a lot more about living longer than they used to, and make much stronger efforts to avoid death than they used to. So things that look like irrational risk-aversion compared to historical practices are actually a rational side effect of placing a greater premium on life and making (intuitively/on average/at scale) rational cost-benefit analyses that give different answers than they did in the past.
Yeah, maybe I didn't frame the question well. I think there are a lot of good arguments for why it should be superlinear, but a) the degree of superlinearity might be surprising, and b) even if people at some level intellectually know this is true, it's largely not accounted for in our discourse (which is why, for any specific thing that can be explained by an increasing premium on life, people often reach for thing-specific explanations: greedy pharma companies, regulatory bloat, or the AMA for healthcare; elite preference cascades for covid; overzealous tiger parents for not letting their kids play in forests; etc.).
I agree re: healthcare costs. Hall and Jones present a formal model for why substantially increased healthcare spending might be rational; I briefly cover the model in the substack post.
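Very roughly, the mechanism (my notation, reconstructed from memory, so treat this as a sketch rather than their exact specification):

$$\max_{\{c_t,\,h_t\}} \ \sum_t S_t(h_1,\dots,h_t)\, u(c_t) \quad \text{s.t.} \quad c_t + h_t = y_t, \qquad u(c) = b + \frac{c^{1-\gamma}}{1-\gamma},\ \ \gamma > 1$$

where $S_t$ is the probability of surviving to period $t$, which rises with health spending $h$. Because $u(c)$ is bounded above, extra consumption buys less and less utility as income $y$ grows, while each additional expected life-year is still worth roughly $u(c)$; so the optimal share of income going to health rises with income, potentially by a lot.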
This to me is one of those Hofstadterian arguments that sounds (and is) very clever and is definitely logically possible, but doesn't seem very likely to me when you look at it numerically. Not an expert, but as I understand it, intra-bee communication still carries many more bits than inter-bee communication, even among the most eusocial species. So in terms of their individuality, bees are much closer to comrades working together toward a shared goal than to individual cells in a human body.
I'd like to finetune or (maybe more realistically) prompt-engineer a frontier LLM to imitate me. Ideally not just stylistically: it should reason like me, drop anecdotes like me, etc., so it performs at something like my 20th percentile of usefulness/insightfulness.
Is there a standard setup for this?
Examples of use cases: receive an email and send[1] a reply that sounds like me (rather than a generic email), read Google Docs or EA Forum posts and leave relevant comments/replies, etc.
More concretely, things I do that I think current generation LLMs are in theory more than capable of:
The computer-use part is not essential; I don't need it to be fully automated, and I'm not even sure I want to give Opus et al. access to my email account anyway.
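For concreteness, here's a minimal sketch of the kind of setup I'm imagining for the email case, using plain few-shot prompting against the Anthropic API. The model string, file paths, and prompt wording are all placeholders, and it only produces a draft rather than sending anything:

```python
# Rough sketch: few-shot "imitate my voice" prompting.
# Everything specific here (model ID, sample paths, prompt text) is a placeholder.
from pathlib import Path
import anthropic

MODEL = "claude-opus-4-20250514"  # placeholder; use whichever frontier model you prefer

# A handful of my own past emails/comments, pasted in as style exemplars.
sample_paths = ["samples/email_1.txt", "samples/forum_comment_1.txt"]
writing_samples = [Path(p).read_text(encoding="utf-8") for p in sample_paths]

system_prompt = (
    "You draft replies on behalf of the author whose writing samples follow. "
    "Match their tone, hedging style, and habit of reasoning from concrete "
    "anecdotes. Aim for a draft the author would rate at roughly their 20th "
    "percentile of usefulness/insightfulness.\n\n"
    "=== WRITING SAMPLES ===\n" + "\n\n---\n\n".join(writing_samples)
)

def draft_reply(incoming_text: str) -> str:
    """Return a reply drafted in my voice (a draft only, never auto-sent)."""
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    response = client.messages.create(
        model=MODEL,
        max_tokens=1024,
        system=system_prompt,
        messages=[{"role": "user",
                   "content": f"Draft a reply to this email:\n\n{incoming_text}"}],
    )
    return response.content[0].text

if __name__ == "__main__":
    print(draft_reply("Hi, would you be up for giving a short talk next month?"))
```

Presumably the finetuning route looks similar but with the samples moved from the prompt into a training set; I don't know whether there's a better standard recipe than this, hence the question.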
Would be nice if you could get a warm intro for the book to someone high up in the Vatican, as well as to other potentially influential groups.
(Politics)
If I had a nickel for every time the corrupt leader of a fading nuclear superpower and his powerful, sociopathic and completely unelected henchman leader of a shadow government organization had an extreme and very public falling out with world-shaking implications, and this happened in June, I'd have two nickels.
Which isn't a lot of money, but it's kinda weird that it happened twice.
Yeah, it's something I don't want to do a deep dive into, but if you do, I'm happy to read and signal-boost it!