Linch

Comments

The Rising Premium of Life, Or: How We Learned to Start Worrying and Fear Everything
Linch · 14h · 20

Yeah, it's something I don't want to do a deep dive into, but if you do, I'm happy to read and signal-boost it!

Linch's Shortform
Linch · 1d · 40

Many people appreciated my Open Asteroid Impact startup/website/launch/joke/satire from last year. People here might also enjoy my self-exegesis of OAI, where I tried my best to unpack every Easter egg or inside-joke you might've spotted, and then some. 

The Rising Premium of Life, Or: How We Learned to Start Worrying and Fear Everything
Linch · 1d · 20

Another interesting subtlety the post discusses is that while the intro sets up "We live in the safest era in human history, yet we're more terrified of death than ever before," there's a plausible case for causality in the other direction. That is, it's possible that because we live in a safe era, we err more on the side of avoiding death. 

The Rising Premium of Life, Or: How We Learned to Start Worrying and Fear Everything
Linch · 1d · 20

(btw, this post refreshed on me like 5 times while I was making this comment, so it took a lot more effort to type out than I'm accustomed to; unclear if it's a client-side issue or a problem with LW).

The Rising Premium of Life, Or: How We Learned to Start Worrying and Fear Everything
Linch · 1d · 30

I do think there's something real to it. I agree that less laissez-faire childrearing practices probably directly resulted in a lower childhood accidental death rate. The main thesis of the post is that people care a lot more about living longer than they used to, and make much stronger efforts to avoid death than they used to. So things that look like irrational risk-aversion compared to historical practices are actually a rational side-effect of placing a greater premium on life and making (intuitively/on average/at scale) rational cost-benefit analyses that give different answers than they did in the past.

The Rising Premium of Life, Or: How We Learned to Start Worrying and Fear Everything
Linch · 2d · 30

Yeah, maybe I didn't frame the question well. I think there are a lot of good arguments for why it should be superlinear, but a) the degree of superlinearity might be surprising, and b) even if people at some level intellectually know this is true, it's largely not accounted for in our discourse (which is why, for any specific thing that can be explained by an increasing premium on life, people often reach for thing-specific explanations: greedy pharma companies, regulatory bloat, or the AMA for healthcare; elite preference cascades for covid; overzealous tiger parents for not letting their kids play in forests; etc.).

I agree re: healthcare costs; Hall and Jones present a formal model for why substantially increased healthcare spending might be rational. I briefly cover the model in the Substack post.

Don't Eat Honey
Linch · 11d · 42

This to me is one of those Hofstadterian arguments that sounds (and is) very clever and is definitely logically possible, but doesn't seem very likely to me when you look at it numerically. Not an expert, but as I understand it, intra-bee communication still has many more bits than inter-bee communication, even among the most eusocial species. So in terms of their individuality, bees are much closer to comrades working together for a shared goal than to individual cells in a human body.

Linch's Shortform
Linch · 12d · 40

I'd like to finetune or (maybe more realistically) prompt-engineer a frontier LLM to imitate me. Ideally not just stylistically, but to reason like me, drop anecdotes like me, etc., so it performs at like my 20th percentile of usefulness/insightfulness.

Is there a standard setup for this?

Examples of use cases include receiving an email and sending[1] a reply that sounds like me (rather than a generic email), reading Google Docs or EA Forum posts and giving relevant comments/replies, etc.

More concretely, things I do that I think current generation LLMs are in theory more than capable of:

  • Read a Google Doc and identify a subtleish reasoning fallacy the poster made, or one of my pet peeves
  • Read a Forum post and mention some comment thread or post I've written before that addresses the point made.
  • Talk about why some email/Twitter post/etc. relates to one of my ~300 favorite historical facts or ~top 100 favorite jokes
  • Drop in semi-relevant facts about naked mole rats or Sparta etc., ~unprompted
  1. ^

    The computer-use part is not essential; I don't need it to be fully automated. I'm not even sure I want to give Opus et al. access to my email account anyway.
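
To make "standard setup" a bit more concrete, here's a minimal sketch of the prompt-engineering version: stuff a pile of my past writing into the system prompt and have a chat model draft replies in that voice. Everything below (the folder of .txt samples, the model name, the prompt wording) is an illustrative assumption rather than a known-good recipe.

```python
# Minimal sketch: prompt-engineering an LLM to imitate a person's writing voice.
# Assumes the `openai` Python client and a local folder of past comments/posts
# saved as .txt files; folder name and model name are placeholders.
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def load_writing_samples(folder: str, max_chars: int = 20_000) -> str:
    """Concatenate past writing samples up to a rough character budget."""
    chunks: list[str] = []
    total = 0
    for path in sorted(Path(folder).glob("*.txt")):
        text = path.read_text(encoding="utf-8").strip()
        if total + len(text) > max_chars:
            break
        chunks.append(text)
        total += len(text)
    return "\n\n---\n\n".join(chunks)


def reply_in_my_voice(incoming_text: str, samples_dir: str = "my_writing/") -> str:
    """Draft a reply to `incoming_text` in the style of the writing samples."""
    samples = load_writing_samples(samples_dir)
    system_prompt = (
        "You are ghostwriting replies for a specific person. Match their tone, "
        "hedging style, favorite analogies, pet peeves, and typical length, "
        "based on the writing samples below.\n\n"
        f"WRITING SAMPLES:\n{samples}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; substitute whichever frontier model you prefer
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": f"Draft a reply to this:\n\n{incoming_text}"},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(reply_in_my_voice("Any thoughts on my draft post about bee welfare?"))
```

Retrieval over past posts (picking the most relevant samples per query instead of the first N) and, eventually, finetuning on (context, my-reply) pairs would be the obvious next steps beyond this.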

Eliezer and I wrote a book: If Anyone Builds It, Everyone Dies
Linch · 1mo · 108

Would be nice if you could get a warm intro for the book to someone high up in the Vatican, as well as to other potentially influential groups.

Linch's Shortform
Linch · 1mo · 143

(Politics)
If I had a nickel for every time the corrupt leader of a fading nuclear superpower and his powerful, sociopathic and completely unelected henchman leader of a shadow government organization had an extreme and very public falling out with world-shaking implications, and this happened in June, I'd have two nickels.

Which isn't a lot of money, but it's kinda weird that it happened twice.

Posts

9 · The Rising Premium of Life, Or: How We Learned to Start Worrying and Fear Everything · 3d · 10
35 · Eating Honey is (Probably) Fine, Actually · 10d · 0
52 · My "infohazards small working group" Signal Chat may have encountered minor leaks · 3mo · 0
36 · Announcing the Q1 2025 Long-Term Future Fund grant round · 7mo · 2
64 · A Qualitative Case for LTFF: Filling Critical Ecosystem Gaps · 7mo · 2
40 · Long-Term Future Fund: May 2023 to March 2024 Payout recommendations · 1y · 0
31 · [Linkpost] Statement from Scarlett Johansson on OpenAI's use of the "Sky" voice, that was shockingly similar to her own voice. · 1y · 8
338 · [April Fools' Day] Introducing Open Asteroid Impact · 1y · 29
5 · Linkpost: Francesca v Harvard · 2y · 5
27 · EA Infrastructure Fund's Plan to Focus on Principles-First EA · 2y · 0