While the framing of treating lack of social grace as a virtue captures something true, it's too incomplete to support its strong conclusion, imo. The way I would put it: you have correctly observed that, whatever the benefits of social grace are, it comes at a cost, and sometimes that cost is not worth paying. So in a discussion, if you decline to pay the cost of social grace, you can afford to buy other virtues instead.[1]
For example, it is socially graceful not to tell the Emperor Who Wears No Clothes that he wears no clothes. Whereas someone who lacks social grace is more likely to tell the emperor the truth.
But first of all, I disagree with the frame that lack of social grace is itself a virtue. In the case of the emperor, for example, the virtues at play are rather legibility and non-deception, traded off against whichever virtues the socially graceful response would've bought.
And secondly, often the virtues you can buy with social grace are worth far more than whatever you could gain by declining to be socially graceful. For example, when discussing politics with someone of an opposing ideology, you could decline to be socially graceful and tell your interlocutor to their face that you hate them and everything they stand for. This would be virtuously legible and non-deceptive, at the cost of immediately ending the conversation and thus forfeiting any chance of e.g. gains from trade, coming to a compromise, etc.
One way I've seen this cost manifest on LW is that some authors complain there's a style of commenting here that makes it unenjoyable to post as an author. As a result, those authors are incentivized to post less, or to post elsewhere.[2]
And as a final aside, I'm skeptical of treating Feynman as socially graceless. For one, maybe he was less deferential towards authority figures, but if he had told nothing but the truth to all the authority figures (who likely included some naked emperors) throughout his life, his career would've presumably ended long before he could've gotten his Nobel Prize. For another, IIRC the man's physics lectures are just really fun to watch, and I'm pretty confident that a sufficiently socially graceless person would not make for a good teacher. For example, it is socially graceful not to belittle fledgling students as intellectual inferiors, even though in some ways they are just that.
Related: I wrote this comment and this follow-up where I wished that Brevity were considered a rationalist virtue, because if there's no counterbalancing virtue to trade off against other virtues like legibility and truth-seeking, then supposedly virtuous discussions are incentivized to become arbitrarily long.
The moderation log of users banned by other users is a decent proxy for which authors have considered which commenters too costly to interact with, whether due to lack of social grace or something else.
I'm glad you survived a real danger to your life, and major kudos for writing up your experience!
Regarding this essay, I expected to upvote it based on the title alone. But having read it, its particular advice feels weak to me and sounds more like a general exhortation to Be Vigilant about (or paranoid of) X, which isn't at all sustainable in a world full of X's one could Be Vigilant about. So it seems to me that a stronger version of such an essay would almost have to be rooted in base rates or something.
The kind of structure I'd expect would look more like: brainstorm or LLM-generate a list of "in my environment, what things could kill me". Then guesstimate or google likelihoods for those, then brainstorm or look up or LLM-generate countermeasures for these threats, etc., and finally land on a list of top threats & suggested efficient countermeasures. Plus an understanding that one cannot drive all risks down to zero.
Finally, such a list should probably also consider cryonics (as a way to kind-of-survive many otherwise unpreventable causes of death), as well as non-individual risks of death like war, pandemics, or x-risks.
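To make that structure concrete, here's a minimal sketch of the kind of prioritization I have in mind. All the threats, probabilities, and countermeasure effects below are made-up placeholders for illustration, not actual risk estimates.

```python
# Minimal sketch of a "top threats & efficient countermeasures" list.
# All numbers are made-up placeholders, not actual risk estimates.

threats = [
    # (threat, guessed annual probability of dying from it,
    #  countermeasure, guessed fraction of that risk it removes)
    ("car accident",  1e-4, "drive less, drive defensively", 0.5),
    ("heart disease", 5e-4, "exercise, diet, checkups",      0.3),
    ("house fire",    1e-5, "smoke detectors",               0.7),
]

# Rank by how much annual risk each countermeasure would actually remove.
ranked = sorted(threats, key=lambda t: t[1] * t[3], reverse=True)

for name, p, fix, reduction in ranked:
    print(f"{name}: baseline ~{p:.0e}/yr, '{fix}' removes ~{p * reduction:.0e}/yr")
```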
I wouldn't trust Perplexity Pro's percentage numbers one bit. It likes to insert random percentages into the answers it gives me, and they have hardly any bearing on reality at all. When I challenged it on this point, it claimed these reflected percentages of search results (e.g. in this scenario, 20 search results with 17 featuring Claude would result in an answer of 85%), but even that wasn't remotely correct. For now I assume these are entirely hallucinated/made up, unless strongly proven otherwise. It's certainly not doing any plausible math on any plausible data, from what I can tell.
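(For reference, the computation it claimed to be doing would be trivial. Here's a hypothetical sketch of it, using the made-up example numbers from above; I'm not claiming Perplexity actually does anything like this internally.)

```python
# Hypothetical sketch of the computation Perplexity *claimed* it was doing:
# the percentage of search results that feature a given answer.
# The numbers are the made-up example from above, not real data.
search_results = ["snippet"] * 20   # pretend we got 20 search result snippets
featuring_claude = 17               # say 17 of them feature Claude
percentage = 100 * featuring_claude / len(search_results)
print(percentage)                   # 85.0
```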
This is part of a more general pattern wherein Perplexity for me tends to be extremely confident or intent on being useful, even in situations when it has no way to actually be useful given its capabilities, and so it just makes stuff up.
Maybe the idea is that if a spanner is thrown in the works, you can't necessarily have someone else unthrow that spanner?
Idle suggestion, probably not useful: have you checked if you can do what you want by using GreaterWrong instead?
I think OP's perspective is valid, and I'm not at all convinced by your reply. We're currently racing towards technological extinction with the utmost efficiency, to the point that it's hard to imagine that any arbitrary alternative system of economics or governance could be worse by that metric, if only by virtue of producing less economic growth. I don't see how nuclear warfare results in extinction, either; to my understanding it's merely a global catastrophic risk, not an existential one. And regarding your final paragraph, there are a lot of orders of magnitude between a system of governance that self-destructs in <10k years and one that eventually succumbs to the Heat Death of the universe.
Anyway, I made similar points to OP's in a doomy comment from last year:
In a world where technological extinction is possible, tons of our virtues become vices:
- Freedom: we appreciate freedoms like economic freedom, political freedom, and intellectual freedom. But that also means freedom to (economically, politically, scientifically) contribute to technological extinction. Like, I would not want to live in a global tyranny, but I can at least imagine how a global tyranny could in principle prevent AGI doom, namely by severely and globally restricting many freedoms. (Conversely, without these freedoms, maybe the tyrant wouldn't learn about technological extinction in the first place.)
- Democracy: politicians care about what the voters care about. But to avert extinction you need to make that a top priority, ideally priority number 1, which it can never be: no voter has ever gone extinct, so why should they care?
- Egalitarianism: resulted in IQ denialism; if discourse around intelligence were less insane, that would help discussion of superintelligence.
- Cosmopolitanism: resulted in pro-immigration and pro-asylum policy, which in turn precipitated both a global anti-immigration and an anti-elite backlash.
- Economic growth: the more the better; results in rising living standards and makes people healthier and happier... right until the point of technological extinction.
- Technological progress: I've used a computer, and played video games, all my life. So I cheered for faster tech, faster CPUs, faster GPUs. Now the GPUs that powered my games instead speed us up towards technological extinction. Oops.
Your comment runs counter to the OP's claim in the bottom section, Mennonites Are Susceptible To Facts and Logic, When Presented In Low German. E.g. the anecdote about the woman who thought the hospital turned her away sounds like it's not about vaccine hesitancy but about a total inability to communicate.
And sure, human doctors and nurses who know Obscure Language are a much better solution than LLM doctors and nurses, but realistically the former basically don't exist, so...
I'll accept time-sensitive stuff as a valid counterargument to my claim, as well as e.g. things moving beyond the observable universe.
But I don't see how the existence of the moons of Neptune works as a counterargument. The whole point is that you do something laborious to gain/accumulate/generate new knowledge (like sending a space probe). And then, to verify/confirm said knowledge, you don't have to send a new space probe, because you can use a gazillion other cheaper methods instead (like pointing telescopes at the moons, or using your improved knowledge of physical law to predict their positions, etc. etc.).
If the claim is just "producing the exact same kind of evidence (space probe pictures) can require the same cost", then I don't exactly disagree, I just don't see how that's at all relevant. The AI context here is that we have a superhuman mind that can generate knowledge we can't (the space probe or its pictures), and the question is whether it can convert that knowledge into a form we'd have a much easier time understanding. In that situation, why would it matter that we can't build a second space probe?
This is not what you're asking for, but are you aware of Pantheon (2022) (Wikipedia, LW thread)? It's a short animated TV series (16 episodes over 2 seasons, canceled / cut short) about mind uploads and related topics. It features several of the things you want, but also some weird stuff like superhero-esque fights between uploads. And while the ending of the final episode is quite bombastically sci-fi, it also makes it very clear that the series was cut short.
I personally first deeply felt the sense of "I'm doomed, I'm going to die soon" almost exactly a year ago, due to a mix of illness and AI news. It was a double-whammy of getting both my mortality, and AGI doom, for the very first time.
Re: mortality, it felt like I'd been immortal up to that point, or more accurately a-mortal or non-mortal or something. I hadn't anticipated death happening to me as anything more than a theoretical exercise. I was 35, felt reasonably healthy, was familiar with transhumanism, had barely witnessed any deaths in the family, etc. I didn't feel like a mortal being that can die very easily, but more like some permanently existing observer watching a livestream of my life: it's easy to imagine turning off the livestream, but much harder to imagine that I, the observer, will eventually turn off.
After I felt like I'd suddenly become mortal, I experienced panic attacks for months.
Re: AGI doom: even though I've thought way less about this topic than you, I do want to challenge this part:
Just as I felt non-mortal because of an anticipated transhumanist future or something, so too did it feel like the world was not doomed, until one day it was. But did the probability of doom suddenly jump to >99% in the last few years, or was the doom always the default outcome and we were just wrong to expect anything else? Was our glorious transhumanist future taken from us, or was it merely a fantasy, and the default outcome was always technological extinction?
Are we in a timeline where a few actions by key players doomed us, or was near-term doom always the default overdetermined outcome? Suppose we go back to the founding of LessWrong in 2009, or the founding of OpenAI in 2015. Would a simple change, like OpenAI not being founded, actually meaningfully change the certainty of doom, or would it have only affected the timeline by a few years? (That said, I should stress that I don't absolve anyone who dooms us in this timeline from their responsibility.)
From my standpoint now in 2025, AGI doom seems overdetermined for a number of reasons, like:
Yudkowsky had a glowfic story about how dath ilan prevents AGI doom, and that requires a whole bunch of things to fundamentally diverge from our world: a much smaller population; an average IQ beyond genius-level; fantastically competent institutions; a world government; a global conspiracy to slow down compute progress; a global conspiracy to work on AI alignment; etc.
I can imagine such a world not blowing itself up. But even if you could've slightly tweaked our starting conditions from a few years or decades ago, weren't we going to blow ourselves up anyway?
And if doom is sufficiently overdetermined, then the future we grieve for, transhumanist or otherwise, was only ever a mirage.