By A. Nobody
When I first posted on LessWrong, I expected some pushback. That’s normal. If you’re arguing that AGI will lead to human extinction and that capitalism makes this outcome inevitable, you’re going to meet resistance. But what I didn’t expect - and what ultimately led me to write this - is the way that resistance has manifested.
From the very beginning, my essays were met with immediate hostility, not on the basis of their logic or premises, but because of vague accusations of them being “political.” This came directly from site admins. And crucially, this wasn’t after reading the content. It was before. The mere idea that someone might be drawing a line from capitalism to extinction was enough to trigger rejection - not intellectual rebuttal, just rejection.
My main essay - arguably the core of the entire argument I’m developing - has been heavily downvoted. Not because it was proven wrong, or because someone pointed out a fatal flaw. But because people didn’t like that the argument existed. There has still not been a single substantive refutation of any of my key premises. Not one. The votes tell you it’s nonsense, but no one is able to explain why.
This isn’t a community failing to find holes in the logic. It’s a community refusing to engage with it at all.
And this mirrors what I’ve seen more broadly. The resistance I’ve received from academia and the AI safety community has been no better. I’ve had emails ignored, responses that amount to “this didn’t come from the right person,” and the occasional reply like this one, from a very prominent member of the AI safety community:
“Without reading the paper, and just going on your brief description…”
That’s the level of seriousness these ideas are treated with.
Imagine for a moment that an amateur astronomer spots an asteroid on a trajectory to wipe out humanity. He doesn’t have a PhD. He’s not affiliated with NASA. But the evidence is there. And when he contacts the people whose job it is to monitor the skies, they say: “Who are you to discover this?” And then refuse to even look in the direction he’s pointing.
That’s what this is. And it’s not an exaggeration.
I understand institutional resistance. I get that organisations - whether they’re companies, universities, or online communities - don’t like outsiders coming in and telling them they’ve missed something. But this is supposed to be a place that values rational thought. Where ideas live or die based on their reasoning, not on who said them.
Instead, it’s felt like posting to Reddit. The same knee-jerk downvotes. The same smug hand-waving. The same discomfort that someone has written something you don’t like but can’t quite refute.
LessWrong has long had a reputation for being unwelcoming to people who aren’t “in.” I now understand exactly what that means. I came here with ideas. Not dogma, not politics. Just ideas. You don’t have to agree with them. But the way they’ve been received proves something important - not about me, but about the site.
So this will be my last post. I’ll leave the essays up for anyone who wants to read them in the future. I’m not deleting anything. I stand by all of it. And if you’ve made it this far, and actually read what I’ve written rather than reacting to the premise of it, thank you. That’s all I ever wanted - good faith engagement.
The rest of you can go back to not looking up.
- A. Nobody
My previous criticism was aimed at another post of yours, which likely wasn't your main thesis. Some nitpicks I have with it are:
"Developing AGI responsibly requires massive safeguards that reduce performance, making AI less competitive" - you could use the same argument for AIs which are "politically correct", yet we still choose to take that step, censoring AIs and harming their performance. So it's not impossible for us to make such choices as long as the social pressure is sufficiently high.
"The most reckless companies will outperform the most responsible ones" - true in some ways, but most large companies are not all that reckless, which is why we are seeing so many sequels, remakes, and clones in the entertainment sector. It's also important to note that these incentives have existed throughout human history, yet they never manifested very strongly until recent times. This suggests that the antidote to Moloch is humanity itself - good faith, good taste, and morality - and that these can beat game-theoretic problems which are unsolvable when human beings are purely rational (i.e. inhuman).
We're also assuming that AI becomes useful enough for us to disregard safety, i.e. that AI provides a lot of potential power. So far, this has not been true. AIs do not beat humans; companies are forcing LLMs into products that users did not ask for. LLMs seem impressive at first, but once you get past the surface you realize that they're somewhat incompetent. Governments won't be playing around with human lives before these AIs provide large enough advantages.
"The moment an AGI can self-improve, it will begin optimizing its own intelligence."
This assumption is interesting: what does "intelligence" mean here? Many seem to just give these LLMs more knowledge and then call them more intelligent, but intelligence and knowledge are different things. Most "improvements" seem to lead to higher efficiency, but that's just being dumb faster and more cheaply. That said, self-improving intelligence is a dangerous concept.
I have many small objections like this to different parts of the essay, and they do add up - or at least they open up additional paths for how this could unfold.
I don't think AIs will destroy humanity anytime soon (say, within 40 years). I do think that human extinction is possible, but I think it will be due to other things, like the low birthrate and its economic consequences, or tech in general; tech destroys the world for the same reasons that AIs do, it's just slower.
I think it's best to enjoy the years we have left instead of becoming depressed. I see a lot of people like you torturing themselves with x-risk problems (some people have killed themselves over Roko's basilisk as well). Why not spend time with friends and loved ones?
Extra note: There's no need to tie your identity to your thesis. I'm the same kind of autistic as you. The futures I envision aren't much better than yours - they're just slightly different - so this is not some psychological cope. People misunderstand me as well, and 70% of the comments I leave across the internet get no engagement at all, not even negative feedback. But it's alright. We can just see problems approaching many years before they're visible to others.