Epistemic status: personal judgements based on conversations with ~100 people aged 30+ who were worried about AI risk "before it was cool", and observing their effects on a generation of worried youth, at a variety of EA-adjacent and rationality-community-adjacent events.
Summary: There appears to be something like inter-generational trauma among people who think about AI x-risk — including some of the AI-focussed parts of the EA and rationality communities — which is
- preventing the formation of valuable high-trust relationships with newcomers that could otherwise be helpful to humanity collectively making better decisions about AI, and
- feeding the formation of small pockets of people with a highly adversarial stance towards the rest of the world (and each other).
[This post is also available on the EA Forum.]
Part 1 — The trauma of being ignored
You — or some of your close friends or colleagues — may have had the experience of fearing AI would eventually pose an existential risk to humanity, and trying to raise this as a concern to mainstream intellectuals and institutions, but being ignored or even scoffed at just for raising it. That sucked. It was not silly to think AI could be a risk to humanity. It can.
I, and around 100 people I know, have had this experience.
Experiences like this can easily lead to an attitude like “Screw those mainstream institutions, they don’t know anything and I can’t trust them.”
At least 30 people I've known personally have adopted that attitude in a big way, and I estimate many more have as well. In the remainder of this post, I'd like to point out some ways this attitude can turn out to be a mistake.
Part 2 — Forgetting that humanity changes
Basically, as AI progresses, it becomes easier and easier to make the case that it could pose a risk to humanity's existence. When people didn’t listen about AI risks in the past, that happened under certain circumstances, with certain AI capabilities at the forefront and certain public discourse surrounding them. Those circumstances have changed, and will continue to change. It may not be getting easier as fast as one would ideally like, but it is getting easier. Like the stock market, it may be hard to predict how and when things will change, but they will.
If one forgets this, one can easily adopt a stance like "mainstream institutions will never care" or "the authorities are useless". I think these stances are often exaggerations of the truth, and if one adopts them, one loses out on the opportunity to engage productively with the rest of humanity as things change.
Part 3 — Reflections on the Fundamental Attribution Error (FAE)
The [Fundamental Attribution Error](https://en.wikipedia.org/wiki/Fundamental_attribution_error) is a cognitive bias whereby you too often attribute someone else's behavior to a fundamental (unchanging) aspect of their personality, rather than considering how their behavior might be circumstantial and likely to change. With a moment's reflection, one can see how the FAE can lead to
- trusting too much — assuming someone would never act against your interests because they didn't the first few times, and also
- trusting too little — assuming someone will never do anything good for you because they were harmful in the past.
The second reaction could be useful for getting out of abusive relationships. The risk of being mistreated over and over by someone usually outweighs the opportunity cost of finding new people to interact with. So, in personal relationships, it can be healthy to just think "screw this" and move on from someone when they don't make a good first (or tenth) impression.
Part 4 — The FAE applied to humanity
If one has had the experience of being dismissed or ignored for expressing a bunch of reasonable arguments about AI risk, it would be easy to assume that humanity (collectively) can never be trusted to take such arguments seriously. But,
- Humanity has changed greatly over the course of history, arguably more than any individual has changed, so it's suspect to assume that humanity, collectively, can never be rallied to take a reasonable action about AI.
- One does not have the opportunity to move on and find a different humanity to relate to. "Screw this humanity who ignores me, I'll just imagine a different humanity and relate to that one instead" is not an effective strategy for dealing with the world.
Part 5 — What, if anything, to do about this
If the above didn't resonate with you, this might be a good place to stop reading :) Maybe this post isn't good advice for you after all.
But if it did resonate, and you're wondering what you may be able to do differently as a result, here are some ideas:
- Try saying something nice and civilized about AI risk that you used to say 5-10 years ago, but which wasn’t well received. Don’t escalate it to something more offensive or aggressive; just try saying the same thing again. Someone who didn’t care before might take an interest today. This is progress, and a sign that humanity is changing and adapting, somewhat, to the circumstances presented by AI development.
- Try Googling a few AI-related topics that no one talked about 5-10 years ago to see if today more people are talking about one or more of those topics. Switch up the keywords for synonyms. (Maybe keep a list of search terms you tried so you don't go in circles, and if you really find nothing, you can share the list and write an interesting LessWrong post speculating about why there are no results for it.)
- Ask yourself if you or your friends feel betrayed by the world ignoring your concerns about AI. See if you have a "screw them" feeling about it, and if that feeling might be motivating some of your discussions about AI.
- If someone older tells you "There is nothing you can do to address AI risk, just give up", maybe don't give up. Try to understand their experiences, and ask yourself seriously if those experiences could turn out differently for you.
Remember all of those nonprofits the older generation dedicated to AI safety-related activism; places where people like Eliezer spent their days trying to convince people their concerns are correct instead of doing math? All of those hundreds of millions of dollars of funding that went to guys like Rob Miles and not research houses? No? I really want to remember, but I can't.
Seriously, is this a joke? This comment feels like it was written about a completely different timeline. The situation on the ground for the last ten years has been one where the field's most visible and effective activists have full-time jobs doing math and ML research on the alignment problem or existential risk in general, or even hold completely unrelated research positions at random universities. We have poured practically 90% of all of our money and labor into MIRI and MIRI clones instead of raising the alarm. When people here do propose raising the alarm, the reaction they get is uniformly "but the something something contra-agentic process" or "activism? are you some kind of terrorist?!"
Even now, after speaking to maybe a dozen people referred to me after my pessimism post, I have not found one person who does activism work full time. I know a lot of people who do academic research on what activists might do if they existed, but as far as I can tell no one is actually doing the hard work of optimizing their leaflets. The closest I've found are Vael Gates and Rob Miles, people who instead have jobs doing other stuff, because despite all of the endless bitching about how there are no serious plans, no one has ever decided to either pay these guys for, or organize, the work they do in between their regular jobs.
A hundred people individually giving presentations to their university or nonprofit heads and then seething when they're not taken seriously is not a serious attempt, and you'll forgive me for not just rolling over and dying.
Update ~20 minutes after posting: Took a closer look; it appears Rob Miles might be getting enough from his Patreon to survive, but it's unclear. It's weird to me that he doesn't produce more content if he's doing this full time.