"Doomer" has become a common term to refer to people with pessimistic views about outcomes from AI. I claim this is not a helpful term on net, and generally will cause people to think less clearly.

Reification of identity + making things tribal

I finally realized today why politics and religion yield such uniquely useless discussions...

I think what religion and politics have in common is that they become part of people's identity, and people can never have a fruitful argument about something that's part of their identity. By definition they're partisan...

More generally, you can have a fruitful discussion about a topic only if it doesn't engage the identities of any of the participants. What makes politics and religion such minefields is that they engage so many people's identities. But you could in principle have a useful conversation about them with some people. And there are other topics that might seem harmless, like the relative merits of Ford and Chevy pickup trucks, that you couldn't safely talk about with others.

The most intriguing thing about this theory, if it's right, is that it explains not merely which kinds of discussions to avoid, but how to have better ideas. If people can't think clearly about anything that has become part of their identity, then all other things being equal, the best plan is to let as few things into your identity as possible. 

- Paul Graham in Keep Your Identity Small[1]

I think a big risk of the "Doomer" label is that it moves something from "given the arguments and evidence I have, I believe X" to an essential, deeper property (or commitment) of a person. It reifies the belief as an identity. You correspondingly get people who are "not-Doomers", and more recently I've heard the term "Foomer" too.

Because people's beliefs are quite correlated with those of the people immediately around them, the more pessimistic and the less pessimistic about AI tend to form clusters, which makes it easy to point at those clusters and make things tribal and political.

I'm in this tribe, they're in that tribe. My tribe is good, that tribe is bad. I think this makes us stupider in various ways:

  • We now start talking about people rather than object-level beliefs/evidence/reason.
    • It's much easier to dismiss people than arguments.
  • We put up social barriers to changing our minds. People don't like to be the odd one out among their friends, and if your friends identify as being (or not being) Doomers (and perhaps hold negative opinions about the other group), there will be psychological resistance to updating.
    • I think there is already social pressure to conform when it comes to P(Doom). I regret to say that I've reacted with surprise when someone expressed a P(Doom) different from what I expected, in a way that exerted social pressure. I'm trying to do less of that, as I think the evidence and reasoning are such that reasonable people can reasonably disagree a lot.

"Doomer" is an externally applied label and is often used pejoratively

Looking at the Wikipedia page for Doomer, it's possible the term was first used without any mean-spirited connotation. That said, I think it's currently very reminiscent of "Boomer", a term that's definitely negatively valenced in the memespace[2] these days:

"OK boomer" or "okay boomer" is a catchphrase and internet meme that has been used by Gen-X, Millennials and Gen Z to dismiss or mock attitudes typically associated with baby boomers – people born in the two decades following World War II. – Wikipedia

Not exactly surprising, but on Twitter you'll see a lot of this usage.

[Screenshots of example tweets using "Doomer" pejoratively]

Also, my sense is that people who meet the criteria for being Doomers rarely describe themselves as such; it's much more common for others to apply the label from the outside. Though this could be because when you're hanging out with others who share your beliefs, you rarely need to point that out with a label.

In general though, I think one should be cautious about applying a label to people that they didn't choose for themselves and mostly haven't adopted. In many other domains, that would be deeply frowned upon as pretty hostile.

People feeling at all dismissed/ridiculed is also not going to foster healthy discourse.

To be clear! I'm not claiming that everyone using the term means it in a negative way. Especially on Twitter, where people are trying to be concise, I see the case for using a term shorter than "person with high P(Doom) who is worried about AI". I'm not sure what would be better if you need a concise term, but "AI pessimist" feels more plainly descriptive to me.

Still, I think it's better to avoid a term that some people use pejoratively even if you don't mean it that way.

Reappropriation?

Sometimes a group reappropriates a label and it's just fine. It's possible that people started calling Rationalists "Rats" with a negative connotation in mind (and possible that some did it just to save syllables). That term isn't universally used by the community, but I don't think it carries much negative valence currently.

The same could happen with "Doomer": even if it came from people looking for a slur, it could get neutralized and adopted as a convenient short label.

However, I think this would still be a reification of identities, which, as above, I don't think helps people think clearly. I'm relatively more okay with it for the Rationalist identity: "being a Rationalist" really is a clear group membership that doesn't change lightly, whereas ideally one's predictions about AI would stay more fluid than that. Given the current state of evidence around AI, I think more lightness and less identity is warranted.

"Doomer" is ambiguous

When I started writing this post, I thought Doomer meant someone with a P(Doom) above 80% or the like.

When I polled a few people at a recent party, it became clear they interpreted it moderately differently. Various definitions offered for "AI Doomer":

  • Someone whose P(Doom) is around 80% or higher
  • Somebody who is generally worried about AI and thinks this is a big problem worth worrying about, including if their P(Doom) were as low as 5%
  • Someone specifically in the MIRI cluster of researchers; e.g. Eliezer Yudkowsky, Nate Soares, and Evan Hubinger are Doomers.

Explicit definitions or "AI pessimist"/"AI concerned" is a better alternative

My preference with this post is more to surface reasons (explain) than to make a call to action (persuade). Before hearing feedback on this, I'm not confident enough to say "hey everyone, we should all do Y instead of X", but I do think these are good reasons against using the term "Doomer".

I think that being descriptive is good where possible, e.g., "people who assign 80+% to Doom" or "people who think AI is worth worrying about", depending on what you actually mean in context. These are longer phrases, but that might be a feature rather than a bug: a longer phrase is a tax on talking about people, when it's better to talk about ideas, arguments, and evidence.

If you must use a shorter phrase, I think a more neutral descriptive term is better: perhaps "AI pessimist" for someone who thinks outcomes are quite likely to be bad, and "AI concerned" for someone who thinks they could be bad enough to worry about.

I invite pushback though. There could be considerations and second order effects I'm not considering here.

 

  1. ^

    Also see classic LessWrong posts:

    - Use Your Identity Carefully
    - Strategic choice of identity

    and others collected under the Identity tag

  2. ^

    In fact, some have defined a whole family of negatively-connoted "-oomer" labels. See https://knowyourmeme.com/memes/oomer-wojaks

Comments (18)

Unfortunately, I think the tribalization and politicization are caused by the share-with-followers social media model, not by specific words, so using or not using the word "doomer" will have a negligible effect on the amount of tribalization. You just have to accept that people who insist on using Twitter will have their sanity eroded in this way, and do what you can to compartmentalize the damage and avoid becoming a target.

[-] iceman

Why are you posting this here? My model is that the people you want to convince aren't on LessWrong and that you should be trying to argue this on Twitter; you included screenshots from that site, after all.

(My model of the AI critics would be that they'd shrug and say "you started it by calling us AI Risk Deniers.")

[-] habryka

you started it by calling us AI Risk Deniers.

Just to check, has anyone actually done that? I don't remember that term being used before. It's fine as an illustration, just trying to check whether this is indeed happening a bunch.

Why are you posting this here? My model is that the people you want to convince aren't on LessWrong and that you should be trying to argue this on Twitter; you included screenshots from that site, after all.

It's quite commonly used by a bunch of people at Constellation, Open Philanthropy and some adjacent spaces in Berkeley. It is indeed often not meant as any kind of slur, but seems genuinely used as a way to point towards a real cluster of views.

[-] iceman

Just to check, has anyone actually done that?

I'm thinking of a specific recent episode where [i can't remember if it was AI Safety Memes or Connor Leahy's twitter account] posted a big meme about AI Risk Deniers and this really triggered Alexandros Marinos. (I tried to use Twitter search to find this again, but couldn't.)

It's quite commonly used by a bunch of people at Constellation, Open Philanthropy and some adjacent spaces in Berkeley.

Fascinating. I was unaware it was used IRL. From the Twitter user viewpoint, my sense is that it's mostly used by people who don't believe in the AI risk narrative as a pejorative.

[Links to example tweets]

I may have used the term myself a few times.

[-] Ruby

Reflecting on it some more, I think the audience I implicitly had in mind for this is people in my rough social network who use the term without any strong desire to mock or dismiss, e.g. Constellation and OpenPhil, as Habryka mentioned elsethread. I also think it's spread even closer to home, into AI pessimist territory (Raemon told me he'd recently used it), so I'd also address people just picking it up as the default term for the cluster. Basically, people who still wish to engage productively but have found themselves adopting the term as it spreads. And such people are on LessWrong (and even if they weren't, I might post here and then link to it).

The Twitter images are less about who my target audience is and more evidence of negatively-valenced usage.

@clone of saturn @iceman @Herb Ingram I'm responding with this here rather than replying individually to your comments.

[-] Buck

I use the term "doomer" to refer to people with a cluster of beliefs that I associate with LW users/MIRI people, including:

  • high P(doom)
  • fast takeoffs (i.e. little impact of AI on the world before AI is powerful enough to be very dangerous)
  • transformative models will be highly rational agents

I don't think I can use "AI pessimist" as an alternative, because that only really describes the first of those beliefs, and often I want to refer to the cluster as a whole.

Maybe I should say MIRI-style pessimist?

[-] Ruby

Yeah, I like "MIRI-style pessimist". Not free of ambiguity, but much more precise and doesn't have negative valence baked in.

[-] Raemon

I think "MIRI-style pessimist" is actually pretty good. There are in fact other styles of pessimist, and it's good not to conflate their views.

[-] Dagon

I mean, sure.  This applies to almost every label for a group of humans.  It's not nuanced enough nor applied specifically enough to usefully discuss anything or think clearly about any topic.  

But that's not the purpose of such a label. The purpose is to summarize an entire argument into a very small meme, so it can be easily ignored. The appropriate reaction is to recognize it as the slur it is, and treat the user as non-serious, uninterested in any details of a debate.

[-] Ruby

As I mention in the post, I don't think it's the same as every label. "Rationalist" and "Effective Altruist" are self-generated and self-applied labels for broader ideologies, whereas "Doomer" points at a narrower empirical belief for which evidence and reasoning continue to develop, which people can change their minds about frequently, and which is the subject of much more active political pressure right now.

I agree some people are looking for a term so they can summarize and ignore, but some people are more looking to describe something, to which I say, maybe don't use that term?

Who is the target audience for this?

I doubt anyone has been calling themselves a "doomer". There are people on this site who wouldn't ever get called that, but I haven't seen anyone else here label anyone a "doomer" yet. So it seems that you're left with people who don't frequent this site and would probably dismiss your arguments as "a doomer complaining about being called a doomer"?

Did I miss people calling each other "doomer" on LW? Did you also post something like this on Twitter?

[-] dr_s

I think it sometimes happens, either as a shortcut (Twitter character limits, alas) or as a sort of reclamation of the term, usually in response to someone who used it demeaningly first.

If you want people to stop calling doomers "doomers", you need to provide a specific alternative. Gesturing vaguely at the idea of alternatives isn't enough. "Thou shalt not strike terms from others' expressive vocabulary without suitable replacement." 

Doomers used to call themselves the "AI safety community" or "AI alignment community", but Yudkowsky recently led a campaign to strike those terms and replace them with "AI notkilleveryoneism". Unfortunately the new term isn't suitable and hasn't been widely adopted (e.g. it's not mentioned in the OP), which leaves the movement without a name its members endorse.

People are gonna use *some* name for it, though. A bunch of people are spending tens of millions of dollars per year advocating for a very significant political program! Of course people will talk about it! So unless and until doomers agree on a better name for themselves (which is properly the responsibility of the doomers, and not the responsibility of their critics) my choices are calling it "AI safety" and getting told that no, that's inaccurate, "AI safety" now refers to a different group of people with a different political program, or else I can call it "doomers" and get told I'm being rude. I don't want to be inaccurate or rude, but if you make me pick one of the two, then I'll pick rude, so here we are.

If the doomers were to agree on a new name and adopt it among themselves, I would be happy to switch. (Your "AI pessimist" isn't a terrible candidate, although if it caught on then it'd be subject to the same entryism which led Yudkowsky to abandon "AI safety".) Until then, "doomer" remains the most descriptive word, in spite of all its problems.

[-] dr_s

my choices are calling it "AI safety" and getting told that no, that's inaccurate, "AI safety" now refers to a different group of people with a different political program

Wait, who?

If the doomers were to agree on a new name and adopt it among themselves

Honestly I think the only hope is unilateral action: start a catchy name and see if it achieves virality.

Yudkowsky says it's now "short-term publishable, fundable, 'relatable' topics affiliated with academic-left handwringing"

I assume this means, like, Timnit Gebru and friends.

[-] dr_s

Yudkowsky is wrong then; that's what people usually refer to as AI ethics.

[-] gjm

I don't think "AI pessimist" is a good term for what is currently sometimes expressed by "doomer", because many people are very pessimistic about AI in ways that don't have anything to do with doom. For instance, I don't think it would be reasonable to say that Timnit Gebru and Emily Bender are not AI pessimists, but they are savagely critical of the "doom" position and its adherents.