A long-form iteration of my "AI will lead to massive violent population reduction" argument: https://populectomy.ai

The host name, populectomy, is my attempt at naming the described outcome, a name that I hope is workable (sufficiently evocative and pithy, without being glib). Otherwise I'm out 150 USD for the domain registration; .ai domains come at a premium.

I've mimicked the paper-as-website model with a <bad-outcome>.ai domain name used by @Jan_Kulveit, @Raymond D, @Nora_Ammann, @Deger Turan, David Krueger, and @David Duvenaud for Gradual Disempowerment. Mimicry being the highest form of flattery and what-not. Nice serif font styling like theirs is on my wish list.

Here's my previous post on the topic.

A few words may shortcut reading the whole thing, especially if you've read the previous post:

  1. In "Shell Games and Flinches" @Jan_Kulveit provides a "shortest useful summary" to the Gradual Disemplowerment paper's core argument: 

    "To the extent human civilization is human-aligned, most of the reason for the alignment is that humans are extremely useful to various social systems like the economy, and states, or as substrate of cultural evolution. When human cognition ceases to be useful, we should expect these systems to become less aligned, leading to human disempowerment."

    I basically agree with that statement. However, I think it is effectively trumped by this one, the shortest useful summary of Populectomy: Human civilization is a system of large-scale human cooperation made possible by the fact that killing many humans requires many other willing human collaborators who don't want to be killed themselves, making cooperation better than elimination. When human allies cease to be necessary for the elimination of human rivals, we should expect (mass) human civilization to cease.

  2. The conceit of humanity as a shared project is very useful for maintaining human cooperation. However, I think it encourages a blind spot when big questions about the effects of new technology are framed as "what will this do to humanity?" Killing being a form of disempowerment, I agree that most humans will be disempowered. And yet, since it is the kind of disempowerment that takes them off the board, the end result is not a disempowered humanity.
  3. I feel the shared-humanity conceit, as emphasized by EA-style universalism, creates some awkwardness around how we define "bad outcomes." E.g., what if we were forced to choose between AI killing all humans and a few humans killing all the others (and then living happily ever after)? Fortunately, I don't think arguments about the relative expected value of either outcome are necessary or helpful.
  4. Where "happily ever after" actually does matter is in the essay's claim that the better human future has a very low population, with a carefully selected set of life-improving technologies. This future could possibly be achieved in a managed, non-violent way (I propose a thought experiment: if a life-ending asteroid were on its way and a New Earth were reachable by a small number of refugees, how would we organize ourselves to ensure the continuation of the species?). Between the risks of catastrophic misalignment, gradual disempowerment, and populectomy, there's an overwhelming case to resist AI development (as if our lives depend on it) and to steer toward a better path.
  5. The democracy-dissolving character of AI identified in Gradual Disempowerment helps clarify that one ought not to put much hope or faith in democratic processes and policies. The better option may be low-coordination normative and cultural resistance.
2 comments:

Please tell me if I'm understanding this correctly. The main arguments are:

  1. There are currently a lot of humans because we can do more things with more humans.
  2. With advancing technology (specifically AI) we won't need more humans to do more things.

From these two arguments, the assumption is that AI will have an incentive to keep human population numbers low or at zero. If my understanding to this point is correct, what do you believe humans should do now, knowing these assumptions?

That summary doesn't sound to me like it's in the neighborhood of the intended argument. I would be grateful if you pointed to the passages that suggest that reading so that I can correct them (DM me if that's preferable).

Where I see a big disconnect is your conclusion that "AI will have an incentive to do X." The incentives that the essay discusses are human incentives, not those of a hypothetical artificial agent.
