Please tell me if I'm understanding this correctly. The main arguments are:
From these two arguments the assumption is that AI will have an incentive to keep human population numbers low or at zero. If my understanding to this point is correct, what do you believe humans now should do knowing these assumptions?
That summary doesn't sound to me like it's in the neighborhood of the intended argument. I'd be grateful if you pointed to the passages that suggest that reading so I can correct them (DM me if that's preferable).
Where I see a big disconnect is your conclusion that "AI will have an incentive to do X." The incentives that the essay discusses are human incentives, not those of a hypothetical artificial agent.
A long-form iteration of my "AI will lead to massive violent population reduction" argument: https://populectomy.ai
The host name, populectomy, is my attempt at naming the described outcome, a name I hope is workable (sufficiently evocative and pithy, without being glib). Otherwise I'm out 150 USD for the domain registration; .ai domains come at a premium.
I've mimicked the paper-as-website model with a <bad-outcome>.ai domain name, as used by @Jan_Kulveit, @Raymond D, @Nora_Ammann, @Deger Turan, David Krueger, and @David Duvenaud for Gradual Disempowerment. Mimicry being the highest form of flattery and what-not. Nice serif font styling like theirs is on my wish list.
Here's my previous post on the topic.
A few words may shortcut reading the whole thing, especially if you've read the previous post:
In "Shell Games and Flinches" @Jan_Kulveit provides a "shortest useful summary" to the Gradual Disemplowerment paper's core argument:
"To the extent human civilization is human-aligned, most of the reason for the alignment is that humans are extremely useful to various social systems like the economy, and states, or as substrate of cultural evolution. When human cognition ceases to be useful, we should expect these systems to become less aligned, leading to human disempowerment."
I basically agree with that statement. However, I think it is effectively trumped by this one, the shortest useful summary of Populectomy: Human civilization is a system of large-scale human cooperation made possible by the fact that killing many humans requires many other willing human collaborators who don't want to be killed themselves, making cooperation better than elimination. When human allies cease to be necessary for the elimination of human rivals, we should expect (mass) human civilization to cease.