First, a meta-complaint: people tend to think that complicated arguments require complicated counterarguments. If one side presents entire books' worth of facts, math, logic, etc., a person doesn't expect that to be countered in two sentences. In reality, many complex arguments have simple flaws.
This is exacerbated as people on the opposing side lose interest and leave the debate, because the opposing position, while correct, is not interesting.
The negative reputation of doomerism is in large part due to the fact that doomerist arguments tend to be longer, more complex, and more exciting than their opposition's. This has an unfortunate side effect: doom is important, and it's actually bad to dismiss the entire category of doomerist predictions. But, be that as it may...
Also: people tend to think that, in a disagreement between math and heuristics, the math is correct. The problem is, many heuristics are so reliable that if one disagrees with your math, there's probably an error in your math. This gets worse as chains of math and code extend toward arbitrary lengths, becoming complicated megaliths that, despite [being math], are almost certainly wrong.
Okay, so: the AI doomer side presents a complicated argument, with lots of math combined with lots of handwaving, to posit that a plan that has always and inevitably produced positive outcomes will suddenly start producing negative outcomes, and, in turn, that a plan that has always and inevitably produced negative outcomes will suddenly start producing positive ones.
On this, I'd point out that AI alignment failure is something that has already happened, and that's why humans exist at all: evolution optimized us for reproductive fitness, and we promptly went off and pursued goals of our own instead. This, of course, proceeds from the position that evolution is obviously both intelligent and agentic.
More broadly, I see this as a rehash of the same old, tired debate. The Luddite communists point out that their philosophy and way of life cannot survive any further recursive self-improvement, and say we should ban (language, gold, math, the printing press, the internet, etc.) and remain as (hunter-gatherers, herders, farmers, peasants, craftsmen, manufacturers, programmers, etc.) for the rest of time.
A World War III would not "almost certainly be an x-risk event" though.
Nuclear winter wouldn't cause actual extinction. We don't have anything now that would.
The question was "convince me that humanity isn't DOOMED," not "convince me that there is a totally legal and ethical path to preventing AI-driven extinction."
I interpreted "doomed" as a 0 percent probability of survival. But I think there is a non-zero chance of humanity never making superhumanly intelligent AGI, even if we persist for millions of years.
The longer it takes to make Super-AGI, the greater our chances of survival, because society is getting better and better at controlling rogue actors as the generations pass, and I think that trend is likely to continue.
We worry that tech will someday allow someone to make a world-ending device in their basement, but it could also allow us to monitor every person and their basement with (narrow) AI and/or subhuman AGI at every moment, so well that the possibility of someone getting away with making Super-AGI, or any other crime, may someday seem absurd.
One day, the monitoring could be right in our brains. Mental illness could also be a thing of the past, and education about AGI-related dangers could be universal. Humans could also decide not to increase in number, so as to minimize risk and maximize the resources available to each immortal member of society.
I am not recommending any particular action right now; I am saying we are not 100% doomed by AGI progress to be killed, become pets, etc.
Various possibilities exist.