The most common anti-safety arguments I see in the wild, neither steel-manned nor straw-manned:
Doomers can’t provide the exact steps a superintelligence would take to eliminate humanity.
Currently, doomers seem to have a lot of trouble explaining the motivation; the "how" steps are a lot easier.
This post was meant as a summary of common rebuttals. I haven't actually heard much questioning of the motivation, as instrumental convergence seems fairly intuitive. The more common question is how an AI could actually, physically achieve the destruction.