[Context: This post is aimed at readers who broadly agree that the current race toward superintelligence is bad, that stopping would be good, and that the technical pathways to a solution are too unpromising, and too hard to coordinate on, to justify going ahead.]
TL;DR: We address objections to a statement supporting a ban on superintelligence, raised by people who themselves agree that such a ban would be desirable.
Quoting Lucius Bushnaq:
I support some form of global ban or pause on AGI/ASI development. I think the current AI R&D regime is completely insane, and if it continues as it is, we will probably create an unaligned superintelligence that kills everyone.
We have been circulating a statement expressing ~this view,...
I think your advice is quite a clear articulation of the strategy behind the CAIS statement. I think this is a great and difficult strategy to pursue. Indeed, I have been circulating https://superintelligence-statement.org/ for some time now (secretly), and I believe those behind the statement first made sure it would get some of the major signatories before it even made its way to me.
I think the OP both addresses the object-level reasons people gave and points to another strategy one might pursue: going around and convincing people who already support an existing statement privately to support it publicly.