I moved the big meta-level comment thread from "Yes Requires the Possibility of No" over here, since it seemed mostly unrelated to that top-level post. Keeping this off the frontpage also makes it easier for people to directly discuss the moderation and meta-level norms.
That is very reasonable and fair. In practice, I don't think I'll write such a compilation post any time soon, because (i) I have already created too much drama, (ii) I don't enjoy writing call-out posts, and (iii) my time is much better spent working on AI alignment.
Upon reflection, my strong reaction was probably because my System 1 is designed to deal with Dunbar-number-sized groups. In a tribe that size, a single voice with an agenda that, if implemented, would put me in physical danger already poses a notable risk. In a civilization of millions, however, the significance of one such voice is microscopic (unless it is exceptionally charismatic or otherwise influential). AGI, on the other hand, is a serious risk, and one that I'm much better equipped to affect.
Sorry for causing all this trouble! Hopefully putting this analysis here in public will help me to stay focused in the future :)