This is a link-post to a piece I just posted to the EA Forum, discussing negative aspects of Eliezer Yudkowsky's forecasting track record. If it receives significant discussion here, you may also want to look at the comments on the Forum (e.g. Gwern just posted a useful critical comment there).
It seems possible to me that you're seeing a selection effect, where the parts of the field that disagree with Eliezer generally don't bother to engage with him or with the communities around him.
It's possible to agree on ideas like "it is possible to create agent AGI" and "given the right preconditions, AGI could destroy a sizeable fraction of the human race," while still disagreeing with nearly all of Eliezer's other beliefs and claims on the same topic.
That in turn would lead to different beliefs about what types of approaches will work, which could go a long way towards explaining why so many AI research labs are not pursuing ideas like pivotal acts or other Eliezer-endorsed solutions.
For example, the linked post doesn't use this quote when discussing Eliezer's belief that intelligence doesn't require much compute, but as recently as 2021 (?) he said:
"or maybe not, I don't know" is doing a lot of work in covering that statement.