If you needed a reason to exit this subculture, here are several dozen, including the cult of genius, in-group overtrust, insularity, out-of-touchness, lack of rigor, and the absence of a sharp intellectual culture in its current environments.
Timestamps are included so you can skip to each topic in the video.
Finally, there are 34 references in the description. I would have included more, but that would have exceeded the character limit.
A bias is an error in weighting, proportion, or emphasis. This differs from a fallacy, which is specifically an error in reasoning. To make up an example, an attentional bias would be a misapplication of attention -- think of the famous invisible-gorilla experiment -- but no reasoning underlies that error per se. The ad hominem fallacy, by contrast, contains at least implicit reasoning about truth-valued claims.
Yes, it's possible that AI could be a concern for rationality. But AI is an object of rationality; in this sense, AI is like carbon emissions: there is room for applied rationality, absolutely, but it is not rationality itself. People who read about AI through this medium are not necessarily learning about rationality -- they may be, but they also may not be. As such, the overfocus on AI is a massive departure from the original subject matter, much as it would be if LessWrong became overwhelmed with ways to reduce carbon emissions.
Anyway -- that aside, I actually don't disagree much with most of what you said.
The issue is that when these concerns have been applied to the foundation of a community concerned with the same things, they have been staggeringly wrongheaded and have produced the disparities between mission statements and practical realities that form the basis of my objection. I am no stranger to criticizing intellectual communities; I have outright argued that we should expand the federal defunding criteria to include certain major universities, such as UC Berkeley itself. For all the faults that have been levied against academia -- and I have been enough of a critic of its norms to appear in Tucker Carlson's book ("Ship of Fools," p. 130) as a Person Rebelling Against Academic Norms -- I have never had a discussion as absurd as the one I had when questioning why MIRI should receive Effective Altruism funding. It was, and still is, one of the most bizarre and frankly concerning lines of reasoning I've ever encountered, especially when contrasted with the position EA leaders take on addressing homelessness or the drug war. The concept of LessWrong, and of much of EA, is not objectionable on its face; what has resulted from it absolutely is.