Correct. It lacks tactical practicality right now, but from a macro-directional perspective, I think it's sensible to align all of my current actions with that end goal. And I believe there is huge demand among business-minded intellectuals and ambitious people for a community like this to be created.
AI isn't really a new technology though, right? Do you have evidence of past alarmism about AI?
And do you have anecdotes of intelligent/rational people being alarmist about a technology where the alarm turned out to be unfounded?
I think these pieces of evidence/anecdotes would strengthen your argument.
What is your estimated timeline for humanity's extinction if it continues on its current path?
What information forms the foundation of your beliefs about the progress of science & technology?
How do you think competent people can solve this problem within their own fields of expertise?
For example, the EA community is a small but effective community of the kind you've referenced, focused on commonplace charity/altruism practices.
How could we solve the median researcher problem & improve the efficacy & reputation of altruism as a whole?
Personally, I suggest taking a marketing approach. If we endeavor to understand the important similarities among "median researchers", so that we can talk to them in the language they want to hear, we may be able to attract attention from the broader altruism community, which can eventually be leveraged to place EA in a position of authority or expertise.
What do you think?
What do you mean by red flag? Red flag on the author's side? If so, I don't understand your sentiment here.
Partisan issues exist.
I don't understand what you're saying here, but I want to understand.
Can you explain it like I'm 5?
Could I get some constructive criticism about why I'm being downvoted? It would help me avoid the same mistakes in the future.