It's very hard to bring the various members of the AI world together around one table, because some people who work on long-term/AGI-style policy tend to ignore, minimize, or simply not consider the immediate problems of AI deployment/harms.
This is pointing at an ongoing bravery debate: I'm sure the feeling is real, but the "AGI-style" people likewise see their concerns being ignored & minimized by the "immediate problems" people, and so feel they need to become even more strident.
This dynamic is really bad. I'm not sure what the systemic solution is, but as a starting point I would encourage people reading this to vocally support both immediate-problems work and long-term-risks work, rather than engaging in bravery-debate-style reasoning like "I'll only ever talk about long-term risks because they're underrated in The Discourse". Obviously, do this only to the extent that you actually believe it! But most longtermists believe that at least some kinds of immediate-problems work are valuable (at least relative to the realistic alternative, which, remember, is capabilities work!), and should be more willing to say so.
Ajeya's post on aligning narrow models and the Pragmatic AI Safety Sequence come to mind as particularly promising starting points for building bridges between the two worlds.
Support.
I would add that The Alignment Problem by Brian Christian is a fantastic general-audience book showing that immediate and long-term AI policy really are facing the same problem, and that both will go better if we all work together.
Reading this has been an absolute fever dream. That's not something that happens with pieces that are mostly or totally inaccurate, like the various clickbait articles from major news outlets covering AI safety.
One thing it seems to get wrong is the typical libertarian impulse to overestimate the sovereignty of major tech companies. In the business world they are clearly the big fish, but on the international stage their cybersecurity departments are heavily dependent on logistical and counterintelligence support from various military and intelligence agencies. Corporations might be fine at running honeypots, but they aren't well known for procuring agents willing to spend years risking their lives operating behind enemy lines.
Chinese tech companies have similar and even stronger dependencies on their state. Both sides of the Pacific have had a pretty centralized and consistent obsession with minimizing the risk of falling behind on AI since 2018 at the latest (see page 10).
Some choice picks: