Super handy-seeming intro for newcomers.
I recommend adding Jade Leung to your list of governance people.
As for the list of AI safety people, I'd add a few who have written interesting, much-discussed content that's worth having some familiarity with:
John Wentworth
Steven Byrnes
Vanessa Kosoy
And personally I'm quite excited about the school of thought developing under the 'Shard theory' banner.
For shard theory info:
https://www.lesswrong.com/posts/xqkGmfikqapbJ2YMj/shard-theory-an-overview
https://www.alignmentforum.org/posts/vJFdjigzmcXMhNTsx/simulators
Thanks! I'll keep my opinionated/specific overview of the alignment community, but I know governance less well, so I'm happy to defer there.
Getting into AI safety involves working with a mix of communities, subcultures, goals, and ideologies that you may not have encountered in the context of mainstream AI technical research. This document attempts to briefly map these out for newcomers.
This is inevitably going to be biased by what sides of these communities I (Sam) have encountered, and it will quickly become dated. I expect it will still be a useful resource for some people anyhow, at least in the short term.
AI Safety/AI Alignment/AGI Safety/AI Existential Safety/AI X-Risk
The research project of ensuring that future AI progress doesn't lead to catastrophic, civilization-ending outcomes.
Effective Altruism/EA
The research project and social movement of doing as much good as possible with limited resources.
Longtermism
The ethical principle that the consequences of our actions on other people matter equally wherever and whenever those consequences are felt. Because our choices could influence a potentially huge number of future people, this principle implies that our impact on the longer-term future should be a central part of ethical decision-making.
The Rationalist Subculture/The LessWrong Crowd/Berkeley-Style Rationalism/The Rats
A distinctive social group focused on using reason and science as thoroughly and deeply as possible in everyday life and important life decisions.
AGI Optimism
The view that building (aligned) AGI will lead to a post-scarcity, galaxy-spanning, pluralist utopia and would be humanity’s greatest achievement.
AI Ethics/Responsible AI/The FAccT Community
The research and political project of minimizing the harms of current and near-future AI/ML technology and of ensuring that any benefits from such technology are shared broadly.
(Long-Term) AI Governance
The project of developing institutions and policies within present-day governments to help increase the chances that AI progress goes well.
Acknowledgments
Thanks to Alex Tamkin, Jared Kaplan, Neel Nanda, Leo Gao, Fazl Barez, Owain Evans, Beth Barnes, and Rohin Shah for comments on a previous version of this.