A statement of concern signed by all of the famous major players and many other respected technologists: https://www.safe.ai/statement-on-ai-risk
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
I won't be offended if someone from an institution wants to start another list or completely replace the content (if so, go ahead), but I've made the tiniest start here. I invite others to add, edit, or take over: https://docs.google.com/spreadsheets/d/1rJkw0YLe9XMe1Zi_QdWzPwJt2Kld6HeLn1ehlyWos4A/edit#gid=0
I'd suggest ordering the list by which names we most expect to carry weight with the kind of audience that would need to see a list like this. So don't place Elon Musk at the top, or possibly include him at all, unless you can write an exceptionally convincing "relevant qualifications" entry for him.
Remembering Scott's "AI Researchers On AI Risk": the debate about AI is heating up, and a lot of people rely on authority figures to decide what to believe. Do we have a current list of notable AI X-risk believers? Would there be value in compiling such a list?