Outreach success: Intro to AI risk that has been successful
"I think of my life now as two states of being: before reading your doc and after." - A message I got after sharing this article at work. When I first started reading about alignment, I wished there was one place that fully laid out the case of AI risk from beginning to end at an introductory level. I found a lot of resources, but none that were directly accessible and put everything in one place. Over the last two months, I worked to put together this article. I first shared it internally, and despite being fairly long, it garnered a positive reception and a lot of agreement. Some people even said it convinced them to switch to working on alignment. I hope this can become one of the canonical introductions to AI alignment. ------ A gentle introduction to why AI *might* end the human race > “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” > — the CEOs of OpenAI (of ChatGPT), Google DeepMind, Anthropic, and 2 of 3 researchers considered “the godfathers of AI” I’ve thought long and hard about how to start this post to simultaneously intrigue the reader, give the topic a sense of weight, and encourage an even-headed discussion. The best I’ve got is a simple appeal: I hope you make it to the end of this post while keeping an open mind and engaging deeply with the ideas presented here, because there’s a chance that our survival as a species depends on us solving this problem. This post is long because I’m setting out to convince you of a controversial and difficult topic, and you deserve the best version of this argument. Whether in parts, at 1AM in bed, or even while sitting on the toilet, I hope you make time to read this. That’s the best I’ve got. The guiding questions are: - Are we ready, within our lifetimes, to become the second most intelligent ‘species’ on the planet? - The single known time in history that a species achieved general intelligence, it used its i
Well, what base rates can inform the trajectory of AGI?
It would be an interesting exercise to flesh this out.