AI existential risk has been in the news recently. Many people have become interested in the problem, and some want to know what they can do to help. At the same time, existing routes to advice, such as AI Safety Support, 80,000 Hours, AGI Safety Fundamentals, and AI Safety Quest, are becoming overwhelmed. With this in mind, we've created a new FAQ as part of Stampy's AI Safety Info, based mostly on ideas from plex, Linda Linsefors, and Severin Seehrich. We're continuing to improve these articles and welcome feedback.
Start at the root of the tree and click through the linked articles at the bottom of each page to reach the article that best fits your situation. The tree also branches out into the rest of AISafety.info.
Or you can just look at the full list here: