Here is a list of Q&A from https://aisafety.info/. When I discovered the site, I was impressed by the volume of material it has produced. However, the interface is optimized for beginners, so the following table of contents is for readers who want to navigate the various sections more freely. It was constructed by clustering the Q&A into subtopics. I'm not involved with aisafety.info; I just want to increase the visibility of the content they have produced by presenting it in a different way. They are also working on a new interface. This table can also be found at https://aisafety.info/toc/.
🆕 New to AI safety? Start here.
📘 Introduction to AI Safety
🧠 Introduction to ML
🤖 Types of AI
🚀 Takeoff & Intelligence explosion
📅 Timelines
❗ Types of Risks
🔍 What would an AGI be able to do?
🌋 Technical sources of misalignment
🎉 Current prosaic solutions
🗺️ Strategy
💭 Consciousness
❓ Not convinced? Explore the arguments.
🤨 Superintelligence is unlikely?
😌 Superintelligence won’t be a big change?
⚠️ Superintelligence won’t be risky?
🤔 Why not just?
🧐 Isn't the real concern…
📜 I have certain philosophical beliefs, so this is not an issue
🔍 Want to understand the research? Dive deeper.
💻 Prosaic alignment
📝 Agent foundations
🏛️ Governance
🔬 Research Organisations
🤝 Want to help with AI safety? Get involved!
📌 General
📢 Outreach
🧪 Research
🏛️ Governance
🛠️ Ops & Meta
💵 Help financially
📚 Other resources