There is no contradiction between AI carrying huge potential risks and carrying huge potential upsides if we navigate those risks. Both follow from the prospect of AI becoming extremely powerful. The benefits that human-aligned AGI could bring are a major part of what motivates researchers to build...
Context: These are linkposts for articles in the new intro to AI safety series from AISafety.info. We'd appreciate any feedback. The most up-to-date versions of these articles are on our website.

- [10: Advanced AI is a big deal even if we don’t lose control](https://aisafety.info/questions/NM3G/10:-Advanced-AI-is-a-big-deal-even-if-we-don%E2%80%99t-lose-control): So far, we’ve discussed one class of possible consequences of advanced AI: systems ending...
- [9: Defeat may be irreversibly catastrophic](https://aisafety.info/questions/NM3P/9:-Defeat-may-be-irreversibly-catastrophic): When you imagine a global catastrophe, maybe the kind of event that comes to...
- [8: AI can win a conflict against us](https://aisafety.info/questions/NM3O/8:-AI-can-win-a-conflict-against-us): Suppose an AI has realized that controlling the world would let it achieve its...
- [7: Different goals may bring AI into conflict with us](https://aisafety.info/questions/NM3H/7:-Different-goals-may-bring-AI-into-conflict-with-us): Aligning the goals of AI systems with our intentions could be really hard. So...
- [6: AI’s goals may not match ours](https://aisafety.info/questions/NM3I/6:-AI%E2%80%99s-goals-may-not-match-ours): Making AI goals match our intentions is called the alignment problem. There’s some ambiguity...
- [5: AI may pursue goals](https://aisafety.info/questions/NM3J/5:-AI-may-pursue-goals): Suppose that, as argued previously, in the next few decades we’ll have superintelligent systems....