Imagine that research into creating a provably Friendly AI fails. At some point in the 2020s or 2030s it seems that the creation of UFAI is imminent. What measures then could the AI Safety community take?