Disagreements over the prioritization of existential risk from AI
Earlier this year, the Future of Life Institute and the Center for AI Safety published open letters that promoted existential risks (x-risks) from AI as a global priority. In July, Google Research fellow Blaise Aguera y Arcas and MILA-affiliated AI researchers Blake Richards, Dhanya Sridhar, and Guillaume Lajoie co-wrote an...