Morality must scale to be useful. A common failure mode is "performative morality": choosing low-cost signaling over high-cost action. Which is easier: starting a sustainable energy company (and scaling it), or donating $10 to climate change activism and tweeting about the seas rising 24/7?
But "easy" doesn't mean "effective". A sustainable energy company has 1000x the impact on reversing climate change that a $10 donation does. Sure, not everyone can (or should) build a company; but then it shouldn't be the case that the person don...
I have yet to read the paper, but my initial reaction is that this is a classic game-theoretic problem in which players must weigh the incentives to defect or cooperate. For example, I'm not sure a Manhattan Project-style effort for AI in the US is extremely unreasonable when China already has something of that sort.
My weakly held opinion is that you cannot get adversarial nation-states at varying stages of developing a particular technology to mutually hamstring future development. China is unlikely to halt AI development (it is already moving to...