Whatever the rational case for a ban, it won't go far; it won't happen, because the geopolitical game between superpowers, which already carries existential risks of its own, currently looms larger than any future risks that could emerge from advanced AI development. And if at some point in that geopolitical game you lack sufficiently developed AI technology, you may well face an even bigger existential risk, much like not having nuclear weapons.
So, there you go: do you think risks that are almost nonexistent today (albeit plausibly arriving in the very near future) can outweigh the multiple other, hotter, present existential risks?