The reward is not the creation of uncontrolled AGI. The reward is the creation of powerful not-yet-AGI systems that can drastically accelerate a country's technical, scientific, or military progress.
That's a pretty huge potential upside, and the consequences of another superpower developing such technology first could be catastrophic. So countries face both a reward for defecting and a risk of losing everything if the other country defects.
Yes, such an "AI race" is very dangerous. But so was the nuclear arms race, and countries ran it anyway.
Oh, I don't think anyone is going to be convinced not to build not-yet-AGI.
But it seems totally plausible to convince people not to build systems that they think have a real possibility of killing them, which, again, consequentialist systems will do, because we don't know how to build an off-switch.