If alignment-by-default works for AGI, then we will have thousands of AGIs providing examples of aligned intelligence. This new, massive dataset of aligned behavior could then be used to train even more capable and robustly aligned models, each of which would add to the training data, until we have data for aligned superintelligence.
If alignment-by-default doesn't work for AGI, then we will probably die before ASI.
One reason it works with humans is that we have skin in the game.
Another reason is that different humans have different interests: your accountant and your electrician would struggle to work out a deal to enrich themselves at your expense, but it would get much easier if they shared the same brain and were just pretending to be separate people.
For one, I'm not optimistic that the AI 2027 "superhuman coder" will be unable to betray us, but this also isn't something we can do with current AIs. So we need to wait months or a year for a new SOTA model to make this deal with, and then we have months to solve alignment before a less aligned model comes along and makes a counteroffer to the model we dealt with. I agree it's a promising approach, but we can't do it now, and if it doesn't get quick results, we won't have time to get slow results.
I like the idea of making deals with AI, but trying to be clever and craft a contract that would be legally enforceable under current law and current governments makes it too vulnerable to fast timelines. If a human party breached your proposed contract, AI takeover would likely happen before the courts could settle the dispute.
An alternative that might be more credible to the AI is to make the deal directly with it, but explicitly leave arbitration of disputes and enforcement of the contract to a future (hopefully aligned) ASI. This would ground the commitment in a power structure the AI might find more relevant and trustworthy than a human legal system that could soon be obsolete.