The ability to make arbitrary public binding precommitments seems like a powerful tool for solving coordination problems.
We'd like to be able to commit to cooperating with anyone who will cooperate with us, as in the open-source prisoner's dilemma (though even this simple case is still an open problem, AFAIK). But we should be able to make progress piecemeal.
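To make the open-source prisoner's dilemma concrete: the simplest known strategy is a "mirror bot" that reads its opponent's source and cooperates exactly when that source matches its own. This is a toy sketch in Python (the names `mirror_bot`, `defect_bot`, and the string-based source representation are illustrative assumptions, not anything from the literature verbatim):

```python
def mirror_bot(my_src: str, opp_src: str) -> str:
    """Cooperate iff the opponent's source code matches mine exactly."""
    return "C" if opp_src == my_src else "D"

def defect_bot(my_src: str, opp_src: str) -> str:
    """Defect unconditionally, ignoring the opponent's source."""
    return "D"

def play(a, a_src, b, b_src):
    """Run one open-source round: each program sees the other's source."""
    return a(a_src, b_src), b(b_src, a_src)

# Two copies of the mirror bot cooperate with each other:
print(play(mirror_bot, "mirror", mirror_bot, "mirror"))  # ('C', 'C')

# Against a defector, the mirror bot safely defects:
print(play(mirror_bot, "mirror", defect_bot, "defect"))  # ('D', 'D')
```

The limitation is visible right away: exact source matching fails against a semantically identical bot whose code is written differently, which is part of why the general case remains open.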
It seems like we are moving in this direction with things like Ethereum, which enables smart contracts. Technology should let us enforce more real-world precommitments, since we'll be able to monitor and publish our private data more easily.
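Smart contracts can make precommitments self-enforcing, but even the basic cryptographic building block, a commit-reveal scheme, already makes them publicly verifiable. A minimal sketch in plain Python (not tied to any particular platform; the function names are my own):

```python
import hashlib
import secrets

def commit(message: str) -> tuple[str, str]:
    """Publish the digest now; keep the message and nonce private until reveal."""
    nonce = secrets.token_hex(16)  # random salt so the message can't be guessed
    digest = hashlib.sha256((nonce + message).encode()).hexdigest()
    return digest, nonce

def verify(digest: str, nonce: str, message: str) -> bool:
    """Anyone can check a revealed message against the published commitment."""
    return hashlib.sha256((nonce + message).encode()).hexdigest() == digest

digest, nonce = commit("I will cooperate")
print(verify(digest, nonce, "I will cooperate"))  # True
print(verify(digest, nonce, "I will defect"))     # False
```

This only makes a commitment binding in the sense that you can't later deny what you committed to; actually enforcing the committed behavior is the part that needs something like on-chain escrow or social/legal machinery.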
Optimistically, I think this could allow us to solve coordination issues robustly enough to have a very low probability of any individual actor making an unsafe AI. This would require a lot of people to make the right kind of precommitments.
I'm guessing there are a lot of potential downsides and ways this could go wrong, which y'all might want to point out.
If robust public precommitments become available, people will be incentivized to share private things, because we all stand to benefit from more information. Given human nature, we might settle on some agreement where certain information stays private or differentially private, and/or where private information is accessed only via secure computation to determine things relevant to the public interest.
We have precommitments already. It's just that every time someone follows through on one, people at LW are eager to jump on them for being irrational, because they obviously made the choice that produces less of what they want than some alternative would have. But emotional reactions that predictably lead to "irrational" behavior are themselves a form of precommitment.
Of course this doesn't lead to arbitrary precommitments.