In government, it’s not just a matter of having the best policy; it’s about getting enough votes. This creates a problem when the self-interests of individual voters don’t match the best interests of the country.
For instance, voting researchers widely consider America’s presidential voting system to be inferior to many alternatives. But to change it, you need the consent of Democrats and Republicans, i.e. the very people who benefit from the status quo.
Or consider the land-value tax. Economists widely regard this tax as uniquely efficient: because the supply of land is fixed, taxing it causes no reduction in the quantity of the thing being taxed. When implemented correctly, it handles edge cases such as new property developments, and can even avoid discouraging genuine land creation, such as the building of artificial islands. However, this policy is against the interests of current landowners, so any politician who advocated it would not only fail to pass it but would likely end their political career.
What do these policies have in common? Both yield long-run benefits and, as we’ve just seen, impose short-run costs on the very people whose consent is needed to pass them. (If you disagree that these particular policies are beneficial in the long run, I’m sure you can think of policies you do like that have long-run benefits and short-run costs. The examples are only for clarity.)
What if, rather than asking American politicians to vote against their own interests, we asked them to pass a policy today that takes effect only after 100 years? Significant advances in medicine notwithstanding, by that time most of today’s politicians will be dead. In other words, they would no longer be voting against their own interests.
The same strategy works for the land-value tax. If we passed a 100-year-delayed land-value tax today, the effect on today’s house prices would be approximately zero (see the widely accepted net-present-value model of asset prices).
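To make that concrete, here is a minimal sketch of the net-present-value arithmetic. The 5% discount rate and the $1,000/year tax are illustrative assumptions of mine, not figures from any source:

```python
# Present value of a land-value tax stream that begins in 100 years,
# under the standard net-present-value model of asset prices.
# Assumed numbers (illustrative only): 5% annual discount rate,
# $1,000/year tax on a given parcel, paid in perpetuity.

discount_rate = 0.05
delay_years = 100
annual_tax = 1_000.0

# Value today of a perpetuity of `annual_tax` starting immediately:
immediate_burden = annual_tax / discount_rate  # $20,000

# The same perpetuity, but deferred by `delay_years`:
delayed_burden = immediate_burden / (1 + discount_rate) ** delay_years

print(f"Immediate tax burden:        ${immediate_burden:,.0f}")
print(f"Burden if delayed 100 years: ${delayed_burden:,.2f}")
# -> roughly $152, i.e. under 1% of the immediate burden
```

Under this model, a buyer’s willingness to pay falls by the present value of the taxes they expect to bear, and a century of discounting shrinks that figure to less than 1% of what an immediate tax would cost them.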
I believe this strategy offers a significant opportunity. Policy work is often seen in the EA community as too crowded to be tractable, yet almost no policymakers are thinking about the world 100 years from now. It might be possible to pass a lot of “unpassable” policies. We just have to ensure that the policies we propose are actually good (see my unfinished series on the topic) and have a large barrier to reversal, so that the politicians of the future can’t renege on the government’s commitment when the day of implementation arrives.
I have a follow-up post here: Enforcing Far-Future Contracts for Governments.
Why are mathematicians so confident about their results? It's evident that they are highly confident, and it's evident that they're justified in being so: their results literally hold for the rest of time. They're not going to flip in 100 years. So what explains this?
Basically, there are a few stages of belief. Sometimes a belief is stated on its own, without anything else. Sometimes a justification is given for that belief. And sometimes it's explained why that justification is a reliable indicator of truth. You might think this produces an infinite regress, but in practice it doesn't: you eventually reach justifications so trivially obvious that it almost feels silly to list them as assumptions in your argument.
Voting systems have been worked out to the standard level of mathematical detail (i.e. belief plus justification). And the methodology post I linked explains why justifications of the form voting theorists use are unambiguously correct in the world of policy. (Again, that series is not finished yet, but the only roadblock is making it entertaining to read; I've already worked out the mathematical details.)
So to me, arguing against a voting system change is like saying “Maybe there are a finite number of primes” or “Maybe this table I'm resting my arms on right now doesn't actually exist”. I.e. these really are things that we can, for all intents and purposes, be certain of. And if you're not certain of these basic things, we can't really ever discuss anything productively.
It's not a matter of the Dunning–Kruger effect; it's that experts understand these problems well enough. You can find professors who specialise in voting theory and ask them. Ask them “Is there any chance that replacing the current presidential voting system with any of the most promising current alternatives will be a mistake in 100,000 years?” The amount of time is totally irrelevant when you understand a problem well enough. One plus one will always equal two.
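To see what that kind of certainty looks like mechanically, here is that exact claim as a machine-checked proof. This is a minimal Lean 4 sketch; the theorem name is my own:

```lean
-- A machine-checked proof that 1 + 1 = 2 (Lean 4).
-- `rfl` succeeds because both sides reduce to the same value;
-- once the checker accepts it, the result holds for all time.
theorem one_plus_one_eq_two : 1 + 1 = 2 := rfl
```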
Conversely, AI safety's whole problem is that we don't have anything like that. We have no confidence that we can control these systems. We have proposals, we have justifications for those proposals, but we have no reason to believe that those justifications reliably lead to truth.
To be clear, I'm not saying every policy problem is solved. But some policy problems are solved. (Or, in the case of voting theory, sufficiently solved that the leading proposals far outperform the current system, and Arrow's impossibility theorem tells us that no as-yet-undiscovered system will blow those proposals out of the water.) And establishing some of those policies is difficult because of short-term incentives. The delay tactic is a way to implement that specific subset of policies, and only that subset.
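For readers who haven't seen why the search space is bounded at all, here is the classic three-voter illustration that Arrow's theorem generalizes. The candidate names and ballots are my own toy example:

```python
# Three voters, three candidates: majority preferences can cycle.
from itertools import permutations

ballots = [
    ["A", "B", "C"],  # voter 1: A > B > C
    ["B", "C", "A"],  # voter 2: B > C > A
    ["C", "A", "B"],  # voter 3: C > A > B
]

def majority_prefers(x, y):
    """True if more voters rank x above y than y above x."""
    wins = sum(b.index(x) < b.index(y) for b in ballots)
    return wins > len(ballots) / 2

for x, y in permutations("ABC", 2):
    if majority_prefers(x, y):
        print(f"majority prefers {x} over {y}")
# Output: A over B, B over C, C over A -- a cycle, so "what the
# majority wants" is not even well-defined here. Arrow's theorem
# generalizes this kind of structural fact to all ranked systems.
```

The point is that the tradeoffs between voting systems are mathematically mapped, which is exactly why experts can be confident the leading proposals won't be overturned by some undiscovered alternative.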
Denying this would require you to believe that no such policies exist. Which would commit you to saying, “Hey, maybe the Saudi Arabian policy of cutting off a child thief's hand shouldn't be revoked in 50 years. Who can say whether that'll be a good policy at that point?”