AI Impacts has a related project, Resisted Technological Temptations, where we try to figure out "under what circumstances can large concrete incentives to pursue technologies be overcome by forces motivated by uninternalized downsides, such as ethical concerns, risks to other people with no recourse, or risks the decisionmaker does not believe in."
Our current non-exhaustive list of plausible cases includes:
- Nuclear safety regulations and bureaucracy
  - These have made nuclear power marginally safer, but at great cost, while both limiting innovation and disallowing safer new nuclear power plants.
  - This was intentional on the part of lobbyists and of those who have reinforced the legislation, but likely not on the part of the original lawmakers.
- Immigration law
  - Strict immigration rules have greatly reduced the wealth and economic power of the countries that impose them.
  - The lawmakers were intentionally pursuing racist policies, but the economic impacts were most likely unintended.

We have not published our investigations into any of these particular cases yet, but hope to soon. If you would like to talk with us about what you would find most decision-relevant from this project, please let me or Rick Korzekwa know.
- Nuclear energy. In some countries it was crippled deliberately out of fear (perhaps due to association with nuclear weapons); in other countries the stagnation seems to have been an accidental byproduct of safety culture, e.g. the US and France are fairly gung-ho on nuclear but haven't made any huge progress because of bureaucracy.
- Genetically modified organisms in Europe. Resistance here also seems to be fear-driven.
- Research into the genetic basis of intelligence, which could inform e.g. polygenic screening. Such screening is already very common in certain countries, but there are limits on what those using it are allowed to do, or even to know.
In order to seriously consider promoting policies aimed at slowing down progress toward transformative AI, I want a better sense of the reference class of such policies.