We frequently speak about AI capability gains being bad because they shorten the timeframe for AI safety research. By that logic, taking steps to decrease AI capability would be worthwhile.
At the moment, large language models are trained on vast amounts of data that the companies training them do not license. If those companies were required to license their training data, the data available for language models would shrink severely, and their capabilities with it.
It's expensive to fight lawsuits in the United States. Currently, there are artists who feel their rights are violated by DALL-E 2 using their art as training data. Much as Peter Thiel funded the lawsuits against Gawker, it would be possible to fund artists in a suit against OpenAI to compel it to license the images used to train DALL-E 2. A well-funded lawsuit would be much more likely to set a precedent requiring data licensing, which would slow down AI development.
I'm curious what people who think more about AI safety than I do make of such a move. Would it be helpful?
Putin's military seems to be running out of high-precision munitions and is not doing well on the battlefield. It's hard to say how much of this is due to export controls and how much to other factors.
North Korea doesn't seem to have many reliable intercontinental missiles. Its technological development appears substantially slowed; progress isn't zero, but it isn't fast either.
China's tech advances are largely a result of the West outsourcing a lot of tech production to China rather than imposing embargoes. I don't know much about Chinese military tech.
Iran still doesn't have nuclear weapons. It might still get them, but there has certainly been a slowdown in development.
Nuclear and ballistic technology has been well understood for decades, so would-be proliferators have existing designs to copy; developing AGI without existing designs to copy will be much harder.