If they are open-source, then doesn't it mean that anyone can check how the models' alignment is influenced by training or adding noise? Or does it mean that anyone can repeat the training methods?
Generally, "open source" model releases include the inference code and the weights, but not the exact training data, and often not details of the training setup. (For instance, DeepSeek has done a pile of hacking on how to get the most out of their H800s, which remains private.)
It looks like the compromise between those in the current administration who want to ban Chinese AI models in the United States and those who don't will be to allow the models but require some safety standards to prevent China from harming the United States.
While we would generally want more AI regulation than the current administration is willing to create, this is a window in which the AI safety community can potentially affect safety policy.
Within the existing political constraints, what standards for the Chinese models should we wish for?