I disagree. You seem to think that the list of missing technologies sketched by Crawford is exhaustive, but it's not. One example that ties in with your conclusions: paper. Maybe the Romans could have invented the printing press, I'm not sure, but printing on super-expensive vellum or papyrus is pointless.
And that's just one example. Here's another: the Romans spread and improved watermills, so they were interested in labor-saving technology, contra your argument. But their mills were not as good or as widespread as modern or even late medieval ones. (Mill technology was also very important to the industrial revolution, as you mention.)
You could also try to fit an ML potential to some expensive method, but it's very easy to produce very wrong results if you don't know what you're doing (I wouldn't be able to, for one).
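For concreteness, the fitting idea is just regression of reference energies onto structural descriptors. Here is a toy sketch with synthetic data standing in for the expensive reference; nothing like a production GAP/NequIP-style potential, and the descriptor and "energies" are made up purely for illustration:

```python
# Toy sketch of the regression idea only -- NOT a production ML potential.
# The descriptor and the "reference energies" below are synthetic stand-ins.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)

def descriptor(positions):
    # Sorted inverse pairwise distances: crude but permutation-invariant.
    d = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)
    iu = np.triu_indices(len(positions), k=1)
    return np.sort(1.0 / d[iu])

# Pretend these structures and energies came from the expensive method:
train_structs = [rng.normal(scale=2.0, size=(5, 3)) for _ in range(200)]
X = np.array([descriptor(p) for p in train_structs])
y = X.sum(axis=1) + 0.01 * rng.normal(size=len(X))   # fake "reference energies"

model = KernelRidge(kernel="rbf", alpha=1e-6, gamma=0.5).fit(X, y)

new_struct = rng.normal(scale=2.0, size=(5, 3))
print("predicted energy:", model.predict(descriptor(new_struct)[None, :])[0])
```

The hard part that the toy hides is exactly the failure mode I mean: the fit is only trustworthy inside the region the training structures cover.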
Ahh, for MD I mostly used DFT with VASP or CP2K, but then I was not working on the same problems. For thorny cases (biggish systems where plain DFT fails, but no MD needed) I had good results using hybrid functionals and tuning their parameters to match some result from higher-level methods. Did you try meta-GGAs like SCAN? Sometimes they are surprisingly decent where PBE fails catastrophically...
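In case it's useful, these are roughly the INCAR tags I mean. Just a sketch (ENCUT, k-points and convergence settings omitted, and everything needs checking for your system); AEXX is the parameter I'd tune against a higher-level reference:

```python
# Sketch of the two INCAR variants I mean, written out via Python only for
# concreteness. All values are starting points, not converged settings.
scan_incar = """\
METAGGA = SCAN     ! meta-GGA; R2SCAN is often better behaved numerically
LASPH   = .TRUE.   ! aspherical corrections, recommended with meta-GGAs
"""

hybrid_incar = """\
LHFCALC  = .TRUE.  ! switch on exact exchange (hybrid functional)
HFSCREEN = 0.2     ! screening parameter, HSE06-style; omit for PBE0-style
AEXX     = 0.25    ! fraction of exact exchange: the knob to tune against
                   ! a higher-level reference result
"""

for name, text in (("INCAR.scan", scan_incar), ("INCAR.hybrid", hybrid_incar)):
    with open(name, "w") as f:
        f.write(text)
```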
My job was doing quantum chemistry simulations for a few years, so I think I can comprehend the scale, actually. I had access to one of the top-50 supercomputers, and the codes just do not scale to that number of processors for a single simulation, regardless of system size (even if they had let me launch a job that big, which was not possible).
Isn't this a trivial consequence of LLMs operating on tokens as opposed to letters?
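For anyone who wants to see it concretely, here's a quick check with tiktoken (assuming the cl100k_base vocabulary; the exact split varies by tokenizer):

```python
# Quick illustration (needs `pip install tiktoken`): the model never sees
# letters, only token IDs for multi-character chunks.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # GPT-4-era BPE vocabulary
word = "strawberry"
tokens = enc.encode(word)
print(tokens)                                              # a few integer IDs
print([enc.decode_single_token_bytes(t) for t in tokens])  # the chunks they stand for
# Counting the r's requires reasoning about characters the model never
# directly observed as separate symbols.
```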
True, but this doesn't apply to the original reasoning in the post: he assumes constant probability, while you need increasing probability (as with the balls) to make the math work.
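(I don't know the post's exact numbers, but the qualitative difference I mean is something like this, with made-up figures:)

```python
# With a constant per-trial probability, the chance of "no success yet" only
# decays geometrically; drawing balls without replacement, the per-draw
# probability rises and success becomes certain within N draws.
N, k = 100, 10          # hypothetical: 100 balls, 10 of them "winners"
p = k / N

def p_no_success_constant(n):   # with replacement: constant probability
    return (1 - p) ** n

def p_no_success_urn(n):        # without replacement: probability increases each draw
    prob = 1.0
    for i in range(n):
        prob *= (N - k - i) / (N - i)
    return prob

for n in (10, 50, 90):
    print(n, round(p_no_success_constant(n), 6), round(p_no_success_urn(n), 6))
```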
Or decreasing benefits, which is probably the case in the real world.
Edit: misread the previous comment, see below.
It seems very weird and unlikely to me that the system would go to the higher-energy state 100% of the time.
I think vibrational energy is neglected in the first paper; it would be implicitly accounted for in AIMD. Also, the higher-energy state could be the lower free-energy state: if the difference is big enough, the system could go there nearly 100% of the time.
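Rough numbers for the "big enough difference" part (illustrative temperature and gaps, just to show how fast the populations tilt):

```python
# Boltzmann populations of two states separated by a free-energy gap dG.
import math

kB_eV = 8.617e-5          # Boltzmann constant in eV/K
T = 300.0                 # K, assumed temperature for illustration

for dG in (0.05, 0.1, 0.2):                  # eV, hypothetical gaps
    ratio = math.exp(-dG / (kB_eV * T))      # population of the unfavoured state
    occupancy = 1.0 / (1.0 + ratio)          # fraction in the lower-free-energy state
    print(f"dG = {dG:.2f} eV -> {occupancy:.1%} in the lower free-energy state")
```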
Although a single simulation never takes the whole supercomputer, so if you have the whole machine to yourself and the calculations do not depend on each other, you can run many in parallel.
That's one simulation though. If you have to screen hundreds of candidate structures, and simulate every step of the process because you cannot run experiments, it becomes years of supercomputer time.
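To put made-up but not-crazy numbers on it (every figure below is an assumption, not something from the thread):

```python
# Illustrative arithmetic only; every number here is an assumption.
candidates    = 500        # structures to screen
steps         = 20         # process steps simulated per candidate
core_hours    = 200_000    # rough cost of one AIMD-quality simulation
machine_cores = 100_000    # a large (top-50-ish) machine, used exclusively

total_core_hours = candidates * steps * core_hours          # 2e9 core-hours
years = total_core_hours / machine_cores / (24 * 365)
print(f"{years:.1f} years of whole-machine time")           # ~2.3 years
```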
The point is that if the majority of the "cost of crime" is actually the cost of preventing potential crime, then it's not obvious at all that more crime prevention will help.
Sure, sometimes it's better to shift from private prevention (behavior change) to collective prevention (policing) at the margin, but not always.