dejb comments on Mini advent calendar of Xrisks: nuclear war - Less Wrong
This is a topic I frequently see misunderstood, and as a programmer who has built simple physics simulations I have some relevant expertise, so perhaps I should elaborate.
If you have a simple, linear system involving math that isn't too CPU-intensive, you can build an accurate computer simulation of it with relatively little testing. Your initial attempt will be wrong due to simple bugs, which you can probably detect just by comparing simulation output with a modest set of real examples.
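To make the easy case concrete, here is a minimal sketch of that kind of check, assuming a textbook linear system (Newtonian cooling) and a handful of measurements; the 'observed' numbers are invented purely for illustration:

```python
# Minimal sketch of validating a simple, linear simulation: Newton's law
# of cooling has a closed-form solution we can compare directly against
# a few real measurements. (The observations below are hypothetical.)

import math

def simulate_cooling(t, temp0=90.0, ambient=20.0, k=0.05):
    """Closed-form solution of the linear ODE dT/dt = -k * (T - ambient)."""
    return ambient + (temp0 - ambient) * math.exp(-k * t)

# Hypothetical measurements: (time in minutes, observed temperature in C)
observations = [(0, 90.0), (10, 62.5), (20, 45.8), (30, 35.7)]

for t, observed in observations:
    predicted = simulate_cooling(t)
    print(f"t={t:2d}  predicted={predicted:5.1f}  observed={observed:5.1f}  "
          f"error={predicted - observed:+5.1f}")
```

A few comparisons like this will catch sign errors, unit mix-ups, and similar simple bugs, which is about all that can go wrong in the linear case.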
But if you have a complex, non-linear system, or just one that's too big to simulate in complete detail, this is no longer the case. Getting a useful simulation then requires a lot of educated guesses about which factors to include and how to approximate the effects you can't calculate in detail. The probability of getting these guesses right the first time is essentially zero - you're lucky if the behavior of your initial model bears even a hazy resemblance to anything real, and it certainly isn't going to come within an order of magnitude of being correct.
The way you get to a useful model is through a repeated cycle of running the simulator, comparing the (wrong) results to reality, making an educated guess about what caused the difference, and trying again. With something relatively simple like, say, turbulent fluid dynamics, you might need a few hundred to a few thousand test runs to tweak your model enough that it generates accurate results over the domain of input parameters that you're interested in.
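A stripped-down version of that cycle, assuming a single uncertain parameter (a real model has hundreds, and no neat scalar target), might look like the following; the drag model and the 'measured' terminal velocity are invented for illustration:

```python
# Sketch of the calibration loop: run the model, compare with reality,
# nudge the uncertain parameter, repeat. Everything here is a toy; real
# calibration juggles many coupled parameters over many more runs.

def simulate_fall(drag, steps=600, dt=0.1, g=9.8):
    """Crude falling-object model with a velocity-proportional drag term."""
    v = 0.0
    for _ in range(steps):
        v += (g - drag * v) * dt  # explicit Euler integration
    return v  # velocity after steps * dt seconds, near terminal velocity

observed_terminal_velocity = 49.0  # hypothetical real-world measurement (m/s)

drag_guess = 1.0  # initial educated guess
for run in range(200):
    error = simulate_fall(drag_guess) - observed_terminal_velocity
    if abs(error) < 0.01:
        break
    drag_guess += 0.001 * error  # model too slow -> lower drag, and vice versa

print(f"calibrated drag = {drag_guess:.3f} after {run + 1} runs")
```

Even this toy takes dozens of runs to settle on one number; scale the parameter count up and the required run count climbs accordingly.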
If you can't run real-world experiments to generate the phenomena you're interested in, you might be able to substitute a huge data set of observations of natural events. Astronomy has had some success with this, for example. But you need a data set big enough to encompass a representative sample of all the possible behaviors of the system you're trying to simulate, or else you'll just get a 'simulator' that always predicts the few examples you fed it.
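That last failure mode is just overfitting, and it is easy to demonstrate with synthetic 'observations' (a noisy sine wave here, invented for illustration):

```python
# Sketch of overfitting a tiny data set: a flexible model can reproduce
# every observation exactly and still be useless anywhere else. The data
# is synthetic, purely for illustration.

import numpy as np

rng = np.random.default_rng(0)
x_obs = np.linspace(0, 3, 6)                    # only six observations
y_obs = np.sin(x_obs) + rng.normal(0, 0.05, 6)  # "reality" plus noise

# Six coefficients, six points: the fit passes through every observation,
# i.e. the 'simulator' always predicts the few examples you fed it.
coeffs = np.polyfit(x_obs, y_obs, deg=5)

for x in [1.5, 4.0, 6.0]:  # one point inside the data, two beyond it
    print(f"x={x}: model={np.polyval(coeffs, x):+9.2f}  truth={np.sin(x):+5.2f}")
```

Inside the observed range the fit looks fine; step outside it and the predictions are nonsense, precisely because the data never constrained that part of the model's behavior.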
So, can you see the problem with the nuclear winter simulations now? You can't have a nuclear war to test the simulation, and our historical data set of real climate changes doesn't include anything similar (and doesn't contain anywhere near as many data points as a simulator needs, anyway). But global climate is a couple of orders of magnitude more complex than your typical physics or chemistry sims, so the need for testing would be correspondingly greater.
The point non-programmers tend to miss here is that lack of testing doesn't just mean the model is a little off. It means the model has no connection at all to reality, and either outputs garbage or echoes whatever result the programmer told it to give. Any programmer who claims such a model means something is committing fraud, plain and simple.
This really is a pretty un-bayesian way of thinking - the idea that we should totally ignore incomplete evidence. And, by extension, that we should choose to believe an alternative hypothesis ('no nuclear winter') with even less evidence, merely because it is assumed for unstated reasons to be the 'default belief'.
An uncalibrated sim will typically give crazy results like 'increasing atmospheric CO2 by 1% raises surface temperatures by 300 degrees' or 'one large forest fire will trigger a permanent ice age'. If you see an uncalibrated sim giving results that seem even vaguely plausible, this means the programmer has tinkered with its internal mechanisms to make it give those results. Doing that is basically equivalent to just typing up the desired output by hand - it provides evidence about the beliefs of the programmer, but nothing else.
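To see how an untuned internal constant produces output that absurd, consider a toy feedback loop (emphatically not a climate model; the gain values are arbitrary guesses, which is the point):

```python
# Toy illustration of why unvalidated constants give crazy results: the
# output of a feedback loop is wildly sensitive to a gain term that, in
# an uncalibrated sim, has effectively just been guessed.

def equilibrium_response(forcing, feedback_gain, steps=1000):
    """Fixed-point iteration of response = forcing + gain * response."""
    response = 0.0
    for _ in range(steps):
        response = forcing + feedback_gain * response
    return response

print(equilibrium_response(1.0, 0.50))  # settles near 2.0: looks plausible
print(equilibrium_response(1.0, 1.01))  # gain guessed 2x too high: explodes
```

Whether the loop settles near 2 or runs off past two million depends entirely on a constant nobody has measured; only calibration against reality pins it down.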