
passive_fist comments on Open thread, Nov. 02 - Nov. 08, 2015 - Less Wrong Discussion

4 Post author: MrMind 02 November 2015 10:07AM




Comment author: passive_fist 03 November 2015 04:14:04AM 1 point [-]

Why, then, are you talking about the models' fit when answering the question of whether the "climate models can predict weather changes over long term periods" (emphasis mine)?

What other way is there? Building a time machine?

How else can you estimate the suitability of models in making predictions than testing their past predictions on current data?

Comment author: Salemicus 03 November 2015 03:58:01PM *  4 points [-]

One possible answer is to look at how the then-state-of-the-art models in (say) 1990, 1995, 2000, etc, predicted temperature changes going forwards.

The answer, in point of fact, is that they consistently predicted a considerably greater temperature rise than actually took place, although the actual temperature rise is just about within the error bars of most models.

Now, there are two plausible conclusions to this:

  • Those past mistakes have been appropriately corrected into today's models, so we don't need to worry too much about past failures.
  • This is like Paul Samuelson's economics textbook, which consistently (in editions published in the 50s, 60s, 70s and 80s) predicted that the Soviet Union would overtake the US economy in 25 years.
Comment author: passive_fist 03 November 2015 08:33:15PM 1 point [-]

One possible answer is to look at how the then-state-of-the-art models in (say) 1990, 1995, 2000, etc, predicted temperature changes going forwards.

It's not as simple as that. Most models give predictions that are conditional on input data to the models (real rate of CO2 production, etc.). To analyze the predictions from, say, a model developed in 1990, you need to feed the model input data from after 1990. Otherwise you get too wide an error margin in your prediction.
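The conditionality described here can be illustrated with a deliberately toy "model" (the sensitivity figure and the 2015 scenario concentration below are illustrative placeholders, not parameters of any real climate model): the same fixed 1990-vintage model produces different predictions depending on which emissions path you feed it, so evaluating it fairly requires the realized inputs, not the scenario its authors assumed.

```python
import math

# Toy illustration only -- NOT a real climate model. A "1990-vintage" model
# that maps CO2 concentration to warming via an assumed climate sensitivity.
SENSITIVITY_C = 3.0   # assumed warming (deg C) per doubling of CO2 (placeholder)
CO2_1990 = 354.0      # approximate atmospheric CO2 in 1990, ppm

def predicted_warming(co2_ppm):
    """Warming since 1990 the toy model predicts for a given CO2 level."""
    return SENSITIVITY_C * math.log(co2_ppm / CO2_1990, 2)

# The prediction is conditional on the emissions path that actually occurs:
scenario_2015 = 420.0   # hypothetical path the 1990 modelers might have assumed
observed_2015 = 401.0   # approximate observed ppm in 2015

print(predicted_warming(scenario_2015))  # warming under the assumed scenario
print(predicted_warming(observed_2015))  # warming given realized emissions
```

Comparing the second number (prediction given realized inputs) against the observed temperature record is what tests the model; comparing the first conflates model error with scenario error.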

The answer, in point of fact, is that they consistently predicted a considerably greater temperature rise than actually took place, although the actual temperature rise is just about within the error bars of most models.

True. As I said, this is definitely evidence for the suitability of the models, and certainly seems to run counter to the claim that "there is no evidence that climate models are valuable in predicting future climate trends."

This is like Paul Samuelson's economics textbook, which consistently (in editions published in the 50s, 60s, 70s and 80s) predicted that the Soviet Union would overtake the US economy in 25 years.

That's definitely a possibility, but it's reasonable to think that the mathematics and science involved in the climate models stand on a firmer basis than economic analysis, and definitely a firmer basis than Samuelson's analysis.

Comment author: Lumifer 03 November 2015 05:29:57PM 3 points [-]

What other way is there?

The usual plain-vanilla way is to use out-of-sample testing -- check the model on data that neither the model nor the researchers have seen before. It's common to set aside a portion of the data before starting the modeling process explicitly to serve as a final check after the model is done.

In cases where the stability of the underlying process is in doubt, there may be no good way other than waiting for a while and testing the (fixed in place) model on new data as it comes in.

The characteristics of the model's fit are not necessarily a good guide to the model's predictive capabilities. Overfitting is still depressingly common.
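A minimal sketch of the holdout procedure and the overfitting failure it catches, using made-up data (nothing here models climate; a 1-nearest-neighbour "memorizer" stands in for an overfit model, and a two-parameter linear fit for a parsimonious one):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y depends linearly on x, plus noise.
x = rng.uniform(0, 10, size=60)
y = 0.5 * x + rng.normal(scale=2.0, size=x.size)

# Set aside a holdout portion BEFORE any modeling, as described above.
x_train, y_train = x[:40], y[:40]
x_test, y_test = x[40:], y[40:]

def rmse(pred, actual):
    return float(np.sqrt(np.mean((pred - actual) ** 2)))

# Model A: simple linear fit (2 parameters).
a, b = np.polyfit(x_train, y_train, 1)
linear_train = rmse(a * x_train + b, y_train)
linear_test = rmse(a * x_test + b, y_test)

# Model B: 1-nearest-neighbour, which memorizes the training data exactly.
def nn_predict(queries):
    return np.array([y_train[np.argmin(np.abs(x_train - q))] for q in queries])

nn_train = rmse(nn_predict(x_train), y_train)  # exactly 0: a "perfect" fit
nn_test = rmse(nn_predict(x_test), y_test)     # the holdout exposes the overfit

print(f"linear: train {linear_train:.2f}, holdout {linear_test:.2f}")
print(f"1-NN:   train {nn_train:.2f}, holdout {nn_test:.2f}")
```

The memorizer's flawless in-sample fit says nothing about its predictive capability; only the held-out data reveals the difference between the two models.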