This reminds me of an article I read recently (and cannot find for the life of me) about the calibration of this type of model. Essentially, the author was pointing out that the curves fitted to past data to "train" these models frequently have more degrees of freedom than there are data points in the training set. For those of you who aren't familiar with curve-fitting, this means there are literally infinitely many curves that fit the data perfectly, so the probability that your algorithm lands on one that models the future with an acceptable degree of accuracy is small.
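To see what "more degrees of freedom than data points" means concretely, here's a small sketch (my own illustration, not from the article) using NumPy: a 10-parameter polynomial fitted to 5 points. Any vector in the null space of the design matrix can be added to the coefficients without changing the training fit, so there are infinitely many "perfect" fits that disagree everywhere else.

```python
import numpy as np

x = np.linspace(0, 1, 5)             # 5 "training" points
y = np.sin(2 * np.pi * x)            # underlying signal we pretend not to know

# 10-parameter polynomial: more degrees of freedom than data points
A = np.vander(x, 10, increasing=True)

# Minimum-norm least-squares solution: fits all 5 points exactly
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# A is 5x10, so its null space is nonempty; adding any null-space vector
# to the coefficients leaves the training predictions unchanged.
_, _, vt = np.linalg.svd(A)
null_vec = vt[-1]                    # one direction in the null space
coef2 = coef + 5.0 * null_vec        # a second, equally "perfect" fit

# Evaluate both fits between the training points
x_test = 0.5 * (x[:-1] + x[1:])
pred1 = np.vander(x_test, 10, increasing=True) @ coef
pred2 = np.vander(x_test, 10, increasing=True) @ coef2

print(np.allclose(A @ coef, y))      # both fit the training data exactly...
print(np.allclose(A @ coef2, y))
print(np.max(np.abs(pred1 - pred2)))  # ...but diverge off the training set
```

Both coefficient vectors have zero training error, yet they make different predictions at every point in between; the training data alone gives you no way to choose between them.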
I'll try to find the article again so I can link it.
EDIT: The article can be found here. It focuses on economic modeling, but the basic techniques are the same as those used in many other fields (including meteorology and climate science).
From Cafe Hayek (original): Two meteorologists have announced that they will stop using certain forecast methods, even though they've used them for 20 years.
There's a correction at the end of the article, too!