Note: In this blog post, I reference a number of blog posts and academic papers. Two caveats to these references: (a) I often reference them for a specific graph or calculation, and in many cases I've not even examined the rest of the post or paper, while in other cases I've examined the rest and might even consider it wrong, (b) even for the parts I do reference, I'm not claiming they are correct, just that they provide what seems like a reasonable example of an argument in that reference class.

Note 2: Please see this post of mine for more on the project, my sources, and potential sources for bias.

As part of a review of forecasting, I've been looking at weather and climate forecasting. I wrote one post on weather forecasting and another on the different time horizons for weather and climate forecasting. Now, I want to turn to long-range climate forecasting, for motivations described in this post of mine.

Climate forecasting is turning out to be a fairly tricky topic to look into, partly because of the inherent complexity of the task, and partly because of the politicization surrounding Anthropogenic Global Warming (AGW).

I decided to begin with a somewhat "outside view" approach: if you were simply given a time series of global temperatures, what sort of patterns would you see? What forecasts would you make for the next 100 years? The forecast can be judged against a no-change forecast, or against the forecasts put out by the widely used climate models.

Below is a chart of four temperature proxies since 1880, courtesy NASA:

Global Surface Temperature

The Hadley Centre dataset goes back to 1850. Here it is (note that the zero points on the temperature axis differ slightly, because the anomalies are computed relative to means of slightly different sets of numbers; since we are interested only in the trend, that does not matter) (source):

HADCRUT4

Eyeballing the data, there does seem to be a secular upward trend in temperature. Perhaps the naivest way of estimating the rate of change is to compute (final temperature - initial temperature)/(time interval). Using that method, we get a temperature increase of about 0.54 degrees Celsius per century.
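For concreteness, here's a minimal sketch of that endpoint calculation. The temperature values are randomly generated placeholders standing in for a real anomaly series such as HadCRUT4, so the printed number is illustrative only:

```python
import numpy as np

# Placeholder annual anomalies (degrees C), one per year from 1850 to 2013;
# a real analysis would load these from, e.g., the HadCRUT4 dataset.
rng = np.random.default_rng(0)
years = np.arange(1850, 2014)
temps = 0.005 * (years - 1850) + rng.normal(0, 0.1, len(years))

# Naive rate: (final - initial) / elapsed time, scaled to degrees per century
rate_per_year = (temps[-1] - temps[0]) / (years[-1] - years[0])
print(f"Endpoint trend: {100 * rate_per_year:.2f} degrees C per century")
```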

But just using the final and initial temperatures overweights those two values and ignores the information in all the other readings. A somewhat more sophisticated (though still fairly crude) approach is a linear regression model. I was wondering whether I should download the data and run a linear regression myself, but I found a picture of the regression online (source):


Linear regression for temperatures

Note that the regression line starts off a little lower than the actual temperature in 1850, and also ends a little lower than the actual temperature in the 2000s. The rate of increase is somewhat lower here (about 0.4 degrees Celsius per century). The regression gives a lower rate than the endpoint method because temperature growth since the 1970s has been well above trend, and those well-above-trend temperatures get more weight when we use only the final temperature than when we fit a regression line to all the data.
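If you did want to download the data and run the regression yourself, a bare-bones version might look like this (again with placeholder data standing in for the real series):

```python
import numpy as np

# Placeholder anomalies as before; a real run would use the HadCRUT4 values.
rng = np.random.default_rng(0)
years = np.arange(1850, 2014)
temps = 0.005 * (years - 1850) + rng.normal(0, 0.1, len(years))

# Ordinary least squares fit of a straight line to the whole series.
# Every year contributes to the slope, so a few unusually warm years at
# the end move this estimate much less than they move the endpoint method.
slope, intercept = np.polyfit(years, temps, 1)
print(f"Regression trend: {100 * slope:.2f} degrees C per century")
```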

Linear plus periodic?

Another plausible story that seems to emerge from eyeballing the data is that the temperature trend is the sum of an approximately linear trend and a periodic component, given by something like a sine wave. I found one analysis of this sort by DocMartyn on Judith Curry's blog, and another in a paper by Syun Akasofu (note: there seem to be some problems with both analyses; I am linking to them mainly as simple examples of the rough nature of this sort of analysis, not as something to be taken very seriously). Both do more complicated things than look purely at temperature trends. DocMartyn explicitly introduces carbon dioxide as the source of the linear-ish trend, while Akasofu identifies "recovery from the Little Ice Age" as the source of the linear-ish trend and the Pacific Decadal Oscillation as the source of the sinusoidal trend (though as far as I can make out, one could use the same graph and argue that the linear trend is driven by carbon dioxide).
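To make the shape of such a model concrete, here is a rough sketch of fitting a linear-plus-sinusoid model. The data are synthetic, and the initial guesses (a roughly 60-year period and ~0.15 degree amplitude) are my own illustrative choices, not parameters from either analysis:

```python
import numpy as np
from scipy.optimize import curve_fit

def linear_plus_sine(t, a, b, amp, period, phase):
    """Linear trend plus a single sinusoidal oscillation."""
    return a + b * t + amp * np.sin(2 * np.pi * t / period + phase)

# Synthetic series with a known linear trend and a ~65-year oscillation
rng = np.random.default_rng(0)
t = np.arange(164)  # years since 1850
temps = 0.005 * t + 0.15 * np.sin(2 * np.pi * t / 65) + rng.normal(0, 0.05, len(t))

p0 = [0.0, 0.005, 0.15, 60.0, 0.0]  # rough initial guesses
params, _ = curve_fit(linear_plus_sine, t, temps, p0=p0)
print(f"Fitted trend: {100 * params[1]:.2f} C/century, period: {params[3]:.0f} years")
```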

Here's DocMartyn's forecast:

DocMartyn's forecast

Here's Akasofu's picture:

Akasofu's forecast

Autocorrelation and random walks

Simple linear regression is unsuitable for forecasting time series that exhibit autocorrelation: the value in any given year is correlated with the value in the previous year, independent of any long-term trend. As Judith Curry explains here, autocorrelation can create an illusion of trends even when there aren't any. (This may seem counterintuitive: if only temperature levels, and not temperature trends, exhibit autocorrelation, i.e., if temperature is basically a random walk, then why should we see spurious trends? Read the whole post.) Autocorrelation can produce not only spurious linear-looking trends but also spurious cyclical ones (see here).
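To see the effect numerically, one can simulate trendless random walks and check how often an ordinary least squares fit reports a substantial slope. Here's a quick sketch:

```python
import numpy as np

# Simulate many random walks (cumulative sums of white noise) and fit a
# straight line to each one. By construction there is no true trend.
rng = np.random.default_rng(0)
n_years, n_sims = 150, 1000
slopes = []
for _ in range(n_sims):
    walk = np.cumsum(rng.normal(0, 0.1, n_years))
    slopes.append(np.polyfit(np.arange(n_years), walk, 1)[0])
slopes = np.array(slopes)

# A large fraction of the fitted slopes look like sizable "trends"
# even though the underlying process is trendless.
print(f"Fraction with |trend| above 0.5 C/century: {np.mean(np.abs(slopes) > 0.005):.2f}")
```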

Unfortunately, I don't have a good understanding of the statistical tools (such as ARIMA) that one would use to resolve such questions. I am aware of a few papers that have tried to demonstrate that, despite the appearance of a linear trend above, the temperature series is more consistent with a random walk model. See, for instance, this paper by Terence Mills and the literature it references, much of which seems to come down against a clear linear trend. Mills also published an ungated paper covering similar ground in the Journal of Cosmology here, but the Journal of Cosmology is not a high-status journal, so publication there should not be treated as giving the paper more authority than a blog post.
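For what it's worth, one standard tool for distinguishing a random walk from a trend-stationary series is a unit-root test such as the augmented Dickey-Fuller test. I'm not claiming this is the method Mills uses; it's just the textbook starting point, sketched here on placeholder data:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

# Placeholder series with a genuine linear trend plus noise
rng = np.random.default_rng(0)
years = np.arange(1850, 2014)
temps = 0.005 * (years - 1850) + rng.normal(0, 0.1, len(years))

# Augmented Dickey-Fuller test with a constant and linear trend ('ct').
# The null hypothesis is a unit root (random walk); a small p-value
# favors a trend-stationary series instead.
adf_stat, p_value = adfuller(temps, regression='ct')[:2]
print(f"ADF statistic: {adf_stat:.2f}, p-value: {p_value:.3f}")
```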

Linear increase is consistent with very simple assumptions about carbon dioxide concentrations and the anthropogenic global warming hypothesis

Here's a simple model that would lead to temperature increases being linear over time:

  • The only secular trend in temperature occurs from radiative forcing due to a change in carbon dioxide concentration.
  • The additive increase in temperature is proportional to the logarithm of the multiplicative increase in atmospheric carbon dioxide concentration (Wikipedia).
  • About 50% of carbon dioxide emissions from burning fossil fuels are retained by the atmosphere. The magnitude of carbon dioxide emissions is proportional to world GDP, which is growing exponentially, so emissions are growing exponentially, and therefore the total carbon dioxide concentration in the atmosphere is also growing exponentially.

Apply a logarithm to an exponential, and you get a linear trend line in temperature.
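Spelled out (a sketch of the arithmetic, with λ a sensitivity parameter, C(t) the concentration at time t, and C₀ the baseline concentration):

```latex
\Delta T(t) = \lambda \ln\frac{C(t)}{C_0}, \qquad
C(t) = C_0 e^{kt}
\quad\Longrightarrow\quad
\Delta T(t) = \lambda \ln e^{kt} = \lambda k t
```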

(As we'll see, while this looks nice on paper, actual carbon dioxide growth hasn't been exponential, and actual temperature growth has been pretty far from linear. But at least it offers some prima facie plausibility to the idea of fitting a straight line).

Turning on the heat: the time series of carbon dioxide concentrations

So how have carbon dioxide concentrations been growing? Since 1958, the Mauna Loa observatory in Hawaii has been tracking atmospheric carbon dioxide concentrations. The plot of the concentrations is termed the Keeling curve. Here's what it looks like (source: Wikipedia):

Keeling curve

The growth is sufficiently slow that the distinction between linear, quadratic, and exponential isn't visible to the naked eye, but if you look carefully, you'll see that growth from 1960 to 1990 was about 1 ppm/year, whereas growth from 1990 to 2010 was about 2 ppm/year. Unfortunately, the Mauna Loa data go back only to 1958. But there are other data sources. In a blog post attempting to compute equilibrium climate sensitivity, Jeff L. finds that the 1832-1978 Law Dome dataset does a good job of matching atmospheric carbon dioxide concentration values from the Mauna Loa dataset over the period of overlap (1958-1978), so he splices the two datasets for his analysis (note: commenters on the post pointed out many problems with it, and while I don't know enough to evaluate it myself, my limited knowledge suggests that the criticisms are spot on; however, I'm using the post just for the carbon dioxide graph):

Law Dome

Note that it's fairly well established that carbon dioxide concentrations in the 18th century, and probably for a few centuries before that, were about 280 ppm. So even if the specifics of the Law Dome dataset aren't reliable, the broad shape of the curve should be similar. Notice that growth from 1832 to around 1950 was fairly slow. In fact, even from 1900 to 1940, the fastest-growing stretch of that period, carbon dioxide concentrations grew by only 15 ppm in 40 years. From what I can judge, there seems to have been an abrupt shift around 1950, to a rate of about 1 ppm/year. Neither a linear nor an exponential curve explains such a shift. And as noted earlier, the rate of growth seems to have gone up again around 1990, to about 2 ppm/year. The reason for the shift around 1950 is probably post-World War II global economic growth, including industrialization in the newly independent colonies, and the reason for the shift around 1990 is probably the rapid take-off of economic growth in India, combined with the acceleration of economic growth in China.
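As a sanity check on those eyeballed rates, one could estimate windowed growth rates from the spliced series with least-squares slopes. The sketch below uses stylized placeholder concentrations that roughly mimic the piecewise behavior described above:

```python
import numpy as np

# Stylized placeholder CO2 concentrations (ppm), standing in for a spliced
# Law Dome + Mauna Loa series: slow growth to 1950, ~1 ppm/yr to 1990,
# ~2 ppm/yr afterwards.
years = np.arange(1900, 2011)
co2 = np.concatenate([
    np.linspace(296, 311, 50),   # 1900-1949
    np.linspace(311, 354, 41),   # 1950-1990
    np.linspace(356, 392, 20),   # 1991-2010
])

# Average growth rate (ppm/year) in each window, via least-squares slopes
for lo, hi in [(1900, 1950), (1950, 1990), (1990, 2010)]:
    mask = (years >= lo) & (years <= hi)
    slope = np.polyfit(years[mask], co2[mask], 1)[0]
    print(f"{lo}-{hi}: {slope:.2f} ppm/year")
```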

To the extent that the AGW hypothesis is true, i.e., the main source of long-term temperature trends is radiative forcing based on changes to carbon dioxide concentrations, perhaps looking for a linear trend isn't advisable, because of the significant changes to the rate of carbon dioxide growth over time (specifically, the fact that carbon dioxide concentrations don't grow exponentially, but have historically exhibited a piecewise growth pattern). So perhaps it makes sense to directly regress temperature against the logarithm of carbon dioxide concentration? Two such exercises were linked above: DocMartyn on Judith Curry's blog, and a blog post attempting to compute equilibrium climate sensitivity by Jeff L. Both seem like decent first passes but are also problematic in many ways.

One of the main problems is that the temperature response to changes in carbon dioxide concentration doesn't all occur immediately. So the memoryless regression approach used by Jeff L., which basically just asks how correlated temperature in a given year is with carbon dioxide concentrations in that same year, fails to account for the fact that temperature in a given year may be influenced by carbon dioxide concentrations over the preceding years. Basically, there could be a lag between the increase in carbon dioxide concentrations and the full increase in temperatures.
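Here's a minimal sketch of what a lagged variant of such a regression might look like. The synthetic data, the lag values, and the per-doubling conversion (multiplying the fitted slope by ln 2) are my own illustrative choices, not anything taken from Jeff L.'s post:

```python
import numpy as np

# Synthetic aligned annual series; a real exercise would use observed
# temperatures and the spliced CO2 record discussed above.
rng = np.random.default_rng(0)
years = np.arange(1900, 2011)
co2 = 280 * np.exp(0.004 * (years - 1900))   # stylized concentrations (ppm)
temps = 1.8 * np.log(co2 / 280) + rng.normal(0, 0.1, len(years))

def fit_sensitivity(lag):
    """Regress temperature on ln(CO2) from `lag` years earlier."""
    x = np.log(co2[:len(co2) - lag] / 280)
    y = temps[lag:]
    slope = np.polyfit(x, y, 1)[0]
    return slope * np.log(2)  # convert to warming per CO2 doubling

for lag in [0, 5, 10]:
    print(f"lag {lag:2d} yr: {fit_sensitivity(lag):.2f} C per doubling")
```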

Still, the prima facie story doesn't seem to bode well for the AGW hypothesis:

  • Carbon dioxide concentrations have not only been rising, they've been rising at an increasing rate, with notable changes in the rate of increase around 1950 and then again around 1990.
  • Temperature exhibits fairly different trends: it was roughly flat from 1945 to 1978, rose very quickly from about 1978 to 1998, and has been roughly flat (with a very minor warming trend) from 1998 to the present.

So, even a story of carbon dioxide with lag doesn't provide a good fit for the observed temperature trend.

There are a few different ways of resolving this. One is to return to the point made earlier: the actual temperature may be the sum of a linear trend (driven by greenhouse gas forcing) plus a bunch of periodic trends, such as those driven by the PDO, the AMO (Atlantic Multidecadal Oscillation), and solar cycles. This sort of story was described by DocMartyn on Judith Curry's blog and in the paper by Syun Akasofu referenced above.

Another common explanation is that the 1945-1978 non-warming (and, according to some datasets, moderate cooling) resulted from an increased concentration of aerosols, which blocked sunlight and thereby canceled the warming effect of carbon dioxide. Indeed, in the early 1970s there were concerns about global cooling due to aerosols, but there were also a few voices noting that over the somewhat longer run, as aerosol concentrations were brought under control, the greenhouse effect would dominate and we'd see rapid temperature increases. Given the way temperatures unfolded in the 1980s and 1990s, the people who were calling for global warming in the 1970s seemed unusually prescient. But the pause (or at any rate, significant slowdown) in warming after 1998, despite the fact that the rate of carbon dioxide emissions has been accelerating, suggests that there's more to the story than just aerosols and carbon dioxide.

UPDATE: Some people have questioned whether there was a pause or slowdown at all, and have suggested that using 1998 as a start year is misguided because it was an unusually hot year due to a strong El Nino. 1998 was indeed unusually hot, and the lack of warming relative to 1998 for the next few years was explainable in terms of 1998 being an anomaly. But the time period since then is sufficiently long that the slowness of warming can't just be explained by 1998 being very warm. For a list of the range of explanations offered for the pause in warming, see here.

Should we start using actual climate science now?

The discussion above was very light on both climate science theory and highbrow statistical theory. We just looked at global temperature and carbon dioxide trends, eyeballed the graphs, and tried to reason about what sorts of growth patterns were present. We didn't talk about what the theory says, what independent lines of evidence there are for it, what other indicators (such as regional temperatures) might be used to test the theory, or what historical (pre-1800) data can tell us.

A more serious analysis would consider all of these. But here is what I believe: if a more complicated model cannot consistently beat simple baselines such as persistence, random walk, random walk with drift, or simple linear regression, then the model is not ready for prime time as a forecasting tool. There may still be insights to be gleaned from the model, but its ability to forecast the future is not one of its selling points.
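As a concrete illustration of that benchmark, one can score any candidate forecast against trivial baselines on held-out years. Everything below is placeholder data, with a stand-in "model" whose forecasts are just the drift baseline plus noise:

```python
import numpy as np

# Placeholder annual series: a small trend plus a random-walk component
rng = np.random.default_rng(1)
temps = 0.005 * np.arange(150) + np.cumsum(rng.normal(0, 0.05, 150))
train, test = temps[:120], temps[120:]

# Trivial baselines
persistence = np.full(len(test), train[-1])  # no-change forecast
drift = train[-1] + np.arange(1, len(test) + 1) * np.mean(np.diff(train))

# Stand-in for "a more complicated model"
model = drift + rng.normal(0, 0.05, len(test))

for name, fc in [("persistence", persistence), ("drift", drift), ("model", model)]:
    print(f"{name:12s} MAE: {np.mean(np.abs(fc - test)):.3f}")
```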

The history of climate modeling so far suggests that such success has been elusive (see this draft paper by Kesten C. Green, for instance). In hindsight from a 1990s vantage point, those in the 1970s who bucked the "global cooling" trend and argued that the greenhouse effect would dominate seemed very prescient. But the considerable slowdown of warming starting around 1998, even as carbon dioxide concentrations grew rapidly, took them (and many others) by surprise. We should keep in mind that financial markets offer many stories of trading strategies that appeared successful for long stretches of time, far exceeding what chance alone would suggest, and then suddenly stopped working. The financial markets are different from the climate (in that there are humans competing and eating away at each other's strategies), but the problem remains: something (like "the earth is warming") may have been true over some decades for reasons quite different from those posited by the people who successfully predicted it.

Note that even without the ability to make accurate or useful climate forecasts, many tenets of the AGW hypothesis may hold, and may usefully inform our understanding of the future. For instance, it could be that the cyclic trends and sources of random variation are bigger than we thought, but that the part of the temperature increase due to rising carbon dioxide concentrations (measured using the transient climate response or the equilibrium climate sensitivity) is still quite large. That basically means we would see (large increase) + (large variation). In that case, the large increase still matters a lot, but it would be hard to detect through climate forecasting, and hard to use to make better climate forecasts. But if that's the case, then it's important to be all the more sure of the other lines of evidence being used to arrive at the equilibrium climate sensitivity estimate. More on this later.

Critique of insularity

I want to briefly mention a critique offered by forecasting experts J. Scott Armstrong and Kesten Green (I mentioned both of them in my post on general-purpose forecasting and the associated community). Their Global Warming Audit (PDF summary, website with many resources) looks at many climate forecasting exercises from the outside view, and finds that the climate forecasters pay little attention to general forecasting principles. One might detect a bit of a self-serving element here: Armstrong isn't happy that the climate forecasters are engaging in such a big and monumental exercise without consulting him or referring to his work, and an uncharitable reading is that he feels slighted at being ignored. On the other hand, if you believe that the forecasting community has come up with valuable insights, the claim that climate forecasters didn't even consider those insights is a fairly powerful criticism. (Things may have changed somewhat since Armstrong and Green originally published their critique.) Broadly, I agree with some of Armstrong and Green's main points, but I think their critique goes overboard in some ways (to quite an extent, I agree with Nate Silver's treatment of their critique in Chapter 12 of The Signal and the Noise). But more on that later. Also, I don't know how representative Armstrong and Green are of the forecasting community in their view of the state of climate forecasting.

I have also heard anecdotal evidence of similar critiques of insularity from statisticians, geologists, and weather forecasters. In each case, the claim has been that the work in climate science relied on methods and insights better developed in the other disciplines, but the climate scientists did not adequately consult experts in those domains, and as a result, made elementary errors (even though these errors may not have affected their final conclusions). I currently don't have a clear picture of just how widespread this criticism is, and how well-justified it is. I'll be discussing it more in future posts, not so much because it is directly important but because it gives us some idea of how authoritative to consider the statements of climate scientists in domains where direct verification or object-level engagement is difficult.

Looking for feedback

Since I'm quite new to climate science and (largely, though not completely) new to statistical analysis, it's quite possible that I made some elementary errors above. Corrections would be appreciated.

It should be noted that when I say a particular work has problems, that is not a definitive statement that the work is false. Rather, it's a statement of my impression, based on a cursory analysis, of how much credibility I associate with that work. In many cases, I'm not qualified enough to offer a critique with high confidence.

Comments (11)

What about sea temperatures? From what I've seen, including those fills in the missing heat.

Thanks for the pointer. I'll look into this and discuss it in more detail in my subsequent post about the mechanisms and theories involved.

I took a look at the deep ocean. It seems like one of the more promising theories of the warming pause. I'm not in a position to have a personal opinion on whether it is the correct theory. I'll probably touch on this more in one of my forthcoming posts.

The history of climate modeling so far suggests that such success has been elusive (see this draft paper by Kesten C. Green, for instance). In hindsight from a 1990s vantage point, those in the 1970s who bucked the "global cooling" trend and argued that the greenhouse effect would dominate seemed very prescient. But the considerable slowdown of warming starting around 1998, even as carbon dioxide concentrations grew rapidly, took them (and many others) by surprise.

First of all, there doesn't seem to have been a "global cooling trend" among scientists in the 1970s - if there was any kind of consensus, it's that warming was more likely than cooling. (Or did I just misunderstand what you meant by "global cooling trend"?)

Second, for the lack of warming since 1998, isn't that already explained by El Niño cycles? It peaked in '98, and was one of the strongest ever recorded. It makes later data look less impressive by comparison. I haven't followed any of the links though so I don't know if this was taken into account in the "no warming since '98" stories.

Then again, if this wasn't predicted by folks at the time then it would count as evidence against the reliability of models. How much did we know about El Niño then?

Second, for the lack of warming since 1998, isn't that already explained by El Niño cycles? It peaked in '98, and was one of the strongest ever recorded. It makes later data look less impressive by comparison. I haven't followed any of the links though so I don't know if this was taken into account in the "no warming since '98" stories.

Yes, 1998 was an unusually warm year due to the El Nino, and that would have made the next few years look less warm by comparison, but it's not enough to explain the roughly 15 years of fairly slow warming since then.

The pause in warming is actually a widely acknowledged issue and many papers have been published about it, see for instance http://fabiusmaximus.com/2014/01/17/climate-change-global-warming-62141/

Also, note that El Nino is a relatively short-lived (seasonal-to-interannual) phenomenon. At the decadal level, the things that matter are probably the PDO, the AMO, and solar activity, in addition to greenhouse gas forcing.

(There are some claims about some kind of relationship between El Nino frequency and PDO phase, but I wasn't really able to get a good understanding of the overall state of current research).

First of all, there doesn't seem to have been a "global cooling trend" among scientists in the 1970s - if there was any kind of consensus, it's that warming was more likely than cooling. (Or did I just misunderstand what you meant by "global cooling trend"?)

It's true that the scientific literature had already started moving in the direction of warming, but my understanding is that the popular/mainstream impression of the science (which ran a few years behind) was still centered on global cooling. It was nowhere close to the level of agreement that we see on global warming today, but it was a relatively mainstream and apparently well-founded explanation of events at the time. The academic balance appears to have started shifting in the 1970s, and the balance in popular circles took a few years to catch up.

Quote:

In July 1971, Stephen Schneider, a young American climate researcher then at NASA’s Goddard Space Flight Centre in New York, made headlines in the New York Times when he warned of a coming cooling that could “trigger an ice age”. Soon after, George Kukla, a respected climatologist from the Czech Academy of Sciences, warned on TV in the US that “the ice age is due now any time”. The US National Academy of Sciences reported “a finite probability that a serious worldwide cooling could befall the Earth within the next 100 years”. As a hint of the horrors in store, weird weather in Africa led to a drought in the Sahel that starved millions. Today we tend to blame global warming for African droughts; back then, many blamed the cold.

Climate scientists called for action to halt the cooling. They included Fred Singer, first director of the US National Weather Satellite Service and today a well-known contrarian on global warming. Hubert Lamb, then in the process of establishing the Climatic Research Unit at the University of East Anglia, was of a similar view. The then editor of New Scientist magazine, Nigel Calder, adopted the same cause, making a TV programme called The Weather Machine that featured Lamb saying: “We should be preparing ourselves for a long period of mainly colder seasons… The little ice age lasted more than 300 years.” And Newsweek gave a cover feature spot to an analysis of “the cooling world” written by science journalist, and sometime New Scientist freelance, Peter Gwynne.

Advisers to the Nixon administration in Washington DC proposed putting giant mirrors into orbit to direct more sunlight onto Earth. Australians proposed painting their coastline black to raise temperatures. Others suggested sprinkling Himalayan glaciers with soot to absorb heat and maintain the ice-melt that feeds the region’s rivers.

What prompted this panic? Three decades of evident, if mild, cooling had set the scene. But there was also genuine concern among climate scientists based on predictions of both natural and man-made climate change. For one thing, the atmosphere was becoming dustier and filling with fine, light-scattering particles that were shading the planet’s surface and, some suspected, causing the cooling. Reid Bryson of the University of Wisconsin-Madison argued that dust storms caused by farms spreading into more arid lands were mostly to blame. Others blamed urban smogs.

Schneider tried to calculate the likely cooling effect of this man-made air pollution. He compared it with the possible warming effect of carbon dioxide emissions, which it was by then clear were also accumulating in the atmosphere. In Schneider’s early calculations, published in Science in 1971, the cooling effect was dominant. He said dust and sulphurous smog particles might have doubled since 1900 and could double again in the coming 50 years. Even allowing for warming from carbon dioxide, this could still mean a drop in global temperatures of 3.5C, which, “if sustained over a period of several years… is believed to be sufficient to trigger an ice age”.

At the same time, research into the history and timing of past ice ages had found that there were many more than the four originally guessed at, their appearance driven by regular planetary wobbles. Worse, it was now clear that ice ages were the norm rather than the exception. According to Kukla, the most recent interval between ice ages appeared to have lasted only 5,000 years. Our present interglacial had already lasted 10,000 years. An ice age was long overdue. Perhaps pollution was already triggering its onset.

The early 1970s also saw the first analysis of Greenland ice cores, and with it the suggestion that climate could change very fast. The last ice age may have taken hold within as little as a century. So the cooling in the mid-20th century might not have been a short-term blip but the start of a rapid slide into the next global freeze.

Some climate scientists say today that the fad for cooling was a brief interlude propagated by a few renegade researchers, or even that the story is a myth invented by today’s climate sceptics. Not so. There was, as Thomas Peterson of the National Oceanic and Atmospheric Administration (NOAA) showed in a detailed analysis in 2008, no consensus on global cooling. But equally, there was good science behind the fears.

So why did Schneider and his fellow ice warriors get the prognosis so wrong? One reason is that some of the calculations published with great fanfare were simply incorrect. Soon after his 1971 paper came out, Schneider realised he had greatly overestimated the future cooling effect from human-made aerosols. He had assumed that the increased concentrations of particles that he had measured in the air applied globally. They did not; they related only to small areas close to their source. Moreover, much of the particle load in the atmosphere turned out to be natural, so that even if emissions from human sources doubled in the coming 50 years, their effect would be much smaller than he had calculated. Schneider also realised that he had underestimated the likely warming effect of carbon dioxide: it would be three times as great as he first calculated. When he redid the maths, the balance between warming and cooling now tipped strongly towards warming. In 1974, he published a retraction of his earlier prognosis – “just like honest scientists are supposed to do”, he says today.

The science of ice ages has also advanced since. The planetary wobbles that periodically tip the world into ice ages are not identical, so some interglacial periods last longer than others. Good theoretical work now shows that the current interglacial is likely to be unusually long. Most climate scientists now agree that the cold decades from the 1940s to 1970s had little to do with either man-made pollution or planetary wobbles. The mid-century cooling was mostly associated with two natural phenomena: first the eruption of a cluster of medium-sized volcanoes that pumped sunlight-scattering sulphate particles into the upper air, and second ocean oscillations such as the Pacific Decadal Oscillation, a kind of slow-motion El Niño that moved heat out of the atmosphere and into the oceans.

Pearce, Fred (2012-10-14). The Climate Files: The battle for the truth about global warming (Kindle Locations 428-472). Kindle Edition.

George Kukla, a respected climatologist from the Czech Academy of Sciences, warned on TV in the US that “the ice age is due now any time”.

I think this is emblematic of how the story went. Kukla was a paleoclimatologist - he studied the cycles of ice ages, and was pointing out that we're overdue for an ice age. We might read something like "due now any time" and think "oh god, let's stock the ice age shelter," but the scale of ice age cycles is tens of thousands of years - "any time" to a paleoclimatologist means "next thousand years, maybe."

Any news stories forecasting an ice age within the lifetime of anyone alive were about as scientifically sound as the movie The Core's treatment of the cycles of Earth's magnetic field.

If the models depend on factors which cannot be reliably forecast (e.g., the "PDO, AMO, and solar cycles" above), then they are a bit of a fake explanation, and you can't use those factors as reliable inputs to a forecast model. Would it be reasonable to use Akasofu's sine-wave extrapolation of the multi-decadal oscillation in light of the prior two observed "cycles"?

Also, the Pacific Decadal Oscillation and Atlantic Multidecadal Oscillation indices are measures of the response of the system, and treating them as drivers of the system smuggles some of the dependent response variables into the supposedly independent predictor variables.

Akasofu's picture:

Is the lower red dotted line also a prediction made in 2000, like the IPCC one, or is it an after-the-fact reconstruction?

If it's a prediction, it would have predicted the current warming pause, which would be impressive. However, I get the impression it wasn't a prediction of the future.

The paper was submitted and accepted in 2013, and Akasofu was presumably aware of the temperatures from 2000 to 2013. So even if he didn't use that data explicitly in fitting the model parameters, it could well have influenced his choice of model, etc.

The real test of any of these models will be based on data in coming years. In these sorts of exercises, I think it's best not to be impressed unless the model predicted something we can be sure the authors had no measurements of when the model was constructed and its parameters tuned.