Prelude: Climate change, in particular the question of anthropogenic global warming (AGW), is both an intellectually complex and a politically loaded topic. Politics has been called the mind-killer here. For a mix of both reasons (the intellectual complexity and the political loadedness), I hope to approach the issue in steps: I'll first lay out my (probably quite flawed, but hopefully still broadly correct) understanding of the scientific questions, and in subsequent posts, I'll tackle some of the trickier and more controversial questions. I'd appreciate any error corrections -- they'll help improve the accuracy of my subsequent posts.

In a previous post, I discussed weather forecasting through numerical weather simulation. With numerical weather simulation, we first construct a system of equations, based on the laws of physics, that describes the evolution of the weather system. Then, we discretize the system in space and time (we break the spatial region into a grid and break time into discrete time steps). We compute the evolution of the discretized system numerically. We tackle uncertainty in the measurements by computing several alternative scenarios and assigning probabilities to them.
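To make the recipe concrete, here is a minimal sketch (in Python) of the three ingredients: a spatial grid, discrete time steps, and an ensemble of perturbed initial conditions. It uses a toy one-dimensional advection equation, nothing resembling a real weather model, and every number in it is made up for illustration.

```python
# A toy illustration (not a real weather model): advect a temperature-like
# field on a 1-D periodic grid with a simple finite-difference scheme, and
# run a small ensemble of perturbed initial conditions to represent
# uncertainty in the measurements.
import numpy as np

def step(field, wind_speed=1.0, dx=1.0, dt=0.4):
    """One upwind finite-difference time step of simple advection."""
    # upwind scheme (assumes wind_speed > 0); np.roll gives periodic boundaries
    return field - wind_speed * dt / dx * (field - np.roll(field, 1))

def run_forecast(initial_field, n_steps=50):
    field = initial_field.copy()
    for _ in range(n_steps):
        field = step(field)
    return field

rng = np.random.default_rng(0)
n_grid = 100
truth = np.sin(2 * np.pi * np.arange(n_grid) / n_grid)  # "observed" initial state

# Ensemble: perturb the initial conditions to reflect measurement uncertainty,
# then treat the spread of the resulting forecasts as a rough probability range.
ensemble = np.array([run_forecast(truth + 0.05 * rng.standard_normal(n_grid))
                     for _ in range(20)])
print("forecast mean at grid point 0:  ", ensemble[:, 0].mean())
print("forecast spread at grid point 0:", ensemble[:, 0].std())
```

The spread of the ensemble members at a given grid point is a crude stand-in for the probabilistic forecasts that real ensemble systems produce.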

Is this the way we predict long-term climate? Sort of, but not quite. The equations describing the evolution of the system are the same for weather and climate, and the only thing that's different in principle is the longer timescale. However, some mechanisms matter a lot in the short term, and others matter more in the long term.

Six time horizons for weather forecasting

There are qualitative differences between the challenges of forecasting for different time horizons. The set of time horizons spans a continuum, but to simplify the discussion, I'll identify six different types of time horizons:

  • The very near future, i.e., the next half an hour to 2 hours. Weather forecasting for this time frame is sometimes called nowcasting.
  • The next 1-2 days. This is sometimes called short-range weather forecasting and is generally quite reliable.
  • The period ranging 3-14 days from the present. This is short-to-medium-range weather forecasting. The weather forecasts for up to a week show forecast skill (relative to the benchmarks of persistence and climatology), but the 7-14 day period is still being worked on, and forecast skill here is relatively small. Naive numerical weather simulations often show negative forecast skill, i.e., they do worse than climatology. However, multimember and multimodel ensemble forecasting can beat climatology by a small margin.
  • Seasonal-to-interannual (SI) forecasting. This involves forecasting the seasons in the coming year and the year after that. Predictions are generally vague, and are of the form "the average temperature this summer will be 0.1 degrees Celsius hotter than the historical average." This straddles the line between weather and climate forecasting: the numerical weather simulation methods used for short-range and medium-range weather forecasting lose their skill at this range, and the oceans start mattering more than the atmosphere.
  • Decadal forecasting. This involves forecasting over a time period ranging from a few years to a few decades into the future.
  • Centennial forecasting. This involves forecasting over the next century.

Three sources of uncertainty

NASA scientist and Real Climate blogger Gavin Schmidt identifies three sources of uncertainty in climate forecasting, as described by Nate Silver in Chapter 12 of The Signal and the Noise:

  • Initial condition uncertainty: This form of uncertainty dominates short-term weather forecasts (though not necessarily the very short term weather forecasts; it seems to matter the most for intervals where numerical weather prediction gets too uncertain but long-run equilibrating factors haven't kicked in). Over timescales of several years, this form of uncertainty is not influential.
  • Scenario uncertainty: This is uncertainty that arises from lack of knowledge of how some variable (such as carbon dioxide levels in the atmosphere, or levels of solar radiation, or aerosol levels in the atmosphere, or land use patterns) will change over time. Scenario uncertainty rises over time, i.e., scenario uncertainty plagues long-run climate forecasts far more than it plagues short-run climate forecasts.
  • Structural uncertainty: This is uncertainty that is inherent to the climate models themselves. Structural uncertainty is problematic at all time scales to a roughly similar degree (some forms of structural uncertainty affect the short run more whereas some affect the long run more).
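To make the interplay among these three sources concrete, here is a toy sketch of how they might combine as a function of forecast lead time. The shapes of the curves follow the qualitative description above; the specific numbers are invented and do not come from any actual model.

```python
# Toy illustration of how the three uncertainty sources might vary with
# forecast lead time (all numbers are invented for illustration only).
import numpy as np

lead_times = np.arange(1, 101)  # years ahead

# Initial condition uncertainty: matters early, washes out after a few years.
initial_condition = 0.5 * np.exp(-lead_times / 5.0)

# Scenario uncertainty: negligible at first, grows with lead time.
scenario = 0.02 * lead_times

# Structural (model) uncertainty: roughly constant at all lead times.
structural = np.full_like(lead_times, 0.3, dtype=float)

# If the components were independent, they would combine in quadrature.
total = np.sqrt(initial_condition**2 + scenario**2 + structural**2)

for t in (1, 10, 50, 100):
    i = t - 1
    print(f"lead {t:3d} yr: init={initial_condition[i]:.2f} "
          f"scenario={scenario[i]:.2f} structural={structural[i]:.2f} "
          f"total={total[i]:.2f}")
```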

Different sorts of uncertainty emerge at different timescales: the atmosphere versus the ocean

Short-range weather forecasting (and most of medium-range weather forecasting, as far as I understand) basically involves modeling the behavior of the atmosphere. The standard approach of numerical weather simulation discretizes the three spatial dimensions of the atmosphere and chooses a discrete time step, then runs a simulation to figure out how the atmosphere will evolve.

Long-range weather and climate forecasting, ranging from SI forecasting to decadal forecasting to centennial forecasting, involves modeling the behavior of the oceans.

Why the distinction? The oceans have about a thousand times the thermal capacity of the atmosphere, and they obviously contain far more water, so one would expect them to play a bigger role in temperature and precipitation over longer timescales. But the oceans also equilibrate more slowly: some of the stabilizing currents in the ocean take centuries to run their course. The atmosphere is much more fast-moving. Thus, variation in the atmosphere dominates over shorter timescales. In particular, the initial conditions that matter in the short run are the initial conditions of the atmosphere, whereas the initial conditions that matter on the SI or decadal timescale are the initial conditions of the ocean. More information is in this overview provided by the UK Met Office.
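The "thousand times" figure can be checked with a back-of-the-envelope calculation. The masses and specific heat capacities below are standard approximate reference values (they vary a little between sources), and the point is only the order of magnitude:

```python
# Rough check of the ~1000x heat capacity ratio (reference values are
# approximate and vary slightly between sources).
ATMOSPHERE_MASS_KG = 5.1e18   # total mass of the atmosphere
ATMOSPHERE_CP = 1004.0        # specific heat of air at constant pressure, J/(kg K)

OCEAN_MASS_KG = 1.4e21        # total mass of the oceans
OCEAN_CP = 3990.0             # specific heat of seawater, J/(kg K)

atmosphere_heat_capacity = ATMOSPHERE_MASS_KG * ATMOSPHERE_CP
ocean_heat_capacity = OCEAN_MASS_KG * OCEAN_CP

print("ocean / atmosphere heat capacity ratio:",
      round(ocean_heat_capacity / atmosphere_heat_capacity))
# prints roughly 1100, i.e. on the order of a thousand
```

The ratio comes out to roughly a thousand, which is the sense in which the ocean's thermal inertia dwarfs the atmosphere's.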

Of course, the oceans aren't acting alone, and long-term changes to atmospheric composition (in particular, the increase in atmospheric concentrations of greenhouse gases such as carbon dioxide) can have significant effects on the climate. So what we need is a model (preferably a numerical simulation, though we might begin with statistical models) that considers the evolution of both the atmospheric and the oceanic systems, and the interaction between them. Such models are termed coupled models (i.e., coupled atmosphere-ocean models). The general term for the types of models used in long-range weather and climate prediction is general circulation model, so we'll call the coupled ones coupled general circulation models or coupled GCMs (as opposed to purely atmospheric GCMs).
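A real coupled GCM is far beyond a blog post, but the basic idea of coupling can be conveyed with a toy two-box energy balance model: a small-heat-capacity "atmosphere" box and a large-heat-capacity "ocean" box exchanging heat with each other. All the parameter values below are invented for illustration; this is a sketch of the concept, not of how actual coupled GCMs are built.

```python
# Toy "coupled" model: two boxes (atmosphere and ocean) exchanging heat.
# This is only meant to illustrate what coupling means; it is nothing like
# a real coupled GCM.

def run_two_box(n_years=100, forcing=1.0,
                c_atm=1.0, c_ocean=40.0, coupling=0.5, damping=1.2,
                dt=0.1):
    """Integrate atmosphere and ocean temperature anomalies forward in time.

    forcing  : constant external forcing applied to the atmosphere
    coupling : heat exchange coefficient between the two boxes
    damping  : radiative damping of the atmospheric anomaly to space
    c_atm, c_ocean : heat capacities (arbitrary consistent units)
    """
    t_atm, t_ocean = 0.0, 0.0
    for _ in range(int(n_years / dt)):
        exchange = coupling * (t_atm - t_ocean)          # heat flow: atm -> ocean
        dt_atm = (forcing - damping * t_atm - exchange) / c_atm
        dt_ocean = exchange / c_ocean
        t_atm += dt_atm * dt
        t_ocean += dt_ocean * dt
    return t_atm, t_ocean

for years in (1, 10, 100):
    atm, ocean = run_two_box(n_years=years)
    print(f"after {years:3d} years: atmosphere anomaly {atm:.2f}, "
          f"ocean anomaly {ocean:.2f}")
```

Running it shows the qualitative behavior described above: the atmosphere box adjusts within a year or so, while the ocean box is still slowly warming decades later.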

SI Forecasting: of hot boys and cool girls

When it comes to Seasonal-to-Interannual forecasting, we have two possible benchmarks:

  • Previous year's seasonal weather.
  • Historical average climate for that season.

The forecast skill of any model can be measured in relation to either of these two benchmarks.
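A common way to quantify "forecast skill relative to a benchmark" is a skill score based on mean squared error: one minus the ratio of the forecast's error to the benchmark's error. A value of 0 means no better than the benchmark, 1 means perfect, and negative values mean worse than the benchmark. A minimal sketch, with made-up numbers (the "model" forecasts below are hypothetical):

```python
# Mean-squared-error skill score relative to a benchmark forecast.
# Positive = better than the benchmark, 0 = no better, negative = worse.
import numpy as np

def mse(predicted, observed):
    predicted, observed = np.asarray(predicted), np.asarray(observed)
    return np.mean((predicted - observed) ** 2)

def skill_score(forecast, benchmark, observed):
    return 1.0 - mse(forecast, observed) / mse(benchmark, observed)

# Made-up seasonal mean temperature anomalies (degrees C) for five years.
observed    = [0.3, -0.1, 0.4, 0.2, 0.5]
model       = [0.2,  0.0, 0.3, 0.3, 0.4]   # hypothetical model forecast
climatology = [0.0,  0.0, 0.0, 0.0, 0.0]   # benchmark 1: historical average
persistence = [0.5,  0.3, -0.1, 0.4, 0.2]  # benchmark 2: previous year's value

print("skill vs climatology:", round(skill_score(model, climatology, observed), 2))
print("skill vs persistence:", round(skill_score(model, persistence, observed), 2))
```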

So what can an SI forecasting model do to improve on historical climate? Initial atmospheric conditions can have ballooning effects over short time ranges such as a week or two, but over a month or two, we expect them to equilibrate. In other words, initial atmospheric conditions probably add little signal to our ability to predict the average temperature for forthcoming seasons. But ocean conditions do matter: there are seasonal currents in the ocean (and wind patterns that these ocean currents cause), and we can use the current condition of the oceans to make educated guesses about how the currents in coming seasons will differ from historical averages.

An example is the El Niño Southern Oscillation (ENSO) in the Pacific Ocean (off the South American coast). I actually don't understand the details much, but my rough understanding is that there are two phases: the warm-water phase, called El Niño (Spanish for "the boy" and intended as a reference to Jesus Christ), and the cold-water phase, La Niña (Spanish for "the girl" and named simply as an appropriate counter-name to El Niño). When El Niño conditions prevail, they also cause a corresponding movement in the atmosphere called the Southern Oscillation (hence the name ENSO), and overall, we get warmer weather than we otherwise would. When La Niña conditions prevail, we get colder weather than we otherwise would. Successful prediction of whether a particular year will see a strong El Niño can help determine whether the weather will be warmer than usual. For instance, it's believed that a strong El Niño will develop this year, leading to warmer weather than usual (see here for instance; the canonical source for El Niño forecasts is the NOAA page, which, per the most recent update, forecasts a 70% probability of El Niño conditions this summer and an 80% probability of El Niño conditions this fall/winter).
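Since forecasts like NOAA's are probabilistic ("70% probability of El Niño conditions"), one common way to score them after the fact is the Brier score: the squared gap between the stated probability and what actually happened (1 if the event occurred, 0 if not), averaged over many forecasts. The sketch below uses invented probabilities and outcomes purely to show the mechanics; the base-rate benchmark is also made up.

```python
# Brier score for probabilistic forecasts: lower is better, 0 is perfect.
def brier_score(probabilities, outcomes):
    """probabilities: forecast probabilities of the event (0 to 1)
    outcomes: 1 if the event happened, 0 if it did not."""
    return sum((p - o) ** 2 for p, o in zip(probabilities, outcomes)) / len(outcomes)

# Invented example: forecast probabilities of El Nino conditions for five
# seasons, and whether El Nino conditions actually materialized.
forecast_probs = [0.7, 0.8, 0.3, 0.6, 0.2]
actual         = [1,   1,   0,   1,   0]

# Naive benchmark: always forecast a made-up base rate of about 1 in 3.
always_base_rate = [0.33] * 5

print("forecast Brier score: ", round(brier_score(forecast_probs, actual), 3))
print("benchmark Brier score:", round(brier_score(always_base_rate, actual), 3))
```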

For an overview on seasonal-to-interannual forecasting, see here.

Decadal forecasting

We noted above that atmospheric conditions matter over the range of a few hours to a few weeks, but the oceans have a longer memory. Even within the oceans, there are different types of currents and different phases and oscillations. At the extreme are the stabilizing deep ocean currents, which take about a thousand years to run their course. More relevant for decadal forecasting are the decadal and multidecadal oscillations. Two oscillations are of particular importance:

  • Pacific decadal oscillation (PDO): This is linked to ENSO, but unlike ENSO, whose phases last only a short while, the PDO cycle is measured in decades (I couldn't get a clear picture of whether there is any regularity to the PDO cycle; perhaps the issue hasn't been settled). The positive phase of the PDO is linked to warmer weather (similar to El Niño), and the negative phase of the PDO is linked to cooler weather (similar to La Niña).
  • Atlantic multidecadal oscillation (AMO)

Apart from the oceans, two other factors matter at the decadal level: atmospheric composition (specifically, greenhouse gas concentrations, since they affect the level of warming) and solar activity. Solar activity has its own cycles and phases, and therefore is (or might be) moderately predictable over the decadal timescale. Greenhouse gas concentrations don't change very fast relative to the levels already present, so they too can be predicted with reasonable confidence on the decadal timescale without needing to consider different scenarios for changes to emissions levels.
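One way to picture how this fits together: over a few decades, a temperature record can be crudely thought of as a slow forced trend plus a PDO/AMO-like oscillation plus noise. The sketch below generates such a synthetic series and fits the trend and oscillation back out (with the oscillation period assumed known, which real decadal forecasting certainly cannot assume). It is purely illustrative, but it shows why knowing the current phase of an oscillation adds information beyond the trend alone.

```python
# Toy decomposition of a synthetic "temperature" series into a linear trend
# plus a multidecadal oscillation. Entirely synthetic; for illustration only.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1950, 2014)

period = 60.0                                   # assumed oscillation period, years
trend = 0.015 * (years - years[0])              # slow forced warming
oscillation = 0.15 * np.sin(2 * np.pi * years / period)
noise = 0.1 * rng.standard_normal(years.size)
temps = trend + oscillation + noise

# Least-squares fit of trend + sinusoid (period assumed known, for simplicity).
design = np.column_stack([
    np.ones_like(years, dtype=float),
    years - years[0],
    np.sin(2 * np.pi * years / period),
    np.cos(2 * np.pi * years / period),
])
coeffs, *_ = np.linalg.lstsq(design, temps, rcond=None)
print("fitted warming trend (deg C per decade):", round(coeffs[1] * 10, 3))
print("fitted oscillation amplitude (deg C):   ",
      round(np.hypot(coeffs[2], coeffs[3]), 3))
```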

Finally, there are unpredictable events that can affect climate over decadal timescales. The classic example is volcanic eruptions. Since these are by nature unpredictable, they limit the potential predictability of climate on a decadal timescale. Forecasts may be prepared conditional on the occurrence of such events, in addition to an unconditional forecast that assumes no such events.

For more information, see this overview provided by the UK Met Office or this overview of whether decadal forecasting can be skillful.

In what ways is decadal forecasting different from century-long forecasting and scenario analyses of the sort seen in IPCC reports?

As far as I can understand:

  • Decadal forecasting is more sensitive to the initial condition of the oceans, in particular, the phases of the PDO and AMO.
  • Very little of the uncertainty in decadal forecasting arises from uncertainty in estimates of the amount of carbon dioxide emissions over the coming years. This is because (a) it's unlikely that emissions will change drastically in a few years, and (b) the amount of additional accumulated carbon dioxide over a few years would be quite small and have little effect on temperature predictions (see the rough calculation after this list). Therefore, creating different scenarios for emission levels or other changes in human activity is unnecessary for forecasting at the decadal timescale. But these scenarios obviously become quite important at the centennial timescale.
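Here is the rough calculation behind point (b). It uses the common simplified expression for carbon dioxide forcing, roughly 5.35 ln(C/C0) W/m^2, and a ballpark equilibrium sensitivity of about 0.8 degrees Celsius per W/m^2; the concentration and growth-rate numbers are approximate, and the point is only the order of magnitude.

```python
# Rough estimate of the warming effect of one decade of CO2 accumulation.
# Uses the common simplified forcing formula dF = 5.35 * ln(C / C0) W/m^2
# and a ballpark sensitivity of ~0.8 C per W/m^2; both are approximations.
import math

current_co2_ppm = 400.0          # approximate concentration at time of writing
annual_increase_ppm = 2.0        # approximate recent rate of increase
years = 10

future_co2_ppm = current_co2_ppm + annual_increase_ppm * years
extra_forcing = 5.35 * math.log(future_co2_ppm / current_co2_ppm)   # W/m^2
rough_warming = 0.8 * extra_forcing                                 # deg C, at equilibrium

print(f"extra forcing after {years} years: {extra_forcing:.2f} W/m^2")
print(f"rough eventual warming from that decade's accumulation: {rough_warming:.2f} C")
```

The answer comes out to a couple of tenths of a degree at equilibrium, and the warming actually realized within the decade would be smaller still, since the ocean takes up much of the heat.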

Some terminology

If you plan to read stuff on weather and climate, you might encounter some terms that have technical meanings that are slightly more specific than you might naively expect. I'm listing a few below.

  • Forcing (see here) refers to a change in the equilibrium weather or climate pattern due to something from outside the system. Examples of forcing include greenhouse gas forcing due to human emissions, forcing due to changes in solar activity, or forcing due to a volcanic eruption. This is contrasted with natural variability (which itself may be predictable or unpredictable depending on how reliably periodic it is).
  • Initialization of a climate model (see here) refers to setting the initial values of variables in the model. For models that are used to make reliable forecasts, correct initialization matters. The variables for which correct initialization matters more depend on the time horizon over which we are forecasting. For forecasting over the SI or decadal timescale, initialization of the oscillatory phases of ocean currents matters, but it may not matter for the centennial timescale.
  • Data assimilation (see here) refers to the process by which a climate model learns from existing data and observations of the current or past climate.
  • Hindcast (Wikipedia) refers to a weather or climate forecast (using a model) for a historical period for which we already have climate data. The idea is that the hindcast is made without using the climate data it is trying to predict, and the accuracy of the hindcast can then be judged against the actual values. This allows us to estimate the forecast skill without having to wait several years. Hindcasting becomes more important for longer timescales, where we simply can't afford to run repeat experiments with actual forecasting. However, hindcasting suffers from the problem that it's difficult to enforce the norm that hindcasts be generated without allowing the model to look at the data it is trying to predict. This becomes more of an issue for long-range forecasting, because even if the model does not explicitly use the data it is trying to predict, the researchers working on the model are implicitly aware of the information. For instance, a researcher working on a model that will be tested to produce a hindcast of the period 1985-1995 already knows what the climate in those years was like (if the researcher knows climate science at all). This problem is less pronounced for short-range forecasting, because a weather forecaster can credibly claim to not have known the weather for the particular region and day that his or her model hindcasted.
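As a minimal illustration of the hindcast workflow (with a deliberately trivial statistical "model", a fitted linear trend, rather than anything like a GCM): fit the model only on data up to a cutoff year, forecast the following decade, and score the result against the held-out observations. The data below are synthetic. The hard part in practice is the discipline of "only on data up to the cutoff", which is exactly the norm described above as difficult to enforce.

```python
# Minimal hindcast sketch: fit a trivial statistical "model" (a linear trend)
# on data up to a cutoff year, forecast the next decade, and score it against
# the held-out observations. Data here are synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1960, 2001)
observed = 0.01 * (years - 1960) + 0.1 * rng.standard_normal(years.size)

cutoff = 1990
train = years <= cutoff          # the model is only allowed to see these years
test = years > cutoff            # the hindcast is judged on these

slope, intercept = np.polyfit(years[train], observed[train], 1)
hindcast = slope * years[test] + intercept
climatology = np.full(test.sum(), observed[train].mean())  # benchmark

def mse(a, b):
    return float(np.mean((a - b) ** 2))

skill = 1.0 - mse(hindcast, observed[test]) / mse(climatology, observed[test])
print(f"hindcast skill vs climatology over {cutoff + 1}-{years[-1]}: {skill:.2f}")
```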

 

Comments

When it comes to Seasonal-to-Interannual forecasting, we have two possible benchmarks: Previous year's seasonal weather. Historical average climate for that season.

Well you could also add a correction for the measured trend over a longer time period. For example, one can observe that temperature has been generally trending upwards since around 1900, i.e. as the Earth has emerged from the Little Ice Age. In making a basic benchmark prediction, it's reasonable to assume that this trend will continue.

This becomes more of an issue for long-range forecasting, because even if the model does not explicitly use the data it is trying to predict, the researchers working on the model are implicitly aware of the information.

I agree, and there also is the file drawer problem, i.e. if a simulation doesn't match history it will be quietly discarded. So when you are presented with a simulation which does match history, you don't know how many simulations were discarded to get to that one. So you don't know how impressive it is that the simulation matches history. Until of course you wait for a few years, observe that the simulation diverges wildly, and conclude that its beautiful fit with history is not very impressive at all.