"For mature and well-understood economics such as that of the United States, consensus forecasts are not notably biased or inefficient. In cases where they miss the mark, this can usually be attributed to issues of insufficient information or shocks to the economy."
Maybe it's the allure of alarmism, but aren't we mostly concerned with predicting catastrophe? This is kind of like saying you can predict the weather except for typhoons and floods.
I think the analogy goes the other way. A weather forecast that didn't cover catastrophes would still be useful. I like knowing if it's going to be rainy or sunny, wet or dry.
Similarly, I find it useful to know in a general sense which way short-term interest rates are going, how much inflation to expect over the next few years, and whether the job market is getting better or worse from quarter to quarter.
Yes, sometimes there are external shocks or surprising internal developments, but an imperfect prediction is still better than none.
Except that the shocks usually have a disproportionate effect on the economy. The forecasting is useful, but any strategy contingent upon the forecasting will have to take into account that when your forecasts fail, they won't fail by just a little; they will fail massively.
Macroeconomic indicators such as total GDP, GDP per capita, inflation, unemployment, etc. are reported through direct measurement every so often
Actually, such numbers are usually determined through sampling. They are also subject to changes in definitions and methodologies (see, e.g., inflation).
consensus forecasts are not notably biased or inefficient.
What does this mean? Specifically, what are your definitions and criteria of being "biased" and "inefficient" in this context?
In cases where they miss the mark, this can usually be attributed to issues of insufficient information or shocks to the economy.
Sounds like No True Scotsman :-/
Yes, you're right that it's not possible to measure everything, so sampling is often used in lieu of direct measurement. I had mentioned sampling in my earlier post.
consensus forecasts are not notably biased or inefficient.
I'm using the same definitions as used in the literature. The "bias" concept is discussed in the cited papers, plus in my earlier post http://lesswrong.com/lw/k2a/the_usefulness_of_forecasts_and_the_rationality/
The "efficiency" criterion is more difficult to define, but here it means roughly "makes use of all the available information" -- sort of synonymous with rationality.
The meanings of the terms are of course up for debate, and the different papers don't quite agree on the right meaning.
In cases where they miss the mark, this can usually be attributed to issues of insufficient information or shocks to the economy.
It's certainly a flaw that they can't predict shocks, but to the extent that a few shocks explain most forecasting error, that would have different implications than if the forecasts were wrong in all sorts of small ways.
The "insufficient information" refers to the quality of existing data they have access to. In some cases, people made wrong forecasts because the data about current indicator values that they were working with had errors, or was incomplete (e.g., they didn't have information on a particular indicator value for a particular month).
The "efficiency" criterion is more difficult to define, but here it means roughly "makes use of all the available information"
How do you know? Or, more explicitly, on the basis of which evidence are you willing to make the claim that consensus macro forecasts "make use of all the available information"?
Besides, just having information is necessary but not sufficient. You also need models which will take this information as inputs and will output the forecasts. These models can easily be wrong. Is the correctness of models used included in your definition of efficiency?
It is difficult to conclusively demonstrate efficiency, but it is easy to rule out specific ways that forecasts could be inefficient. That's what the papers do.
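For concreteness, here is one simple check along those lines, just a sketch with made-up numbers rather than the actual tests from the cited papers: regress realized values on the forecasts and see whether the intercept is close to 0 and the slope close to 1, and whether the mean forecast error is close to 0.

```python
import numpy as np

# Hypothetical toy data: consensus forecasts and later-realized values
# for some indicator (e.g., year-over-year inflation, in percent).
forecasts = np.array([2.1, 2.4, 1.9, 2.8, 3.0, 2.2, 1.7, 2.5])
actuals   = np.array([2.0, 2.6, 1.8, 3.1, 2.9, 2.3, 1.9, 2.4])

# Regress actuals on forecasts: actual = a + b * forecast + error.
# If the forecasts are unbiased and make good use of their own information,
# we expect a close to 0 and b close to 1.
b, a = np.polyfit(forecasts, actuals, 1)  # returns [slope, intercept]
print(f"intercept a = {a:.3f}, slope b = {b:.3f}")

# A simpler bias check: is the mean forecast error distinguishable from zero?
errors = actuals - forecasts
print(f"mean error = {errors.mean():.3f}, std = {errors.std(ddof=1):.3f}")
```

A real test would of course add proper standard errors and hypothesis tests, but failing even this kind of check is one specific way a set of forecasts can be shown to be biased or inefficient.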
I'm interested in forecasting, and one of the areas where plenty of forecasting has been done is macroeconomic indicators. This post looks at what's known about macroeconomic forecasting.
Macroeconomic indicators such as total GDP, GDP per capita, inflation, unemployment, etc. are reported through direct measurement every so often (on a yearly, quarterly, or monthly basis). A number of organizations publish forecasts of these values, and the forecasts can eventually be compared against the actual values. Some of these forecasts are consensus forecasts: they involve polling a number of experts on the subject and aggregating the responses (for instance, by taking an arithmetic mean or geometric mean or appropriate weighted variant of either). We can therefore try to measure the usefulness of the forecasts and the rationality of the forecasters.
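To make the aggregation step concrete, here is a minimal sketch of those aggregation rules, using made-up forecast values and weights rather than data from any actual survey:

```python
import numpy as np

# Hypothetical forecasts for next year's GDP growth (percent) from a
# panel of surveyed forecasters; the values and weights are made up.
forecasts = np.array([2.3, 2.5, 1.9, 2.8, 2.4])
weights   = np.array([0.30, 0.20, 0.20, 0.15, 0.15])  # e.g., by past accuracy

arithmetic_mean = forecasts.mean()
geometric_mean  = np.exp(np.log(forecasts).mean())  # only sensible for positive values
weighted_mean   = np.average(forecasts, weights=weights)

print(f"arithmetic: {arithmetic_mean:.2f}")
print(f"geometric:  {geometric_mean:.2f}")
print(f"weighted:   {weighted_mean:.2f}")
```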
Why might we want to measure this usefulness and rationality? There could be two main motivations:
My interest in the subject stems largely from (2) rather than (1): I'm trying to understand just how valuable forecasting is. However, the research I cite has motivations that involve some mix of (1) and (2).
Within (2), our interest might be in studying:
The macroeconomic forecasting discussed here generally falls in the near but not very near future category in the framework I outlined in a recent post.
Here is a list of regularly published macroeconomic consensus forecasts. The table is taken from Wikipedia (I added the table to Wikipedia).
Strengths and weaknesses of the different surveys
The history of research based on consensus forecast sources
There has been a gradual shift in which consensus forecasts are used in research studying forecasts:
There has also been a gradual shift in views about forecast accuracy:
Tabulated bibliography (not comprehensive, but intended to cover a reasonably representative sample)
Some forecasts are biased, and forecasters are not rational
The following overall conclusions seem to emerge from the literature:
Some addenda