One might expect that, over time, economic forecasters would converge in their methodologies, and thus in their forecasts, as the competitive market for forecasters creates a natural selection that weeds out unsuccessful strategies and pushes forecasters toward the most successful one. Yet there is still wide divergence in economic forecasts, which raises the question: where is that natural selection failing?

One could argue that there is no “most successful” strategy, and that the best model shifts in random and unpredictable ways. But I don’t think the best model actually changes fast enough to account for the differences in forecasts. You could also argue that some forecasting strategies are not public knowledge because forecasters keep them secret. This is probably true to some extent, but I doubt that even if everyone’s models were perfectly transparent we would observe convergence.

A third, and I believe the most important, explanation is the failure of the assumption that the competitive market for forecasters creates incentives to produce the most accurate forecasts possible. A great example of the lack of a direct relationship between accuracy and incentives is Nouriel Roubini, whose forecasting record was discussed in a recent article by Joe Keohane in the Boston Globe:

…since he called the Great Recession, he has become about as close to a household name as an economist can be… He’s been called a seer, been brought in to counsel heads of state and titans of industry — the one guy who connected the dots…. He’s a sought-after source for journalists, a guest on talk shows… With the effects of the Great Recession still being keenly felt, Roubini is everywhere.

But here’s another thing about him: For a prophet, he’s wrong an awful lot of the time. In October 2008, he predicted that hundreds of hedge funds were on the verge of failure and that the government would have to close the markets for a week or two in the coming days to cope with the shock. That didn’t happen…

The article lists several other big forecasts from Roubini that failed to materialize, and goes on to argue that the market rewards forecasters who predict extreme events even while being more wrong overall. There is some sense to this, since businesses and governments should be disproportionately concerned about extreme outcomes. But that would call for forecasters who can identify extreme outcomes with a non-trivial probability of occurring and accurately assess that probability, not ones who constantly exaggerate the probability of extreme events.
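To make that distinction concrete, here is a minimal Python sketch of my own (the forecasters and numbers are hypothetical, not from the article): it scores a calibrated forecaster, who assigns the true 5% probability to a crisis each year, against an alarmist who always calls a crisis near-certain, using the Brier score, which is just the mean squared error of probability forecasts.

```python
import random

random.seed(0)

TRUE_CRISIS_PROB = 0.05   # hypothetical base rate of an extreme event in any given year
YEARS = 10_000

def brier_score(prob_forecasts, outcomes):
    """Mean squared error of probability forecasts (0 is a perfect score)."""
    return sum((p - o) ** 2 for p, o in zip(prob_forecasts, outcomes)) / len(outcomes)

# Simulate which years actually produce a crisis (1) and which do not (0).
outcomes = [1 if random.random() < TRUE_CRISIS_PROB else 0 for _ in range(YEARS)]

calibrated = [TRUE_CRISIS_PROB] * YEARS   # always reports the true 5% risk
alarmist = [0.9] * YEARS                  # always calls a crisis near-certain

print("Calibrated forecaster Brier score:", round(brier_score(calibrated, outcomes), 3))
print("Alarmist forecaster Brier score:  ", round(brier_score(alarmist, outcomes), 3))
```

The alarmist “calls” every crisis that does occur, but his average accuracy is far worse; rewarding him over the calibrated forecaster makes sense only if attention, rather than accuracy, is what is being rewarded.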

Via a tweet from Donald Marron (@dmarron) comes a study that supports the notion that forecasters are not trying to minimize forecast errors. The paper, by Owen Lamont, argues the following:

There is significant anecdotal evidence that indicates forecasters are not paid according to their mean squared error. Forecasters seek to enhance their reputation, manipulate perceptions of their quality, and use their forecasts in various ways unrelated to the minimization of mean squared error.

For example, one competitive strategy Lamont identifies is the “broken clock” strategy:

One practice is the “broken clock” strategy, which consists of always forecasting the same event. An example in the sample is A. Gary Shilling, a well-known recession-caller. Throughout the 1980s, Shilling continually predicted recession. In 15 out of 18 Wall Street Journal surveys in which he participated 1981–1992 (data which are not used elsewhere in this paper), his year-ahead long-bond yield projection was the lowest among all forecasters.

How does this strategy of almost always being wrong benefit Shilling? Lamont quotes a Wall Street Journal article that shows how he was able to market this unsuccessful track record:

A. Gary Shilling & Co., an economic consultant and investment strategist, recently mailed clients material that included a copy of a Wall Street Journal article with a paragraph showing that Mr. Shilling had made the best forecast of 30 years treasury bonds in a survey published about a year ago; but he covered up a paragraph noting that Mr. Shilling was tied for last place with his bond forecast of 6 months ago
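The mechanics are easy to see in a toy simulation (my own sketch with made-up numbers, not Lamont’s data): a “broken clock” forecaster who predicts the same extreme bond yield every year will, in the occasional year when yields do collapse, post the single best forecast in the survey, even though his mean squared error over the whole sample is far worse than an ordinary forecaster’s.

```python
import random

random.seed(1)

YEARS = 30

wins_broken = 0
sq_err_broken = 0.0
sq_err_typical = 0.0

for _ in range(YEARS):
    actual = random.gauss(8.0, 1.5)            # hypothetical realized long-bond yield, in percent
    typical = actual + random.gauss(0.0, 0.8)  # an ordinary forecaster: unbiased but noisy
    broken_clock = 5.0                         # always predicts the same extreme low yield
    sq_err_broken += (broken_clock - actual) ** 2
    sq_err_typical += (typical - actual) ** 2
    if abs(broken_clock - actual) < abs(typical - actual):
        wins_broken += 1

print(f"Broken clock was the closer forecast in {wins_broken} of {YEARS} years")
print(f"Broken clock MSE: {sq_err_broken / YEARS:.2f} vs. typical forecaster MSE: {sq_err_typical / YEARS:.2f}")
```

In most runs the broken clock is rarely the closest forecast, but those rare wins are exactly the clippings that can be mailed to clients, while the much larger average error never has to be mentioned.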

Lamont goes on to present empirical evidence that forecasters are maximizing perceived reputation rather than minimizing forecast error. Importantly, though, he points out that this may not be a socially inefficient outcome if it helps prevent herding behavior. Ideally, each forecaster would produce very similar point estimates but also report wide confidence bands reflecting the uncertainty. In this scenario convergence wouldn’t be socially inefficient, as the confidence bands would capture the probability of extreme outcomes. One could argue that we are instead in a second-best world in which the distribution of forecasters itself reflects the uncertainty, and forecasters like Roubini are just providing us with the 95% confidence bands. If this is the case, it would be more satisfying, to me at least, if the media understood Roubini as “most likely wrong” rather than as some prophet or seer.
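As a rough sketch of that second-best reading (my own illustration with invented numbers, not a claim about any actual survey), one could treat the spread of point forecasts in a survey as an implicit interval, with extreme forecasters like Roubini marking the tails rather than the central case.

```python
# Hypothetical year-ahead GDP growth forecasts from a survey; the last one is the doomsayer.
forecasts = [2.1, 2.4, 2.5, 2.6, 2.8, 2.9, 3.0, 3.1, 3.3, -1.5]

def percentile(values, pct):
    """Linear-interpolation percentile, to avoid any external dependencies."""
    xs = sorted(values)
    k = (len(xs) - 1) * pct
    lo = int(k)
    hi = min(lo + 1, len(xs) - 1)
    return xs[lo] + (xs[hi] - xs[lo]) * (k - lo)

central = percentile(forecasts, 0.5)
low, high = percentile(forecasts, 0.025), percentile(forecasts, 0.975)
print(f"Consensus forecast: {central:.2f}%, implicit 95% band: {low:.2f}% to {high:.2f}%")
```

Read that way, the survey’s central tendency, not any single dramatic call, is the forecast, and Roubini’s value is in marking out the tail.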
