The Bank of Canada gives itself an A- for forecasting
“First, do no harm” is Rule One in medicine. Until the last couple of centuries, it’s not clear most doctors achieved that goal. Before the mid-19th century, when infection came to be understood, doctors were a very efficient disease dissemination mechanism.
“First, do no harm” is also Rule One in central banking, a profession some observers would argue hasn’t yet had its Pasteur moment. The consensus in economics (yes, Virginia, there can be consensus in economics) is that essentially on its own the U.S. Federal Reserve prolonged the Great Depression by six or seven years. How it did in the Great Recession is history still being written. The world economy having recovered steadily though not spectacularly almost from the beginning, the judgment this time round is likely to be more favourable.
A consensus in modern central banking is that part of not doing harm is to be as transparent as possible, so as to avoid miscalculation. If central banks are trying to do one thing on the basis of the information they have, but market participants have the impression they’re trying to do something else, there’s the possibility of misunderstanding. Misunderstandings brought the world the war to end all wars, which itself ended 100 years ago this week. They could bring us economic disaster, too.
Central banks haven’t yet got to the stage of being as transparent as Caesar’s wife, but in recent years they’ve been trying hard. Just this week the Bank of Canada released three decades’ worth of the internal staff forecasts of inflation, economic growth and other variables that its Governing Council had in hand when making monetary policy decisions over that period. The release doesn’t go all the way to today: from now on the forecasts will be published regularly, though with a five-year lag. But at least observers, including economists who study monetary policy, will get an idea of how the bank has responded in the past to different patterns of expectation. Past reactions are no guarantee of similar reactions in future, as a central bank prospectus might advise. But observers will at least have a sense of what those past reactions were.
Change the rules and you change the world. A world in which information the bank had kept private is now published, albeit with a lag, will be a somewhat different world from the one we have known. To begin with, that the forecast eventually will be published might actually change it. People who know they’re being observed may well behave differently from people whose anonymity is secure. Also, if the bank’s past “reaction functions” become known, and if market participants believe the reaction functions haven’t changed, then knowing exactly what forecasts the bank is presented with will become a matter of real interest. It’s not impossible that this greater interest may cause problems in future.
That would be true, however, only if the bank’s forecasts are different from what the market in general has access to. If they’re basically the same, then everyone’s operating with identical information and there’s nothing to be arbitraged. In that regard, the evaluation study that the bank published simultaneously with the data file of past forecasts is relevant. It’s 60 pages of intimidatingly detailed econometric evaluation of the bank staff’s forecasts.
What’s its result?
The bank forecasts are different from what’s generally available. They’re actually a little better. That’s good news for the bank forecasters. If they weren’t better, the Bank could shut down its forecasting section and just free-ride on private-sector predictors. The Department of Finance did approximately that in the early 1990s, though mainly because suspicion had arisen that non-forecasters were manipulating the forecasts for political purposes. In any case, Finance seems not to have missed its forecasters. That the Bank’s forecasters at least slightly outperform the private-sector average doesn’t mean they’re earning their keep, however. We’d have to know what the keep is and whether the foresight advantage is worth it. Perhaps a future bank research study—or maybe the auditor general—will address that.
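How such a comparison works is easy to illustrate. Here is a minimal sketch, using made-up numbers rather than anything from the bank’s study, of how one might score two sets of forecasts against realized outcomes by root-mean-squared error:

```python
# Illustrative only: compares the accuracy of two hypothetical forecast series
# against realized outcomes using root-mean-squared error (RMSE).
# The figures below are invented; they are not Bank of Canada data.
from math import sqrt

def rmse(forecasts, actuals):
    """Root-mean-squared error of a forecast series against outcomes."""
    errors = [f - a for f, a in zip(forecasts, actuals)]
    return sqrt(sum(e * e for e in errors) / len(errors))

# Hypothetical one-year-ahead inflation forecasts (per cent) and outcomes.
bank_forecasts    = [2.1, 1.9, 2.3, 1.8, 2.0]
private_forecasts = [2.4, 1.6, 2.5, 1.5, 2.2]
actual_inflation  = [2.0, 1.8, 2.2, 1.9, 2.1]

print(f"Bank RMSE:    {rmse(bank_forecasts, actual_inflation):.2f}")
print(f"Private RMSE: {rmse(private_forecasts, actual_inflation):.2f}")
# A lower RMSE means the forecasts sat closer, on average, to what happened.
```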
Another interesting question, one with potential policy implications, is whether the bank’s forecasts are wrong randomly or wrong systematically. They did tend to underestimate inflation in the late 1980s and then overestimate it in the early 1990s. That wasn’t uncommon at the time. The higher-than-expected inflation of the late 1980s was one reason the bank switched to inflation targeting, while the fact that the new regime worked faster than anyone had predicted led to over-prediction of inflation during its first few years. But has there been systematic over- or under-prediction over the three decades studied?
No, not for inflation.
The errors appear to be random. But the bank does tend to over-predict real GDP growth one year ahead—by about 0.90 percentage points, to be precise. You would think that a central bank habitually more bullish on the economy than the economy turns out to be might end up running higher interest rates on average than it should. That the bank has been doing a reasonable job of hitting its inflation targets, however, suggests there must be an offsetting systematic error somewhere else in the policy process.
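What distinguishes random from systematic error is also easy to illustrate. The sketch below, again with invented numbers rather than the bank’s data, computes the mean forecast error and a one-sample t-statistic: a mean near zero suggests the misses wash out, while a persistently positive mean of the sort just described points to systematic over-prediction.

```python
# Illustrative only: a simple check for systematic bias in forecast errors.
# If errors are random, their mean should be close to zero; a mean that is
# persistently positive (forecast above outcome) suggests over-prediction.
# The error series below is invented, not the bank's.
from math import sqrt
from statistics import mean, stdev

# Hypothetical errors: one-year-ahead real GDP growth forecast minus outcome,
# in percentage points.
errors = [1.2, 0.7, 1.1, 0.4, 1.3, 0.8, 0.9, 1.0]

avg = mean(errors)
t_stat = avg / (stdev(errors) / sqrt(len(errors)))  # one-sample t-statistic vs. zero

print(f"Mean error:  {avg:.2f} percentage points")
print(f"t-statistic: {t_stat:.2f}")
# A mean error near +0.9 with a large t-statistic would be consistent with the
# kind of systematic over-prediction of GDP growth described above.
```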
Does the bank really give itself an A- for its forecasting? No. The bank is a sober institution, more or less apolitical, and wouldn’t do that kind of thing. But the overall impression the study gives is that the bank hasn’t forecast especially badly, and in today’s world of grade inflation, “not bad” translates to an A-.