Wednesday, September 22, 2021

Interest rate survey

Torsten Slok passed on a lovely graph, created from the Philadelphia Fed survey of professional forecasters: 

It's not just private forecasters: the Fed's own forecasts and dot plots have the same characteristics. 

Some potential lessons

1) Just you wait. There is the story of the hypochondriac who, when he died at 92, had inscribed on his tombstone "See, I told you I was sick." More serious stories are told of the high interest rates of the 1980s, which embodied a decade of worry about inflation that could have returned but never did. Or the famous "peso problem": forward rates that looked persistently wrong, until they eventually proved right. 

2) A lot of fun is made in survey research about the "irrational" expectations revealed by surveys. Whether "professionals" are involved is often used as a criterion to distinguish "rational" from "noise" investors in asset pricing studies. Hello, the professionals are just as behavioral as the rest of us. As are the Fed economists whose forecasts look the same. The argument from irrational-looking surveys to letting the "experts" run and nudge things never did hold water. 

3) Just what do survey forecasts mean? How many of these Wall Street economists, or their trading desks, are heavily short 10-year bonds? How many of them lost money on that trade for 20 years running? It's a good bet the same economists work for firms that, to the contrary, have been riding this... well, call it a trend, call it a bubble, call it a golden two decades for long-term bonds. What is the risk premium story for believing long-term bonds are about to take a bath, but buying a lot of them anyway? 

4) Just what do survey forecasts mean? We ask people "what do you expect," and scratch our heads that they do not reply with numbers that make sense as true-measure conditional means. The event of a sharp rise in rates might come with substantially higher marginal utility, i.e. a very bad event. Reporting the risk-neutral measure, probability weighted by marginal utility, might make sense for many reasons. So might reporting a 40% quantile, shaded toward bad news. Clients who make money don't complain. Clients who lose money do.  

5) Just what do survey forecasts mean?  For most surveys, the interesting thing is not the average but the astounding variation around that average. In theory, asset trading should lead to common expectations. In fact it does not. I would love to see the variation around this mean forecast. 
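The risk-neutral reporting in point 4 can be sketched numerically. A minimal two-state illustration, with all numbers invented for the example:

```python
# Hypothetical two-state illustration of point 4: reporting a risk-neutral
# (marginal-utility-weighted) mean instead of the true conditional mean.
# All numbers are made up for illustration.

# State 1: rates stay low (good times). State 2: rates spike (bad times).
true_prob = [0.8, 0.2]          # physical probabilities
ten_year_rate = [2.0, 5.0]      # 10-year rate in each state, percent
marginal_utility = [1.0, 3.0]   # marginal utility is higher in the bad state

# True-measure conditional mean: sum of p_i * r_i.
true_mean = sum(p * r for p, r in zip(true_prob, ten_year_rate))

# Risk-neutral probabilities: p_i * m_i, renormalized to sum to one.
weights = [p * m for p, m in zip(true_prob, marginal_utility)]
rn_prob = [w / sum(weights) for w in weights]

# The risk-neutral mean overweights the painful high-rate state.
rn_mean = sum(q * r for q, r in zip(rn_prob, ten_year_rate))

print(f"true mean: {true_mean:.2f}%, risk-neutral mean: {rn_mean:.2f}%")
# prints "true mean: 2.60%, risk-neutral mean: 3.29%"
```

A forecaster reporting the marginal-utility-weighted number would thus "predict" higher rates than the physical mean warrants, exactly the upward shading the survey graph shows.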

Confessions. I've been ... well, not forecasting, but doom and glooming about a sharp interest rate rise for just as long. And, I have to report, the graph has not yet changed my mind. To some extent, one faces the problem of the value investor, who every time the stock goes down has to say, "now it's an even better deal!" I guess I have company.  See, I told you I was sick? 


  1. Philly Fed's SPF reports the standard deviation around these estimates, so it would be possible to put error bars on the graph.

  2. > What is the risk premium story for believing long term bonds are about to take a bath, but buying a lot of them anyway?

    > To short the 10-year notes, investors need to borrow them from entities, such as money market funds, in the repo market, sell them, and buy them back later.

    > Hedge funds and others want to borrow and sell the bonds now in a wager that they can buy them back later for less, as rising U.S. rates push bond prices lower.

  3. It is a puzzle. Many highly intelligent, deeply experienced macroeconomists have been wrong for 40 years in a row. Paul Volcker and Martin Feldstein come to mind.

    Consumer surveys in Japan also show that populations can expect higher inflation for decades in a row, but in fact experience deflation.

    I do not know what all this means.

    1. Why are price controls in Japan difficult for you to comprehend?

    2. It basically means that we have no idea of the real reasons behind the long-term decline in interest rates.

      Making predictions without a good model behind them has no value whatsoever. It is akin to predicting the weather in Athens in Pericles's time, or letting 18th-century doctors predict the evolution of George Washington's health after the bleedings they performed on the President.
      Even Mankiw, who probably knows a thing or two about macro, is puzzled by this. Offering no fewer than 7 "alternative" explanations is very close to having no clue.

      Maybe we just have to accept that economics now is in the same situation weather prediction was in 25 centuries ago, and that predictions by experts in the field are as valid as those of Washington's physicians 200 years ago (they, too, were very respected experts in their field).

  4. Rather odd. The (relatively short-term) forecasts for higher rates in the next few years were (+/-) correct in 2008, 2012, 2016, & 2020 -- perhaps connected in some way to US electoral cycles.

    If we adjusted the interest rates for inflation (a very contentious adjustment), the chart would presumably show that real interest rates were going negative, and economists were predicting a slow return to positive real interest rates. Negative interest rates mean we are not in Kansas any more -- so it is understandable why economists would think this travesty has to self-correct. It seems we are back to the truism that markets can stay irrational longer than any of us can stay solvent.

  5. I guess this shows why the "match-model-to-survey-expectations" method never really caught on. Still, after reading your post I wonder if anyone tried to match risk-neutral instead of actual expectations. Sounds interesting...

    PS: Yields were higher in the 80s and 90s, the 10y is no exception. Maybe "professional forecasters" were just putting together mean reversion and a higher unconditional average.

  6. From Torsten Slok's chart, it will be seen that 14 out of 38 forecasts correctly predicted the 10-yr rate within a 1-2 year window; 7 out of 19 hit the mark (batting average > .350).

    Tom Stark's (FRB-Philadelphia) paper titled "Realistic Evaluation of Real-Time Forecasts in the Survey of Professional Forecasters," May 28, 2010, indicates that forecasts beyond the first calendar quarter are no better than a forecast that assumes the 10-year rate follows a random walk, i.e., E{r(t+n) - r(t) | I(t)} = 0 for n > 1. A batting average of .350 may or may not be consistent with this expectation--in other words, the forecast is "correct" simply because the actual 10-year rate happened to drift across the forecast at some time t + s, rather than the forecaster (or the aggregate forecaster, "SPF") having been prescient at time t (the date of the forecast).

    Only the short-term forecast of the 10-year rate shows improvement over the naïve, or random walk, forecast.

    A tip of the hat to commentator "John" for pointing towards the Philadelphia FRB's "Survey of Professional Forecasters", and to Tom Stark for his 2010 paper that provided the analytical results that informed this comment. -- Sean-shúil iolair.
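The random-walk benchmark described in the comment above can be sketched as follows. The rate series is invented to mimic a slow downward drift; it is not the actual SPF or Treasury data:

```python
# Toy comparison of the naive random-walk forecast E[r(t+1)|I(t)] = r(t)
# against a perennial "rates are about to rise" forecast.
# The rate series below is hypothetical, chosen to drift slowly downward.

rates = [4.0, 3.8, 3.9, 3.5, 3.2, 3.3, 2.9, 2.6, 2.4, 2.1]

def rmse(errors):
    """Root mean squared forecast error."""
    return (sum(e * e for e in errors) / len(errors)) ** 0.5

# Random-walk forecast: next period's rate equals today's rate.
rw_errors = [rates[t + 1] - rates[t] for t in range(len(rates) - 1)]

# "Rates will rise" forecast: today's rate plus 50 basis points.
rise_errors = [rates[t + 1] - (rates[t] + 0.5) for t in range(len(rates) - 1)]

print(f"random walk RMSE:     {rmse(rw_errors):.3f}")
print(f"rates-will-rise RMSE: {rmse(rise_errors):.3f}")
# prints "random walk RMSE:     0.277" and "rates-will-rise RMSE: 0.733"
```

On a declining series, the perpetual rates-will-rise forecast loses badly to the naive benchmark, which is the comparison Stark's paper formalizes for the SPF.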


Comments are welcome. Keep it short, polite, and on topic.

Thanks to a few abusers I am now moderating comments. I welcome thoughtful disagreement. I will block comments with insulting or abusive language. I'm also blocking totally inane comments. Try to make some sense. I am much more likely to allow critical comments if you have the honesty and courage to use your real name.