A few thoughts here, as it bears on my WSJ op-ed from last week and my last post on EFG and how we do macro.
1. Views of Keynesian economics
Re-reading this paper, you will be struck by how much Lucas and Sargent praise Keynesian models, the very models you would think it was their purpose to destroy.
They called the Keynesian revolution a "remarkable intellectual event." They continued:
... some of its [Keynesian Revolution] most important features: the evolution of macroeconomics into a quantitative, scientific discipline, the development of explicit statistical descriptions of economic behavior, the increasing reliance of government officials on technical economic expertise, and the introduction of the use of mathematical control theory to manage an economy.
Keynesian theory evolved from disconnected, qualitative talk about economic activity into a system of equations that can be compared to data in a systematic way and which provides an operational guide in the necessarily quantitative task of formulating monetary and fiscal policy.
neither the success of the Keynesian Revolution nor its eventual failure can be understood at the purely verbal level at which Keynes himself wrote...
The Keynesian economics they are praising here is not Keynes' book -- one of those big muddy things that people are still writing "what did Keynes really mean" articles and books about nearly a century later -- but the subsequent quantification effort: Hicks's creation of the ISLM model, its elaboration into computer models, estimation of those models, and the use of those models to make quantitative forecasts of the effects of policy interventions.
"Quantitative, scientific discipline," and "technical economic expertise" means we analyze policies by real models, not the opinions and judgments of famous economists turned public officials.
Yes, "mathematical control theory." Most readers will be too young to remember, but in the early 1970s academic journals were dynamic optimal control applied to simulations of large-scale Keynesian models.
Their goal was quite conservative: they wanted to preserve this great achievement:
"Quantitative, scientific discipline," and "technical economic expertise" means we analyze policies by real models, not the opinions and judgments of famous economists turned public officials.
Yes, "mathematical control theory." Most readers will be too young to remember, but in the early 1970s academic journals were dynamic optimal control applied to simulations of large-scale Keynesian models.
Their goal was quite conservative: they wanted to preserve this great achievement:
The objectives of equilibrium business cycle theory are taken, without modification, from the goal which motivated the construction of the Keynesian macroeconometric models: to provide a scientifically based means of assessing, quantitatively, the likely effects of alternative economic policies.
2. What was their basic criticism of Keynesian economics?
Lucas and Sargent make a two-pronged argument, one about theoretical coherence and the other about the grand econometric failure of Keynesian models.
As I see it, the main characteristic of the "equilibrium" models that Lucas and Sargent inaugurated is that they put people, time, and economics into macro.
Keynesian models model aggregates. Consumption depends on income. Investment depends on interest rates. Labor supply and demand depend on wages. Money demand depends on income and interest rates. "Consumption" and "investment" and so forth are the fundamental objects to be modeled.
"Equilibrium" models (using Lucas and Sargent's word) model people and technology. People make simultaneous decisions across multiple goods, constrained by budget constraints -- if you consume more and save more, you must work more, or hold less money. Firms make decisions across multiple goods constrained by technology.
Putting people and their simultaneous decisions back at the center of the model generates Lucas and Sargent's main econometric conclusion -- Sims' "incredible" identifying restrictions. When people simultaneously decide consumption, saving, and labor supply, the variables driving each decision must spill over into the others. There is no reason for leaving (say) wages out of the consumption equation. But the only thing distinguishing one equation from another is which variables get left out.
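A minimal example of the point -- my illustration, not theirs: suppose the estimated system is

\[ c_t = \alpha_0 + \alpha_1 y_t + u_t, \qquad m_t = \gamma_0 + \gamma_1 y_t + \gamma_2 r_t + v_t. \]

The consumption equation is "identified" only because the interest rate -- along with wages, wealth, and expected future income -- is excluded from it by assumption. But the first-order conditions of a consumer who chooses consumption, saving, and labor supply together put exactly those variables back into consumption behavior, so the exclusion doing the econometric work has no economic justification.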
People make decisions thinking about the future. I think "static" vs. "intertemporal" are good words to use. That observation goes back to Friedman: consumption depends on permanent income, including expected future income, not today's income. Decisions today are inevitably tied to expectations -- rational or not -- about the future.
Notice when you read any textbook that the microeconomic "demand" suddenly becomes the macroeconomic "plan." Why is that? Because demand curves respect budget constraints even at off-equilibrium prices. "Plans" like consumption equals c bar plus alpha times income do not respect any stated budget constraint. You're allowed to say you want to consume and save more than income allows.
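In symbols, and again only as an illustration: the Keynesian consumption "plan" is

\[ c_t = \bar{c} + \alpha y_t, \]

a behavioral rule that never mentions a budget constraint, while a microeconomic demand function comes out of a problem like

\[ \max_{c,\, a'} \; u(c) + \beta V(a') \quad \text{s.t.} \quad c + a' = y + (1+r)\, a, \]

and therefore satisfies the constraint at any prices, in or out of equilibrium.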
Optimization, rational expectations, and flexible prices are the ballyhooed centerpieces of the first round of equilibrium models to follow Lucas and Sargent, such as Kydland and Prescott's famous "time to build" model and the subsequent "real business cycle" models (examples: Bob King and Sergio Rebelo, John Long and Charles Plosser). But I don't think these ingredients are central to the program. The "new-Keynesian" (or, better, DSGE) school put in sticky prices along with all the other Lucas-Sargent ingredients, and thus operates under the "equilibrium" banner. Mike Woodford's "Interest and Prices" is aimed proudly at that program. Sticky wages, distortions, and so on are just as often included.
Simon Wren-Lewis questions whether Lucas and Sargent were really pointing to empirical failure or making a methodological critique. He suggests that accelerationist Phillips curves, though adapted after the failure and thus appearing a bit as epicycles, can account for the data. (Lucas and Sargent: "In economics as in other sciences, ... there is always the hope that if a particular specific model fails one can find a more successful model based on roughly the same ideas.")
I think Simon is a bit too blasé about how easy this modification is. Not only did inflation accelerate far faster than the Keynesian models of the 1960s predicted, it also dropped like a stone in 1982, far faster than Keynesians thought possible based on adaptive-expectations views. (If someone can find the quote from Samuelson in the early 1980s predicting decades of depression to wring out inflation, please add it to the comments.) Proud as some self-identified Keynesians are about how well they think their unwritten, unquantified "model" fits the current recession, deep unemployment with no movement in inflation fits no Phillips curve that was actually written down before the crisis. Infinite wage stickiness is an ex-post invention too, and still just a verbal debating point.
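To fix ideas -- a textbook rendering, not a quotation from Lucas and Sargent -- the expectations-augmented Phillips curve is

\[ \pi_t = \pi_t^e - \kappa\, (u_t - u^*), \qquad \pi_t^e = \pi_{t-1} \ \text{(adaptive expectations)}. \]

Under the adaptive scheme, inflation falls only while unemployment sits above its natural rate, period after period, which is why the conventional forecasts circa 1980 implied a long, grinding disinflation. If expectations instead respond to a credible change in regime, inflation can fall much faster than the backward-looking term allows -- which is the point about 1982.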
But the paper is really clear that empirical failure matters deeply to Lucas and Sargent. They said that
A key element in all Keynesian models is a trade-off between inflation and real output: the higher is the inflation rate, the higher is output (or equivalently, the lower is the rate of unemployment). For example, the models of the late 1960s predicted a sustained U.S. unemployment rate of 4 percent as consistent with a 4 percent annual rate of inflation. Based on this prediction, many economists at that time urged a deliberate policy of inflation. [plus ça change...]... policy in this period should, according to all of these models, have produced the lowest average unemployment rates for any decade since the 1940s. In fact, as we know, they produced the highest unemployment rates since the 1930s. This was econometric failure on a grand scale. [My emphasis]
Here as elsewhere, they said "econometric," not economic. They meant it.
Everyone was perfectly aware of the lack of "microfoundations" of Keynesian models, and of the fruitless 50-year search for such foundations. But so long as the models worked, that had no real impact on their use for the "scientific" and technical policy advice Lucas and Sargent so admired.
And rightly so. Chemistry, until the last few decades, lacked "microfoundations" in quantum mechanics, first because nobody knew quantum mechanics, and later because working out how chemicals react from first principles was too hard. That did not stop chemistry from being a perfectly viable science. Biology, until the last few decades, lacked "microfoundations" in chemistry.
But chemistry and biology worked pretty well. Lucas and Sargent pointed to, and needed to point to, a grand empirical failure.
And that failure had to be met with more than "well, these are reasonable rules, but they're not microfounded." The failure had to be accompanied by a demonstration that the Keynesian model was logically flawed, and by a better theory, one which showed why the Keynesian equations were inconsistent with basic economics. That better theory had already predicted the Keynesian models' failure -- Friedman (1968), Phelps, and Lucas predicted that the Phillips curve would shift if exploited. That is a lot more than "methodological purity."
3. So what happened?
As I survey the landscape now, it is interesting how much of the macroeconomics Lucas and Sargent praised has vanished. Quantitative, scientific discipline? Explicit statistical descriptions of economic behavior? Reliance of government officials on technical economic expertise? The use of mathematical control theory to manage an economy? All that has vanished.
The sub-basements of central banks have big DSGE models, or combined models where you can turn Lucas and Sargent on and off. But I think it's fair to say nobody takes the results very seriously. Policy -- our stimulus, for example -- is based on back-of-the-envelope multipliers and on the authority and expertise of administration officials, if you're charitable, or on their unvarnished verbal opinions if you're not.
There are some large-scale empirical DSGE models left in academia too. But the vast bulk of policy analysis does not use them the way it used, say, the models of 1972. At conferences and in papers, academic work uses small-scale toy models and a lot of words. Models do not seem to be cumulative: each paper adds a little twist, ignoring all the previous little twists.
A complete split occurred. "Equilibrium" models, in which I include new-Keynesian DSGE models, took over academia. The policy world stuck with simple ISLM logic -- not "models" in the quantitative scientific tradition Lucas and Sargent praised -- despite Lucas and Sargent's devastating criticism. And, as I remarked in the earlier blog posts, the "purely verbal" or literary style of analysis is becoming more and more common now in academia as well as policy.
I'm not complaining about good vs. bad here. I write simpler and simpler models as I grow older, and spend more time thinking and writing about what those equations mean. It just is a fact about how we do things today and the "scientific," quantitative status of macroeconomics.
Lucas and Sargent's last sentence:
Unless the now evident limits of these models are also frankly acknowledged and radically different new directions taken, the real accomplishments of the Keynesian Revolution will be lost as surely as those we know to be illusory.
Academic economics took the first half to heart. Policy economics froze in place. But Lucas and Sargent's "real accomplishments" were lost, or at least consciously abandoned, anyway.
PS: Lucas and Sargent have a delicious quote about Keynesians' loss of faith in their own model, as applicable now as then:
The current wave of protectionist sentiment directed at "saving jobs" would have been answered ten years ago with the Keynesian counterargument that fiscal policy can achieve the same end but more efficiently. Today, of course, no one would take this response seriously, so it is not offered. Indeed, economists who ten years ago championed Keynesian fiscal policy as an alternative to inefficient direct controls increasingly favor such controls as supplements to Keynesian policy.
This paper went through several versions; this one is closest to what they really thought: http://youtu.be/_7zvFkM0NSQ
ReplyDelete"Quantitative, scientific discipline," and "technical economic expertise" means we analyze policies by real models, not the opinions and judgments of famous economists turned public officials.
Professor Cochrane, when will it be your turn to work as a government official? Surely it should be mandatory for macro-economists to practice what they preach, just as doctors who research do a stint in hospitals. This would be your opportunity to be involved in the fixing of sick economies. Surely this is the ultimate aim of an economist?
Or should they confine themselves to ivory towers, creating artificial constructs and abstractions which, when it comes to the crunch (i.e., when someone has to make a decision about what to actually do with an economy, especially in a crisis), cannot usefully be applied?
(I am not sure what you mean by "real" models. Is that an electronic device or something that explains the real world?)
A stint for all macro-economists in either a central bank or treasury (and not just in their back office "research" departments) would be great for the country and for the discipline.
"A stint for all macro-economists in either a central bank or treasury (and not just in their back office "research" departments) would be a great for the country and for the discipline."
You also could say: a stint for all practitioners (and people, in general) in back office "research" departments would be great for the country.
If Keynes was also a practitioner, and not just a theorist, it makes him more credible.
Is a representative agent model a "real model"?
Delete"But more generally economists seem to like their models and then give after-the-fact justification"
Hopefully this is what we are moving away from.
http://andrewgelman.com/2014/07/18/differences-econometrics-statistics-varying-treatment-effects-utilities-economists-seem-like-models-fixed-stone-statisticians-tend-comfortable-w/
The basic problems of mainstream macro are its assumptions about economic agents' knowledge of the future and the low degree of heterogeneity among agents.
I remember trying to learn the Calculus of Variations for Macro in grad school. If I was able to use that today, I would work out the optimal contracts between banks, depositors, and borrowers, as a function of macroeconomic variables, and then try to estimate the welfare gains over the present system. But I am limited to the humble role of a purely verbal analyst on social media, where there is an active conversation on what Minsky really meant.
ReplyDeleteAnwer,
To do it correctly, I think you would need tri-party contracts or multiple two party contracts (bank, depositor/borrower, legal authority/government).
Doing it as multiple two party contracts would look like the following:
1. Government sells collateral to potential borrower. Government sets potential rate of return on collateral, borrower achieves realized rate of return through tax burden reduction.
2. Bank lends money at interest to borrower. Bank retains collateral offered by borrower that is matched to the duration of loan and the potential value of the loan.
3. Over time as loan is paid back, collateral is returned to borrower.
4. If borrower defaults, collateral is retained by bank and can then be sold.
5. If bank goes under, collateral and loan held by bank are sold to another bank (loan and underlying collateral cannot be separated by disabled bank and sold separately).
In step 1 a two party contract is created between potential borrower and government. In step 2 a two party contract is created between bank and potential borrower. In step 4, a two party contract between potential borrower and government becomes a two party contract between bank and government.
Anwer,
More on tri-party contracts:
http://libertystreeteconomics.newyorkfed.org/2011/04/everything-you-wanted-to-know-about-the-tri-party-repo-market-but-didnt-know-to-ask.html#.U8l0f01OW70
"A repo is a financial transaction in which one party sells an asset to another party with a promise to repurchase the asset at a pre-specified later date. A repo resembles a collateralized loan but its treatment under bankruptcy laws is more beneficial to cash investors: in the event of bankruptcy, repo investors can typically sell their collateral, rather than be subject to an automatic stay, as would be the case for a collateralized loan."
With government supplied collateral, you achieve the same type of tri-party arrangement that is found in the repo market.
I'm thinking of the need to evaluate different possible ways of organizing transactions, using model-based welfare analysis. For example: if we require people to hold risk-free collateral that pays r, while the economy is growing at g, how do we measure the added transaction costs of sub-optimal collateral arrangements? If we had quantitative estimates of the benefit from reformed financial architecture, that might give us a greater sense of urgency. If I was skilled at DSGE modelling, I would do the calculations myself, and I would also look at the effects of proposed reforms on macroeconomic fluctuations. But I'm limited to speculation, and I think that is the point made in this blog post.
Anwer,
If people are holding risk free collateral that pays r, why would they ever put that collateral up against a loan?
With risk bearing collateral, the potential rate of return exceeds the risk free rate, and so there is an incentive to borrow against that collateral to realize the potential return.
Because the loan would allow them to buy assets with higher yield. Isn't that how repo is used?
DeleteAnwer,
Yes, that is how all lending (collateralized or not) is typically used - borrow at an interest rate and hope that the return on the borrowed and invested funds exceeds the cost of borrowing.
Obviously, the bank / investor doing the lending would prefer collateral of the risk free kind, but is risk free collateral an optimal solution for all parties involved (borrower, lender, government)?
I have a feeling that the demand for risk-free collateral is lowering the real interest rate, and many people are concerned about the effects of such low equilibrium rates. But I don't have a formal model that proves my point. All I can offer is verbal analysis, which cannot determine if the effect is quantitatively significant.
Earnest with a few jabs.
ReplyDeleteIt does strike me that while the course of thought is logical, it is strangely misapplied to a discipline where analysis requires understanding contradictory truths, indeed where nothing useful is true unless it is balanced by an opposite truth.
What does it mean to call economics 'Political Economy', after all?
I read the following and wonder how it's possible that in another place the writer has a coherent sentence about chemistry despite an utter failure to understand what the word "science" means:
ReplyDelete"Quantitative, scientific discipline," and "technical economic expertise" means we analyze policies by real models, not the opinions and judgments of famous economists turned public officials.
Prof Alan Blinder disagrees, and hopes that the Taylor rule isn't imposed upon central bankers. He prefers randomness.
ReplyDelete"Randomness" isn't the only alternative. There's a distinction between policy rules and instrument rules:
http://worthwhile.typepad.com/worthwhile_canadian_initi/2014/07/taylor-rules-again.html
John, I like your discussion of biology not needing chemistry (at least at first), etc. Along those lines, a new approach to macro treats it as emergent from a large ensemble of market participants. The approach is derived from information theory, and I think the author has had considerable success in the year since he invented it. It allows a limited but long-term view of macro, and is based solidly on theory (not just ad-hoc fits to empirical data), yet it models the macro variables directly, much as P, V and T are modeled directly in the ideal gas law without needing to model every individual molecule, or even know what a molecule is (yet 19th-century statistical mechanics provides a solid theoretical basis for this micro-state-ignoring approach). Sure it's a risk, but you might check it out. He could probably use an established co-author. The general information transfer model it's based on has only been around since 2009, and the author is the first person to develop specific economic models based on it. This explains the philosophy behind the approach in English:
http://informationtransfereconomics.blogspot.com/2014/07/information-transfer-is-state-of-mind.html
And here's a good example of the resulting models being applied to predict inflation:
http://informationtransfereconomics.blogspot.com/2014/07/us-inflation-predictions.html
The model uses very few parameters, and the author is not interested in adding epicycles: so if it ultimately fails, it fails. But so far it looks pretty good! (And it allows a shortcut around any micro-foundations, or models of agents, expectations, indifference curves, etc.)
Good God you'll peddle this nonsense anywhere.
A clearly written and very interesting post - and somewhat surprising. It would be interesting to see if others agree with you on this being the trend, because I have heard many complain that already excessive formalisation and abstraction has continued apace in macroeconomics, even in the wake of the crisis which many thought should have led the subject back to square one.
The Phillips curve shifts (or not) under some specific conditions. It all depends upon the assumptions you introduce in your model. It is foolish now, in 2014, to argue for or against the shift of the PC. And it is even more foolish to apply a model to today's real world without checking if the model's assumptions are true here today.
Good article, which points out how distilling an academic paper often leads to a loss of nuance. For completeness, some discussion of the drawbacks of the DSGE models may have been relevant (intertemporal consumption-based models need very pointed parameter estimates/functional forms for the results to be intuitive, the equilibria move around meaningfully for small data/model changes, etc.). Also, I wouldn't go so far as to say that an IS-LM based framework, if followed within suitable boundaries (and augmented at more extreme values), is not quantitative or rule-based. IMO, that's not a terrible way to run policy, and I would still consider that a model.
I liked this paragraph:
ReplyDelete"Chemistry, until the last few decades, lacked "microfoundations" in quantum mechanics, first because nobody knew quantum mechanics, and later because working out how chemicals react from first principles was too hard. That did not stop chemistry from being a perfectly viable science. Biology, until the last few decades, lacked "microfoundations" in chemistry.
But chemistry and biology worked pretty well. Lucas and Sargent pointed to, and needed to point to, a grand empirical failure."
The comparison with chemistry and biology is fine in theory, but if you were a chemistry or biology practitioner you would be doing "your stuff" in a lab, or within a private company, etc. Whereas today, a focus on empirical methods without foundations will lead to policies that impact everybody else, particularly if we are talking about monetary policy ...
Lucas & Sargent state that: "he [Keynes] thought that it was impossible to explain the characteristics of business cycles within the discipline imposed by classical economic theory, a discipline imposed by its insistence on adherence to the two postulates (a) that markets be assumed to clear, and (b) that agents be assumed to act in their own self-interest."
ReplyDeleteLucas & Sargent state that: "he [Keynes] thought that it was impossible to explain the characteristics of business cycles within the discipline imposed by classical economic theory, a discipline imposed by its insistence on adherence to the two postulates (a) that markets be assumed to clear, and (b) that agents be assumed to act in their own self-interest."
Re (a): Yes. Keynes argues that the labour market does not clear in the way prescribed by classical economics (and this is surely correct - the supply of labour often rises when wages fall); but (b) is nowhere rejected by Keynes. In fact, the second classical postulate that Keynes rejected was the absence of a coherent role for expectations. Non-rational (not irrational) expectations under conditions of uncertainty play a central role in Keynes. To that extent, Shiller has a more coherent lineage from Keynes than the nominal-rigidities crowd.
I would argue further that this model of the business cycle – where labour markets do not clear like other markets – and where perceptions of risks (beliefs/risk premia/expectations) are cyclical, has been empirically strengthened by labor market research and finance literature; and adequately explains business cycles.