By the standards of mainstream media coverage of technical economics, Peter Coy's coverage of HANK (Heterogeneous Agent New Keynesian) models in the New York Times was actually pretty good.
1) Representative agents and distributions.
Yes, it starts with the usual misunderstanding about "representative agents" -- that such models assume we are all the same. Some of this is the standard journalist's response to all economic models: the assumptions are too simple, we need more general assumptions. They don't understand that the genius of economic theory lies precisely in finding simplified but tractable assumptions that tell the main story. Progress never comes from putting in more ingredients and stirring the pot to see what comes out. (I mean you, third-year graduate students looking for a thesis topic.)
But in this case many economists are also confused on this issue. I've been to quite a few HANK seminars in which prominent academics waste 10 minutes or so dumping on the "assumption that everyone is identical."
There is a beautiful old theorem, called the "social welfare function" theorem. (I learned this in graduate school in fall 1979, from Hal Varian's excellent textbook.) People can have almost arbitrarily different preferences (utility functions), incomes, and shocks; companies can have almost arbitrarily different characteristics (production functions); yet the aggregate economy behaves as if there is a single representative consumer and representative firm. The equilibrium path of aggregate consumption, output, investment, and employment, and the prices and interest rates of that equilibrium, are the same as those of an economy where everyone and every firm is the same, with a "representative agent" utility function and "representative firm" production function. Moreover, the representative agent utility function and representative firm production function need not look anything like those of any particular individual person and firm. If I have power utility and you have quadratic utility, the economy behaves as if there is a single consumer with something in between.
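For the technically inclined, here is a bare-bones static sketch of the theorem in my own notation (the full dynamic, stochastic version adds complete state-contingent claims, but the logic is the same):

```latex
% A bare-bones, static sketch of the aggregation result (my notation).
% With complete markets the equilibrium allocation is Pareto optimal, so for some
% welfare weights \lambda_i it solves a planner's problem. Define
\[
  U(C) \;\equiv\; \max_{\{c_i\}} \sum_i \lambda_i\, u_i(c_i)
  \quad \text{s.t.} \quad \sum_i c_i = C .
\]
% The envelope condition gives U'(C) = \lambda_i\, u_i'(c_i) for every person i, so
% aggregate quantities and prices are exactly those of a single agent with utility U.
% Nothing forces U to inherit the functional form of any individual u_i.
```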
If we define the job of macroeconomics as understanding the movement over time of aggregates -- how GDP, consumption, investment, employment, the price level, interest rates, stock prices, etc. move over time, and how policies affect those movements -- then macroeconomics can ignore microeconomics. (We'll get back to that definition in a moment.)
Now, uniting macro and micro is important. Macro estimation being what it is, it would be awfully nice to use micro evidence. The program kicked off by Kydland and Prescott to "calibrate" macro models from micro evidence would be very useful. Kydland and Prescott may have had a bit of grass-is-greener optimism about just how much precise evidence macroeconomists have on firms and people, but it's a good idea. Adding up micro evidence to macro is hard, however. Here "aggregation theory," often confused with the "social welfare function" theorem, comes up, more as a nightmare from graduate school. The conditions under which the representative agent's preferences look like those of individual people are much more restrictive.
Like all good theorems, this one rests on assumptions, and the assumptions are false. The crucial assumption is complete markets, and in particular complete risk sharing: There is an insurance market in which you can be compensated for every risk, in particular losing your job.
A generalized form still works, however. There is still a representative agent, but it cares about distributions. The representative agent utility function depends on aggregate consumption and aggregate labor supply, but now also on statistics about the distribution of consumption across people. In asset pricing, the Constantinides-Duffie model is a great example: the cross-sectional variance of consumption becomes a crucial state variable for the value of the stock market, not just aggregate consumption.
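From memory -- the derivation is in the standard asset pricing treatments, so treat the exact constant as my recollection rather than gospel -- the Constantinides-Duffie discount factor looks like

```latex
% Constantinides-Duffie, roughly: with power utility (risk aversion \gamma) and
% permanent, lognormally distributed idiosyncratic consumption shocks, the stochastic
% discount factor that prices assets is
\[
  m_{t+1} \;=\; \beta \left(\frac{C_{t+1}}{C_t}\right)^{-\gamma}
  \exp\!\left( \frac{\gamma(\gamma+1)}{2}\, y_{t+1}^{2} \right),
\]
% where y_{t+1}^2 is the cross-sectional variance of idiosyncratic consumption growth.
% Aggregate consumption is no longer a sufficient statistic; dispersion enters the
% representative agent's marginal utility directly.
```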
All economic theorems are false of course, in that the assumptions are not literally true. The question is, how false? Conventional macroeconomics comes down to a description of how aggregates evolve over time, based on past aggregates:
[aggregate income, consumption, employment, inflation... next year ] = function of [aggregate income, consumption, employment, inflation, policy variables... this year ] + unforecastable shocks.
That's it. That's what macroeconomics is. Theory, estimation and calibration to figure out the function. [Update. I added policy variables, e.g. interest rates, to the function. And, the point of macro is to figure out how policies affect the economy, and furthermore, with an objective in hand, to derive optimal policies. Thanks to François Velde for pointing out the omissions in the comments.]
If HANK is useful to macroeconomics, then, it must be that adding distributional statistics helps to describe aggregate dynamics. Reality must be
[aggregate income, consumption, employment, inflation... next year ] = function of [aggregate income, consumption, employment, inflation, distribution of consumption, employment, etc., policy variables,... this year ] + unforecastable shocks.
So here is a central question I have for HANK modelers: Is that true? Do statistics on the distribution across people of economic variables really help us to forecast or understand aggregate dynamics? So far, my impression is, not much. The social welfare function theorem can be wrong in its assumptions, yet still be a pretty good approximation. And "heterogeneity" has been around in macro for a long time, but never has seemed to matter much in the end. (The investment literature of the early 1990s is a great example.) But I would be happy to be proved wrong. This post is as much a suggestion for HANK modelers as a critique.
Another possibility: Maybe HANK is about aggregation after all. Can we actually use micro evidence, and add it up constructively, to learn what the representative-agent social welfare function is? Even before HANK, there were good examples. For example, the literature on labor supply: Macro models want people to work more in response to temporarily higher wages. Most individual people work 8 hours a day or zero, so micro evidence finds a small response. But a small number of people move from non-work to work as wages rise. So the representative agent can have a much larger elasticity than individual people. And you have to understand labor market structure, and the distribution of who is available to work, to add up from micro evidence to macro. Here, I would like to know the basic functional form -- how much does the SWF care about today vs. tomorrow, risk, work vs. leisure, as well as any distributional effect?
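A toy calculation makes the extensive-margin point. The numbers below are entirely made up; this is an illustration of the aggregation logic, not a calibration of anything:

```python
import numpy as np

# Toy example: indivisible labor (work an 8-hour shift or not at all) with
# heterogeneous reservation wages. Individual intensive-margin elasticity is zero,
# but aggregate hours still respond to wages through who chooses to work.
rng = np.random.default_rng(0)
n = 100_000
reservation_wage = rng.lognormal(mean=3.0, sigma=0.3, size=n)  # made-up distribution

def aggregate_hours(wage):
    """Total hours when everyone with reservation wage below `wage` works a full shift."""
    return 8.0 * np.sum(wage > reservation_wage)

w0 = 20.0
h0 = aggregate_hours(w0)
h1 = aggregate_hours(w0 * 1.01)  # a 1% temporary wage increase

# Arc elasticity of aggregate hours with respect to the wage
elasticity = ((h1 - h0) / h0) / 0.01
print(f"aggregate hours elasticity ~ {elasticity:.2f}")  # well above zero
# Despite every individual's hours being 0 or 8, the "representative agent"
# looks as if it has a sizable labor supply elasticity.
```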
2) Income effects
Coy also goes on with the usual New York Times schtick about how dumb and irrational all the little hoi polloi are. (Of course we of the elite, and the federal government handing out nudges, would never be behavioral.) But you don't need HANK to assume that the representative investor is dumb either. He then describes pretty well where the current literature is.
Behind this, however, lies one of the major features of HANK models so far: one of their most important uses has been to put current income in the IS equation.
(Economists talk amongst yourselves for a bit while I explain this to regular people. So far, the central description of demand in new Keynesian models is based on "intertemporal substitution:" When the real interest rate is higher, you consume a bit less today, save a bit more, so that you can consume a lot more tomorrow. That is the crucial mechanism by which higher real interest rates (say, induced by the Fed) lower demand today. Old Keynesian models didn't have people in them at all, but hypothesized that consumption simply follows income. That adds a more powerful mechanism, the "multiplier:" an initial income drop lowers consumption, which lowers income and around we go. )
HANK models often add some "hand to mouth" consumers. Some people think about today vs. the future, but others just eat what income they make today. You can get this out of "rational, liquidity-constrained" people, but that's typically not enough. To get significant effects, you need people who just behave that way. So there is a little bit of behavioral economics in many HANK models. But it's a little spice in the otherwise Lucas soup.
In equations, the standard model says
consumption today = expected consumption tomorrow - (number) x real interest rate
After an immense amount of algebra and computer time, HANK models allow you to write
consumption today = (number) x income today + (number) x expected consumption tomorrow - (number) x real interest rate
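To see what that extra income term buys, here is a deliberately crude, static back-of-the-envelope -- not a solved HANK model, just the feedback loop that the income term reintroduces:

```python
# Crude, static back-of-the-envelope (not a real HANK model): a fraction `lam` of
# consumers spend current income one-for-one ("hand to mouth"); the rest follow the
# forward-looking Euler equation and, for this exercise, do not react to current income.
# With goods-market clearing y = c + g, a demand shock dg is multiplied by 1/(1 - lam).

def output_response(dg, lam):
    """Response of output to a demand shock dg when a share lam is hand to mouth."""
    if not 0 <= lam < 1:
        raise ValueError("hand-to-mouth share must be in [0, 1)")
    return dg / (1.0 - lam)

for lam in (0.0, 0.2, 0.4):
    print(f"hand-to-mouth share {lam:.1f}: dy = {output_response(1.0, lam):.2f} x dg")
# 0.0 -> 1.00, 0.2 -> 1.25, 0.4 -> 1.67: the old-Keynesian multiplier sneaks back in
# through the income term that HANK adds to the consumption/IS equation.
```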
New Keynesian models were invented in the hope that they would turn out to be holy water sprinkled on old-Keynesian thinking, for example justifying big spending multipliers and strong monetary policy. They turned out to be nothing of the sort once you read the equations. A movement is underway to modify (torture?) new-Keynesian models to look like old-Keynesian models, to bring macro back to roughly the 1976 edition of Dornbusch and Fischer's textbook. Complex expectation-formation theories and this aspect of HANK can be digested that way.
So here is my second question for HANK modelers: Is this it? When we boil it all down to the linearized equations of the model you take to data, to explain aggregates and monetary and fiscal policy, is there a big bottom line beyond an excuse to revive bits of the Keynesian consumption function? That too is an honest question, and perhaps a suggestion--show us the textbook back of the envelope bottom line model. (It would be awfully nice if distributions mattered here too, theoretically, empirically, and quantitatively.)
3) Micro implications of macro
Maybe you disagreed a few paragraphs ago with my definition of macroeconomics as only concerned with the movement of aggregates over time. Talking with some of my HANK colleagues, a different purpose is at work -- figuring out the effects of macroeconomic events and policies on different people. Recessions fall harder on those who lose jobs, and on certain income and other groups; harder on some industries and areas than others. Here HANK dovetails with concerns over income diversity and "equity."
That's a perfectly good reason to study it, but let's then be clear: if that's the case, HANK really doesn't change our understanding of how policies and events move aggregates around; it is really just about understanding how those aggregates affect different people differently.
That may change calculations of optimal monetary policy. If the objective function cares negatively about income diversity, then adding HANK may produce a model that makes no difference at all for the effect of monetary policy on aggregates, but gives a greater weight to employment vs. inflation. ("May!" Inflation also falls harder on people experiencing low incomes, so concerns for equity could go the other way too. Thanks to a correspondent for pointing that out.) Many models have observationally equivalent predictions for aggregates but different welfare implications, and the same model can have different welfare implications if you put in different preferences for distributions across people. But surely HANK has more to offer than a long-winded excuse for dovishness towards tolerating inflation in place of unemployment.
Also, in the big picture this seems like a classic answer in search of a question. If you care about the less fortunate, you start with the big issues: crime, awful schools, family breakdown, opportunity. The additional benefit for the less fortunate from the level of the overnight federal funds rate might be fun to isolate in a model, but we are really staring at a caterpillar on a leaf of a tree and missing the forest of economic misfortune.
4) Last thoughts
I hesitate to write, as I am a consumer not a producer of HANK research, and thus will probably get things wrong or show my limited knowledge of the literature. Please fill the comments with corrections, amplifications, pointers to good papers, etc.
There is a tendency in economics to pursue a new technical possibility without really knowing where it's going or why. That's not unhealthy; figure out what you can do first, and what to do later. The why always does come later. This was true of rational expectations, real business cycles, new-Keynesian models and more. Now that HANK is pretty well developed and is coming out in public, with admiring New York Times articles, it is worth assessing the why, the bottom line, what it does.
I'm also hesitant to write, and especially to write too critically. I vividly recall being in grad school when some speaker (I have mercifully forgotten who) went on a tirade about all these young whippersnappers using too much math and not enough intuition and just being in love with building models. I vowed that if I ever thought that, I would retire. What do we say to the angel of old age? Not today. Bring it on, and let's all figure out what it means.
Update:
Alessandro Dovis comments below, reminding me of his recent QJE paper with David Berger and Luigi Bocola, "Imperfect Risk Sharing and the Business Cycle." The paper directly evaluates the question: how much does heterogeneity matter for aggregate dynamics? The headline answer is "not much, though maybe more at the zero bound."
In their words: "deviations from perfect risk sharing implied by this class of models account for only 7% of output volatility on average but can have sizable output effects when nominal interest rates reach their lower bound."
Now, 7% might actually be a lot. A little secret of contemporary macro models is that none of them explain a lot of output volatility. In my above characterization aggregates next year = function of aggregates today + shocks, the shocks are big and account for most variation in aggregates. Most inflation comes from inflation shocks, not movements in other variables like employment, especially as fed through a model. This isn't necessarily a failing of models. New Keynesian models are designed to understand how monetary policy affects output, not to explain why output varies. Milton Friedman thought that most business cycles were due to monetary policy mistakes, so understanding the former is the same as the latter, but he seems to have been wrong about that, at least since 1982. Or maybe not.
The paper's computation takes the heterogeneity in the data and asks how much it affects the new-Keynesian model's predictions for output, employment, etc. I have in mind a slightly different question: even without much theory, how much can data on heterogeneity actually improve forecasts of output, employment, and so forth? Do distributional variables improve VAR forecasts? Let me know if you have an answer to that one.
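Concretely, the horse race I have in mind looks something like the sketch below. The data file and column names are placeholders of my own invention; substitute real aggregates plus whatever dispersion measure you like:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Sketch of the forecasting horse race, with placeholder data. `macro.csv` and its
# columns are hypothetical; swap in real aggregates plus a dispersion measure
# (e.g. a cross-sectional variance of consumption or income growth).
df = pd.read_csv("macro.csv", index_col=0, parse_dates=True)
aggregates = ["output_growth", "consumption_growth", "inflation", "fed_funds"]
distributional = aggregates + ["consumption_dispersion"]

def oos_rmse(columns, train_frac=0.7, lags=4, target="output_growth"):
    """One-step-ahead out-of-sample RMSE for `target` from a VAR in `columns`."""
    data = df[columns].dropna()
    split = int(len(data) * train_frac)
    errors = []
    for t in range(split, len(data) - 1):
        fit = VAR(data.iloc[:t]).fit(lags)
        forecast = fit.forecast(data.iloc[t - lags:t].values, steps=1)[0]
        errors.append(data[target].iloc[t] - forecast[columns.index(target)])
    return np.sqrt(np.mean(np.square(errors)))

print("RMSE, aggregates only:   ", oos_rmse(aggregates))
print("RMSE, plus distribution: ", oos_rmse(distributional))
# If the second number is not noticeably smaller, the distributional variable is not
# helping to forecast aggregate dynamics -- the question posed in the post.
```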
The paper has a crystal-clear summary of the representative agent theorem and its important extension. They show how distributional variables enter into a representative agent representation as simple "wedges." Using a representative agent does not mean you assume all people are identical!
There is also a great literature review on the general understanding that distributional variables don't matter much for aggregates, starting with Krusell and Smith. A parallel literature in finance quantitatively examined the beautiful Constantinides-Duffie mechanism, finding that uninsured idiosyncratic risk isn't large enough or variable enough to account for asset pricing puzzles. So far -- that's all from the 1990s, and a lot of the point of HANK is to reverse that impression.
Update
See Matthew Rognlie’s superb answer below. I ask a lot of questions but seldom get such clear and detailed answers! Thanks for the short course on the HANK-model big picture!
Update 2
Ben Moll writes
Hi John, thanks a lot for the very thoughtful post. Lots of great food for thought. In case you hadn't seen it, Tom Sargent posted a new paper a few days ago that has a really great discussion of the main takeaways from HANK. See in particular sections 5 and 7. For example, see the point that HANK "challenges the neoclassical synthesis and a widely-believed prescription for separating macro policy design from policies to redistribute income and wealth." But plenty of other great points there too. Finally, yes, Matt Rognlie's response is really fantastic.
Fun food for thought! I offer one minor comment, with the usual caveat that I'm a nobody, merely an interested consumer of (and not producer of) HANK DSGE work.
You say, in a few different ways, "Is there a big bottom line [to HANK] beyond an excuse to revive bits of the Keynesian consumption function?" But perhaps they're not just - and I apologize in advance for this - "raising Keynes" (or maybe HANKenstein's Monster?). HANK, and the broader agenda on heterogeneous agents in macro post-Aiyagari, is a natural direction for theory to pursue in light of the advancements in data and empirical work over the last 30 years.
Two prominent examples come to mind. On expectations, the Bordalo, Gennaioli, Shleifer work on diagnostic expectations and the Coibion and Gorodnichenko work on consumer expectations pose a lot of big questions (though maybe fewer answers). Second, for HANK, I find the empirical work on household MPCs, wealth, and the composition of household balance sheets (e.g. Broda and Parker - or, I guess, most of Parker's work) very compelling, and difficult to reconcile with standard "RANK" or "TANK" benchmarks. A good motivation for that early HANK work is simply, "How do we make sense of this evidence?" And because I can't resist a third example - Ganong and Noel's paper on consumption behavior around UI expiration is an absolutely wonderfully written piece of research demonstrating how careful empirics can inform deficiencies in theory.
All this is to say: isn't it more charitable to assume that HANK is the evolution of theory in response to empirics? Macroeconomists are storytellers, but a defining feature of our stories is that they generate testable hypotheses. As we accumulate new evidence that is hard to reconcile with our workhorse models - and obtain new data that allows us to test hypotheses from richer models - our models move in those directions.
Re charitable interpretation; yes, it does seem as though John has ascribed dubious motivations to those working on HANK models. He's just as vulnerable to the "us vs them" mentality as anyone else.
I really object to this comment. If you can point to any motivation accusations, post them and I'll remove them immediately. I think of attacking motivations, especially without evidence, as one of the most supremely unethical rhetorical devices around. I try to avoid even the unintentional appearance of doing so.
I should add -- many HANK authors are valued colleagues and good friends, whom I admire greatly. This is interesting and hard work. If that wasn't obvious, now it is explicit.
Likewise, allow me to clarify that I very strongly disagree with Egbert's comment, and do not intend for my top level comment to give off the impression that John is making any underhanded accusations. My original comment merely suggests one possible answer to a question that John posed: is there a bottom line to HANK that is deeper than reviving Keynesian consumption functions? This is a completely fair and interesting question. I do not see any interpretation of John's post that suggests ulterior motives for the HANK crowd, and he does a great job at fostering an open discussion on this blog.
Thank you for your interpretation of the Coy article. Being a bit bewildered after reading it, your writing still has me bemused but feeling better. In truth, I just want the critical path through the problem for me to succeed when economic modeling is done.
A few thoughts:
i) HANK has a different evolution of the macro aggregates in response to the same shock than RANK does; e.g., Kaplan & Violante (2018). The actual evolution of the aggregates in a HANK model does necessarily depend on the distributional issues (as you express it; in my mind it is more the distribution of different reactions, rather than the distribution of circumstances per se, which matters). This is a comment about the models, not necessarily about reality.
ii) The channels by which policy works are different in HANK: in RANK, monetary policy is all about intertemporal substitution, but in HANK there are other channels (e.g., some of the population becomes unemployed and reacts to this).
For both of these first two points, though, whether or not the HANK versions are better than the RANK versions is an empirical question.
Perhaps the biggest issue, to my mind, is that all the evidence is in the microdata and so we just have to go there. The Phillips Curve provides a nice example: Mavroeidis, Plagborg-Møller & Stock (2014) show that estimates that only use macro data have a weak-instruments issue and cannot pin down the slope. The way out is to use the micro data, as in, e.g., Hazell, Herreño, Nakamura & Steinsson (2022). As soon as we start doing this, it is only natural that we want models capable of matching the things we see in the micro data, so that we can be sure the model is doing what we see in the data in terms of plausible mechanisms.
[Worth mentioning that Hazell, Herreño, Nakamura & Steinsson (2022) would count as representative agent. They have multiple regions, rather than different households. The source of the heterogeneity is not important; what is important is the movement towards disaggregated data in search of empirical evidence that contains enough detail for us to tell macro stories/models apart.]
[Kaplan & Violante (2018) does not show that the dynamics of aggregates in HANK cannot be done with RANK, just that if you use the "same" models then they give different impulse responses to the "same" shocks.]
“That's what macroeconomics is. Theory, estimation and calibration to figure out the function.” So we are exonerated from policy evaluation and welfare considerations? What a relief.
Interesting and thought-provoking post! David Berger, Luigi Bocola and I do something along the lines of what you suggest in part 1 of the post in this paper: https://academic.oup.com/qje/article-abstract/138/3/1765/7080182
Our goal is to quantify how much imperfect risk sharing contributes to aggregate fluctuations. We show (somewhat trivially) that the contribution of imperfect risk sharing to aggregate fluctuations is fully summarized by two statistics of the equilibrium cross-sectional distribution of households’ consumption shares and relative wages. We call these two statistics preference wedges, because they can be interpreted as a time-varying discount factor and disutility of labor in an otherwise standard representative agent economy. Thus, the contribution of the preference wedges in a RA economy is the contribution of imperfect risk-sharing to aggregate dynamics. (In a sense, these two wedges are a measure of how much standard aggregation fails.)
We then measure the preference wedges using household-level data and feed them into a standard NK model. We find that deviations from perfect risk sharing account for only 7% of output volatility, but they can have much larger effects when nominal interest rates reach their lower bound. For example, the preference wedges account for about one-fourth of the output drop observed during the Great Recession from 2007 to 2009.
Penrose once quipped he believes there is an intermediary realm between the quantum and classical rules of physics. Similarly, I've wondered if there is an intermediary realm between the micro and macro worlds.
As my musings vary (ha ha pun intended), my mind has wandered to a new possibility: that there is no intermediary realm and instead the micro and macro worlds, through their feedback loops, reinforce one another. I suppose this makes sense on paper. We can take micro models, arbitrarily glob agents together and we have a market; can do this ad infinitum. But instead the trick is to glean insights from micro and overlay onto the macro and vice versa: in this sense the economy functions as a series of repeating structures where the only major difference is scale. And what does this point to?: a small realm of chaos theory as it relates to repeating structures. (Dr. Cochrane once used the word "fractal" in one of his asset pricing classes).
Maybe there is new earth to till in economics!
John,
From above:
"Now uniting macro and micro is important. Macro estimation being what it is, it would be awfully nice to use micro evidence. The program kicked off by Kydland and Prescott to calibrate macro models from micro evidence would be very useful. Kydland and Prescott may have had a bit of grass-is-greener optimism about just how much precise evidence macroeconomists have on firms and people, but it's a good idea. Adding up micro evidence to macro is hard, however. Here aggregation theory, often confused with the social welfare function theorem comes up, more as a nightmare from graduate school. The conditions under which the representative agent preferences look like individual people are much more restricted."
"Like all good theorems, this one rests on assumptions, and the assumptions are false. The crucial assumption is complete markets..."
https://en.wikipedia.org/wiki/Complete_market
"In economics, a complete market (aka Arrow-Debreu market or complete system of markets) is a market with two conditions: Negligible transaction costs and therefore also perfect information. Every asset in every possible state of the world has a price."
Even with complete markets (as defined above), you don't get viable macro-economic models with micro foundations without first identifying how economic decisions are made (see political economy).
With two party economic decision making relying on both parties agreeing to terms, 75% of all potential transactions fail.
Party #1 - No, Party #2 - No : Transaction fails
Party #1 - No, Party #2 - Yes : Transaction fails
Party #1 - Yes, Party #2 - No : Transaction fails
Party #1 - Yes, Party #2 - Yes : Transaction succeeds
With three party economic decisions relying on a majority agreeing to terms, only 50% of all potential transactions fail.
Party #1 - No, Party #2 - No, Party #3 - No : Transaction fails
Party #1 - No, Party #2 - No, Party #3 - Yes : Transaction fails
Party #1 - No, Party #2 - Yes, Party #3 - No : Transaction fails
Party #1 - Yes, Party #2 - No, Party #3 - No : Transaction fails
Party #1 - No, Party #2 - Yes, Party #3 - Yes : Transaction Succeeds
Party #1 - Yes, Party #2 - No, Party #3 - Yes : Transaction Succeeds
Party #1 - Yes, Party #2 - Yes, Party #3 - No : Transaction Succeeds
Party #1 - Yes, Party #2 - Yes, Party #3 - Yes : Transaction Succeeds
So if you look at a bunch of micro-economic data (representing previous economic decisions) and try to extrapolate or confirm a macro-economic model from that data, you will struggle without understanding how those economic decisions were made.
"If I have power utility and you have quadratic utility, the economy behaves as if there is a single consumer with something in between."
You have power utility and your wife has quadratic utility - economic decisions in your family are two party decisions resulting in one economic state.
You have power utility, your wife has quadratic utility, and your child has another type of utility - economic decisions in your family are three party decisions resulting in a different economic state.
Thanks for the comments about HANK, and sorry I’m late to arrive here!
You cover a lot of ground, but let me focus on your second comment: “is there a big bottom line beyond an excuse to revive bits of the Keynesian consumption function?”. Yes, it is true that HANK models do have some old Keynesian features—but with their greater attention to dynamics and micro detail, they also provide many new insights.
For instance, in our “Intertemporal Keynesian Cross” paper, my coauthors (Adrien Auclert and Ludwig Straub) and I show that heterogeneous-agent models give rise to a dynamic version of the old, static Keynesian cross. Spending today is affected by income today, but also (to a lesser degree) by income yesterday and anticipated income tomorrow—and these “intertemporal MPCs” can be disciplined by micro data. It turns out that in this dynamic environment, deficit-financed spending has a larger and more persistent aggregate demand effect.
Crucially, this effect doesn’t disappear the instant that the government stops spending, as one might predict from an old Keynesian consumption function where only “current income” appears. Instead, it persists as long as excess savings remain on the balance sheets of high-MPC households—which can continue for a while, since the consumption boom itself leads to higher incomes. (In a short paper, “The Trickling Up of Excess Savings”, we discuss this mechanism in a more stylized way.)
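In its simplest partial-equilibrium form, that fixed point is just dY = M dY + dG in matrix form, where M collects the intertemporal MPCs. A toy rendering with made-up iMPCs (nothing like an actual calibration) shows the persistence:

```python
import numpy as np

# A toy version of the fixed point in the "intertemporal Keynesian cross":
# dY = M dY + dG, so dY = (I - M)^{-1} dG, where M[t, s] is the intertemporal MPC
# (spending at date t out of an income windfall at date s). The numbers below are
# made up, not the paper's; the point is only the shape of the mechanism.
T = 20

def impc_matrix(decay):
    """Lower-triangular iMPC matrix: spend a share (1-decay) of the remaining windfall each period."""
    M = np.zeros((T, T))
    for s in range(T):
        for t in range(s, T):
            M[t, s] = (1 - decay) * decay ** (t - s)
    return M

mu = 0.3                                    # share of high-MPC households (assumed)
M = mu * impc_matrix(0.3) + (1 - mu) * impc_matrix(0.95)

dG = np.zeros(T)
dG[0] = 1.0                                 # one-time deficit-financed spending at date 0
dY = np.linalg.solve(np.eye(T) - M, dG)     # the intertemporal Keynesian cross fixed point

print("output response, first 8 periods:", np.round(dY[:8], 2))
# The response is larger than one on impact and stays positive after the spending stops,
# because excess savings keep getting spent by high-MPC households.
```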
We originally released our paper in 2018, and I think its prediction of a large and persistent demand effect from deficits fared quite well in the post-pandemic era. The canonical old Keynesian consumption function might predict the “large effect” part, but not the “persistent” part (why did huge deficits in 2020-21 produce elevated demand that continued long after?). It also doesn’t give the same insight into how demand from excess savings eventually dissipates—through those assets eventually “trickling up” to low-MPC households, and also leaking abroad to foreigners through trade deficits (as we covered in a recent Macro Annual paper).
A lot of important mechanisms underlying recent macro events require at least some form of distribution. For instance, when it is easy to borrow against real estate, either because of loose credit conditions or surging house prices, we tend to see a consumption boom—because many of the people borrowing at the margin have much higher MPCs than the representative agent. This is one reason why inflation feeds on itself: higher prices increase home equity, creating more room to borrow and spend (see Auclert 2019’s “Fisher channel”). On the other hand, when credit constraints suddenly tighten, like they did in the late 2000s, it’s contractionary for the same reason (see Guerrieri Lorenzoni 2017, one of the earliest HANK papers). For these mechanisms, it’s not enough to add current income to the consumption function—you need to think in more detail about balance sheets and heterogeneity.
And monetary policy, a focus of the early HANK literature, is a place where I think the synthesis between old and new is especially useful. Kaplan, Moll, Violante 2018’s big point was that much of the effect of rate cuts on demand is “indirect”, coming through income-consumption feedbacks triggered by a smaller “direct” effect. This has a certain old Keynesian flavor. But since it’s situated in a modern, dynamic model, it’s consistent with newer ideas that we now view as absolutely essential—for instance, that it’s the full forward-looking path of rates that matters, not just the current policy rate. (Even as HANK models become more behavioral, this will persist: forward-looking traders set long-term yields in anticipation of monetary policy, and then those yields transmit to the real economy.)
The bottom line is that while HANK certainly has old Keynesian features, it owes a lot to “modern” macroeconomics as well—creating what I believe is a new and very valuable synthesis, one that speaks to both micro data and macro current events in a compelling way.
Thanks for a great post!
There is a problem in assuming that "all agents in the economy are identical." If all agents are identical -- i.e., same preferences, same utility functions -- then there will be no exchange in the economy.
ReplyDelete"Moreover, the representative agent utility function and representative firm production function need not look anything like those of any particular individual person and firm. If I have power utility and you have quadratic utility, the economy behaves as if there is a single consumer with something in between. "
If that is the case, is there any theorem on the mapping from the risk aversion coefficients of the cross-section of households to the risk aversion coefficient of the representative agent? Should we expect the mean, or what? Had we not known about this mapping, how could one assert that the implied risk aversion from the CCAPM is way too high? Very curious.