Friday, September 22, 2017

A paper, and publishing

Even at my point in life, the moment of publishing an academic paper is one to celebrate, and a moment to reflect.

The New-Keynesian Liquidity Trap is published in the Journal of Monetary Economics -- online, print will be in December. Elsevier (the publisher) allows free access and free pdf downloads at the above link until November 9, and encourages authors to send links to their social media contacts. You're my social media contacts, so enjoy the link and download freely while you can!

The paper is part of the 2012-2013 conversation on monetary and fiscal policies when interest rates are stuck at zero -- the "zero bound" or "liquidity trap." (Which reprised an earlier 2000-ish conversation about Japan.)

At the time, new-Keynesian models and modelers were turning up all sorts of fascinating results, and taking them seriously enough to recommend policy actions. The Fed can strongly stimulate the economy with promises to hold interest rates low in the future. Curiously, the further in the future the promise, the more stimulative.  Fiscal policy, even totally wasted spending, can have huge multipliers. Broken windows and hurricanes are good for the economy. And though price stickiness is the central problem in the economy, lowering price stickiness makes matters worse. (See the paper for citations.)

The paper shows how tenuous all these predictions are. The models have multiple solutions, and the answer they give comes down to an almost arbitrary choice of which solution to pick. The standard choice implies a downward jump in the price level when the recession starts, which requires the government to raise taxes to pay off a windfall to government bondholders. Picking equilibria that don't have this price level jump, and don't require a jump to large fiscal surpluses (which we don't see), I overturn all the predictions. Sorry, no magic. If you want a better economy, you have to work on supply, not demand.

Today's thoughts, though, are about the state of academic publication.

I wrote the paper in the spring and summer of 2013, posted it to the internet, and started giving talks. Here's the story of its publication:

September 2013. Submitted to AER; NBER and SSRN working papers issued. Blog post.
June 2014. Rejected from AER. 3 good referee reports and thoughtful editor report.
October 2014. Submit revision to QJE.
December 2014. Rejected from QJE. 3 more thoughtful referee reports and editor report.
January 2015. Submit revision to JME.
April 2016. Revise and resubmit from JME. 3 detailed referee reports and long and thoughtful editor report.
June 2016. Send revision to JME.
July 2017. Accept with minor revisions from JME. Many (good) comments from editor.
August 2017. Final revision to JME.
September 2017. Proofs, publication online.
December 2017. Published.

This is about typical. Most of my papers are rejected at 2-3 journals before they find a home, and 3-5 years from first submission to publication is also typical. It's typical for academic publishing in general. Parts of this process went much faster than usual. Three months for a full evaluation at QJE is fast. And once accepted, my paper sped through the JME. Another year or two in the pipeline between acceptance and publication is typical.

Note that most journals advertise average time to decision. But what matters is average time to publication for the papers they publish, and what really counts is average time to publication in the journal system as a whole.

Lessons and thoughts?

  • Academic journal publication is not a useful part of communication among researchers, or of communication between research and policy.

Anyone doing research on the zero bound in new-Keynesian models over the last 4 years, and carrying on this conversation, interacted with the working paper version of my paper (if at all), not the published version. Any work relying only on published research is hopelessly out of date.

Interest rates lifted off the zero bound quite a while ago, so in the policy conversation this publication at best goes into the shelf of ideas to be revisited if the next recession repeats the last one with an extended period of zero interest rates, and if we see repeated invocation of the rather magical predictions of new-Keynesian models to cure it. If the next recession is a stagflation or a sovereign debt crisis, you're on your own.

Rather than a means of communication,

  • Journal publications have become the archive, 

the ark, the library, the place where final, perfected versions of papers are carved in stone for future generations. (Some lucky papers that make it to graduate reading lists more than 5-10 years after their impact will be read in final form, but not most.)

And this paper is perfected. The comments of nine very sharp reviewers and three thoughtful editors have improved it substantially, along with dozens of drafts. Papers are a conversation, and it does take a village. The paper also benefitted from extensive comments at workshops, and several long email conversations with colleagues.

The passage of time has helped as well. When I go back to a paper after 6 months to a year, I find all sorts of things that can be clearer. Moreover, in the time between first submission and last revision, I wrote four new papers in the same line, and insights from those permeate back to this one.

So, in the end, though the basic points are the same, the exposition is much better.  It's a haiku.  Every word counts.

But such perfection comes at a big cost, in the time of editors and referees, my time, and most of all the cost that the conversation has now moved on.

The sum length of nine referee reports, plus four reports by three editors, is much longer than the paper. Each one did a serious job, and clearly spent at least a day or two reading the paper and writing thoughtful comments. Moreover, though the reports were excellent, past the first three they by and large made the same points. Was all this effort really worthwhile? I offer some thoughts below on how to economize on referee time.

Of course, for younger people

  • Journal articles are a branding and sorting device. 

Many institutions give tenure, chairs, raises, and other professional advancement based at least in part on numbers and placement of publications. For that purpose, timeliness of publication is less of a problem, but with a six-year tenure clock at many places and five-year lags, timeliness of the acceptance that provides the quality rating is a big problem. The sorting and branding function isn't working that well either. But having journals outsource quality evaluation was always an imperfect institution. Maybe we should just have star ratings instead -- seriously, start up a website devoted to crowd-sourcing working paper evaluation. Or perhaps tenure committees will have to actually start reading papers. I don't think the journals see this as their main function either. They're set up to publish papers, not judge people's tenure, so improving journals as a tenure-granting mechanism will be a hard sell.

There is some good news in this data point, relative to the state of journal publishing 15-20 years ago. (See Glenn Ellison's superb "The slowdown in the economics publishing process," one of my proudest moments as a JPE editor.)

  • Journals are doing fewer rounds, more desk rejection, more one round and up or out.  

Journals had gotten into a rut of asking for round after round of revisions. Now there is a strong ethic of either rejecting the paper, or doing one round of revisions and then either publishing with minor changes or not. Related,

  • Journal editors are more decisive. 

Journal editors have become, well, editors. The referees provide advice, but the editor thinks about it, decides which advice is good and which is not, and makes the final call. Editors used to defer decisions to referees, which is part of the reason there were endless revisions. This change is very good. Referees have little incentive to bring the process to a close, and they don't see the pipeline of papers to the journal. They are not in a good position to find the right balance of perfection and timeliness.

In my case, editors were very active. The referees wrote thoughtful reports, but largely made similar points. In fact, the strongest advice to reject came at the JME. But the AER and QJE editors were not impressed in the end by the paper, and the JME editor was.

So, with this state of affairs in mind, how might we all work to improve journals and the publication process?

I will take for granted that greater speed, and making journals more effective at communication and not just archiving and ranking, is important. For one thing, to the extent that they continue to lose the communication function, people won't send articles there. Already you can notice that after tenure, more and more economists publish in conference volumes, invited papers, edited volumes, and other outlets (blogs!). The fraction willing to take on the labor of love of journal publication declines quickly with age. Research productivity and creativity do not decline quite so quickly. (I hope!)

Always the free-market economist, I note that conference volumes, edited volumes, and solicited papers in regular journals seem to be healthy and increasing, which is a natural response to journal slowdown. This is a way to get papers in print more quickly. In the early days of the internet I had a rule never to publish in volumes, as they disappeared to library shelves and could not be found electronically. Now many of them have solved that problem. The NBER Macro Annual and Carnegie-Rochester conferences are good examples. The Review of Finance editor recently solicited my "Macro-finance" essay, which therefore sped through publication. Active editors are also often taking a more active role in soliciting promising working papers, which helps to make the editor-to-paper match. But this isn't an ideal state of affairs either. Conference volumes tend towards commissioned work. Original work by people outside the social network of the conference organizers and editors has a tough time. (The market responds: organize more conferences.)

Suggestion one:

  • Adopt the golden rule of refereeing

Around any economist cocktail party, there is a lot of whining that journals should do x, y, and z to speed things up. I start with what you and I can do. The rule: do unto others as you would have them do unto you. If you complain about slow journals, well, how quickly do you turn around reports?

My recommendation, which is the rule I try to follow: Answer the email within a day. Spend an hour or two with the paper, and decide if you will referee it or not. If not, say so that day. If you can give a quick reaction explaining your reason, that helps editors. And suggest a few other referees. Often editors aren't completely up to date on just who has written what and who is an ideal fit. If you're not the ideal fit, then help the editor by finding a better fit, and do it right away.

If you agree to do a report, do it within a week. If you can't do it this week, you're not likely to be able to do it five weeks from now, so say no.

More suggestions:

  • Reuse referee reports
Do we really need nine referee reports to evaluate one paper? When I send a rejected paper to a new journal, I always offer the editor the option of using the existing referee reports, along with my response describing how I have, or have not, incorporated the referees' suggestions. Nobody has ever taken me up on this offer. Why not? Especially now that editors are making more decisions? Some people mistakenly view publication as a semi-judicial proceeding, in which authors have a "right" to new opinions. Sorry, journals are there to publish papers.

Why not open refereeing? The report, and author's response, go to a public repository that others can see. Why not let anyone comment on papers? Authors can respond. Often the editor doesn't know who the best person is to referee a paper. Maybe a conference discussant has a good insight. At least one official reviewer could benefit from collecting such information. Some science journals do this. 

Some people would hate this. OK, but perhaps that should be a choice. Fast and public, or slow and private. 

While we're at it, what about
  • Simultaneous submission. Competition (heavens!)  

Journals insist that you only send to one journal at a time. And then wait a year or more to hear what they want to do with it. Especially now that we are moving towards the editor-centric system, and the central question is a match with editor's tastes, why not let journal editors share reviewer advice and compete for who wants to publish it? By essentially eliminating the sequential search for a sympathetic editor, this could speed up the process substantially.

I don't know why lower-ranked journals put up with this. It's the way that the top journals get the order flow of the best papers. Why doesn't another journal say: you can send your paper to us at the same time as you send it to the AER. We'll respect their priority, but if they don't want it, we have first right. The AER almost does this with its field journals. But the JME could get better papers faster by competing on this dimension.

The journals say they do this to preserve the value of their reviewer time. But with shared or open reviews, that argument falls apart.

We advocate competition elsewhere. Why not in our own profession?

Update: An email correspondent brings up a good point:

  • Journals should be the forum where competing views are hashed out. 
They should be part of the "process of formalizing well-argued different points of view -- not refereeing 'the truth.' We don't know the truth. But hopefully we get closer to it by arguing. [In public, and in the journals.] The never-ending refereeing [and editing and publishing] process is shutting down the conversation."

When I read well argued papers that I disagree with, I tend to write "I disagree with just about everything in this paper. But it's a well-argued case for a common point of view. If my devastating report does not convince the author, the paper should be published, and I should write up my objections as a response paper." 


  1. Fast and public, or slow and private? Those with reputational capital in the field would never choose the former because it would show that many top journal decisions are made because of politics, cronyism, and even nepotism, not meritocracy. Increased speed of the process would greatly help those on the clock but would inconvenience those who can't be bothered to complete a single referee report in a month. The process is inefficient, corrupt and will never change. It's amazing that a pragmatist like you doesn't realize that.

    1. I barely let this through. Your assertions about politics, cronyism, and nepotism are essentially baseless insults. And I have a little reputational capital, and am advocating for fast and public.

    2. And it is amusing that someone slinging such charges hides behind "unknown"

    3. Note: The author replied, but I'm not going to put up accusations that named people had papers published by unethical means.

  2. That is your prerogative, Mr. Cochrane. As long as this type of unethical behavior persists (and it has and it will), no one who benefits from such behavior will want to come out of the shadows as you suggest.

  3. If you click on the unknown writer it opens up a page for a "Tom Shohfi". Not sure if it helps or if it's a real person, but I thought I should let you know.

  4. I strenuously disagree with your assertion that "Journal editors have become, well editors." The primary duty of an editor is to publish research on which others can rely. At a minimum, this means that, first, for theoretical papers, theorems have proofs available for inspection by readers. Second, for empirical papers, this means that there exists data and code that reproduce the published results.

    The first point is largely satisfied. The second point is not satisfied by any economics journal of which I am aware. Until editors publish empirical research that is demonstrably reproducible, they are not even beginning to do their jobs.

    1. I disagree about even theory! There is an increasing fashion to put proofs in appendices and not to put the algebra anywhere, meanwhile cluttering up the paper with chit chat. Editors should insist that the actual propositions and proofs be in the paper, and intelligible!

  5. Great post! All editors (and possible referees, among others) should read this.

    One more point would be "anonymous." I cannot find a good reason why referees should be anonymous. In the acknowledgments, I would like to write "I thank Prof. X for his comments" rather than "I thank an anonymous referee for comments" -- though s/he spent a lot of time reading my paper and writing referee reports, part of her/his limited lifetime, which could have been used for leisure...

    (Perhaps we should add time spent reviewing to the utility function, as a negative argument!?)

    1. Well, it is possible to sign your referee reports, which I sometimes do, but not always. Again, we can always start locally rather than just complain about what journals should do. Or write the editor and say "I benefitted a lot from the referee reports. If they wish to be identified and thanked, I would be pleased to do so." It's always easier to start small than to ask for big rule changes from above.

  6. You said that journals are primarily for publishing results, not helping with tenure decisions. I believe that editors and referees might think so. People on tenure committees probably do not, in particular if I look at ejmr (the typical statement looks like "I need X top-3 and Y fillers for tenure" and so on). This is a serious problem because it seems to distort the original purpose of publishing. It is no wonder then that people complain about the refereeing process and about "cronyism" and the like: their tenure decision is now "outsourced" to an organization that is for them some kind of black box. On the other hand, it also places a heavy burden on journals. They not only publish but are also involved in whether someone can do research at this or at another university.

    1. True in some sense. As a Ph.D. student, I first have to look for journals with a fast refereeing process. For a few journals, I can get an accept/reject response in one or two months. That is very helpful to young researchers like me, who need publications anyway to get the (dream!) job as a professor of economics.

      Of course, I wish to submit to the top ones (AER, QJE, JME, ...), but typically that takes a lot of time. If I submitted to the AER, I would get the response after the job market season is over! In my opinion, this is a severe constraint for young Ph.D. students, including me.

      I'm Japanese, so am talking about the job situation in Japan. I don't know whether it is true in the U.S. or Europe or somewhere else.

  7. Good stuff. This also happens in some PhD programs. I dropped out of a PhD program because of seemingly endless time delays, advisor absences, and trivial challenges to my topic. To be fair, some of their questions improved my work. As for competition, maybe you and a few like-minded colleagues should publish your own journal. You have a head start with your textbook.

  8. It could be useful to open the public repository that Professor Cochrane suggests to further include codes and data that are used by submitted papers.

  9. I think the whole anonymity arena completely lacks credibility. Most papers submitted to journals are on SSRN in any case and are easily traceable to the authors. For most submissions, based on the list of associate editors, one can reasonably guess where the manuscript will go. If you have the funds to present the paper at the relevant institutions, you can get feedback from the relevant associate editor prior to journal submission. Adopting such a strategy certainly narrows the odds.

  10. First, Professor Cochrane congratulations on your new published paper.

    My reflections on the refereeing process:
    So getting a paper accepted is hard work even for you -- and you are a very well-known economist at a top business school, a former President of the AFA, with a long list of innovative and well-cited papers in the JPE, Journal of Finance, AER, RFS, etc.!

    I just started in this profession (got my PhD last year and started at a much lower ranked business school in January). I just got my Job Market paper accepted at RFS (after rejections at JPE and Journal of Finance). There were days when I never thought I would get it through even after reasonably positive reports at the first R&R stage at RFS. It was tough for me too (but I guess nothing worthwhile is meant to be easy).
    My paper was accepted less than three years after I started working on it so all-in-all I count myself lucky to have got it through so quickly.

    My one suggestion? Why doesn't every journal charge high-ish submission fees (my school will pay this expense anyway, though I understand many schools don't) and then pay their referees a share of this money? The JFE does this, and they do (so I have been told) get quick turnaround times (the median is 26 days, per the JFE website). The profession has to reward referees who put in effort to write good reports. At the moment, the profession richly rewards those who publish in the top 3 or 4 finance journals and the top 4 or 5 econ journals. One publication alone adds thousands of dollars per year to a junior academic's lifetime earnings. But for every paper written, we need an average of approximately 2.5 referees per submission. If the average paper is rejected 4 times before acceptance, we need 10 referees per paper. But the profession doesn't reward an academic for writing 10 thoughtful referee reports. Even writing 100 thoughtful referee reports counts for little when up for tenure. How do we convince academics, and those who sit on tenure committees, that refereeing is truly vital and must be rewarded? To me, paying referees more (JFE-style) is at least worth trying. Thoughts, please?

    The Annals of Statistics (a top statistics journal) now tells referees to make one of three recommendations: Reject, Accept, or R&R, with the proviso that the referee should only recommend an R&R if the authors can reasonably do the required revision within 7 days. If not, the referee must recommend Reject. That also sounds like an idea worth trying, as many papers go through three or four revisions. If the paper is basically sound and the research topic is of sufficient interest, three or four revisions sounds like too many (think of those people whose tenure decision date is imminent).

    By the way, I saw that your (2014) Journal of Finance (February issue) paper "A Mean-Variance Benchmark for Intertemporal Portfolio Theory" lists "Initial submission: February 22, 2008; Final version received: June 19, 2013." How many revisions in the intervening 5 years and 4 months, if you don't mind me asking, please?

    1. Yes, we need to make the process much more publish-or-reject. I should have added: when you're a referee, don't fall for the trap of "here is how to write a paper on this topic." If they haven't done it already, reject. If it is publishable but not perfect, publish. That's the Annals of Statistics model. Econ and finance journals are moving that way.

      Alas, high referee fees are not enough to motivate economists. Refereeing will never pay the same as consulting and will always be an effort of social responsibility. Journals could, however, do a much better job of keeping score. Those who referee carefully and promptly could get better treatment as authors. Journals do a remarkably bad job of keeping track of who owes whom favors, I'm sad to report. Usually editor A doesn't even know you just did 5 reports for editor B. Writing good reports does get you invited to be an editor, and perhaps that is an inducement.

      I'd have to research the history of the mean-variance benchmark. I don't want to say anything publicly unless it's accurate.

      There is always the chance my papers aren't that good, which is why they take a long time to get published!

  11. So I'm the Department Editor at Management Science (Strategy). I definitely agree with each of these suggestions. (Some of them I wrote about in my book "Scholarly Publishing and its Discontents").

    In particular, we now push hard to speed up the process. That, of course, means more desk rejects but that is related to other matters (see below). But the most important thing is trying to have only one round of revisions.

    In addition, I am partial to debate and to accepting papers that provoke even if they do not completely resolve a question. Sometimes that isn't possible. Sometimes the author does not have the skill set. We should accept the division of labour in publications and not require theorists to be empiricists as well, and vice versa.

    But there are barriers. I want to share referee reports with other journals. Other journals do not want to share with me. I know that is because it sends a signal: why would we want to be perceived as 'lower down' the status hierarchy? The answer is that we have a hierarchy, so we may as well use it. It benefits all. If we could do that, I would be happier having fewer desk rejects, as the reports would not be wasted in the process and time would not be lost.

    Absent the efficient approach, simultaneous submissions are a good idea. It is likely that the same referee might get the paper but then perhaps we can get to the efficient system by default.

  12. 1. Econ papers are way too long. Every paper has to start from scratch - I think because we have no basic scientific model.
    2. Do what John says. Referees should do a quick read and then either decline the paper or referee it almost immediately. A pile of papers to be refereed may make you feel like somebody, but when I see one it sickens me.
    3. If you are editor and your referees are sitting on a paper, referee it yourself.
    4. Promise your referees to send their work to people who do as good a job as they.

    1. Well said, Bob! Your number 4 needs more attention. Journals are remarkably bad at institutional memory. If you do a good job for a journal, usually that fact resides only in one editor's memory, and not for very long at that. And number 4 implicitly states the darker side of the golden rule, tit-for-tat: Do you take a year to return reports? Don't expect your own reports back any sooner!

    2. I like 4. It could even be quantified: the editor scores your referee report for quality and timeliness. The journal tracks your average score as a referee. Then when you later submit your own work, you get referees from a similar score tier.

    3. Keep in mind the issue really isn't referees, it's editors. Referee score tiering isn't going to work, as you need people with the right expertise. But editors can be more or less aggressive in enforcing the stated deadlines -- usually 4 to 6 weeks -- and in handling the paper themselves. Many delays are with editors, not referees.

  13. Dear Prof.,

    I know this is not the point of the post, but I am happy to see that it is a long way to publication even for my "idol". It makes the rejections more bearable.

    Thanks for keeping it real! :)

  14. I am humbled and almost glad to know that even first-rate economists such as yourself face the same length of publishing cycle as others like myself. As a comment on your post, I started wondering how this lengthening of publishing cycle over years could be interacting with unrealistic tenure standards to influence research outcomes for economists working at universities with larger teaching loads. By larger I mean anywhere from 3-6 courses per semester.

    The standard advice I received when I started my job was that in order to get 3-4 publications in the 6-year tenure clock, one needs at least 2-3 papers under review at any given point in time. With a 3-5 year publishing cycle, one would need even more under review. Given the insane teaching load (3-5 courses per semester) at some universities, having that kind of research output is next to impossible. Casual observation and experience suggest the emergence of multiple equilibria given these work conditions:

    1. More coauthored papers: The only way to churn out papers speedily is to collaborate. A good research question: what is the optimal number of coauthors required to produce 4-6 papers in a 6-year tenure clock with a 3-5 year publishing cycle?
    2. More teaching-related papers: Use your econometric skills to write control-treatment papers on your teaching. Is the flipped classroom better than face-to-face teaching? Or, in general, is treatment A better than no treatment, where A is your choice of "new teaching method"!
    3. Do both of the above.
    4. Be delusional and try to publish actual economics research in better journals while competing with colleagues that churn out teaching based papers. End result: you can guess!

    There is a counterargument, of course: PhD economists get sorted through the job market, which assigns them to the universities that demand their skill mix. Even then, any of the above equilibria is possible, because many of those universities want to masquerade as research universities and hence design tenure standards that seldom reflect the realities of publishing in economics.

    Apologies for tangential ramblings. Excellent post as usual!

  15. Excellent and interesting blog post. It seems to me, though, that the main function of the journals, besides the tenure aspect, is to decide how often a paper should be cited. A paper may deserve a citation if it is i) directly related to your paper by studying the same topic (both papers studying the effect of X on Y) or ii) indirectly related to your paper (e.g., by studying the effect of Z on Y instead of X on Y). In the case of ii), you may want to cite the paper in a sentence like "More broadly, my paper relates to the literature on Y." But except for the very latest research on the topic, you can't cite everything that's been done on Y. For papers that are a bit older, it therefore makes sense to cite only the papers that have been published in, e.g., the top 5. This function seems important to me.

  16. Given that "broken windows and hurricanes are good for the economy", while clearly being a waste of resources (bad for people), how should we be assessing the wisdom of any claim that something or other should be done because it is "good for the economy"? Do economics thought processes have a method for this?


  17. Congratulations, Professor Cochrane! It's very interesting to know the history of this paper's publication (even a good paper from as famous and influential a scholar as you will be rejected several times and take years to get published). Although I have no experience submitting papers to journals yet, I learned a lot from your insightful thoughts on academic publication.

    As a PhD student and future scholar, how should I stand out in this competitive academic circle (a lot of pressure from job market and tenure)? Is focusing on important and interesting questions a good strategy?

    1. The danger of "important" questions is that they don't often have easy answers. Good research is about answering questions cleanly. Big questions are a dime a dozen. Fluffing up a small answer with pages and pages of why it's important isn't a great idea either. My advice, just keep the paper clean. Answer what you can, as definitively as you can, write the paper to make it easy for the reader to get your point. Even "small" points, well done, and cleanly presented are good research. And you never know -- all of the papers that I wrote thinking "this is a big question" didn't get very far. And papers I thought "small" turned out to be the big hits.

    2. Thank you very much for your valuable advice, Professor Cochrane! It reminds me of your 2008 RFS paper "The dog that did not bark", which we just learned and discussed in empirical asset pricing course half a month ago - it's cleanly presented and convincing. I will take note and do my best to write clean papers.

  18. Superb blog! Gives great insight into the world of review and publishing in finance and econ journals. Thank you so much, Professor Cochrane!

  19. This piece is so self-aggrandizing, I wondered if it was parody. A second, cursory reading revealed that it was not so.

    If you cannot fathom why you've encountered these publishing difficulties (not to argue that the academic publishing process is meritorious), Dr. Cochrane, despite your laudable credentials, well... I suppose we all have blind spots. This one is sizeable.


Comments are welcome. Keep it short, polite, and on topic.

Thanks to a few abusers I am now moderating comments. I welcome thoughtful disagreement. I will block comments with insulting or abusive language. I'm also blocking totally inane comments. Try to make some sense. I am much more likely to allow critical comments if you have the honesty and courage to use your real name.