Is More Peer Reviewing the Answer?

April 21st, 2013 at 9:02 am

In the midst of the Reinhart-Rogoff meltdown, a commenter was aghast to learn that their paper was not peer reviewed.* She asked, reasonably, how newspapers could report findings that had not gone through that process.

It’s a fair question, and I should expand on the too-glib remarks from my post:

So the answer is to only accept peer-reviewed work as economic knowledge, right?  Nope.  That would be a) too limiting, and b) wouldn’t advance the epistemological cause as much as you think.  Peers have their own sets of biases, particularly as gatekeepers.

First, had R&R gone through the peer-review process, I’m fairly confident that a) the spreadsheet error would NOT have been found, but b) the paper would have been sent back to them for failing to provide even a cursory analysis of the possibility of reverse causality (slower growth leading to higher debt/GDP ratios vs. the R&R claim of the opposite).  Re “a,” peer reviewers do not routinely replicate findings, though they should when possible (more work these days is with proprietary data sets which cannot legally be shared).
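To see why replication matters, it helps to recall that the R&R spreadsheet error amounted to an average computed over an incomplete range of rows. A minimal sketch, using invented growth numbers (not the actual R&R data), shows how silently such an omission can shift a headline result, and how cheap the replication check is:

```python
# Hypothetical growth rates for 20 countries (invented for illustration,
# NOT the actual Reinhart-Rogoff dataset).
growth = [0.8, 1.1, -0.4, 1.6, 0.9, 1.3, 0.2, 1.8, 0.5, 1.0,
          1.4, 0.7, 1.2, 0.6, 1.5,   # rows the spreadsheet range covers
          3.9, 4.2, 3.5, 4.0, 3.4]   # rows a mis-dragged formula drops

# The full-sample average a replicator would compute:
full_avg = sum(growth) / len(growth)

# An Excel-style range error: the formula silently stops five rows short.
truncated_avg = sum(growth[:15]) / 15

print(round(full_avg, 2), round(truncated_avg, 2))  # prints: 1.66 0.95
```

Nothing in the spreadsheet itself flags the truncation; only someone re-running the calculation from the raw column would catch it, which is exactly the step reviewers rarely take.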

But given “b”—this influential work would perhaps not have seen the light of day without significant revision—how is it that I’m not calling for more peer reviewing?

That’s where the “too limiting” part comes in.  First, a lot of what’s important in economics is fairly simple analysis of trends—descriptive data—without the behind-the-scenes number crunching of the type R&R did.  You learn a lot about austerity, for example, by simply plotting the GDP growth rates of countries engaged in it, or about wage inequality by observing the movements of different wage percentiles over time.  Looking back on my own posts, you’ll see plots of employment, real wage trends, growth rates…none of which require much data manipulation and none of which depend on choices that would concern a reviewer.

So to insist that everything gets peer reviewed, including the presentation of descriptive data published by reliable sources (e.g., BLS or BEA), would be to raise the bar unnecessarily high.  Of course, that’s not to say that descriptive data presenters can’t make mistakes, and we do.  So in the best of all possible worlds, it would be better if such work were checked by peers before it was publicized, even on blogs.  Which brings me to the next problem.

Peer review takes a long time (in my experience, six months to a year), and policy makers and reporters will seek results more quickly.  If newspapers had rules that they would only print peer-reviewed findings, there would be a long lag between, say, the release of an employment report and analysis of the results.

Then there are the gatekeepers’ biases, as mentioned above.  I’ve published a precious few peer-reviewed papers,** but those more active in that world tell me that papers that challenge conventional wisdom have trouble getting very far, regardless of quality.  And since many aspiring economists are intimately familiar with the biases of the gatekeepers, they are hesitant to test those boundaries for fear of not making it into the important journals.

So, what’s the answer?  The media gatekeepers themselves have to be highly vigilant, especially when the results they’re publishing are not transparent.  R&R’s work, for example, involving many different countries over many years, with results put into seemingly arbitrary bins (debt/GDP<30%, 30%-60%, etc.) might be a flag that would lead a non-technical reporter to ask a lot of their sources what they thought before running with it.
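The binning concern can be made concrete. With invented country-year observations (again, not the actual R&R data), a short sketch shows how two equally “reasonable” sets of debt/GDP bin edges yield different within-bin growth averages — the kind of sensitivity a reporter could ask a source about before running with a headline threshold:

```python
# Hypothetical (invented) observations: (debt/GDP in %, growth in %).
obs = [(25, 3.0), (28, 2.5), (33, 2.8), (45, 2.1), (58, 2.4),
       (62, 1.9), (75, 1.4), (88, 1.6), (92, 0.8), (105, 0.5)]

def bin_means(obs, edges):
    """Average growth within half-open debt/GDP bins [edges[i], edges[i+1])."""
    means = []
    for lo, hi in zip(edges, edges[1:]):
        vals = [g for d, g in obs if lo <= d < hi]
        means.append(round(sum(vals) / len(vals), 2))
    return means

# R&R-style cutoffs vs. an equally arbitrary alternative:
print(bin_means(obs, [0, 30, 60, 90, 200]))  # [2.75, 2.43, 1.63, 0.65]
print(bin_means(obs, [0, 40, 80, 200]))      # [2.77, 1.95, 0.97]
```

Neither binning is “wrong,” but the apparent location and size of any growth cliff depends on where the edges fall — which is why arbitrary cutoffs deserve a second look before they drive policy conclusions.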

In a sense, I’m suggesting more on-the-spot peer reviewing when results are complex, counter-intuitive, or, as in R&R’s case, directional causality is being invoked.  This risks the dreaded “he-said, she-said” which can be off-putting to readers, and it can easily be overused.   For example, there’s nothing complex or counter-intuitive about work showing, e.g., high unemployment associated with weak wage growth or higher poverty, so a non-peer-reviewed paper making such connections might invite less scrutiny, and a newspaper write-up of it would not obviously require alternative explanations.

But really, and unfortunately, at the end of the day, when it comes to the reporting of economic results, you have to know who to trust.  The WSJ editorial page, for example, is often not trustworthy in its use and choice of findings.  The NYT’s, much more so.  (I should and will find some examples.)  It’s better with reporters (vs. editorial writers), but there too you’ll find folks with thumbs on scales.  In fact, one of the things I do here (and many other econ-bloggers do the same) is try to catch problems and elevate articles that get it right.

And then there’s the fact that facts don’t matter anywhere near as much as they should anyway these days, a much bigger problem that cannot be solved by any amount of peer reviewing.

*This confused some aficionados, because it appeared in the peer-reviewed journal American Economic Review, but in the “Papers and Proceedings” edition, which is not PRed.

**Like most in my field of think tank work, anything beyond a blog, especially with serious number crunching, is reviewed by as many outsiders as I can get to read it, but the difference is that they don’t have the final say on publication.


10 comments in reply to "Is More Peer Reviewing the Answer?"

  1. save_the_rustbelt says:

    The real problem is economists and lawyers have too much influence on politics, on politicians and on public policy.

    These two groups are assumed to be able to speak on just about any issue and Washington is infested with both.

    I have worked for many decades in some policy areas (health care, taxation, small business) and the overgeneralized baloney we hear is amazing.

    If I had a Ph.D. in econ and had never spent a minute in health care work I would have more public policy cred than I do now (I have a lot of cred with the people who actually provide health care).

    Anyway Jared you are still my favorite left-of-center economist. Keep up the good work.


  2. Brad F says:

    WSJ covers same topic this week (feasibility of peer review). The blog primes subject, actual article behind firewall:
    http://blogs.wsj.com/numbersguy/reinhart-rogoff-the-math-errors-that-slip-through-the-crack-1232/

    Brad


  3. Tom in MN says:

    In engineering, peer-reviewed publications are really the only ones that count. I’ve been a technical editor and with electronic submission and review, peer review speed is limited to how fast you can get the reviewers to do their part — which is the hardest part of the process, as everyone is busy. But even 6 months is long for this process, I used to give the reviewers about a month to get their reviews done. Peer review is not perfect, but have you got a better idea?

    With the growth of on-line journals that charge authors to be published, readers need to be even more careful than ever to check that what they are reading has been reviewed.

    It also seems to me that papers that claim a first order effect (expansionary austerity) counter to fundamental models had better have a simple model-type explanation or what they are saying is very likely to be wrong. Cold-fusion and other too good to be true claims fall into this category.


  4. Yastreblyansky says:

    Evidently proprietary data did more to protect these bad findings than peer review would have done. In many other disciplines (certainly linguistics, which I know a little about) the trend is to make data as open as possible. I understand why that’s not going to happen in pharmacology, say, but it’s a shame, and a great shame in economics (where I suppose there’s no issue of patent protection to justify it, just greed to squeeze as many publications as one can out of some set of facts before somebody else does). There really ought to be a wikinomics host or commons for unprotected economic data and an advantage to using it and contributing to the public discourse.


  5. Chris G says:

    > Re “a,” to my knowledge, reviewers do not routinely try to replicate findings, though they should, when possible (more work these days is with proprietary data sets which cannot legally be shared).

    I’ve reviewed dozens of papers. Not once have I considered attempting to replicate the authors’ results. (NB: Subject matter has ranged from optics to physics to statistics, i.e., nothing even remotely related to economics.) Not that replication wouldn’t be a good idea – it would – but, as an unpaid reviewer, I might put in 4-6 hours per paper. That’s about as much free time as I’m willing to commit. I suspect having to replicate results would be a big multiplier on that figure. And that’s not gonna happen if I’m doing reviews in my free time. When I review a paper, I look for factual errors, errors in logic, and appropriate citation of prior and related work. If the paper proves to be significant then I figure things akin to R&R’s coding errors will eventually come out in the wash.

    That said, I did come across a paper in an un-peer-reviewed conference proceeding where several of the figures presented results which didn’t pass the laugh test. It was apparent that the person who processed the data had no idea what they were doing. (The fact that the paper took some pot shots at my work and that one of the authors was my former boss is… amusing… but it’s also beside the point.) I was tweaked enough to send an email to the IG at the government agency that sponsored the work suggesting that they look into it. For better or worse, the email bounced back to me. Invalid address. The IG at [unnamed DoD agency] doesn’t post a valid email for submitting concerns. Nice.


  6. Smith says:

    Really? Reinhart-Rogoff is an issue? How about considering that if there was no Reinhart-Rogoff error, or no Reinhart-Rogoff paper, or if Reinhart and Rogoff had in fact never even been born, practically nothing would be different. I offer as evidence F.D.R.’s 1932 campaign to balance the budget in the face of deep depression, subsequent efforts, past and present, to deny the effects of deficit spending and fiscal stimulus to speed recovery during the Great Depression and lesser depressions, the forceful opposition of Republicans to the 2009 stimulus package which predated Reinhart-Rogoff, European focus on cutting deficits by late 2009, and a misreading of history by Democrats who thought the 90’s validated belief in the confidence fairy (Summers, Geithner, and other Rubinistas); see also the impression of the media’s VSP (very serious paper):
    http://www.nytimes.com/2009/01/29/us/politics/29obama.html
    “President Bill Clinton in 1993 had to rely solely on Democrats to win passage of a deficit-reduction bill that was a signature element of his presidency,” the reason Summers blocked a larger stimulus.

    How about considering Reinhart-Rogoff is now becoming an easy excuse for VSPs (Very Serious People and Papers) to use in assigning blame for their belief in austerity. They can say they believed a disputed paper and ignored the lessons of the Great Depression and 75 years of economic research because there was an Excel spreadsheet error. If you believe that, I have some mortgage backed securities for land in Florida I’d like to sell you.

    More germane to this blog, here’s another noted economist’s take on peer review:
    http://krugman.blogs.nytimes.com/2012/01/17/open-science-and-the-econoblogosphere/

    And in case any OTE readers missed this:
    http://krugman.blogs.nytimes.com/2013/04/21/destructive-creativity-2/
    “Remember how Romer and Bernstein were savaged for assuming a multiplier of around 1.5? Four years later, after much soul-searching from the IMF about why it underestimated the costs of austerity, estimates seem to be converging on a multiplier of … about 1.5.”


  7. Kevin Rica says:

    The WSJ editorial pages are just not worth reading. But the WSJ news section is first rate.

    The NYTimes news section is terribly slanted whenever politics intrudes. Take the following passage from a NYT article complaining that the immigration service was looking for facts in adjudicating a suspicious asylum claim.

    “The subpoena, which New York City school officials say is highly unusual here, has raised alarm among some immigration lawyers and civil libertarians who say they fear that the federal government is opening a new front in immigration enforcement, in a city where officials have staunchly defended immigrant rights.”

    A few months later, the case against Dominique Strauss-Kahn fell apart when it quickly became apparent that his accuser had come to this country with a blatantly bogus asylum claim. And the NYTimes solemnly professed shock!

    The press just picks and chooses the articles that fit their ideological agenda while thousands more articles, both better and worse, pass unnoticed.

    “Still a man hears what he wants to hear and disregards the rest ” — Simon and Garfunkel


  8. Laureen Cook says:

    Thanks for the daily blogs and your musical interlude picks. I try to remember the first time I heard the music and the events around it. Oh, and yes, it has made me a bit more fond of my grandchildren’s music too.


  9. Bob says:

    More peer review is the ONLY answer. I am a biomedical scientist with over 40 published peer reviewed papers. They are better work because of it.

    If peer review is the exception rather than the rule in the economics profession then this is a serious issue…no matter if the work is descriptive or quantitative in nature. If there is past knowledge that must be cited and addressed, it is the reviewer’s job to demand the paper include it. If there are calculations or data gathered to support the paper’s conclusions, it is the reviewer’s and the journal’s job to require that it be freely available to the scholarly community. Yes, this takes time, and yes, peer review is imperfect, but it does set an important standard. Anything less is not scholarship…it is more like a term paper.


  10. Kevin Rica says:

    If you really want to know why economics doesn’t work any more, read the greatest economic forecast ever made in Wassily Leontief’s “Theoretical Assumptions and Nonobserved Facts.” (The 1971 AEA Presidential Address).

    There Leontief noted: “Continued preoccupation with imaginary, hypothetical, rather than with observable reality has gradually led to a distortion of the informal valuation scale used in our academic community to assess and to rank the scientific performance of its members. Empirical analysis, according to this scale, gets a lower rating than formal mathematical reasoning. Devising a new statistical procedure, however tenuous, that makes it possible to squeeze out one more unknown parameter from a given set of data, is judged a greater scientific achievement than the successful search for additional information that would permit us to measure the magnitude of the same parameter in a less ingenious, but more reliable way. This despite the fact that in all too many instances sophisticated statistical analysis is performed on a set of data whose exact meaning and validity are unknown to the author or rather so well known to him that at the very end he warns the reader not to take the material conclusions of the entire “exercise” seriously.

    A natural Darwinian feedback operating through selection of academic personnel contributes greatly to the perpetuation of this state of affairs. The scoring system that governs the distribution of rewards must naturally affect the make-up of the competing teams. Thus, it is not surprising that the younger economists, particularly those engaged in teaching and in academic research, seem by now quite content with a situation in which they can demonstrate their prowess (and incidentally, advance their careers) by building more and more complicated mathematical models and devising more and more sophisticated methods of statistical inference without ever engaging in empirical research. Complaints about the lack of indispensable primary data are heard from time to time, but they don’t sound very urgent. The feeling of dissatisfaction with the present state of our discipline which prompts me to speak out so bluntly seems, alas, to be shared by relatively few. Yet even those few who do share it feel they can do little to improve the situation. How could they? ”

    Now the inmates are running the academic asylum. “The younger economists” that Leontief wrote of in 1971 are now the chairmen emeritus of our most prestigious, but least imaginative, universities. If they had any understanding of what Keynes said, it’s been lost in a pretentious mathematical fog.

    Three-quarters of a century after Keynes, the obvious Keynesian implications of Bernanke’s “Global Savings Glut” are totally over their heads. The only thing that they can think of is that “we must need more fiscal stimulus.” However, the underlying reason and logic is lost to them and the policy is clumsy and misdirected. If it’s not a simple formula, they are lost.

