Academic Publishing: Hating a Trap is Not Enough to Escape the Trap


There are lots of problems with scholarly publishing, and of course even more with academia as a whole. In this article I focus on two: expensive journals, and ineffective peer review.

Expensive Journals

Our current method of publication has some big problems. For one thing, the academic community has allowed middlemen to take over the process of publication. We, the academic community, do most of the really tricky work. In particular, we write the papers and referee them. But they, the publishers, get almost all the money, and charge our libraries for it—more and more, thanks to their monopoly power. It’s an amazing business model:

Get smart people to work for free, then sell what they make back to them at high prices.

People outside academia have trouble understanding how this continues! To understand it, we need to think about what scholarly publishing and libraries actually achieve. In short:

  1. Distribution. The results of scholarly work get distributed in publicly accessible form.
  2. Archiving. The results, once distributed, are safely preserved.
  3. Selection. The quality of the results is assessed, e.g. by refereeing.
  4. Endorsement. The quality of the results is made known, giving the scholars the prestige they need to get jobs and promotions.

Thanks to the internet, jobs 1 and 2 have become much easier. Anyone can put anything on a website, and work can be safely preserved at sites like the arXiv and PubMed Central. All this is either cheap or already supported by government funds. We don’t need journals for this.

The journals still do jobs 3 and 4. These are the jobs that academia still needs to find new ways to do, to bring down the price of journals or make them entirely obsolete.

The big commercial publishers like to emphasize how they do job 3: selection. The editors contact the referees, remind them to deliver their referee reports, and communicate these reports to the authors, while maintaining the anonymity of the referees. This takes work.

However, this work can be done much more cheaply than you’d think from the prices of journals run by the big commercial publishers. We know this from the existence of good journals that charge much less. And we know it from the shockingly high profit margins of the big publishers, particularly Elsevier.

It’s clear that the big commercial publishers are using their monopoly power to charge outrageous prices for their products. Why do they continue to get away with this? Why don’t academics rebel and publish in cheaper journals?

One reason is a broken feedback loop. The academics don’t pay for journals out of their own pocket. Instead, their university library pays for the journals. Rising journal costs do hurt the academics: money that could be spent in other ways goes into paying for journals. But most of them don’t notice this.

The other reason is item 4: endorsement. This is the part of academic publishing that outsiders don’t understand. Academics want to get jobs and promotions. To do this, we need to prove that we’re ‘good’. But academia is so specialized that our colleagues are unable to tell how good our papers are. Not by actually reading them, anyway! So, they try to tell by indirect methods—and a very important one is the prestige of the journals we publish in.

The big commercial publishers have bought most of the prestigious journals. We can start new journals, and some of us are already doing that, but it takes time for these journals to become prestigious. In the meantime, most scholars prefer to publish in prestigious journals owned by the big publishers, even if this slowly drives their own libraries bankrupt. This is not because these scholars are dumb. It’s because a successful career in academia requires the constant accumulation of prestige.

The Elsevier boycott shows that more and more academics understand this trap and hate it. But hating a trap is not enough to escape the trap.

Boycotting Elsevier and other monopolistic publishers is a good thing. The arXiv and PubMed Central are good things, because they show that we can solve the distribution and archiving problems without the help of big commercial publishers. But we need to develop methods of scholarly publishing that solve the selection and endorsement problems in ways that can’t be captured by the big commercial publishers.

I emphasize ‘can’t be captured’, because these publishers won’t go down without a fight. Anything that works well, they will try to buy—and then they will try to extract a stream of revenue from it.

Ineffective Peer Review

While I am mostly concerned with how the big commercial publishers are driving libraries bankrupt, my friend Christopher Lee is more concerned with the failures of the current peer review system. He does a lot of innovative work on bioinformatics and genomics. This gives him a different perspective from mine. So, let me just quote the list of problems from this paper:

Christopher Lee, Open peer review by a selected-papers network, Frontiers in Computational Neuroscience 6 (2012). The rest of this section is a quote:

• Expert peer review (EPR) does not work for interdisciplinary peer review (IDPR). EPR means the assumption that the reviewer is expert in all aspects of the paper, and thus can evaluate both its impact and validity, and can evaluate the paper prior to obtaining answers from the authors or other referees. IDPR means the situation where at least one part of the paper lies outside the reviewer’s expertise. Since journals universally assume EPR, this creates artificially high barriers to innovative papers that combine two fields [Lee, 2006]—one of the most valuable sources of new discoveries.

• Shoot first and ask questions later means the reviewer is expected to state a REJECT/ACCEPT position before getting answers from the authors or other referees on questions that lie outside the reviewer’s expertise.

• No synthesis: if review of a paper requires synthesis—combining the different expertise of the authors and reviewers in order to determine what assumptions and criteria are valid for evaluating it—both of the previous assumptions can fail badly [Lee, 2006].

• Journals provide no tools for finding the right audience for an innovative paper. A paper that introduces a new combination of fields or ideas has an audience search problem: it must search multiple fields for people who can appreciate that new combination. Whereas a journal is like a TV channel (a large, pre-defined audience for a standard topic), such a paper needs something more like Google—a way of quickly searching multiple audiences to find the subset of people who can understand its value.

• Each paper’s impact is pre-determined rather than post-evaluated: By ‘pre-determination’ I mean that both its impact metric (which for most purposes is simply the title of the journal it was published in) and its actual readership are locked in (by the referees’ decision to publish it in a given journal) before any readers are allowed to see it. By ‘post-evaluation’ I mean that impact should simply be measured by the research community’s long-term response to and evaluation of it.

• Non-expert PUSH means that a pre-determination decision is made by someone outside the paper’s actual audience, i.e., the reviewer would not ordinarily choose to read it, because it does not seem to contribute sufficiently to his personal research interests. Such a reviewer is forced to guess whether (and how much) the paper will interest other audiences that lie outside his personal interests and expertise. Unfortunately, people are not good at making such guesses; history is littered with examples of rejected papers and grants that later turned out to be of great interest to many researchers. The highly specialized character of scientific research, and the rapid emergence of new subfields, make this a big problem.

In addition to such false-negatives, non-expert PUSH also causes a huge false-positive problem, i.e., reviewers accept many papers that do not personally interest them and which turn out not to interest anybody; a large fraction of published papers subsequently receive zero or only one citation (even including self-citations [Adler et al., 2008]). Note that non-expert PUSH will occur by default unless reviewers are instructed to refuse to review anything that is not of compelling interest for their own work. Unfortunately, journals assert an opposite policy.

• One man, one nuke means the standard rule that a single negative review equals REJECT. Whereas post-evaluation measures a paper’s value over the whole research community (‘one man, one vote’), standard peer review enforces conformity: if one referee does not understand or like it, prevent everyone from seeing it.

• PUSH makes refereeing a political minefield: consider the contrast between a conference (where researchers publicly speak up to ask challenging questions or to criticize) vs. journal peer review (where it is reckoned necessary to hide their identities in a ‘referee protection program’). The problem is that each referee is given artificial power over what other people can like—he can either confer a large value on the paper (by giving it the imprimatur and readership of the journal) or consign it zero value (by preventing those readers from seeing it). This artificial power warps many aspects of the review process; even the ‘solution’ to this problem—shrouding the referees in secrecy—causes many pathologies. Fundamentally, current peer review treats the reviewer not as a peer but as one who wields a diktat: prosecutor, jury, and executioner all rolled into one.

• Restart at zero means each journal conducts a completely separate review process of a paper, multiplying the costs (in time and effort) for publishing it in proportion to the number of journals it must be submitted to. Note that this particularly impedes innovative papers, which tend to aim for higher-profile journals, and are more likely to suffer from referees’ IDPR errors. When the time cost for publishing such work exceeds by several fold the time required to do the work, it becomes more cost-effective to simply abandon that effort, and switch to a ‘standard’ research topic where repetition of a pattern in many papers has established a clear template for a publishable unit (i.e., a widely agreed checklist of criteria for a paper to be accepted).

• The reviews are thrown away: after all the work invested in obtaining reviews, no readers are permitted to see them. Important concerns and contributions are thus denied to the research community, and the referees receive no credit for the vital contribution they have made to validating the paper.

In summary, current peer review is designed to work for large, well-established fields, i.e., where you can easily find a journal with a high probability that every one of your reviewers will be in your paper’s target audience and will be expert in all aspects of your paper. Unfortunately, this is just not the case for a large fraction of researchers, due to the high level of specialization in science, the rapid emergence of new subfields, and the high value of boundary-crossing research (e.g., bioinformatics, which intersects biology, computer science, and math).

Toward solutions

In an upcoming article, I’ll talk about the ‘selected papers network’ that Christopher Lee has developed, and invite you all to try it. If you want to get a sense of where we are heading, read the section of Christopher Lee’s paper called The Proposal in Brief.

This article was copied with permission from John Baez’s blog Azimuth. (Copyright remains the sole right of the author.) Further conversation can be found at the original posting.


Image by LadyDragonflyCC, licensed under Creative Commons (CC BY 2.0)

Comments:

  1. Pingback: Another Week of Global Warming News, June 9, 2013 – A Few Things Ill Considered

  2. It's interesting that it captures your fancy.

    Please consider, then, that many of the critiques of climate science that are valid are not specific to climate science. (I think most of the others are just wrong.)

    The problems crossing disciplines are especially of interest in climate and sustainability questions. The methodologies and habits of a particular community do not necessarily match those of another, which can lead to unwarranted excess of criticism. (To see that it's unwarranted, consider Richard Muller's preliminary bluster vs. his successful replication of one of the most compelling results.)

    So this does not invalidate the work, but it greatly inhibits the transfer of understanding across boundaries. In the words of P. J. Plauger: "an expert in any field is a person who knows enough about what's really going on to be scared."

    It's also a matter of note that the complaint here is that the progress of science is impeded by the journal-oligopoly. The complaint is that it suppresses certain types of useful result, not that it introduces incorrect results.

    I also think there's more here for the P3 community to chew on. Stand by.

  3. The selected papers network is an infrastructure that'll support many forms of open-access 'selection' and 'endorsement' of papers, using the jargon I defined in my part of the blog entry. All sorts of things from discussion groups to 'journals' that merely point at papers on the arXiv and say 'we publish that one' to prize committees to crowd-sourced reviews can be built on top of this infrastructure. The successful ones, in my opinion, will be the ones that reliably deliver 'prestige' to authors that deserve it. But the prestige economy of academia is a subtle business.

  4. I have been reading complaints about peer review and journal practices for close to a decade now.

    I doubt if there's ever been a period when 90% of published work wasn't essentially crap or fluff. I think the famous rule applies here as much as anywhere else. So I don't think these issues will topple science or necessarily threaten the cozy little business academic publishing has—it's an outgrowth of sharp business practice first developed by textbook publishers and it's now serious money, so they will fight for it.

    X archive (real name, please?) and other alternatives need the support infrastructure to make them work. That would involve signed commitments for pro bono review from prestigious academics and permission to publish reviews by reviewers (with or without identification), but most importantly it would require reader review.

    This could range from something as simple as 5-star checking for online readers (how was your stay at our hotel?) to as complex as (moderated) notes in the electronic margin and a question and answer section at the end. The opportunity to create a new kind of paper that assumes the status of a living document might solve more than a commercial problem.

  5. "(moderated) notes in the electronic margin and a question and answer section at the end. The opportunity to create a new kind of paper that assumes the status of a living document might solve more than a commercial problem."

    Great!

    I've been dreaming of (peer-reviewed) electronic marginalia for quite some time. In medieval libraries, the marginalia were the most important (and voluminous) treasure. When I was at university I had no scruples about adding comments or corrections to books (in erasable pencil, of course).

