Perhaps it’s just that I’ve been at this for a while, or maybe it’s a real trend. Regardless, many of my colleagues and I are now of the opinion that the quality of editing in scientific journals is in decline.
Yes – we (scientists) all complain about negative decisions from journals to which we’ve submitted our work. Being rejected is part of the process. Aiming high is necessary for academic success, but when a negative decision is made on the basis of an appalling review (often just one), it’s a little harder to swallow.
I suppose I can accept the inevitability of declining review quality for the simple reason that there are now SO MANY papers to review that finding willing volunteers is difficult. This means that there will always be people who only glance cursorily at the paper, miss the detail and recommend rejection based on their own misunderstanding or bias. It’s far easier to skim a paper looking for a reason to reject it than to put in the time to appraise the work critically and fairly.
This means that the traditional model of basing the decision to accept or reject a manuscript on only two reviews is fraught, because the probability of receiving poor reviews is rising. For example, a certain undisclosed journal of unquestionably high quality for which I edit requires no fewer than six reviewer recommendations per manuscript, and no paper that I’m aware of is accepted or rejected based on only two reviews. But I think this is the exception rather than the rule – there are simply too many journals now of low to medium quality to be able to get that many reviewers to agree to review.
I won’t spend too much time trying to encourage you to do the best job you can when reviewing – that should go without saying. Remember what goes around comes around. If you are a shit reviewer, you will receive shit reviews.
As you move up the career ladder, however, you will be asked more frequently to become part of a journal’s editorial board. This is a noble thing to do in its own right, but it also gives you much more insight into the publication process. While time-consuming, it’s generally worth the effort (to a degree, of course).
If you become or already are an editor, then this post is for you. What should you do about bad reviews?
The first thing to remember is that you are not merely an administrator – you are, in essence, the overseer/gatekeeper/high priest of scientific integrity. If you fail, science fails. This means that you actually have to read the paper – in its entirety. It is not acceptable to skim the abstract and make your decision based only on the reviewers’ comments. If you don’t read the paper, how can you possibly judge the quality of the reviews assessing it?
Nearly all journals for which I have edited or am currently editing have sent around annual e-mails to the editorial board that go something like this:
“The Journal of XXXX has had an X-fold increase in the number of submissions over the last X months. We therefore strongly encourage you to be extremely critical of what you let go to review. If a paper is not within the top X% of all articles in the journal, you should consider outright rejection instead of sending to review.”
Sounds harsh, I know, but it is the sentiment behind the ubiquitous “space is at a premium in our journal, so we cannot accept all articles regardless of their merit” that you’ll get when you receive that rejection e-mail. Of course, this is mostly bullshit – space is definitely no longer at a premium as the entire world shifts to online publication. There are no space restrictions on the internet. The main reasons this excuse is given are to (i) maximise profits for the publishing company, (ii) reduce the workload for largely volunteer editorial staff and (iii) increase the journal’s impact factor.
I’ll let you decide on the morality of this reality, but I need to make it clear that editors are now increasingly directed from on high to reject papers as often as they can. It is therefore understandable that when a negative review comes in, the easiest thing to do is just accept it at face value and hit the ‘reject’ button.
But please resist that temptation. A couple of pointers about how to recognise a bad review (both unfairly negative and suspiciously supportive ones) are in order:
- If a review is only a paragraph long, be very, very suspicious of its quality and objectivity. Unless it’s a review from someone I trust absolutely to give an educated and objective opinion, I nearly always dismiss it.
- Likewise, if the reviewer is clearly adversarial, or appears to take offence at the audacity of the authors even to write such rubbish, then take the review with a big shaker of salt.
- Always, always, always check the collaborative relationship between a reviewer and the authors. Good mates tend to overlook major faults.
- If the review focuses almost exclusively on ‘lack of novelty’ without critiquing the methods or results, perhaps give the authors a little more benefit of the doubt. Novelty is a poisoned chalice – don’t fall into the trap of thinking that all papers have to be absolutely, ground-shakingly and Earth-shatteringly novel. True novelty is very rare.
After deciding whether a review should be believed in its entirety or dismissed as a poor-quality and subjective tirade, the next thing to do is weigh the evidence for or against. If you have one high-quality review (whether negative or positive) and one poor-quality review, you simply cannot make an informed decision. Of course, add in your own appraisal, but I would always recommend obtaining at least one more (high-quality) review before deciding which way to go. This will of course extend the time to publication, but to me that delay is a far smaller cost than rejecting a really good paper based on a shaky assessment, or worse, accepting one that is fundamentally flawed.
So being an editor means that you have to edit, and do it well or don’t do it at all. It’s an essential component of the greater process of scientific knowledge generation.