Learning how to fail

6 06 2013

On the way to work yesterday I was listening to ABC Radio National’s Life Matters program, hosted by Natasha Mitchell, about how school children are now apparently being given so much positive praise and encouragement that they can no longer handle failure. Poor, wee dears. Maybe that’s why we have such a high attrition rate once they get up to postgraduate level, because that’s when they REALLY experience failure.

Jokes and whinges aside, there is a hard truth in that message that applies to all scientists, and especially the early-career ones. I’m talking about having your paper rejected from a journal.

Even the terms we use to describe the peer-review gauntlet appear designed to instil fear and inadequacy: reject or accept. I don’t know how many times I’ve seen a PhD student’s face figuratively melt off the skull as they shuffle into my office to show me the journal’s rejection letter (now usually just forwarded in an email accompanied by implied stooped shoulders – is there an emoticon for that?). As I’ve mentioned before, we scientists can be real bastards to each other, and it comes out in spades during peer review.

While neophytes tend to take these hits the hardest, I want to impart a little wisdom from some of my very well-established and successful colleagues. Rejection should be viewed as an asset, not a mark of failure. Let me explain.

No one, no matter how experienced, likes to have a paper rejected. Humans hate to be on the receiving end of a criticism, and scientists are no different. Many reviews can be harsh and unfair; many reviewers ‘miss the point’ or are just plain nasty. It could be argued that some reviewers even get off by stomping on others. However, I and many of my colleagues argue that if you’re NOT getting rejected, you’re not trying hard enough.

We have an unwritten policy in our lab: a manuscript should be prepared with the highest target in mind, but then submitted to a journal about two impact levels above that target. I’m not suggesting that every paper we write gets sent initially to Nature or Science, for that would essentially guarantee almost always getting rejected, but I think you get the picture. For example, if I have a good, local-interest paper that would sit nicely in, say, Wildlife Research, but I think with a little finesse I could get it into Biological Conservation or Conservation Biology, I’ll probably instead submit it to Ecology or even Ecology Letters first (within reason and theme, of course). We pretty much apply this rule across the journal impact gradient.

You can imagine what this policy engenders: lots and lots of rejections. Sometimes we get it right and get a paper through the door and past the gauntlet of the highest-impact journals, but many times we fail. This means that for every one of the 200-odd papers I’ve published with my colleagues, we’ve had an average of about 2-3 (and sometimes many more) rejections. That means I can boast over 500 rejections in my career thus far! I wear this like a proud battle scar.

The take-home message is that rejection is not an indication of a scientist’s worth or capacity; it’s merely an indication that you’re shooting high and want to succeed (provided you follow through and eventually get the work published somewhere, of course).

Happy rejections.

CJA Bradshaw


8 responses

6 06 2014
Be a good reviewer, but be a better editor | ConservationBytes.com

[…] all complain about negative decisions from journals to which we’ve submitted our work. Being rejected is part of the process. Aiming high is necessary for academic success, but when a negative decision is made on the basis […]

7 06 2013
Franck Courchamp

You should see some of my battle scars Corey :-)
Two of my current PhD students (both really brilliant) are getting the highest rates of rejection I’ve ever had with my students. But that’s because they produce the best work ever, and we aim at the best journals (which often reject papers, even when they fit).
When another rejection comes in, I try to remind them that great researchers are those who can rise again after a blow.

7 06 2013
Anna Brown

I like this. This is not aiming for rejection; this is aiming for improvement. The fastest way to improve is to be challenged and to take risks. To many scientists this does not come naturally, but it can and should be a learned behaviour.

7 06 2013
marioquevedo

I like what seems to be the core message, i.e., “rejection, so what, name of the game, make it better”. I agree less with the “shoot higher” strategy mentioned, but of course it seems to be working nicely.

What got my attention was a minor portion of the text: “with a little finesse I could get it into…”. First I had to look up the precise meaning of “finesse”. And I still do not quite get how and, more importantly, why one should make a study of regional interest look general. Especially because I figure that for any given paper the local/global thing is a trade-off.

I mean, I guess I have tried that finesse thing myself sometimes in the past, but aren’t we just twisting the play?

Anyway, thanks for taking the time to write all that stuff openly.

/Mario

6 06 2013
CJAB

I think quite a few people misunderstood my post here, which isn’t all that surprising considering how the impact-factor game pisses so many people off (me included), how reviewers are never remunerated for their time, and how the publishing companies are screwing us.

To clarify, I am currently an editor for three high-impact journals: Ecology Letters, Frontiers in Ecology and the Environment and Journal of Animal Ecology (and I used to edit for Conservation Letters and Biotropica). Believe me, I do not like to have my time ‘wasted’ (as some of the ruder tweeps are accusing me of promoting with my post). However, if a paper clearly isn’t worthy of the journal in question, or is thematically inappropriate, it’s a quick and effortless process to reject before review.

In other words, I’m not promoting a stupid, willy-nilly Nature-submission mantra for every paper (as I stated) – my point was to aim high and not be distressed too much when you get rejected. That’s it.

6 06 2013
Michael McCarthy

A few points:

1) I think it is great to point out to ECRs that rejection is not the end of the world and that it happens to everyone. This reflects the idea of building a shadow CV: http://contemplativemammoth.wordpress.com/2012/06/08/building-a-shadow-cv/. That has benefits in all sorts of ways.

2) Also, it seems that papers that push the boundaries of conventional wisdom are harder to publish than those that are mainstream. So rejection can result from research that is actually “too novel”.

3) However, research that is “too novel” is rare, so one should rarely take rejection as a compliment. Rejections usually occur because of a flawed design, flawed logic, etc, or lack of novelty. While rejection due to lack of novelty has problems, there are practicalities to consider. For example, the journal Conservation Biology might easily be filled (and then some) with papers on PVA models, fragmentation studies, and species distribution models of each threatened species of the world if there were no bar set for “novelty”. While avenues for publishing such studies are important and valuable (e.g., PLoS One), that won’t work for paper-based journals with limits on pages (whether such publishing models are sustainable is another issue).

Regardless, the peer-review process is valuable, even though flawed. Every paper that I have had rejected has been improved by the reviewer comments; often those improvements are simply matters of clarification, but clarity and precise language are critical in science. So an important point is that it is not just a matter of “Great, I’ve been rejected by a prestigious journal! Now, which one is next in line?” – it is important to revise on the basis of those comments. The reviewers are a sample of potential readers, at least some of whom will respond similarly.

But I have a couple of concerns about this post:

4) The game of aiming for numerous rejections establishes an untenable culture. Firstly, the peer-review system cannot cope with that much churning of papers. It overloads reviewers and editors alike, meaning they have less time to do their job – that must lead to poorer outcomes for peer review. Fine, we could go for post-publication peer review, but frankly I like the idea that a few people have looked at a paper already, helped the authors improve it, and then given it the thumbs up. I think that saves me time from wading through a pile of dross.

5) The focus on Impact Factor as a measure of quality is extremely damaging and misguided. Such a game might be required by some, but it really should be resisted and protested. It is also crushing. I know a post-doc who had published ~30 papers in the previous ~2.5 years FTE but was unable to get an ARC DECRA. One of the reviewers noted that she didn’t have any papers in “high” IF journals (she did have papers in Ecology Letters and good conservation journals, so that comment must have meant no papers in Science, Nature or PNAS). That is a very narrow set of journals to aim at (and it is not as though those journals don’t publish absolute rubbish from time to time!). That comment of the reviewer was the extent of the critique of the quality of her publications – it was mindless – and damaging. She is one of the most independent and productive post-docs I have known, and she missed out on a DECRA in part because of a misguided reviewer’s attitude to Impact Factors. That comment did more damage than any rejection from a journal. So while it might be fair enough to warn early-career researchers about the publishing game and that some ill-informed people will equate journal IF with the quality of an article, I’m very concerned that we also tell them how mindless such an attitude is, and make sure they don’t judge quality in the same way. We should be telling them: “To assess the quality of a paper, read it, think about it, read associated papers, then think some more.”

Wow – sorry about the rant!

Mick

6 06 2013
Phillip Lord

This depressing thinking slows down science enormously. In the time that you are sending your papers backward and forward, no one is reading them. Worse still, it panders to the silly idea that journals are of any use at all as an arbiter of quality. Worse again, it is this kind of thinking that results in the publication bias that reduces the value of science overall.

My suggestion: before you start down this crazy path of bouncing papers backward and forward, at least put them onto http://arxiv.org first.

Peer review is what happens after you publish, and this is worth its weight in gold. Rejection gives nothing other than delay.

6 06 2013
CJAB

From Andrew Smith, The University of Adelaide:

You hit the nail on the head – I agree that rejection by a journal is no disgrace, and it’s a matter of using referees’ reports to try again (several times if need be). An important issue that can alarm new authors is the ‘reject but encourage resubmission’ decision, which has replaced ‘major revision required’ for some journals. What this does is shorten the apparent time from submission to acceptance, as the ‘new’ submission, often accepted quickly, gets a new date. Clever – but rather bogus as far as the author is concerned.

Then there’s the issue of the increasing obsession with trying to get papers into Nature & Science, e.g. as an indicator of ‘esteem’. This is increasingly encouraged by some universities for their own ‘esteem’; it is very widespread in China, as I well know from my experience there. These journals do not publish articles evenly across the whole spectrum of science, and even if they did, there is the lottery factor of getting beyond the editor’s desk, where the editor is looking for very ‘newsworthy’ articles.

This leads on to using success rate in high-impact-factor journals to evaluate individual scientists, which is very questionable indeed, especially across disciplines.

These issues have recently been discussed by no less than the Editor-in-Chief of Science in this editorial – you should find it interesting!

Professor Andrew (F.A.) Smith

School of Agriculture, Food & Wine
Waite Campus DX 650 636
The University of Adelaide, AUSTRALIA 5005

Fellow of the Australian Academy of Science
Honorary Professor, Research Center for EcoEnvironmental Sciences, Chinese Academy of Sciences, Beijing
Visiting Professor, Institute of Urban Environment, Chinese Academy of Sciences, Xiamen
Fellow, Borneo Research Council
