How to review a scientific paper

30 09 2014

Following one of the most popular posts on ConservationBytes.com, as well as in response to several requests, I’ve decided to provide a few pointers for early-career scientists on reviewing manuscripts submitted to peer-reviewed journals.

Apart from publishing your first peer-reviewed paper – whether it’s in Nature or Corey’s Journal of Bullshit – receiving that first request to review a manuscript is one of the best indications that you’ve finally ‘made it’ as a recognised scientist. Finally, someone is acknowledging that you are an expert and that your opinions and critiques are important. You deserve to feel proud when this happens.

Of course, reviewing is the backbone of the scientific process, because it is the main component of science’s pursuit of objectivity (i.e., subjectivity reduction). No other human endeavour can claim likewise.

It is therefore essential to take the reviewing process seriously, even if you do so only from the entirely selfish perspective that if you do not, no one will seriously review your own work. Reviewing is therefore much more than an altruistic effort to advance human knowledge – it is, at the very least, a survival mechanism. Sooner or later, if you acquire a reputation for providing bad reviews, or for refusing to do them, your own publication track record will suffer as a result.

Just like there are probably as many different (successful) ways to write a scientific paper as there are journals, most people develop their own approaches for reviewing their colleagues’ work. But just as it’s my opinion that many journal editors do an awful job of editing, I know that many reviewers do rather a shit job at their assigned tasks. This perspective comes from many years as an author, a reviewer, an editor and a mentor.

So take my advice as you will – hopefully some of it will prove useful when you review manuscripts.

Read the rest of this entry »





Attention Ecologists: Journal Ranking Survey

16 09 2014

In the interest of providing greater transparency when ranking the ‘quality’ of scientific journals, we are interested in collecting ecologists’ views on the relative impact of different ecology, conservation and multidisciplinary journals. If you’re a publishing ecologist, we want your personal opinion on a journal’s relative rank from this sample of 25 peer-reviewed journals. Please do not consult Impact Factors or other journal rankings to decide – just go with your ‘gut’ feeling.

We chose a sample of 25 authoritative journals in the field (listed below alphabetically). Use the drop-down menus to select a categorical rank. Make sure you’ve allocated categories 1 through to 4 at least once in the sample of 25. Category 5 (‘Other’) is optional.

The survey should take you only a few minutes to complete. Thanks for your time!





A fairer way to rank conservation and ecology journals in 2014

1 08 2014

Normally I just report the Thomson Reuters ISI Web of Knowledge Impact Factors for conservation-orientated journals each year, with some commentary on the rankings of other journals that also publish conservation-related material from time to time (see my lists of the 2008, 2009, 2010, 2011 and 2012 Impact Factor rankings).

This year, however, I’m doing something different given the growing negativity towards Thomson Reuters’ secretive behaviour (which they’ve promised this year to rectify by being more transparent) and the generally poor indication of quality that the Impact Factor represents. Although the 2013 Impact Factors have just been released (very late this year, for some reason), I’m going to compare them to the increasingly reputable Google Scholar Journal Metrics, which intuitively make more sense to me, are transparent and turn a little of the rankings dogma on its ear.

In addition to providing both the Google metric and the Impact Factor rankings, I’ve come up with a composite (average) rank from the two systems. I think ranks are potentially more useful than raw corrected citation metrics because you must first explicitly define the set of journals you are comparing. I also go one step further and modify the average ranking with a penalty term: essentially, the coefficient of variation of the rank disparity between the two systems is added to the mean rank.
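
The post describes the penalised composite in words only, so here is a minimal sketch of one plausible reading of it in Python; the function name and the exact arithmetic are my own illustration, not the author’s published method:

```python
import statistics

def composite_rank(if_rank: float, gs_rank: float) -> float:
    """Mean of a journal's Impact Factor rank and Google Scholar
    Metrics rank, penalised by the coefficient of variation
    (sd/mean) of the two ranks, so journals the two systems
    disagree about slide down the list."""
    ranks = [if_rank, gs_rank]
    mean_rank = statistics.mean(ranks)
    penalty = statistics.stdev(ranks) / mean_rank  # rank-disparity CV
    return mean_rank + penalty

# A journal ranked 3rd by one system and 7th by the other:
# mean 5.0, sd ~2.83, CV ~0.57, so the composite score is ~5.57.
print(round(composite_rank(3, 7), 2))  # 5.57
```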

Read on for the results.

Read the rest of this entry »





Be a good reviewer, but be a better editor

6 06 2014
© evileditor.blogspot.com.au

Perhaps it’s just that I’ve been at this for a while, or maybe it’s a real trend. Regardless, many of my colleagues and I are now of the opinion that the quality of editing in scientific journals is on the downhill slide.

Yes – we (scientists) all complain about negative decisions from journals to which we’ve submitted our work. Being rejected is part of the process. Aiming high is necessary for academic success, but when a negative decision is made on the basis of an appalling review (often the only one), it’s a little harder to swallow.

I suppose I can accept the inevitability of declining review quality for the simple reason that there are now SO MANY papers to review that finding willing volunteers is difficult. This means that there will always be people who only glance cursorily at a paper, miss the detail and recommend rejection based on their own misunderstanding or bias. It’s far easier to skim a paper and hunt for a reason to reject it than to put in the time to appraise the work critically and fairly.

This means that the traditional model of basing the decision to accept or reject a manuscript on only two reviews is fraught, because the probability of receiving poor reviews is rising. For example, a certain undisclosed journal of unquestionably high quality for which I edit requires no fewer than six recommended reviewers per manuscript, and no paper that I’m aware of is accepted or rejected on the basis of only two reviews. But I think this is the exception rather than the rule – there are simply too many journals now of low to medium quality to be able to get that many reviewers to agree to review.

I won’t spend too much time trying to encourage you to do the best job you can when reviewing – that should go without saying. Remember, what goes around comes around: if you are a shit reviewer, you will receive shit reviews.

Read the rest of this entry »





Scientists should blog

27 05 2014
© Bill Porter

As ConservationBytes.com is about to tick over 1 million hits since its inception in mid-2008, I thought I’d share why I think more scientists should blog about their work and interests.

As many of you know, I regularly give talks and short courses on the value of social and other media for scientists; in fact, my next planned ‘workshop’ (Make Your Science Matter) on this and related subjects will be held at the Ecological Society of Australia’s Annual Conference in Alice Springs later this year.

I’ve written before about the importance of having a vibrant, attractive and up-to-date online profile (along with plenty of other tips), but I don’t think I’ve ever put down my thoughts on blogging in particular. So here goes.

  1. The main reason scientists should consider blogging is the hard, cold fact that not nearly enough people read scientific papers. Most scientists are lucky if a few of their papers ever top 100 citations, and I’d wager that most are read by only a handful of specialists (there are exceptions, of course, but these are rare). If you’re a scientist, I don’t have to tell you the disappointment of realising that the blood, sweat and tears shed over each and every paper are largely for nought considering just how few people will ever read your hard-won results. It’s simply too depressing to contemplate, especially considering that the sum of human knowledge is so vast and expanding that this trend will only ever get worse. For those reasons alone, blogging about your own work widens the readership by orders of magnitude. More people read my blog every day than will probably ever read the majority of my papers.

Read the rest of this entry »




Hate journal impact factors? Try Google rankings instead

18 11 2013

A lot of people hate journal impact factors (IF). The hatred arises for many reasons, some of which are logical. For example, Thomson Reuters ISI Web of Knowledge® keeps the process fairly opaque, so it’s sometimes difficult to tell if journals are fairly ranked. Others hate IF because it does not adequately rank papers within or among sub-disciplines. Still others hate the idea that citations should have anything to do with science quality (debatable, in my view). Whatever your reason though, IF are more or less here to stay.

Yes, individual scientists shouldn’t be ranked based only on the IF of the journals in which they publish; there are decent alternatives such as the h-index (which can grow even after you die), or even better, the m-index (or m-quotient; think of the latter as a rate of citation accumulation). Others would rather ditch the whole citation thing altogether and measure some element of ‘impact’, although that elusive little beast has yet to be captured and applied objectively.
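
Both metrics are simple enough to compute by hand, but for concreteness here is a minimal sketch in Python, assuming the standard definitions (the largest h such that h of your papers have at least h citations each, and that value divided by years since your first paper); the function names and numbers are mine, purely for illustration:

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # this paper still supports an h of at least `rank`
        else:
            break
    return h

def m_quotient(citations: list[int], years_since_first_paper: float) -> float:
    """h-index per year of career: a rate of citation accumulation."""
    return h_index(citations) / years_since_first_paper

# Toy example: a 10-year career with these per-paper citation
# counts has h = 4, so the m-quotient is 0.4.
cites = [25, 8, 5, 4, 3, 0]
print(h_index(cites), m_quotient(cites, 10))  # 4 0.4
```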

So just in case you haven’t already seen it, Google has recently put its journal-ranking hat in the ring with its journal metrics. Having firmly wrested the cumbersome (and expensive) personal citation accumulators from ISI and Scopus (for example) with their very popular (and free!) Google Scholar (which, as I’ve said before, all researchers should set up and make available), they now seem poised to do the same for journal rankings.

So for your viewing and arguing pleasure, here are the ‘top’ 20 journals in Biodiversity and Conservation Biology according to Google’s h5-index (the h-index for articles published in that journal in the last 5 complete years; it is the largest number h such that h articles published in 2008-2012 have at least h citations each):
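
To make that parenthetical definition concrete: the h5-index is just an h-index computed over the citations to a journal’s articles from the last 5 complete years, so the same h_index function sketched above applies directly (the citation counts below are made up for illustration):

```python
# Citations to one journal's 2008-2012 articles (toy numbers),
# reusing h_index from the sketch above.
journal_cites_2008_2012 = [60, 44, 19, 12, 8, 7, 7, 3]
print(h_index(journal_cites_2008_2012))  # 7: seven articles with >= 7 citations each
```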

Read the rest of this entry »





Making the scientific workshop work

28 10 2013
I don’t mean this

I’ve been a little delayed in blogging this month, but for a very good reason – I’ve just experienced one of the best workshops of my career. I’d like to share a little of that perfect science recipe with you now.

I’ve said it before, but it can stand being repeated: done right, workshops can be some of the most efficient structures for doing big science.

First, let me define ‘workshop’ for those of you who might have only a vague notion of what it entails. To me, a workshop is a small group of like-minded scientists – all of whom possess different skills and specialities – who are brought together to achieve one goal: writing a superlative manuscript for publication.

So I don’t mean just a bog-standard chin-wag infected with motherhoods and diatribes. Workshops are not mini-conferences; neither are they soap boxes. It is my personal view that nothing can waste a scientist’s precious time more than an ill-planned and aimless workshop.

But with a little planning and some key ingredients that I’ll list shortly, you can turn a moderately good idea into something that can potentially shake the foundations of an entire discipline. So what are these secret ingredients?

Read the rest of this entry »







