Journal ranks 2016

14 07 2017


Last year we wrote a bibliometric paper describing a new way to rank journals, which I contend provides a fairer representation of relative citation-based rankings because it combines existing metrics (e.g., ISI, Google Scholar and Scopus) into a composite rank. So, here are the 2016 ranks for (i) 93 ecology, conservation and multidisciplinary journals, and subsets of (ii) 46 ecology journals and (iii) 21 conservation journals, just as I have done in previous years (2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008).






Journal ranks 2015

26 07 2016

Back in February I wrote about our new bibliometric paper describing a new way to rank journals, which I still contend is a fairer representation of relative citation-based rankings. Given that the technique requires ISI, Google Scholar and Scopus data to calculate the composite ranks, I had to wait for the last straggler (Google) to publish the 2015 values before I could present this year’s rankings to you. Google has finally done that.

So in what has become a bit of an annual tradition, I’m publishing the ranks of a mixed list of ecology, conservation and multidisciplinary disciplines that probably cover most of the journals you might be interested in comparing. Like for last year, I make no claims that this list is comprehensive or representative. For previous lists based on ISI Impact Factors (except 2014), see the following links (2008, 2009, 2010, 2011, 2012, 2013).

So here are the rankings of (i) 84 ecology, conservation and multidisciplinary journals, and subsets of (ii) 42 ecology journals, (iii) 21 conservation journals, and (iv) 12 marine and freshwater journals.





How to rank journals

18 02 2016

… properly, or at least ‘better’.

In the past I have provided ranked lists of journals in conservation ecology according to their ISI® Impact Factor (see lists for 2008, 2009, 2010, 2011, 2012 & 2013). These lists have proven to be exceedingly popular.

Why are journal metrics and the rankings they imply so in-demand? Despite many people loathing the entire concept of citation-based journal metrics, we scientists, our administrators, granting agencies, award committees and promotion panellists use them with such merciless frequency that our academic fates are intimately bound to the ‘quality’ of the journals in which we publish.

Human beings love to rank themselves and others, the things they make, and the institutions to which they belong, so it’s a natural expectation that scientific journals are ranked as well.

I’m certainly not the first to suggest that journal quality cannot be fully captured by some formulation of the number of citations its papers receive; ‘quality’ is an elusive characteristic that includes inter alia things like speed of publication, fairness of the review process, prevalence of gate-keeping, reputation of the editors, writing style, within-discipline reputation, longevity, cost, specialisation, open-access options and even its ‘look’.

It would be impossible to include all of these aspects into a single ‘quality’ metric, although one could conceivably rank journals according to one or several of those features. ‘Reputation’ is perhaps the most quantitative characteristic when measured as citations, so we academics have chosen the lowest-hanging fruit and built our quality-ranking universe around them, for better or worse.

I was never really satisfied with metrics like black-box Impact Factors, so when I started discovering other ways to express the citation performance of the journals to which I regularly submit papers, I became a little more interested in the field of bibliometrics.

In 2014 I wrote a post about what I thought was a fairer way to judge peer-reviewed journal ‘quality’ than the default option of relying solely on ISI® Impact Factors. I was particularly interested in why the new kid on the block — Google Scholar Metrics — gave at times rather wildly different ranks of the journals in which I was interested.

So I came up with a simple mean ranking method to get some idea of the relative citation-based ‘quality’ of these journals.
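To make the idea concrete, here is a quick sketch of that mean-rank calculation in Python. The journal names and scores are invented purely for illustration; the point is simply that each metric contributes a rank, and the composite is the average of those per-metric ranks.

```python
# A minimal sketch of the mean-rank idea: each metric ranks the same set of
# journals, and the composite score is the mean of those per-metric ranks.
# Journal names and values below are made up, not real data.

def mean_ranks(metric_scores):
    """metric_scores: dict of metric name -> dict of journal -> score
    (higher score = better). Returns dict of journal -> mean rank."""
    journals = next(iter(metric_scores.values())).keys()
    totals = {j: 0.0 for j in journals}
    for scores in metric_scores.values():
        # rank 1 = best within this metric
        ordered = sorted(scores, key=scores.get, reverse=True)
        for rank, j in enumerate(ordered, start=1):
            totals[j] += rank
    n_metrics = len(metric_scores)
    return {j: totals[j] / n_metrics for j in journals}

example = {
    "IF":  {"J1": 10.2, "J2": 4.1, "J3": 6.3},
    "h5":  {"J1": 60,   "J2": 35,  "J3": 40},
    "SJR": {"J1": 5.0,  "J2": 1.2, "J3": 2.5},
}
print(mean_ranks(example))  # J1 ranks first on every metric, so its mean rank is 1.0
```

Working with ranks rather than the raw metric values sidesteps the problem that Impact Factors, h5-indices and SJR scores all sit on different scales.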

It was a bit of a laugh, really, but my long-time collaborator, Barry Brook, suggested that I formalise the approach and include a wider array of citation-based metrics in the mean ranks.

Because Barry’s ideas are usually rather good, I followed his advice and together we constructed a more comprehensive, although still decidedly simple, approach to estimate the relative ranks of journals from any selection one would care to cobble together. In this case, however, we also included a rank-placement resampler to estimate the uncertainty associated with each rank.
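The uncertainty idea can be sketched the same way: resample each journal’s per-metric ranks with replacement and see how much the composite mean wobbles. This toy bootstrap is only illustrative; the actual resampler in the paper is more sophisticated.

```python
import random

def bootstrap_rank_uncertainty(metric_ranks, n_boot=1000, seed=1):
    """metric_ranks: dict of journal -> list of ranks, one per metric.
    Resamples each journal's per-metric ranks with replacement and
    returns, per journal, the (min, max) composite mean rank observed.
    A toy stand-in for the paper's rank-placement resampler."""
    random.seed(seed)
    spread = {}
    for j, ranks in metric_ranks.items():
        means = []
        for _ in range(n_boot):
            sample = [random.choice(ranks) for _ in ranks]
            means.append(sum(sample) / len(sample))
        spread[j] = (min(means), max(means))
    return spread

# Invented ranks across three metrics for three journals:
ranks = {"J1": [1, 1, 2], "J2": [3, 2, 3], "J3": [2, 3, 1]}
print(bootstrap_rank_uncertainty(ranks))
```

A journal that the metrics agree about gets a narrow interval; one they disagree about gets a wide interval, which is exactly the uncertainty a single composite rank hides.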

I’m pleased to announce that the final version is now published in PLoS One.





A fairer way to rank conservation and ecology journals in 2014

1 08 2014

Normally I just report the Thomson-Reuters ISI Web of Knowledge Impact Factors for conservation-orientated journals each year, with some commentary on the rankings of other journals that also publish conservation-related material from time to time (see my lists of the 2008, 2009, 2010, 2011 and 2012 Impact Factor rankings).

This year, however, I’m doing something different given the growing negativity towards Thomson-Reuters’ secretive behaviour (which they’ve promised this year to rectify by being more transparent) and the generally poor indication of quality that the Impact Factor represents. Although the 2013 Impact Factors have just been released (very late this year, for some reason), I’m going to compare them to the increasingly reputable Google Scholar Journal Metrics, which intuitively make more sense to me, are transparent and turn a little of the rankings dogma on its ear.

In addition to providing both the Google metric as well as the Impact Factor rankings, I’ve come up with a composite (average) rank from the two systems. I think ranks are potentially more useful than raw corrected citation metrics because you must first explicitly set your set of journals to compare. I also go one step further and modify the average ranking with a penalty term that is essentially the addition of the coefficient of variation of rank disparity between the two systems.
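For a journal ranked by both systems, that composite works out to the mean of the two ranks plus a disagreement penalty. Here is my own illustrative rendering of that idea in Python; treat the exact formulation as an assumption rather than a reproduction of the post’s analysis.

```python
import statistics

def penalised_rank(rank_if, rank_gs):
    """Composite score for one journal: the mean of its Impact Factor rank
    and Google Scholar rank, plus the coefficient of variation (sd / mean)
    of those two ranks as a penalty for disagreement between the systems.
    An illustrative formulation, not the exact one from the analysis."""
    ranks = [rank_if, rank_gs]
    mean = statistics.mean(ranks)
    cv = statistics.stdev(ranks) / mean if mean else 0.0
    return mean + cv  # lower = better

# Perfect agreement attracts no penalty; disagreement does:
print(penalised_rank(5, 5))  # 5.0
print(penalised_rank(2, 8))  # > 5.0, despite the same mean rank
```

The effect is that a journal ranked 5th by both systems ends up ahead of one ranked 2nd by one system and 8th by the other, even though their average ranks are identical.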

Read on for the results.






Hate journal impact factors? Try Google rankings instead

18 11 2013

A lot of people hate journal impact factors (IF). The hatred arises for many reasons, some of which are logical. For example, Thomson Reuters ISI Web of Knowledge® keeps the process fairly opaque, so it’s sometimes difficult to tell if journals are fairly ranked. Others hate IF because it does not adequately rank papers within or among sub-disciplines. Still others hate the idea that citations should have anything to do with science quality (debatable, in my view). Whatever your reason though, IF are more or less here to stay.

Yes, individual scientists shouldn’t be ranked based only on the IF of the journals in which they publish; there are decent alternatives such as the h-index (which can grow even after you die), or even better, the m-index (or m-quotient; think of the latter as a rate of citation accumulation). Others would rather ditch the whole citation thing altogether and measure some element of ‘impact’, although that elusive little beast has yet to be captured and applied objectively.

So just in case you haven’t already seen it, Google has recently put its journal-ranking hat in the ring with its journal metrics. Having firmly wrested the cumbersome (and expensive) personal citation accumulators from ISI and Scopus (for example) with their very popular (and free!) Google Scholar (which, as I’ve said before, all researchers should set up and make available), they now seem poised to do the same for journal rankings.

So for your viewing and arguing pleasure, here are the ‘top’ 20 journals in Biodiversity and Conservation Biology according to Google’s h5-index (the h-index for articles published in that journal in the last 5 complete years; it is the largest number h such that h articles published in 2008-2012 have at least h citations each):
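For the curious, the h-index definition in that parenthetical is easy to compute yourself. This little function (with made-up citation counts) shows the logic; applied to a journal’s articles from the last five complete years, it gives Google’s h5-index.

```python
def h_index(citations):
    """Largest h such that h of the articles have at least h citations each.
    Citation counts in the example are invented for illustration."""
    counts = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:  # the i-th best article still has >= i citations
            h = i
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four articles have at least 4 citations each
```

Note the fifth article (3 citations) is what stops h reaching 5: five articles with at least 5 citations each would be needed.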






Conservation and ecology journal Impact Factors 2012

20 06 2013

It’s the time of year that scientists love to hate – the latest (2012) journal rankings have been released by ISI Web of Knowledge. Many people despise this system, despite its major role in driving publishing trends.

I’ve previously listed the 2008, 2009, 2010 and 2011 IF for major conservation and ecology journals. As before, I’ve included the previous year’s IF alongside the latest values to see how journals have improved or worsened (but take note – journals increase their IF on average anyway merely by the fact that publication frequency is increasing, so small jumps aren’t necessarily meaningful; I suspect that declines are therefore more telling).






Conservation and Ecology Impact Factors 2011

29 06 2012

Here we go – another year, another set of citations, and another journal ranking by ISI Web of Knowledge Journal Citation Reports. Love them or loathe them, Impact Factors (IF) are immensely important for dictating publication trends. No, a high Impact Factor doesn’t mean your paper will receive hundreds of citations, but the two are correlated.

I’ve previously listed the 2008, 2009 and 2010 IF for major conservation and ecology journals – now here are the 2011 IF fresh off the press (so to speak). I’ve included the 2010 alongside to see how journals have improved or worsened (but take note – journals increase their IF on average anyway merely by the fact that publication frequency is increasing, so small jumps aren’t necessarily meaningful).
