Journal ranks 2016

14 07 2017


Last year we wrote a bibliometric paper describing a new way to rank journals, which I contend gives a fairer representation of relative citation-based rankings because it combines existing metrics (e.g., ISI, Google Scholar and Scopus) into a composite rank. So, here are the 2016 ranks for (i) 93 ecology, conservation and multidisciplinary journals, and subsets of (ii) 46 ecology journals and (iii) 21 conservation journals, just as I have done in previous years (2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008).






Journal ranks 2015

26 07 2016

Back in February I wrote about our new bibliometric paper describing a new way to rank journals, which I still contend is a fairer representation of relative citation-based rankings. Given that the technique requires ISI, Google Scholar and Scopus data to calculate the composite ranks, I had to wait for the last straggler (Google) to publish the 2015 values before I could present this year’s rankings to you. Google has finally done that.

So in what has become a bit of an annual tradition, I’m publishing the ranks of a mixed list of ecology, conservation and multidisciplinary journals that probably covers most of the titles you might be interested in comparing. As with last year, I make no claim that this list is comprehensive or representative. For previous lists based on ISI Impact Factors (2014 excepted), see the following links (2008, 2009, 2010, 2011, 2012, 2013).

So here are the rankings of (i) 84 ecology, conservation and multidisciplinary journals, and subsets of (ii) 42 ecology journals, (iii) 21 conservation journals, and (iv) 12 marine and freshwater journals.





How to rank journals

18 02 2016

… properly, or at least ‘better’.

In the past I have provided ranked lists of journals in conservation ecology according to their ISI® Impact Factor (see lists for 2008, 2009, 2010, 2011, 2012 & 2013). These lists have proven to be exceedingly popular.

Why are journal metrics and the rankings they imply so in-demand? Despite many people loathing the entire concept of citation-based journal metrics, we scientists, our administrators, granting agencies, award committees and promotion panellists use them with such merciless frequency that our academic fates are intimately bound to the ‘quality’ of the journals in which we publish.

Human beings love to rank themselves and others, the things they make, and the institutions to which they belong, so it’s a natural expectation that scientific journals are ranked as well.

I’m certainly not the first to suggest that journal quality cannot be fully captured by some formulation of the number of citations its papers receive; ‘quality’ is an elusive characteristic that includes inter alia things like speed of publication, fairness of the review process, prevalence of gate-keeping, reputation of the editors, writing style, within-discipline reputation, longevity, cost, specialisation, open-access options and even its ‘look’.

It would be impossible to include all of these aspects into a single ‘quality’ metric, although one could conceivably rank journals according to one or several of those features. ‘Reputation’ is perhaps the most quantitative characteristic when measured as citations, so we academics have chosen the lowest-hanging fruit and built our quality-ranking universe around them, for better or worse.

I was never really satisfied with metrics like black-box Impact Factors, so when I started discovering other ways to express the citation performance of the journals to which I regularly submit papers, I became a little more interested in the field of bibliometrics.

In 2014 I wrote a post about what I thought was a fairer way to judge peer-reviewed journal ‘quality’ than the default option of relying solely on ISI® Impact Factors. I was particularly interested in why the new kid on the block — Google Scholar Metrics — gave at times rather wildly different ranks of the journals in which I was interested.

So I came up with a simple mean ranking method to get some idea of the relative citation-based ‘quality’ of these journals.

It was a bit of a laugh, really, but my long-time collaborator, Barry Brook, suggested that I formalise the approach and include a wider array of citation-based metrics in the mean ranks.

Because Barry’s ideas are usually rather good, I followed his advice and together we constructed a more comprehensive, although still decidedly simple, approach to estimate the relative ranks of journals from any selection one would care to cobble together. In this case, however, we also included a rank-placement resampler to estimate the uncertainty associated with each rank.
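To make the general idea concrete, here is a minimal sketch (not the code from our paper) of how one can combine several citation metrics into a composite rank and attach some uncertainty to it: rank the journals separately under each metric, average those ranks, then resample to see how stable each mean rank is. The journal names, metric values and the simple bootstrap-over-metrics scheme below are all invented for illustration.

```python
import numpy as np

# Hypothetical citation metrics (higher = better) for a handful of journals;
# the three columns might stand in for, e.g., an ISI, a Google Scholar and a Scopus metric.
journals = ["Journal A", "Journal B", "Journal C", "Journal D"]
metrics = np.array([
    [5.2, 48, 1.9],
    [3.1, 35, 1.2],
    [6.0, 40, 2.3],
    [2.4, 22, 0.8],
])

def mean_ranks(m):
    """Rank journals within each metric (1 = best), then average the ranks across metrics."""
    # double argsort of the negated values turns each column into descending ranks
    ranks = np.argsort(np.argsort(-m, axis=0), axis=0) + 1
    return ranks.mean(axis=1)

# Point estimate of the composite (mean) rank for each journal
composite = mean_ranks(metrics)

# Crude bootstrap over the set of metrics to put rough uncertainty bounds on each mean rank
rng = np.random.default_rng(42)
n_boot = 10_000
boot = np.empty((n_boot, len(journals)))
for i in range(n_boot):
    cols = rng.integers(0, metrics.shape[1], size=metrics.shape[1])  # resample metrics with replacement
    boot[i] = mean_ranks(metrics[:, cols])

lower, upper = np.percentile(boot, [2.5, 97.5], axis=0)
for j, name in enumerate(journals):
    print(f"{name}: mean rank {composite[j]:.2f} (95% interval {lower[j]:.1f}-{upper[j]:.1f})")
```

A bootstrap over only three metrics is of course very coarse; the point is simply that once the per-metric ranks are in hand, a composite rank and its uncertainty can be assembled in a few lines.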

I’m pleased to announce that the final version is now published in PLoS One.





Who are the world’s biggest environmental reprobates?

5 05 2010

Everyone is at least a little competitive, and when it comes to international relations, there could be no higher incentive for trying to do better than your neighbours than a bit of nationalism (just think of the Olympics).

We rank the world’s countries for pretty much everything: relative wealth, health, governance quality and even happiness. There are also many, many different types of ‘environmental’ indices ranking countries. Some attempt to get at that nebulous concept of ‘sustainability’, some incorporate human health indices, and others are just plain black-box (see Böhringer et al. 2007 for a review).

With that in mind, we have just published an index of a country’s relative environmental impact that is robust (i.e., to missing data, choices of thresholds, etc.), readily quantifiable (data are available for most countries) and objective (no arbitrary weighting systems), and that focuses ONLY on the environment (i.e., not human health or economic indicators) – something no other metric does. We also looked at impact relative to opportunity – that is, how much each country has degraded relative to what it had to start with.

We used the following metrics to create a combined environmental impact rank: natural forest loss, habitat conversion, fisheries and other marine captures, fertiliser use, water pollution, carbon emissions from land-use change and threatened species.
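As a rough sketch of how such a combined rank can be constructed (the exact aggregation and data are described in the paper; everything below is invented for illustration), one can rank countries separately on each impact variable and then average those ranks. For the proportional index, each variable would first be divided by the corresponding measure of resource availability (e.g., forest loss relative to original forest area).

```python
import numpy as np

# Hypothetical per-country values for the seven impact variables listed above
# (all scaled so that larger = more environmental impact); the numbers are made up.
countries = ["Country A", "Country B", "Country C"]
impact_vars = np.array([
    # forest loss, habitat conversion, marine captures, fertiliser use,
    # water pollution, land-use CO2 emissions, threatened species
    [0.8, 0.5, 0.9, 0.4, 0.7, 0.6, 0.3],
    [0.2, 0.9, 0.1, 0.8, 0.4, 0.9, 0.7],
    [0.5, 0.2, 0.6, 0.1, 0.2, 0.3, 0.1],
])

# Rank countries within each variable (1 = worst impact), then average the ranks
ranks = np.argsort(np.argsort(-impact_vars, axis=0), axis=0) + 1
combined_rank = ranks.mean(axis=1)

# List countries from worst (lowest mean rank) to least-bad
for name, r in sorted(zip(countries, combined_rank), key=lambda x: x[1]):
    print(f"{name}: combined impact rank {r:.2f}")
```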

The paper, entitled ‘Evaluating the relative environmental impact of countries’, was just published in the open-access journal PLoS One with my colleagues Navjot Sodhi of the National University of Singapore (NUS) and Xingli Giam, formerly of NUS but now at Princeton University in the USA.

So who were the worst? Relative to resource availability (i.e., how much forest area, coastline, water, arable land, species, etc. each country has), the ten worst countries by proportional environmental impact were (worst first):

  1. Singapore
  2. Korea
  3. Qatar
  4. Kuwait
  5. Japan
  6. Thailand
  7. Bahrain
  8. Malaysia
  9. Philippines
  10. Netherlands

When considering just the absolute impact (i.e., not controlling for resource availability), the worst ten were:

  1. Brazil
  2. USA
  3. China
  4. Indonesia
  5. Japan
  6. Mexico
  7. India
  8. Russia
  9. Australia
  10. Peru

Interestingly (and quite unexpectedly), the authors’ home countries (Singapore, Australia and the USA) all appear in either the worst-ten proportional or the worst-ten absolute ranks. Embarrassing, really (for a full list of all countries, see the supporting information).