Journal ranks 2015

26 07 2016

Back in February I wrote about our bibliometric paper describing a new way to rank journals, which I still contend is a fairer representation of relative citation-based rankings. Given that the technique requires ISI, Google Scholar and Scopus data to calculate the composite ranks, I had to wait for the last straggler (Google) to publish its 2015 values before I could present this year’s rankings to you. Google has finally done that.

So, in what has become a bit of an annual tradition, I’m publishing the ranks of a mixed list of ecology, conservation and multidisciplinary journals that probably covers most of the titles you might be interested in comparing. As with last year, I make no claim that this list is comprehensive or representative. For previous lists based on ISI Impact Factors (except 2014), see the following links (2008, 2009, 2010, 2011, 2012, 2013).

So here are the rankings of (i) 84 ecology, conservation and multidisciplinary journals, along with subsets of (ii) 42 ecology journals, (iii) 21 conservation journals, and (iv) 12 marine and freshwater journals. Read the rest of this entry »

How to rank journals

18 02 2016

… properly, or at least ‘better’.

In the past I have provided ranked lists of journals in conservation ecology according to their ISI® Impact Factor (see lists for 2008, 2009, 2010, 2011, 2012 & 2013). These lists have proven to be exceedingly popular.

Why are journal metrics and the rankings they imply so in-demand? Despite many people loathing the entire concept of citation-based journal metrics, we scientists, our administrators, granting agencies, award committees and promotion panellists use them with such merciless frequency that our academic fates are intimately bound to the ‘quality’ of the journals in which we publish.

Human beings love to rank themselves and others, the things they make, and the institutions to which they belong, so it’s a natural expectation that scientific journals are ranked as well.

I’m certainly not the first to suggest that journal quality cannot be fully captured by some formulation of the number of citations its papers receive; ‘quality’ is an elusive characteristic that includes inter alia things like speed of publication, fairness of the review process, prevalence of gate-keeping, reputation of the editors, writing style, within-discipline reputation, longevity, cost, specialisation, open-access options and even its ‘look’.

It would be impossible to fold all of these aspects into a single ‘quality’ metric, although one could conceivably rank journals according to one or several of those features. ‘Reputation’ is perhaps the most readily quantified characteristic when measured as citations, so we academics have chosen the lowest-hanging fruit and built our quality-ranking universe around citations, for better or worse.

I was never really satisfied with metrics like black-box Impact Factors, so when I started discovering other ways to express the citation performance of the journals to which I regularly submit papers, I became a little more interested in the field of bibliometrics.

In 2014 I wrote a post about what I thought was a fairer way to judge peer-reviewed journal ‘quality’ than the default option of relying solely on ISI® Impact Factors. I was particularly interested in why the new kid on the block — Google Scholar Metrics — gave at times rather wildly different ranks of the journals in which I was interested.

So I came up with a simple mean ranking method to get some idea of the relative citation-based ‘quality’ of these journals.

It was a bit of a laugh, really, but my long-time collaborator, Barry Brook, suggested that I formalise the approach and include a wider array of citation-based metrics in the mean ranks.

Because Barry’s ideas are usually rather good, I followed his advice and together we constructed a more comprehensive, although still decidedly simple, approach to estimate the relative ranks of journals from any selection one would care to cobble together. In this case, however, we also included a rank-placement resampler to estimate the uncertainty associated with each rank.
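The gist is easy to sketch. As a rough illustration only (not the exact method in the paper), the following Python snippet ranks a handful of journals on hypothetical values for three citation metrics, averages the within-metric ranks per journal, and bootstraps over the metrics to put crude uncertainty bounds on each mean rank:

```python
# Rough sketch of the mean-rank idea (not the exact published method):
# rank the journals within each citation metric, average those ranks per
# journal, then bootstrap over the metrics to gauge rank uncertainty.
# All metric values below are hypothetical.
import random
import statistics

# journal -> hypothetical values for three citation metrics
# (e.g. an impact factor, a Google h5-index, a Scopus metric)
metrics = {
    "Journal A": [12.1, 61, 10.3],
    "Journal B": [4.8, 60, 6.5],
    "Journal C": [6.2, 72, 6.0],
    "Journal D": [2.9, 40, 3.2],
}

journals = list(metrics)
n_metrics = len(next(iter(metrics.values())))

def ranks_for_metric(i):
    """Rank journals (1 = best) by the i-th metric; higher value = better."""
    ordered = sorted(journals, key=lambda j: metrics[j][i], reverse=True)
    return {j: rank + 1 for rank, j in enumerate(ordered)}

# per-journal list of ranks, one entry per metric
rank_table = {j: [] for j in journals}
for i in range(n_metrics):
    metric_ranks = ranks_for_metric(i)
    for j in journals:
        rank_table[j].append(metric_ranks[j])

mean_rank = {j: statistics.mean(r) for j, r in rank_table.items()}

# bootstrap: resample the metrics with replacement and recompute mean ranks
random.seed(1)
n_boot = 1000
boot = {j: [] for j in journals}
for _ in range(n_boot):
    picks = [random.randrange(n_metrics) for _ in range(n_metrics)]
    for j in journals:
        boot[j].append(statistics.mean(rank_table[j][i] for i in picks))

for j in sorted(journals, key=mean_rank.get):
    b = sorted(boot[j])
    lo, hi = b[int(0.025 * n_boot)], b[int(0.975 * n_boot) - 1]
    print(f"{j}: mean rank {mean_rank[j]:.2f} (bootstrap interval {lo:.2f}-{hi:.2f})")
```

Treat this purely as a toy version of the logic; the published approach uses a wider set of metrics and a more careful rank-placement resampler.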

I’m pleased to announce that the final version is now published in PLoS One. Read the rest of this entry »

Lomborg: a detailed citation analysis

24 04 2015

There’s been quite a bit of palaver recently about the invasion of Lomborg’s ‘Consensus’ Centre into the University of Western Australia, including inter alia that there was no competitive process for the award of $4 million of taxpayer money from the Commonwealth Government, that Lomborg is a charlatan with a not-terribly-well-hidden anti-climate-change agenda, and that he is not an academic and possesses no credibility, so he should not be given an academic appointment at one of Australia’s leading research universities.

On that last point, there’s been much confusion among non-academics about what it means to have no credible academic track record. In my previous post, I reproduced a letter from the Head of UWA’s School of Animal Biology, Professor Sarah Dunlop, in which she stated that Lomborg had a laughably low h-index of only 3. The Australian, in all its brilliant capacity to report the unvarnished truth, claimed that a certain Professor Ian Hall of Griffith University had instead determined that Lomborg’s h-index was 21, based on Harzing’s Publish or Perish software tool. As I show below, if Professor Hall did indeed conclude this, it shows he knows next to nothing about citation indices.

What is an ‘h-index’ and why does it matter? Below I provide an explainer, as well as some rigorous analysis of Lomborg’s track record.
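As a primer, the index itself is trivial to compute once you have a list of per-paper citation counts; what makes the numbers differ so wildly between sources is which papers and citations each database counts, not the formula. Here is a minimal sketch in Python, using entirely hypothetical citation counts:

```python
# Minimal sketch: compute an h-index from per-paper citation counts.
# The counts below are hypothetical and belong to no one in particular.
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([50, 18, 7, 6, 5, 4, 2, 1, 0]))  # prints 5
```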

Read the rest of this entry »

Hate journal impact factors? Try Google rankings instead

18 11 2013

A lot of people hate journal impact factors (IF). The hatred arises for many reasons, some of which are logical. For example, Thomson Reuters ISI Web of Knowledge® keeps the process fairly opaque, so it’s sometimes difficult to tell whether journals are fairly ranked. Others hate IF because it does not adequately rank papers within or among sub-disciplines. Still others hate the idea that citations should have anything to do with science quality (debatable, in my view). Whatever your reason, though, IF are more or less here to stay.

Yes, individual scientists shouldn’t be ranked based only on the IF of the journals in which they publish; there are decent alternatives such as the h-index (which can grow even after you die), or even better, the m-index (or m-quotient; think of the latter as a rate of citation accumulation). Others would rather ditch the whole citation thing altogether and measure some element of ‘impact’, although that elusive little beast has yet to be captured and applied objectively.
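For what it’s worth, the m-quotient is simply the h-index divided by career length in years (time since first publication), which is why it behaves like a rate. A minimal sketch, with hypothetical numbers:

```python
# Minimal sketch of the m-quotient: h-index divided by career length
# (years since first publication). All numbers are hypothetical.
def m_quotient(h, first_pub_year, current_year=2013):
    """Approximate rate of h-index accumulation over a career."""
    career_years = max(current_year - first_pub_year + 1, 1)
    return h / career_years

print(m_quotient(h=20, first_pub_year=2004))  # 20 over a 10-year career -> 2.0
```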

So, just in case you haven’t already seen it, Google has recently put its journal-ranking hat in the ring with its journal metrics. Having firmly wrested the cumbersome (and expensive) personal citation accumulators from ISI and Scopus (for example) with its very popular (and free!) Google Scholar (which, as I’ve said before, all researchers should set up and make available), Google now seems poised to do the same for journal rankings.

So for your viewing and arguing pleasure, here are the ‘top’ 20 journals in Biodiversity and Conservation Biology according to Google’s h5-index (the h-index for articles published in that journal in the last 5 complete years; it is the largest number h such that h articles published in 2008-2012 have at least h citations each):

Read the rest of this entry »
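For the record, Google’s definition above boils down to a few lines of code: restrict a journal’s articles to the last five complete years, then apply the usual h calculation to their citation counts. A minimal sketch with hypothetical article data:

```python
# Minimal sketch of a journal h5-index: the largest h such that h articles
# published in the last 5 complete years (here 2008-2012) have at least
# h citations each. The article data below are hypothetical.
def h5_index(articles, window=(2008, 2012)):
    """articles: list of (publication_year, citation_count) tuples."""
    cites = sorted(
        (c for year, c in articles if window[0] <= year <= window[1]),
        reverse=True,
    )
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

example = [(2007, 300), (2008, 40), (2009, 35), (2010, 12), (2011, 9), (2012, 2)]
print(h5_index(example))  # the heavily cited 2007 paper is excluded; prints 4
```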

A posthumous citation tribute for Sodhi

6 11 2012

I’m sitting at a friend’s house in Sydney writing this quick entry before jumping on a plane to London. It’s been a busy few days, and the next few weeks will be even busier.

I met yesterday with Paul and Anne Ehrlich (who are visiting Australia), and we finalised the first complete draft of our book – I will keep you posted on that. In London, I will be meeting with the Journal of Animal Ecology crew on Wednesday night (I’m on the editorial board), followed by two very interesting days at the Zoological Society of London’s Protected Areas Symposium at Regent’s Park. Then I’ll be off to the Universities of Liverpool and York for a quick lecture tour, followed by a very long trip back home. I’m already tired.

In the meantime, I thought I’d share a little bit of news about our dear and recently deceased friend and colleague, Navjot Sodhi. We’ve already written several personal tributes (see here, here and here) to this great mind of conservation thinking who left us far too soon, but this is a little different. Barry Brook, as is his wont, came up with a great idea to get Navjot up on Google Scholar posthumously.
Read the rest of this entry »

Arguing for scientific socialism in ecology funding

26 06 2012

What makes an ecologist ‘successful’? How do you measure ‘success’? We’d all like to believe that success is measured by our results’ transformation of ecological theory and practice – in a conservation sense, this would ultimately mean our work’s ability to prevent (or at least, slow down) extinctions.

Alas, we’re not that good at quantifying such successes, and if you go by the global metrics of species threats, deforestation, pollution, invasive species and habitat degradation, we’ve failed utterly.

So instead, we measure scientific ‘success’ via peer-reviewed publications and the citations (essentially, scientific cross-referencing) that arise from them. These are blunt instruments, to be sure, but they are the only real metrics we have. If you’re not being cited, no one is reading your work; and if no one is reading your work, your cleverness goes unnoticed and you help nothing and no one.

A paper I just read in the latest issue of Oikos goes some way towards examining what makes a ‘successful’ ecologist (i.e., in terms of publications, citations and funding), and there are some very interesting results. Read the rest of this entry »
