Journal ranks 2016

14 07 2017


Last year we wrote a bibliometric paper describing a new way to rank journals, which I contend is a fairer representation of relative citation-based rankings because it combines existing ones (e.g., ISI, Google Scholar and Scopus) into a composite rank. So, here are the 2016 ranks for (i) 93 ecology, conservation and multidisciplinary journals, and subsets of (ii) 46 ecology journals and (iii) 21 conservation journals, just as I have done in previous years (2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008).

Read the rest of this entry »





How to respond to reviewers

30 06 2017

Just like there are many styles to writing scientific manuscripts, there are also many ways to respond to a set of criticisms and suggestions from reviewers. Likewise, many people and organisations have compiled lists of what to do, and what not to do, in a response to reviews of your manuscript (just type ‘response to reviewer comments’ or similar phrase into your favourite search engine and behold the reams of available advice).


It clearly is a personal choice, but based on my own experience as an author, reviewer and editor, and on the myriad suggestions available online, there are a few golden rules about how to respond:

  • After you have calmed down a little, it is essential that you remain polite throughout the process. Irrespective of how stupid, unfair, mean-spirited, or just plain lazy the reviewers might appear to you, do not stoop to their level and fire back with defensive, snarky comments. Neither must you ever blame the editor for even the worst types of reviews, because you will do yourself no favours at all by offending the main person who will decide your manuscript’s fate.

Read the rest of this entry »





Credit for reviewing & editing — it’s about bloody time

15 03 2017

As have many other scientists, I’ve whinged before about the exploitative nature of scientific publishing. What other industry obtains its primary material for free (submitted articles), has its construction and quality control done for free (reviewing & editing), and then sells its final products for immense profit back to the very people who started the process? It’s a fantastic recipe for making oodles of cash; had I been financially cleverer and more ethically bereft in my youth, I would have bought shares in publicly listed publishing companies.

How much time do we spend reviewing and editing each other’s manuscripts? Some have tried to work out these figures and prescribe ideal writing-to-reviewing/editing ratios, but it suffices to say that we spend a mind-bending amount of our time doing these tasks. While we might never reap the financial rewards of reviewing, we can now at least get some nominal credit for the effort.

While it has been around for nearly five years now, the company Publons1 has only recently come to my attention. At first I wondered about the company’s modus operandi, but after discovering that academics can use their services completely free of charge, and that the company funds itself by “… partnering with publishers” (at least someone is getting something out of them), I believe it’s about as legitimate and above-board as it gets.

So what does Publons do? They basically list the journals for which you have reviewed and/or edited. Whoah! (I can almost hear you say). How do I protect my anonymity? Read the rest of this entry »





Multiculturalism in the lab

23 02 2017

With all the nasty nationalism and xenophobia gurgling nauseatingly to the surface of our political discourse these days, it is probably worth some reflection regarding the role of multiculturalism in science. I’m therefore going to take a stab, despite being in most respects a ‘golden child’ in terms of privilege and opportunity (I am, after all, a middle-aged Caucasian male living in a wealthy country). My cards are on the table.

I know few overtly racist scientists, although I suspect that they do exist. In fact, most scientists are of a more liberal persuasion generally and tend to pride themselves on their objectivity in all aspects of being human, including the sociological ones. In other words, we tend to think of ourselves as dispassionate pluralists who only judge the empirical capabilities of our colleagues, with their races, genders, sexual persuasions and other physical attributes irrelevant to our assessment. We generally love to travel and interact with our peers from all nations and walks of life, and we regularly decorate our offices with cultural paraphernalia different to our own.

But are we as unbiased and dispassionate as we think we are? Do we take that professed pluralism and cultural promiscuity with us to the lab each day? Perhaps we could, and should, do better. Read the rest of this entry »





Dealing with rejection

8 02 2017

We scientists can unfortunately be real bastards to each other, and no other interaction brings out that tendency more than peer review. Of course no one, no matter how experienced, likes to have a manuscript rejected. People hate to be on the receiving end of any criticism, and scientists are certainly no different. Many reviews can be harsh and unfair; many reviewers ‘miss the point’ or are just plain nasty.

It is inevitable that many of your manuscripts will be rejected outright on the first attempt. Sometimes you can counter this negative decision via an appeal, but more often than not the rejection is final no matter what you argue or modify. So your only recourse is to move on to a lower-ranked journal. If you consistently submit to low-ranked journals, you would obviously receive far fewer rejections during the course of your scientific career, but you would also probably minimise the number of citations arising from your work as a consequence.

So your manuscript has been REJECTED. What now? The first thing to remember is that you and your colleagues have not been rejected, only your manuscript has. This might seem obvious as you read these words, but nearly everyone — save the chronically narcissistic — goes through some feelings of self-doubt and inadequacy following a rejection letter. At this point it is essential to remind yourself that your capacity to do science is not being judged here; rather, the most likely explanation is that given your strategy to maximise your paper’s citation potential, you have probably just overshot the target journal. What this really means is that the editor (and/or reviewers) are of the opinion that your paper is not likely to gain as many citations as they think papers in their journal should. Look closely at the rejection letter — does it say anything about “… lacking novelty …”? Read the rest of this entry »





Journal ranks 2015

26 07 2016

Back in February I wrote about our new bibliometric paper describing a new way to rank journals, which I still contend is a fairer representation of relative citation-based rankings. Given that the technique requires ISI, Google Scholar and Scopus data to calculate the composite ranks, I had to wait for the last straggler (Google) to publish the 2015 values before I could present this year’s rankings to you. Google has finally done that.

So in what has become a bit of an annual tradition, I’m publishing the ranks of a mixed list of journals from ecology, conservation and multidisciplinary fields that probably covers most of the titles you might be interested in comparing. As with last year’s list, I make no claims that this one is comprehensive or representative. For previous lists based on ISI Impact Factors (except 2014), see the following links (2008, 2009, 2010, 2011, 2012, 2013).

So here are the rankings of (i) 84 ecology, conservation and multidisciplinary journals, and subsets of (ii) 42 ecology journals, (iii) 21 conservation journals, and (iv) 12 marine and freshwater journals. Read the rest of this entry »





Subconsciously sexist?

29 06 2016

It was with some consternation that I processed some recent second-hand scuttlebutt about my publishing history with respect to gender balance. I’ve always considered myself non-sexist when it comes to working with my colleagues, but as a white, middle-aged male, I’m willing to admit that perhaps subconsciously I’ve been promoting gender inequalities in science without realising that I’m doing it. As a father of a daughter, I also want to make sure the world in which she grows up isn’t as difficult as it has been for women of previous generations.

It is still an unfortunate fact that the ideal of a 50–50 gender balance in the biological sciences is far from becoming a reality; indeed, women have to be about 2.2–2.5 times more productive than their male counterparts to be as successful in securing financial support to do their work.

In fact, a 1993 study of ecologists attributed the lower (but happily, increasing) productivity and dwindling representation of women with career stage to such institutionalised injustices as less satisfactory relationships with PhD advisors, difficulty in finding suitable mentors, lack of institutional empowerment, greater family responsibilities, lower salaries, lower job security, and lower evaluation of personal success. A follow-up study in 2012 suggested that the gap was narrowing in many of these components, but it was still far from equal. For a more comprehensive discussion of the complexity of these issues in science, see here; for ecology in particular, see here.

Others have more recently reported no evidence for a gender effect in paper acceptance rates (Biological Conservation), and no difference in the level of perceived expertise between men and women (in long-term environmental or ecological research at 60 protected areas stratified across forests of the Asia-Pacific, African and American tropics).

Read the rest of this entry »





Shadow of ignorance veiling society despite more science communication

19 04 2016

I’ve been thinking about this post for a while, but it wasn’t until having some long, deep chats today with staff and students at Simon Fraser University‘s Department of Biological Sciences (with a particular hat-tip to the lovely Nick Dulvy, Isabelle Côté & John Reynolds) that the full idea began to take shape in my brain. It seems my presentation was a two-way street: I think I taught a few people some things, and they taught me something back. Nice.

There’s no question at all that science communication has never before been so widespread and of such high quality. More and more scientists and science students are now blogging, tweeting and generally engaging the world about their science findings. There is also an increasing number of professional science communication associations out there, and a growing population of professional science communicators. It is possibly the best time in history to be involved in the generation and/or communication of scientific results.

Why then is the public appreciation, acceptance and understanding of science declining? It really doesn’t make much sense if you merely consider that there has never been more good science ‘out there’ in the media — both social and traditional. For the source literature itself, there have never before been so many scientific journals, articles and even scientists writing. Read the rest of this entry »





How to rank journals

18 02 2016

… properly, or at least ‘better’.

In the past I have provided ranked lists of journals in conservation ecology according to their ISI® Impact Factor (see lists for 2008, 2009, 2010, 2011, 2012 & 2013). These lists have proven to be exceedingly popular.

Why are journal metrics and the rankings they imply so in-demand? Despite many people loathing the entire concept of citation-based journal metrics, we scientists, our administrators, granting agencies, award committees and promotion panellists use them with such merciless frequency that our academic fates are intimately bound to the ‘quality’ of the journals in which we publish.

Human beings love to rank themselves and others, the things they make, and the institutions to which they belong, so it’s a natural expectation that scientific journals are ranked as well.

I’m certainly not the first to suggest that journal quality cannot be fully captured by some formulation of the number of citations its papers receive; ‘quality’ is an elusive characteristic that includes inter alia things like speed of publication, fairness of the review process, prevalence of gate-keeping, reputation of the editors, writing style, within-discipline reputation, longevity, cost, specialisation, open-access options and even its ‘look’.

It would be impossible to include all of these aspects into a single ‘quality’ metric, although one could conceivably rank journals according to one or several of those features. ‘Reputation’ is perhaps the most quantitative characteristic when measured as citations, so we academics have chosen the lowest-hanging fruit and built our quality-ranking universe around them, for better or worse.

I was never really satisfied with metrics like black-box Impact Factors, so when I started discovering other ways to express the citation performance of the journals to which I regularly submit papers, I became a little more interested in the field of bibliometrics.

In 2014 I wrote a post about what I thought was a fairer way to judge peer-reviewed journal ‘quality’ than the default option of relying solely on ISI® Impact Factors. I was particularly interested in why the new kid on the block — Google Scholar Metrics — gave at times rather wildly different ranks of the journals in which I was interested.

So I came up with a simple mean ranking method to get some idea of the relative citation-based ‘quality’ of these journals.

It was a bit of a laugh, really, but my long-time collaborator, Barry Brook, suggested that I formalise the approach and include a wider array of citation-based metrics in the mean ranks.

Because Barry’s ideas are usually rather good, I followed his advice and together we constructed a more comprehensive, although still decidedly simple, approach to estimate the relative ranks of journals from any selection one would care to cobble together. In this case, however, we also included a rank-placement resampler to estimate the uncertainty associated with each rank.
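
For the programmatically inclined, the core of the approach is easy to sketch. Below is a minimal illustration in Python (my own sketch, not the published implementation, and with entirely invented metric values): rank the journals under each citation metric, average the ranks, then resample to put rough uncertainty bounds on each composite rank.

```python
import numpy as np

# Hypothetical citation metrics for five journals (higher = 'better').
# Columns might be an Impact Factor-like score, an h5-type index and a
# Scopus-style score; all numbers are invented for illustration only.
journals = ["J1", "J2", "J3", "J4", "J5"]
metrics = np.array([
    [5.2, 48, 3.9],
    [3.1, 52, 2.2],
    [4.4, 33, 4.1],
    [1.9, 20, 1.0],
    [2.8, 41, 2.5],
])

# Rank the journals under each metric separately (rank 1 = best).
ranks = metrics.shape[0] - metrics.argsort(axis=0).argsort(axis=0)

# Composite rank = mean rank across the metrics.
composite = ranks.mean(axis=1)

# Resample the metrics (columns) with replacement to estimate the
# uncertainty associated with each journal's composite rank.
rng = np.random.default_rng(42)
boot = np.array([
    ranks[:, rng.integers(0, ranks.shape[1], ranks.shape[1])].mean(axis=1)
    for _ in range(10000)
])
lower, upper = np.percentile(boot, [2.5, 97.5], axis=0)

for j, name in enumerate(journals):
    print(f"{name}: composite rank {composite[j]:.2f} "
          f"(95% interval {lower[j]:.2f}-{upper[j]:.2f})")
```

The published version uses more metrics and a proper rank-placement resampler, but the principle is the same.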

I’m pleased to announce that the final version1 is now published in PLoS One2. Read the rest of this entry »





Bad science

10 02 2016

In addition to the surpassing coolness of reconstructing long-gone ecosystems, my new-found enthusiasm for palaeo-ecology has another advantage — most of the species under investigation are already extinct.

That might not sound like an ‘advantage’, but let’s face it, modern conservation ecology can be bloody depressing, so much so that one sometimes wonders if it’s worth it. It is, of course, but there’s something marvellously relieving about studying extinct systems for the simple reason that there are no political repercussions. No self-serving, plutotheocratic politician can bugger up these systems any more. That’s a refreshing change from the doom and gloom of modern environmental science!

But it’s not all sweetness and light, of course; there are still people involved, and people sometimes make bad decisions in an attempt to modify the facts to suit their creed. The problem is when these people are the actual scientists involved in the generation of the ‘facts’.

As I alluded to a few weeks ago with the publication of our paper in Nature Communications describing the lack of evidence for a climate effect on the continental-scale extinctions of Australia’s megafauna, we have a follow-up paper that has just been published online in Proceedings of the Royal Society B — What caused extinction of the Pleistocene megafauna of Sahul? led by Chris Johnson of the University of Tasmania.

Given our paper published earlier this month, this title might seem a bit rhetorical, so I want to highlight some of the reasons why we wrote the review. Read the rest of this entry »





Getting your conservation science to the right people

22 01 2016

A perennial lament of nearly every conservation scientist — at least at some point (often later in one’s career) — is that the years of blood, sweat and tears spent to obtain those precious results count for nought in terms of improving real biodiversity conservation.

Conservation scientists often claim, especially in the first and last paragraphs of their papers and research proposals, that by collecting such-and-such data and doing such-and-such analyses they will transform how we manage landscapes and species to the overall betterment of biodiversity. Unfortunately, most of these claims are hollow (or just plain bullshit) because the results are either: (i) never read by people who actually make conservation decisions, (ii) not understood by them even if they read the work, or (iii) never implemented because they are too vague or too unrealistic to translate into a tangible, positive shift in policy.

A depressing state of being, I know.

This isn’t any sort of novel revelation, for we’ve been discussing the divide between policy makers and scientists for donkey’s years. Regardless, the whinges can be summarised succinctly: Read the rest of this entry »





Scientific conspiracies are impossible

9 06 2015

We’ve all heard it somewhere before: “It’s all just a big conspiracy and those bloody scientists are just trying to protect their funding sources.”

Whether it’s about climate change, pharmacology, genetically modified organisms or down-to-earth environmentalism, people who don’t want to agree with a particular scientific finding often invoke the conspiracy argument.

There are three main reasons why conspiracies among scientists are impossible. First, most scientists are just not that organised, nor do they have the time to get together to plan such elaborate practical jokes on the public. We can barely keep our own shit together, let alone construct a water-tight conspiracy. I’ve never met a scientist who would be capable of doing this, let alone one who would want to.

But this doesn’t necessarily prove my claim that it is ‘impossible’. Most importantly, the idea that a conspiracy could form among scientists ignores one of the most fundamental components of scientific progress — dissension; and bloody hell, can we dissent! The scientific approach is one where successive lines of evidence testing hypotheses are eventually amassed into a concept, then perhaps a rule of thumb. Read the rest of this entry »





Write English well? Help get published someone who doesn’t

27 01 2015

I’ve written before about how sometimes I can feel a little exasperated by what seems to be a constant barrage of bad English from some of my co-authors. No, I’m not focussing solely on students, or even native English speakers for that matter. In fact, one of the best (English) science writers with whom I’ve had the pleasure of working is a Spaniard (he also happens to write particularly well in Castellano). He was also fairly high up on the command-of-English ladder when he started out as my PhD student. So. There.

In other words, just because you grew up speaking the Queen’s doesn’t automatically guarantee that you’ll bust a phrase as easily as Shakespeare, Tolkien, Gould or Flannery; in fact, it might put you at a decided disadvantage compared to your English-as-a-second- (-third-, -fourth-, -fifth- …) language peers because they avoided learning all those terrible habits you picked up as you grunted your way through adolescence. Being forced to learn the grammar of another language often tends to make you grasp that of your mother tongue a little better.

So regardless of your background, if you’ve managed to beat the odds and know in your heart that you are in fact a good writer of science in English (you know who you are), I think you have a moral duty to help out those who still struggle with it. I’m not referring necessarily to the inevitable corrections you’ll make to your co-authors’ prose when drafting manuscripts1. I am instead talking about going out of your way to help someone who really, really needs it. Read the rest of this entry »





How to review a scientific paper

30 09 2014

Following one of the most popular posts on ConservationBytes.com, as well as in response to several requests, I’ve decided to provide a few pointers for early-career scientists for reviewing manuscripts submitted to peer-reviewed journals.

Apart from publishing your first peer-reviewed paper – whether it’s in Nature or Corey’s Journal of Bullshit – receiving that first request to review a manuscript is one of the best indications that you’ve finally ‘made it’ as a recognised scientist. Finally, someone is acknowledging that you are an expert and that your opinions and critiques are important. You deserve to feel proud when this happens.

Of course, reviewing is the backbone of the scientific process, because it is the main component of science’s pursuit of objectivity (i.e., subjectivity reduction). No other human endeavour can claim likewise.

It is therefore essential to take the reviewing process seriously, even if you do so only from the entirely selfish perspective that if you do not, no one will seriously review your own work. Reviewing is thus much more than an altruistic effort to advance human knowledge – it is at the very least a survival mechanism. Sooner or later, if you get a reputation for providing bad reviews, or refuse to do them, your own publication track record will suffer as a result.

Just like there are probably as many different (successful) ways to write a scientific paper as there are journals, most people develop their own approaches for reviewing their colleagues’ work. But just as it’s my opinion that many journal editors do an awful job of editing, I know that many reviewers do rather a shit job at their assigned tasks. This perspective comes from many years as an author, a reviewer, an editor and a mentor.

So take my advice as you will – hopefully some of it will prove useful when you review manuscripts. Read the rest of this entry »





Attention Ecologists: Journal Ranking Survey

16 09 2014

In the interest of providing greater transparency when ranking the ‘quality’ of scientific journals, we are interested in collecting ecologists’ views on the relative impact of different ecology, conservation and multidisciplinary journals. If you’re a publishing ecologist, we want your personal opinion on a journal’s relative rank from this sample of 25 peer-reviewed journals. Please do not consult Impact Factors or other journal rankings to decide – just go with your ‘gut’ feeling.

We chose a sample of 25 authoritative journals in the field (listed below alphabetically). Use the drop-down menus to select a categorical rank. Make sure you’ve allocated categories 1 through to 4 at least once in the sample of 25. Category 5 (‘Other’) is optional.

The survey should take you only a few minutes to complete. Thanks for your time!





A fairer way to rank conservation and ecology journals in 2014

1 08 2014

Normally I just report the Thomson-Reuters ISI Web of Knowledge Impact Factors for conservation-orientated journals each year, with some commentary on the rankings of other journals that also publish conservation-related material from time to time (see my lists of the 2008, 2009, 2010, 2011 and 2012 Impact Factor rankings).

This year, however, I’m doing something different given the growing negativity towards Thomson-Reuters’ secretive behaviour (which they’ve promised this year to rectify by being more transparent) and the generally poor indication of quality that the Impact Factor represents. Although the 2013 Impact Factors have just been released (very late this year, for some reason), I’m going to compare them to the increasingly reputable Google Scholar Journal Metrics, which intuitively make more sense to me, are transparent and turn a little of the rankings dogma on its ear.

In addition to providing both the Google metric and the Impact Factor rankings, I’ve come up with a composite (average) rank from the two systems. I think ranks are potentially more useful than raw corrected citation metrics because you must first explicitly define the set of journals you want to compare. I also go one step further and modify the average ranking with a penalty term that is essentially the addition of the coefficient of variation of rank disparity between the two systems.
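
To make the penalty term concrete, here is a rough sketch in Python (my reconstruction of the idea, not the exact published formula): average a journal’s two ranks, then add the coefficient of variation of those ranks, so that journals the two systems disagree about slide down the list.

```python
import statistics

def penalised_rank(ranks):
    # 'ranks' holds a journal's rank under each system (e.g., its Impact
    # Factor rank and its Google Scholar Metrics rank). Composite score =
    # mean rank + coefficient of variation of the ranks. A sketch of the
    # idea only, not the exact published formulation.
    mean = statistics.mean(ranks)
    return mean + statistics.stdev(ranks) / mean

# Hypothetical journals: the two systems disagree strongly about the first
# (ranked 3rd vs 9th), so it is penalised more heavily than the second
# (ranked 5th vs 6th).
print(penalised_rank([3, 9]))  # about 6.71
print(penalised_rank([5, 6]))  # about 5.63
```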

Read on for the results.

Read the rest of this entry »





Be a good reviewer, but be a better editor

6 06 2014

© evileditor.blogspot.com.au

Perhaps it’s just that I’ve been at this for a while, or maybe it’s a real trend. Regardless, many of my colleagues and I are now of the opinion that the quality of editing in scientific journals is on the downhill slide.

Yes – we (scientists) all complain about negative decisions from journals to which we’ve submitted our work. Being rejected is part of the process. Aiming high is necessary for academic success, but when a negative decision is made on the basis of (often one) appalling review, it’s a little harder to swallow.

I suppose I can accept the inevitability of declining review quality for the simple reason that there are now SO MANY papers to review that finding willing volunteers is difficult. This means that there will always be people who only glance cursorily at the paper, miss the detail and recommend rejection based on their own misunderstanding or bias. It’s far easier to skim a paper and try to find a reason to reject it than to put in the time to appraise the work critically and fairly.

This means that the traditional model of basing the decision to accept or reject a manuscript on only two reviews is fraught because the probability of receiving poor reviews is rising. For example, a certain undisclosed journal of unquestionably high quality for which I edit does not accept anything less than six recommendations for reviewers per manuscript, and none that I’m aware of is accepted or rejected based on only two reviews. But I think this is the exception rather than the rule – there are simply too many journals now of low to medium quality to be able to get that many reviewers to agree to review.

I won’t spend too much time trying to encourage you to do the best job you can when reviewing – that should go without saying. Remember what goes around comes around. If you are a shit reviewer, you will receive shit reviews. Read the rest of this entry »





Scientists should blog

27 05 2014

© Bill Porter

As ConservationBytes.com is about to tick over 1 million hits since its inception in mid-2008, I thought I’d share why I think more scientists should blog about their work and interests.

As many of you know, I regularly give talks and short courses on the value of social and other media for scientists; in fact, my next planned ‘workshop’ (Make Your Science Matter) on this and related subjects will be held at the Ecological Society of Australia‘s Annual Conference in Alice Springs later this year.

I’ve written before about the importance of having a vibrant, attractive and up-to-date online profile (along with plenty of other tips), but I don’t think I’ve ever put down my thoughts on blogging in particular. So here goes.

  1. The main reason scientists should consider blogging is the hard, cold fact that not nearly enough people read scientific papers. Most scientists are lucky if a few of their papers ever top 100 citations, and I’d wager that most are read by only a handful of specialists (there are exceptions, of course, but these are rare). If you’re a scientist, I don’t have to tell you the disappointment of realising that the blood, sweat and tears shed over each and every paper are largely for nought considering just how few people will ever read our hard-won results. It’s simply too depressing to contemplate, especially considering that the sum of human knowledge is so vast and expanding that this trend will only ever get worse. For those reasons alone, blogging about your own work widens the readership by orders of magnitude. More people read my blog every day than will probably ever read the majority of my papers. Read the rest of this entry »




Hate journal impact factors? Try Google rankings instead

18 11 2013

A lot of people hate journal impact factors (IF). The hatred arises for many reasons, some of which are logical. For example, Thomson Reuters ISI Web of Knowledge® keeps the process fairly opaque, so it’s sometimes difficult to tell if journals are fairly ranked. Others hate IF because it does not adequately rank papers within or among sub-disciplines. Still others hate the idea that citations should have anything to do with science quality (debatable, in my view). Whatever your reason though, IF are more or less here to stay.

Yes, individual scientists shouldn’t be ranked based only on the IF of the journals in which they publish; there are decent alternatives such as the h-index (which can grow even after you die), or even better, the m-index (or m-quotient; think of the latter as a rate of citation accumulation). Others would rather ditch the whole citation thing altogether and measure some element of ‘impact’, although that elusive little beast has yet to be captured and applied objectively.
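
For the uninitiated, these indices are simple enough to compute yourself. Here is a minimal sketch with invented citation counts (the h5-index that Google uses for journals, described below, is the same h calculation restricted to a journal’s articles from the last five complete years):

```python
def h_index(citations):
    # Largest h such that h papers have at least h citations each.
    cites = sorted(citations, reverse=True)
    h = 0
    while h < len(cites) and cites[h] >= h + 1:
        h += 1
    return h

def m_quotient(citations, years_since_first_paper):
    # The h-index divided by career length: a rate of citation accumulation.
    return h_index(citations) / years_since_first_paper

# Hypothetical researcher: six papers, the first published 10 years ago.
papers = [25, 12, 8, 6, 3, 1]
print(h_index(papers))         # 4 (four papers each have >= 4 citations)
print(m_quotient(papers, 10))  # 0.4
```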

So just in case you haven’t already seen it, Google has recently put its journal-ranking hat in the ring with its journal metrics. Having firmly wrested the cumbersome (and expensive) personal citation accumulators from ISI and Scopus (for example) with their very popular (and free!) Google Scholar (which, as I’ve said before, all researchers should set up and make available), they now seem poised to do the same for journal rankings.

So for your viewing and arguing pleasure, here are the ‘top’ 20 journals in Biodiversity and Conservation Biology according to Google’s h5-index (the h-index for articles published in that journal in the last 5 complete years; it is the largest number h such that h articles published in 2008-2012 have at least h citations each):

Read the rest of this entry »





Making the scientific workshop work

28 10 2013

I don’t mean this

I’ve been a little delayed in blogging this month, but for a very good reason – I’ve just experienced one of the best workshops of my career. I’d like to share a little of that perfect science recipe with you now.

I’ve said it before, but it can stand being repeated: done right, workshops can be some of the most efficient structures for doing big science.

First, let me define ‘workshop’ for those of you who might have only a vague notion of what it entails. To me, a workshop is a small group of like-minded scientists – all of whom possess different skills and specialities – who are brought together to achieve one goal. That goal is writing the superlative manuscript for publication.

So I don’t mean just a bog-standard chin-wag infected with motherhoods and diatribes. Workshops are not mini-conferences; neither are they soap boxes. It is my personal view that nothing can waste a scientist’s precious time more than an ill-planned and aimless workshop.

But with a little planning and some key ingredients that I’ll list shortly, you can turn a moderately good idea into something that can potentially shake the foundations of an entire discipline. So what are these secret ingredients? Read the rest of this entry »