Research in Translation

11 12 2017

Do you enjoy the challenge of communicating complex scientific ideas and conservation issues to the general public? Current Conservation is looking for submissions of reader-friendly summaries of recently published research papers in conservation science!

Current Conservation is a quarterly magazine that communicates conservation science in an accessible manner to a wide audience. Our magazine combines art and science to communicate the latest in research concepts and news from both the natural and social science facets of conservation, encompassing ecology, wildlife biology, conservation biology, environmental history, anthropology and sociology, ecological economics, and related fields of research.

Your summary (~250-300 words) should be written in a simple, jargon-free way that conveys the nuances of the paper, but at the same time is easy and fun to read. You can find some examples here.
Read the rest of this entry »





100 papers that every ecologist should read

14 11 2017


If you’re a regular reader of CB.com, you’ll be used to my year-end summaries of the influential conservation papers of that calendar year (e.g., 2016, 2015, 2014, 2013), as somewhat subjectively assessed by F1000 Prime experts. You might also recall that back in 2015 I wrote a post with the slightly provocative title Essential papers you’ve probably never read, where I talked about papers that I believe at least my own students should read and appreciate by the time they’ve finished their thesis.

But this raised a much broader question — of all the thousands of papers out there that I should have read/be reading, is there a way to limit the scope and identify the really important ones with at least a hint of objectivity? And I’m certainly not referring to the essential methods papers that you have to read and understand in order to implement their recommended analyses in your own work — these are often specific to the paper you happen to be writing at the moment.

The reason this is important is that there is absolutely no way I can keep on top of my scientific reading, and not only because there are now over 1.5 million papers published across the sciences each year. If you have even the slightest interest in working across sub-disciplines or other disciplines, the challenge becomes all but insurmountable. Finding the most pertinent and relevant papers to read, especially when introducing students or young researchers to the concepts, is turning into an increasingly nightmarish task. So, how do we sift through the mountain of articles out there?

It was this question that drove the genesis of our paper that came out only today in Nature Ecology and Evolution entitled ‘100 articles every ecologist should read’. ‘Our’ in this case means me and my very good friend and brilliant colleague, Dr Franck Courchamp of Université Paris-Sud and the CNRS, with whom I spent a 6-month sabbatical back in 2015. Read the rest of this entry »





When to appeal a rejection

26 08 2017

A modified excerpt from my upcoming book for you to contemplate after your next rejection letter.

This is a delicate subject that requires some reflection. Early in my career, I believed the appeal process to be a waste of time. Having made one or two appeals to no avail, and then having been on the receiving end of many appeals as a journal editor myself, I thought that it would be a rare occasion indeed when an appeal actually led to a reversal of the final decision.

It turns out that I was very wrong, but not in terms of the simple probability of success that you might be thinking of. Ironically, the harder it is to get a paper published in a journal, the higher the likelihood that an appeal following rejection will lead to a favourable outcome for the submitting authors. Let me explain. Read the rest of this entry »





Journal ranks 2016

14 07 2017


Last year we wrote a bibliometric paper describing a new way to rank journals that combines existing citation-based rankings (e.g., ISI, Google Scholar and Scopus) into a composite rank, which I contend is a fairer representation of relative journal performance. So, here are the 2016 ranks for (i) 93 ecology, conservation and multidisciplinary journals, and a subset of (ii) 46 ecology journals and (iii) 21 conservation journals, just as I have done in previous years (2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008).
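For the curious, the core idea is simple enough to sketch in a few lines of code. The toy example below is my own illustration, not the code behind the paper: the journal names and metric values are invented, and the real method uses more metrics plus an uncertainty step. It simply ranks journals within each citation metric and averages those ranks into a composite:

```python
import pandas as pd

# Invented metric values for four hypothetical journals (illustration only)
metrics = pd.DataFrame(
    {
        "impact_factor": [12.1, 5.3, 8.7, 4.2],  # ISI-style metric
        "h5_index": [85, 40, 61, 38],            # Google Scholar-style metric
        "citescore": [10.5, 4.9, 7.8, 4.0],      # Scopus-style metric
    },
    index=["Journal A", "Journal B", "Journal C", "Journal D"],
)

# Rank journals within each metric (1 = best), then average those ranks
# across metrics and re-rank the averages to obtain the composite rank
per_metric_ranks = metrics.rank(ascending=False)
composite_rank = per_metric_ranks.mean(axis=1).rank(method="min")

print(composite_rank.sort_values())
```

Averaging ranks rather than raw values stops any single metric’s scale from dominating the composite.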

Read the rest of this entry »





How to respond to reviewers

30 06 2017

Just as there are many styles of writing scientific manuscripts, there are also many ways to respond to a set of criticisms and suggestions from reviewers. Likewise, many people and organisations have compiled lists of what to do, and what not to do, in a response to reviews of your manuscript (just type ‘response to reviewer comments’ or a similar phrase into your favourite search engine and behold the reams of available advice).


It clearly is a personal choice, but based on my own experience as an author, reviewer and editor, and on the myriad suggestions available online, there are a few golden rules about how to respond:

  • After you have calmed down a little, it is essential that you remain polite throughout the process. Irrespective of how stupid, unfair, mean-spirited, or just plain lazy the reviewers might appear to you, do not stoop to their level and fire back with defensive, snarky comments. Neither must you ever blame the editor for even the worst types of reviews, because you will do yourself no favours at all by offending the main person who will decide your manuscript’s fate.

Read the rest of this entry »





Credit for reviewing & editing — it’s about bloody time

15 03 2017

As have many other scientists, I’ve whinged before about the exploitative nature of scientific publishing. What other industry obtains its primary material for free (submitted articles), has its construction and quality control done for free (reviewing & editing), and then sells its final products for immense profit back to the very people who started the process? It’s a fantastic recipe for making oodles of cash; had I been financially cleverer and more ethically bereft in my youth, I would have bought shares in publicly listed publishing companies.

How much time do we spend reviewing and editing each other’s manuscripts? Some have tried to work out these figures and prescribe ideal writing-to-reviewing/editing ratios, but it suffices to say that we spend a mind-bending amount of our time doing these tasks. While we might never reap the financial rewards of reviewing, we can now at least get some nominal credit for the effort.
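To put a rough number on it (a back-of-the-envelope illustration of my own, not a figure from any study): if you submit five manuscripts a year and each attracts three reviewers, then reviewing at parity with what you consume implies roughly 5 × 3 = 15 reviews a year, before any editing duties are counted.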

While it has been around for nearly five years now, the company Publons has only recently come to my attention. At first I wondered about the company’s modus operandi, but after discovering that academics can use their services completely free of charge, and that the company funds itself by “… partnering with publishers” (at least someone is getting something out of them), I believe it’s about as legitimate and above-board as it gets.

So what does Publons do? They basically list the journals for which you have reviewed and/or edited. Whoah! (I can almost hear you say). How do I protect my anonymity? Read the rest of this entry »





Multiculturalism in the lab

23 02 2017

With all the nasty nationalism and xenophobia gurgling nauseatingly to the surface of our political discourse these days, it is probably worth some reflection regarding the role of multiculturalism in science. I’m therefore going to take a stab, despite being in most respects a ‘golden child’ in terms of privilege and opportunity (I am, after all, a middle-aged Caucasian male living in a wealthy country). My cards are on the table.

I know few overtly racist scientists, although I suspect that they do exist. In fact, most scientists are of a more liberal persuasion generally and tend to pride themselves on their objectivity in all aspects of being human, including the sociological ones. In other words, we tend to think of ourselves as dispassionate pluralists who only judge the empirical capabilities of our colleagues, with their races, genders, sexual persuasions and other physical attributes irrelevant to our assessment. We generally love to travel and interact with our peers from all nations and walks of life, and we regularly decorate our offices with cultural paraphernalia different to our own.

But are we as unbiased and dispassionate as we think we are? Do we take that professed pluralism and cultural promiscuity with us to the lab each day? Perhaps we could, and should, do better. Read the rest of this entry »





Dealing with rejection

8 02 2017

We scientists can unfortunately be real bastards to each other, and no other interaction brings out that tendency more than peer review. Of course no one, no matter how experienced, likes to have a manuscript rejected. People hate to be on the receiving end of any criticism, and scientists are certainly no different. Many reviews can be harsh and unfair; many reviewers ‘miss the point’ or are just plain nasty.

It is inevitable that you will be rejected outright many times after the first attempt. Sometimes you can counter this negative decision via an appeal, but more often than not the rejection is final no matter what you argue or modify. So your only recourse is to move on to a lower-ranked journal. If you consistently submit to low-ranked journals, you would obviously receive far fewer rejections during the course of your scientific career, but you would also probably minimise the number of citations arising from your work as a consequence.

So your manuscript has been REJECTED. What now? The first thing to remember is that you and your colleagues have not been rejected, only your manuscript has. This might seem obvious as you read these words, but nearly everyone — save the chronically narcissistic — goes through some feelings of self-doubt and inadequacy following a rejection letter. At this point it is essential to remind yourself that your capacity to do science is not being judged here; rather, the most likely explanation is that given your strategy to maximise your paper’s citation potential, you have probably just overshot the target journal. What this really means is that the editor and/or reviewers are of the opinion that your paper is not likely to gain as many citations as they think papers in their journal should. Look closely at the rejection letter — does it say anything about “… lacking novelty …”? Read the rest of this entry »





Journal ranks 2015

26 07 2016

Back in February I wrote about our new bibliometric paper describing a new way to rank journals, which I still contend is a fairer representation of relative citation-based rankings. Given that the technique requires ISI, Google Scholar and Scopus data to calculate the composite ranks, I had to wait for the last straggler (Google) to publish the 2015 values before I could present this year’s rankings to you. Google has finally done that.

So, in what has become a bit of an annual tradition, I’m publishing the ranks of a mixed list of ecology, conservation and multidisciplinary journals that probably covers most of the titles you might be interested in comparing. As with last year, I make no claims that this list is comprehensive or representative. For previous lists based on ISI Impact Factors (except 2014), see the following links (2008, 2009, 2010, 2011, 2012, 2013).

So here are the rankings of (i) 84 ecology, conservation and multidisciplinary journals, and a subset of (ii) 42 ecology journals, (iii) 21 conservation journals, and (iv) 12 marine and freshwater journals. Read the rest of this entry »





How to rank journals

18 02 2016

… properly, or at least ‘better’.

In the past I have provided ranked lists of journals in conservation ecology according to their ISI® Impact Factor (see lists for 2008, 2009, 2010, 2011, 2012 & 2013). These lists have proven to be exceedingly popular.

Why are journal metrics and the rankings they imply so in demand? Despite many people loathing the entire concept of citation-based journal metrics, we scientists, our administrators, granting agencies, award committees and promotion panellists use them with such merciless frequency that our academic fates are intimately bound to the ‘quality’ of the journals in which we publish.

Human beings love to rank themselves and others, the things they make, and the institutions to which they belong, so it’s a natural expectation that scientific journals are ranked as well.

I’m certainly not the first to suggest that journal quality cannot be fully captured by some formulation of the number of citations its papers receive; ‘quality’ is an elusive characteristic that includes, inter alia, speed of publication, fairness of the review process, prevalence of gate-keeping, reputation of the editors, writing style, within-discipline reputation, longevity, cost, specialisation, open-access options and even its ‘look’.

It would be impossible to include all of these aspects into a single ‘quality’ metric, although one could conceivably rank journals according to one or several of those features. ‘Reputation’ is perhaps the most quantitative characteristic when measured as citations, so we academics have chosen the lowest-hanging fruit and built our quality-ranking universe around them, for better or worse.

I was never really satisfied with metrics like black-box Impact Factors, so when I started discovering other ways to express the citation performance of the journals to which I regularly submit papers, I became a little more interested in the field of bibliometrics.

In 2014 I wrote a post about what I thought was a fairer way to judge peer-reviewed journal ‘quality’ than the default option of relying solely on ISI® Impact Factors. I was particularly interested in why the new kid on the block — Google Scholar Metrics — gave at times rather wildly different ranks of the journals in which I was interested.

So I came up with a simple mean ranking method to get some idea of the relative citation-based ‘quality’ of these journals.

It was a bit of a laugh, really, but my long-time collaborator, Barry Brook, suggested that I formalise the approach and include a wider array of citation-based metrics in the mean ranks.

Because Barry’s ideas are usually rather good, I followed his advice and together we constructed a more comprehensive, although still decidedly simple, approach to estimate the relative ranks of journals from any selection one would care to cobble together. In this case, however, we also included a rank-placement resampler to estimate the uncertainty associated with each rank.
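To give a flavour of what such a resampler might look like, here is a minimal sketch of my own, assuming a simple bootstrap over the component metrics (the method in the paper differs in its details, and the journal names and ranks below are invented):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Invented within-metric ranks (1 = best) for four hypothetical journals
per_metric_ranks = pd.DataFrame(
    {
        "impact_factor": [1, 3, 2, 4],
        "h5_index": [2, 1, 3, 4],
        "citescore": [1, 2, 4, 3],
    },
    index=["Journal A", "Journal B", "Journal C", "Journal D"],
)

def resample_rank_placement(ranks: pd.DataFrame, n_boot: int = 10_000) -> pd.DataFrame:
    """Bootstrap the composite rank by resampling metrics with replacement."""
    n_metrics = ranks.shape[1]
    boot = []
    for _ in range(n_boot):
        # Draw n_metrics metric columns with replacement, then recompute
        # the composite (mean-of-ranks) rank for that resample
        cols = rng.integers(0, n_metrics, size=n_metrics)
        boot.append(ranks.iloc[:, cols].mean(axis=1).rank(method="min"))
    boot_df = pd.concat(boot, axis=1)
    # Median and 95% interval of each journal's bootstrapped rank placement
    return boot_df.quantile([0.025, 0.5, 0.975], axis=1).T

print(resample_rank_placement(per_metric_ranks))
```

Wide intervals flag journals whose placement depends heavily on which metric you favour; tight intervals indicate ranks that are robust to that choice.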

I’m pleased to announce that the final version is now published in PLoS One. Read the rest of this entry »





Getting your conservation science to the right people

22 01 2016

A perennial lament of nearly every conservation scientist — at least at some point (often later in one’s career) — is that the years of blood, sweat and tears spent to obtain those precious results count for nought in terms of improving real biodiversity conservation.

Conservation scientists often claim, especially in the first and last paragraphs of their papers and research proposals, that by collecting such-and-such data and doing such-and-such analyses they will transform how we manage landscapes and species to the overall betterment of biodiversity. Unfortunately, most of these claims are hollow (or just plain bullshit) because the results are either: (i) never read by people who actually make conservation decisions, (ii) not understood by them even if they read the work, or (iii) never implemented because they are too vague or too unrealistic to translate into a tangible, positive shift in policy.

A depressing state of being, I know.

This isn’t any sort of novel revelation, for we’ve been discussing the divide between policy makers and scientists for donkey’s years. Regardless, the whinges can be summarised succinctly: Read the rest of this entry »





The sticky subject of article authorship

2 10 2015

I have a few ‘rules’ (a.k.a. ‘guidelines’) in my lab about the authorship of articles, but I’ve come to realise that each article requires its own finessing each time authorship is in question. After a lengthy discussion yesterday with the members of Franck Courchamp‘s lab, I decided I should probably write down my thoughts on this, one of the stickiest of subjects in the business of science.

The following discussion can be divided into two main categories: (1) who to include as a co-author and, once the list of co-authors has been determined, (2) in what order they should be listed.

Before launching into the issues related to Category 1, it is prudent to declare that there are probably as many conventions as there are publishing scientists, and each discipline’s most general conventions differ across the scientific spectrum. I’m sure if you asked 10 people what they considered appropriate, you could conceivably receive 10 different answers.

That said, I do still think there are some good-behaviour guidelines on authorship that one should strive to follow, all of which are based on my own experiences (both good and awful).

So who to include? It seems like a simple question superficially because clearly if someone contributed to writing a peer-reviewed article, he/she should be listed as a co-author. The problem really doesn’t concern the main author (the person who did most of the actual composition) because it’s clear here who that will be in almost every case. In most circumstances, this also happens to be the lead author (but more on that below). The question should really apply then to those individuals whose effort was more modest in the production of the final paper.

Strictly speaking, an ‘author’ should write words; but how many words do they need to write before being included? Would 10 suffice, or at least 10%? You can see why this is in itself a sticky subject, because there are no established or accepted thresholds. Of course, science generally requires much more than just writing words: for most papers there are experiments to design, grants to obtain to fund them, data to collect, analyses and modelling to be done, figures and tables to prepare and, finally, words to write. I’ll admit that I’ve co-authored many papers where I’ve done mainly one of those things (analysis, data collection, etc.), but I can also hold my hand over my heart and state that I’ve contributed more than a good deal to the actual writing of the paper in all circumstances where I’ve been listed as a co-author (the amount of which depends entirely on the lead author’s writing capacity). Read the rest of this entry »