Journal ranks 2022

21 07 2023

As I’ve done every year for the last 15 years, I can now present the 2022 conservation / ecology / sustainability journal ranks based on my (published) journal-ranking method.

Although both the Clarivate (Impact Factor, Journal Citation Indicator, Immediacy Index) and Scopus (CiteScore, Source-Normalised Impact Per Paper, SCImago Journal Rank) values have been out for about a month or so, the Google (h5-index, h5-median) scores only came out yesterday.

This year’s also a bit weird from the perspective of the Clarivate ranks. First, Impact Factors are no longer reported to three decimal places, but only to one (e.g., 7.2 versus 7.162). That’s not such a big deal, but it does remove some of the false precision on which relative ranks were previously based. However, the biggest changes are methodological — Impact Factors now take online articles into account (in the denominator), so most journals will have a lower Impact Factor this year compared to last. In fact, of the 105 journals in the ecology/conservation/multidisciplinary category that have data for both 2021 and 2022, the 2022 Impact Factors are a median 15% lower than the 2021 values.

Another effect in play appears to have been the pandemic. The worst of it happened right during the assessment period, and I’m pretty sure this is reflected in both the number of articles published (down a median of 10%) and the total number of citations received during the assessment period (down 7%) per journal.

But using my method, these changes are largely irrelevant because I calculate relative ranks, not absolute scores.
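To illustrate why an across-the-board deflation doesn’t matter much for my purposes, here’s a trivial sketch in R; the journals (‘A’ to ‘E’) and Impact Factors are invented placeholders:

```r
# invented 2021 Impact Factors for five hypothetical journals
if2021 <- c(A = 9.8, B = 7.2, C = 5.6, D = 4.1, E = 3.3)

# apply a uniform 15% deflation to mimic the methodological change
if2022 <- if2021 * 0.85

# the absolute scores drop, but the relative ranks are unchanged
rank(-if2021)  # A=1, B=2, C=3, D=4, E=5
rank(-if2022)  # identical ranking
identical(rank(-if2021), rank(-if2022))  # TRUE
```

Of course, the real deflation isn’t perfectly uniform across journals, so some individual ranks will shift a little, but the point stands: it’s the relative positions that matter.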

I therefore present the new 2022 ranks for: (i) 108 ecology, conservation and multidisciplinary journals, (ii) 28 open-access (i.e., you have to pay) journals from the previous category, (iii) 66 ‘ecology’ journals, (iv) 31 ‘conservation’ journals, (v) 43 ‘sustainability’ journals (with general and energy-focussed journals included), and (vi) 21 ‘marine & freshwater’ journals.

Here are the results:

Read the rest of this entry »




Never underestimate the importance of a good figure

27 07 2022

I frequently find myself explaining to students and colleagues that it’s worth spending a good deal of time making your scientific figures as informative and attractive as possible.

But it’s a fine balance between overly flashy and downright boring. Needless to say, empirical accuracy is paramount.

But why should you care, as long as the necessary information is transferred to the reader? The most important answer to that question is that you are trying to catch the attention of editors, reviewers, and readers alike in a highly competitive sea of information. Sure, if the work is good and the paper well-written, you’ll still garner a readership; however, if you give your readers a bit of visual pleasure in the process, they’re much more likely to (a) remember and (b) cite your paper.

I try to ask myself the following when creating a figure — without unnecessary bells and whistles, would I present this figure in a presentation to a group of colleagues? Would I present it to an audience of non-experts? Would I want this figure to appear in a news article about my work? Of course, all of these venues require differing degrees of accuracy, complexity, and aesthetics, but a good figure should ideally serve to educate across very different audiences simultaneously.

A sub-question worth asking here is whether you think a colleague would use your figure in one of their presentations. Think of the last time you made a presentation and found that perfect figure that brilliantly portrays the point you are trying to get across. That’s the kind of figure you should strive to make in your own research papers.

I therefore tend to spend quite a bit of time crafting my figures, and after years of making mistakes and getting a few things right, and retrospectively discovering which figures appear to garner more attention than others, I can offer some basic advice about the DOs and DON’Ts of figure making. Throughout the following section I provide some examples from my own papers that I think demonstrate some of the concepts.

tables vs. graphs — The very first question you should ask yourself is whether you can turn that boring and ugly table into a graph of some sort. Do you really need that table? Can you not just translate the cell entries into a bar/column/xy plot? If you can, you should. When a table cannot easily be translated into a figure, most of the time it probably belongs in the Supplementary Information anyway.
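As a minimal illustration of that translation (with entirely made-up data), here’s the kind of thing I mean sketched in R with ggplot2, turning a small summary table into a column plot with error bars:

```r
library(ggplot2)

# hypothetical summary table: mean species richness by habitat type
richness <- data.frame(
  habitat = c("forest", "grassland", "wetland", "urban"),
  mean_richness = c(42, 27, 35, 11),
  se = c(3.1, 2.4, 2.9, 1.2)
)

# the same information as a column plot with error bars
ggplot(richness, aes(x = reorder(habitat, -mean_richness), y = mean_richness)) +
  geom_col(fill = "grey40") +
  geom_errorbar(aes(ymin = mean_richness - se, ymax = mean_richness + se), width = 0.2) +
  labs(x = NULL, y = "mean species richness") +
  theme_minimal()
```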

Read the rest of this entry »




Journal ranks 2021

4 07 2022

Now that Clarivate, Google, and Scopus have published their respective journal citation scores for 2021, I can present — for the 14th year running on ConservationBytes.com — the 2021 conservation/ecology/sustainability journal ranks based on my journal-ranking method.

Like last year, I’ve added a few journals. I’ve also included in the ranking the Journal Citation Indicator (JCI) in addition to the Journal Impact Factor and Immediacy Index from Clarivate ISI, and the CiteScore (CS) in addition to the Source-Normalised Impact Per Paper (SNIP) and SCImago Journal Rank (SJR) from Scopus. 

You can access the raw data for 2021 and use my RShiny app to derive your own samples of journal ranks.

I therefore present the new 2021 ranks for: (i) 106 ecology, conservation and multidisciplinary journals, (ii) 27 open-access (i.e., you have to pay) journals from the previous category, (iii) 64 ‘ecology’ journals, (iv) 32 ‘conservation’ journals, (v) 43 ‘sustainability’ journals (with general and energy-focussed journals included), and (vi) 21 ‘marine & freshwater’ journals.

Remember not to take much notice if a journal boasts about how its Impact Factor has increased this year, because these tend to increase over time anyway. What’s important is a journal’s rank relative to other journals.

Here are the results:

Read the rest of this entry »




What is the role of today’s academic society?

29 04 2022

This is not a rhetorical question. I really do want to solicit responses to the aspects I will raise in this post, because I have to admit that I’m a little unclear on the subject.

Preamble — While I do not intend to deflate the value of any particular academic society, I’m sure some might take offence at the mere notion that someone would dare challenge the existence of academic societies. I confess to having belonged to several academic societies over my career, but I haven’t bothered for some time given the uncertainties I describe below.

A Subjective History

In my view, the academic society represented an important evolutionary step in the organisation of thematic collegiality. As disciplines became ever more specialised, it was an opportunity to unite like-minded colleagues and support new generations of academics in the field.

In the pre-internet days, academic societies provided the necessary fora to interact directly with one’s peers and advance. They also published thematic journals, organised field trips, garnered funds for scholarships, recognised prowess via awards, and crafted and promulgated constitutions on issues as varied as academic behaviour, societal warnings, governance, and politics.

Face-to-face meetings were indeed the primary vehicle for these interactions, and are a mainstay even in today’s pandemic world (but more discussion on the modern implications of these below).

Peer-reviewed disciplinary journals were arguably one of the most important products of the academic society. Back before academic publishing became the massive, profit-churning, mega-machine rort that it is today, such journals were integral to the development of different academic fields.

Read the rest of this entry »




Plea of the Predatory Publisher (A Lament)

21 09 2021
Illustration by David Parkins (nature.com/articles/d41586-019-03759-y)

“Salutations! I hope you are safe and doing well,
[change font] Dear Doctor [insert name, surname, and initial]”

“We read your prestigious paper [insert TITLE here],”
(hmmm — you seem to take me for a fool, I fear)

“and we’d appreciated [sic] if you could submit your Research work”
(yep, they really must think I’m a berk)

“Your participation is extremely valuable to us,
here at Scientific Archives of Researches (or some such)”

“You may submit online, or as an attachment to this email address,”
(and I guess you’ll promise a minimal assessment sans stress?)

“In the pursuit of researches of quality best,
our open-access fees are among the most modest”

“We must be clear that this is not a scam”
(just how fucking thick do you think I am?)

“We will be waiting for your positive reply”
(this is the fifth one of these I’ve received today, sigh)

“Please let us know your acceptance to join the eminent author list,
and with any query, I will be most happy to assist”

“Sincerely, Profesor [sic] Gonar L Schlidt”
(does anyone actually fall for this shit?)

Read the rest of this entry »




Journal ranks 2020

23 07 2021

This is the 13th year in a row that I’ve generated journal ranks based on the journal-ranking method we published several years ago.

There are a few differences in how I calculated this year’s ranks, as well as some relevant updates:

  1. As always, I’ve added a few new journals (either those that have only recently been scored with the component metrics, or ones I’ve just missed before);
  2. I’ve included the new ‘Journal Citation Indicator’ (JCI) in addition to the Journal Impact Factor and Immediacy Index from Clarivate ISI. The JCI is “… a field-normalised metric, [which] represents the average category-normalised citation impact for papers published in the prior three-year period”. In other words, it’s supposed to correct for field-specific citation trends;
  3. While this isn’t my change, the Clarivate metrics are now calculated from the date an article is first published online, rather than the date it appears in an issue. You would have thought that this should have been the case for many years, but they’ve only just done it;
  4. I’ve also added the ‘CiteScore’ (CS) in addition to the Source-Normalised Impact Per Paper (SNIP) and SCImago Journal Rank (SJR) from Scopus. CS is “the number of citations, received in that year and previous 3 years, for documents published in the journal during that period (four years), divided by the total number of published documents … in the journal during the same four-year period” (see the worked example after this list);
  5. Finally, you can access the raw data for 2020 (I’ve done the hard work for you) and use my RShiny app to derive your own samples of journal ranks (also see the relevant blog post). You can also add new journals to the list if my sample isn’t comprehensive enough for you.
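To make the CiteScore definition in point 4 concrete, here’s a trivial worked example in R (the numbers are invented and don’t refer to any real journal):

```r
# CiteScore-style calculation for a hypothetical journal (invented numbers)
citations_2017_2020 <- 5200  # citations received 2017-2020 to items published 2017-2020
documents_2017_2020 <- 1300  # documents published in the journal during 2017-2020

citescore <- citations_2017_2020 / documents_2017_2020
citescore  # 4.0
```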

Since the Google Scholar metrics were just released today, I present the new 2020 ranks for: (i) 101 ecology, conservation and multidisciplinary journals, and a subset of (ii) 61 ‘ecology’ journals, (iii) 29 ‘conservation’ journals, (iv) 41 ‘sustainability’ journals (with general and energy-focussed journals included), and (v) 20 ‘marine & freshwater’ journals.

One final observation. I’ve noted that several journals are boasting about how their Impact Factors have increased this year, while failing to mention that this is the norm across most journals. As you’ll see below, relative ranks don’t actually change that much for most journals. In fact, this is a redacted email I received from a journal that I will not identify here:

We’re pleased to let you know that the new Impact Factor for [JOURNAL NAME] marks a remarkable increase, as it now stands at X.XXX, compared to last year’s X.XXX. And what is even more important: [JOURNAL NAME] increased its rank in the relevant disciplines: [DISCIPLINE NAME].

Although the Impact Factor may not be the perfect indicator of success, it remains the most widely recognised one at journal level. Therefore, we’re excited to share this achievement with you, as it wouldn’t have been possible, had it not been for all of your contributions and support as authors, reviewers, editors and readers. A huge ‘THANK YOU’ goes to all of you!

What bullshit.

Anyway, on to the results:

Read the rest of this entry »





Rank your own sample of journals

29 12 2020

If you follow my blog regularly, you’ll know that around the middle of each year I publish a list of journals in conservation and ecology ranked according to a multi-index algorithm we developed back in 2016. The rank I release coincides with the release of the Web of Knowledge Impact Factors, various Scopus indices, and the Google Scholar journal ranks.

The reasons we developed a multi-index rank are many (summarised here), but they essentially boil down to the following rationale:

(i) No single existing index is without its own faults; (ii) ranks are only really meaningful when expressed on a relative scale; and (iii) different disciplines have wildly different index values, so generally disciplines aren’t easily compared.

That’s why I made the R code available to anyone wishing to reproduce their own ranked sample of journals. However, given that implementing the R code takes a bit of know-how, I decided to apply my new-found addiction to R Shiny to create (yet another) app.

Welcome to the JournalRankShiny app.

This new app takes a pre-defined list of journals and the required indices, and does the resampled ranking for you based on a few input parameters that you can set. It also provides a few nice graphs for the ranks (and their uncertainties), as well as a plot showing the relationship between the resulting ranks and the journal’s Impact Factor (for comparison).
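To give a flavour of what ‘resampled ranking’ means, here’s a minimal sketch in R. It is not the app’s exact algorithm (see the published method and the R code for that), just the general idea: convert each index to a relative rank, resample, and summarise the uncertainty. The journals and index values are invented.

```r
set.seed(42)

# invented index values for five hypothetical journals
journals <- data.frame(
  journal = c("J1", "J2", "J3", "J4", "J5"),
  IF      = c(8.1, 6.5, 5.9, 3.2, 2.7),
  SNIP    = c(2.3, 1.9, 2.0, 1.1, 0.9),
  h5      = c(85, 70, 74, 40, 33)
)
idx <- journals[, c("IF", "SNIP", "h5")]

# bootstrap: resample which indices contribute, combine the per-index
# relative ranks, and repeat many times to get rank uncertainties
n_boot <- 1000
boot_ranks <- replicate(n_boot, {
  cols <- sample(names(idx), replace = TRUE)        # resample the indices
  rel  <- sapply(idx[cols], function(x) rank(-x))   # rank per index (1 = best)
  rank(rowMeans(rel))                               # combined relative rank
})

# summarise: mean rank and a 95% uncertainty interval per journal
data.frame(
  journal   = journals$journal,
  mean_rank = rowMeans(boot_ranks),
  lower_95  = apply(boot_ranks, 1, quantile, probs = 0.025),
  upper_95  = apply(boot_ranks, 1, quantile, probs = 0.975)
)
```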

Read the rest of this entry »




Collect and analyse your Altmetric data

17 11 2020

Last week I reported that I had finally delved into the world of R Shiny to create an app that calculates relative citation-based ranks for researchers.

I’m almost embarrassed to say that Shiny was so addictive that I ended up making another app.

This new app takes any list of user-supplied digital object identifiers (DOIs) and fetches their Altmetric data for you.

Why might you be interested in a paper’s Altmetric data? Citations are only one measure of an article’s impact on the research community, whereas Altmetrics tend to indicate the penetration of the article’s findings to a much broader audience.

Altmetric is probably the leading way to gauge the ‘impact’ (attention) an article has commanded across all online sources, including news articles, tweets, Facebook entries, blogs, Wikipedia mentions and others.

And for those of us interested in influencing policy with our work, Altmetrics also collate citations arising from policy documents.
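If you want to see roughly how this sort of fetching works under the hood, here’s a bare-bones sketch in R using Altmetric’s public API via the httr and jsonlite packages. This is an illustrative sketch only (it’s not the app’s code), the DOIs below are placeholders, and you should check the fields the API actually returns for your own papers:

```r
library(httr)
library(jsonlite)

# fetch the public Altmetric record for a single DOI
fetch_altmetric <- function(doi) {
  resp <- GET(paste0("https://api.altmetric.com/v1/doi/", doi))
  if (status_code(resp) != 200) return(NULL)  # DOI not tracked, or request refused
  fromJSON(content(resp, as = "text", encoding = "UTF-8"))
}

# apply to a user-supplied list of DOIs (placeholders only)
dois <- c("10.1000/example.1", "10.1000/example.2")
records <- lapply(dois, fetch_altmetric)

# e.g., extract the overall Altmetric attention score where available
sapply(records, function(x) if (is.null(x)) NA else x$score)
```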

Read the rest of this entry »




The ε-index app: a fairer way to rank researchers with citation data

9 11 2020

Back in April I blogged about an idea I had to provide a more discipline-, gender-, and career stage-balanced way of ranking researchers using citation data.

Most of you are of course aware of the ubiquitous h-index, and its experience-corrected variant, the m-quotient (h-index ÷ years publishing), but I expect that you haven’t heard of the battery of other citation-based indices on offer that attempt to correct various flaws in the h-index. While many of them are major improvements, almost no one uses them.

Why aren’t they used? Most likely because they aren’t easy to calculate, or because they require trawling through open-access and/or subscription-based databases to get the information necessary to calculate them.

Hence, the h-index still rules, despite its many flaws, like under-emphasising a researcher’s entire body of work, gender biases, and weighting towards people who have been at it longer. The h-index is also provided free of charge by Google Scholar, so it’s the easiest metric to default to.
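Part of that convenience is that both the h-index and the m-quotient are trivial to compute once you have a list of citation counts; here’s a minimal sketch in R with invented numbers:

```r
# citation counts for one researcher's papers (invented numbers)
cites <- c(120, 85, 60, 44, 33, 21, 15, 10, 9, 4, 2, 1, 0)

# h-index: the largest h such that h papers each have at least h citations
h_index <- sum(sort(cites, decreasing = TRUE) >= seq_along(cites))
h_index  # 9

# m-quotient: h-index divided by the number of years publishing
years_publishing <- 12
m_quotient <- h_index / years_publishing
m_quotient  # 0.75
```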

So, how does one correct for at least some of these biases while still being able to calculate an index quickly? I think we have the answer.

Since that blog post back in April, seven other scientists and I, spanning eight different science disciplines (archaeology, chemistry, ecology, evolution & development, geology, microbiology, ophthalmology, and palaeontology), refined the technique I reported back then, and we have now submitted a paper describing how what we call the ‘ε-index’ (epsilon index) performs.

Read the rest of this entry »




Grand Challenges in Global Biodiversity Threats

8 10 2020

Last week I mentioned that the new journal Frontiers in Conservation Science is now open for business. As promised, I wrote a short article outlining our vision for the Global Biodiversity Threats section of the journal. It’s open-access, of course, so I’m also copying it here on ConservationBytes.com.


Most conservation research and its applications tend to happen at reasonably fine spatial and temporal scales — for example, mesocosm experiments, single-species population viability analyses, recovery plans, patch-level restoration approaches, site-specific biodiversity surveys, et cetera. Yet, at the other end of the scale spectrum, there have been many overviews of biodiversity loss and degradation, accompanied by the development of multinational policy recommendations to encourage more sustainable decision making at lower levels of sovereign governance (e.g., national, subnational).

Yet truly global research in conservation science is in fact comparatively rare, as poignantly demonstrated by the debates surrounding the evidence for and measurement of planetary tipping points (Barnosky et al., 2012; Brook et al., 2013; Lenton, 2013). Apart from the planetary scale of human-driven disruption to Earth’s climate system (Lenton, 2011), both scientific evidence and policy levers tend to be applied most often at finer, more tractable research and administrative scales. But as the massive ecological footprint of humanity has grown exponentially over the last century (footprintnetwork.org), robust, truly global-scale evidence of our damage to the biosphere is now starting to emerge (Díaz et al., 2019). Consequently, our responses to these planet-wide phenomena must also become more global in scope.

Conservation scientists are adept at chronicling patterns and trends — from the thousands of vertebrate surveys indicating an average reduction of 68% in the numbers of individuals in populations since the 1970s (WWF, 2020), to global estimates of modern extinction rates (Ceballos and Ehrlich, 2002; Pimm et al., 2014; Ceballos et al., 2015; Ceballos et al., 2017), future models of co-extinction cascades (Strona and Bradshaw, 2018), the negative consequences of invasive species across the planet (Simberloff et al., 2013; Diagne et al., 2020), discussions surrounding the evidence for the collapse of insect populations (Goulson, 2019; Komonen et al., 2019; Sánchez-Bayo and Wyckhuys, 2019; Cardoso et al., 2020; Crossley et al., 2020), the threats to soil biodiversity (Orgiazzi et al., 2016), and the ubiquity of plastic pollution (Beaumont et al., 2019) and other toxic substances (Cribb, 2014), to name only some of the major themes in global conservation. 

Read the rest of this entry »




New journal: Frontiers in Conservation Science

29 09 2020

Several months ago, Daniel Blumstein of UCLA approached me with an offer — fancy leading a Special Section in a new Frontiers journal dedicated to conservation science?

I admit that my gut reaction was a visceral ‘no’, both in terms of the extra time it would require, as well as my autonomous reflex of ‘not another journal, please‘.

I had, for example, spent a good deal of blood, sweat, and tears helping to launch Conservation Letters when I acted as Senior Editor for the first 3.5 years of its existence (I can’t believe that it has been nearly a decade since I left the journal). While certainly an educational and reputational boost, I can’t claim that the experience was always a pleasant one — as has been said many times before, the fastest way to make enemies is to become an editor.

But then Dan explained what he had in mind for Frontiers in Conservation Science, and the more I spoke with him, the more I started to think that it wasn’t a bad idea after all for me to join.

Read the rest of this entry »





Journal ranks 2019

8 07 2020


For the last 12 years and running, I’ve been generating journal ranks based on the journal-ranking method we published several years ago. Since the Google journal h-indices were just released, here are the new 2019 ranks for: (i) 99 ecology, conservation and multidisciplinary journals, and a subset of (ii) 61 ‘ecology’ journals, (iii) 27 ‘conservation’ journals, (iv) 41 ‘sustainability’ journals (with general and energy-focussed journals included), and (v) 20 ‘marine & freshwater’ journals.

See also the previous years’ rankings (2018, 2017, 2016, 2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008).

Read the rest of this entry »





A fairer way to rank a researcher’s relative citation performance?

23 04 2020

I do a lot of grant assessments for various funding agencies, including two years on the Royal Society of New Zealand’s Marsden Fund Panel (Ecology, Evolution, and Behaviour), and currently as an Australian Research Council College Expert (not to mention assessing a heap of other grant applications).

Sometimes this means I have to read hundreds of proposals involving even more researchers, all of whom I’m meant to assess for their scientific performance over a short period of time (sometimes within only a few weeks). It’s a hard job, and I doubt very much that there’s a completely fair way to rank a researcher’s ‘performance’ quickly and efficiently.

It’s for this reason that I’ve tried to find ways to rank people in the most objective way possible. This of course does not discount reading a person’s full CV and profile, and certainly taking into consideration career breaks, opportunities, and other extenuating circumstances. But I’ve tended to do a first pass based primarily on citation indices, and then adjust those according to the extenuating circumstances.

But the ‘first pass’ part of the equation has always bothered me. We know that different fields have different rates of citation accumulation, that citations accumulate with age (including the much heralded h-index), and that there are gender (and other) biases in citations that aren’t easily corrected.

I’ve generally relied on the ‘m-index’, which is simply one’s h-index divided by the number of years one has been publishing. While this acts as a sort of age correction, it’s still unsatisfactory, essentially because I’ve noticed that it tends to penalise early career researchers in particular. I’ve tried to account for this by comparing people roughly within the same phase of career, but it’s still a subjective exercise.

I’ve recently been playing with an alternative that I think might be a way forward. Bear with me here, for it takes a bit of explaining. Read the rest of this entry »





Does high exposure on social and traditional media lead to more citations?

18 12 2019

One of the things that I’ve often wondered about is whether making the effort to spread your scientific article’s message as far and wide as possible on social media actually brings you more citations.

While there’s more than enough justification to promote your work widely for non-academic purposes, there is some doubt as to whether the effort reaps academic awards as well.

Back in 2011 (the Pleistocene of social media in science), Gunther Eysenbach examined 286 articles in the obscure Journal of Medical Internet Research, finding that yes, highly cited papers did indeed have more tweets. But he concluded:

Social media activity either increases citations or reflects the underlying qualities of the article that also predict citations …

Subsequent work has established similar positive relationships between social-media exposure and citation rates (e.g., for 208,739 PubMed articles, and for > 10,000 blog posts about articles published in > 20 journals), weak relationships (e.g., using 27,856 PLoS One articles, and based on 1,380,143 articles from PubMed in 2013), or none at all (e.g., for 130 papers in the International Journal of Public Health).

While the research available suggests that, on average, the more social-media exposure a paper gets, the more likely it is to be cited, the potential confounding problem raised by Eysenbach remains — are interesting papers that command a lot of social-media attention also those that would garner scientific interest anyway? In other words, are popular papers just popular in both realms, meaning that such papers are going to achieve high citation rates anyway?
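A toy simulation makes the confounding problem explicit. In this entirely invented example in R, a latent ‘interestingness’ drives both tweets and citations, so the two are strongly correlated even though neither causes the other:

```r
set.seed(1)
n <- 1000

# latent 'interestingness' of each paper
interest <- rnorm(n)

# both outcomes depend on interestingness, not on each other
tweets    <- rpois(n, lambda = exp(1 + 0.8 * interest))
citations <- rpois(n, lambda = exp(2 + 0.8 * interest))

# raw association between social-media attention and citations is strong ...
cor(log1p(tweets), log1p(citations))

# ... but once the latent driver is accounted for, tweets add little
summary(lm(log1p(citations) ~ log1p(tweets) + interest))
```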

Read the rest of this entry »





Academic? You’re just a cash-hamster spinning a publisher’s profit wheel

9 09 2019

I contend that publishing articles in nearly all peer-reviewed journals amounts to a form of intellectual slavery.

I defend my use of the word ‘slavery’ here, for how else would you describe a business where the product (scientific results) is produced by others (scientists) for free, is assessed for quality by others (reviewers) for free, is commissioned, overseen, and selected by yet others (editors) for free, and then sold back to the very same scientists and the rest of the world’s knowledge consumers at exorbitant prices? To make matters worse, most scientists have absolutely no idea how much their institutions pay for these subscriptions, so there is little consumer scrutiny passed from researcher to administrator. In 2015, Jason Schmitt of Clarkson University in Potsdam, New York, quoted Brian Nosek, Director of the Center for Open Science, to sum up the situation:

“Academic publishing is the perfect business model to make a lot of money. You have the producer and consumer as the same person: the researcher. And the researcher has no idea how much anything costs. I, as the researcher, produce the scholarship and I want it to have the biggest impact possible and so what I care about is the prestige of the journal and how many people read it. Once it is finally accepted, since it is so hard to get acceptances, I am so delighted that I will sign anything  —  send me a form and I will sign it. I have no idea I have signed over my copyright or what implications that has — nor do I care, because it has no impact on me. The reward is the publication.”

Some journals go even beyond this sort of profiteering and also inflict ‘page charges’ of hundreds to thousands of US dollars on the authors for the privilege of having their work appear in that journal.

Nor am I merely grumbling about what many might assume to be a specialised and irrelevant sector of the economy, because it is an industry worth many billions of dollars annually. Indeed, one of the biggest corporations, Reed-Elsevier*, made over £1.8 billion (nearly US$2.8 billion) in adjusted operating profit in 2015 (1). Other major publishing companies like Wiley-Blackwell, Springer, Taylor & Francis, and Sage Publications, which with Reed-Elsevier collectively published more than half of all the academic papers published in 2013, make many billions in profit each year as well: Wiley-Blackwell took in US$965 million in revenue in 2016, Springer had a 2012 revenue of US$1.26 billion, and Sage Publications had a 2015 profit of $585 million. Read the rest of this entry »





Journal ranks 2018

23 07 2019


As has become my custom (11 years and running), and based on the journal-ranking method we published several years ago, here are the new 2018 ranks for (i) 90 ecology, conservation and multidisciplinary journals, and a subset of (ii) 56 ‘ecology’ journals, and (iii) 26 ‘conservation’ journals. I’ve also included two other categories — (iv) 40 ‘sustainability’ journals (with general and energy-focussed journals included), and (v) 19 ‘marine & freshwater’ journals for the watery types.

See also the previous years’ rankings (2017, 2016, 2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008).

Read the rest of this entry »





Good English and the scientific career: hurdles for non-native English speakers

13 02 2019

New post from Frédérik Saltré originally presented on the GE.blog.


It’s no secret that to be successful in academia, it’s not enough just to be a good scientist — being able to formulate and test hypotheses. You also need to be able to communicate that science effectively.

This implies a good command of the English language for anyone who wants a career in science. Mastering English (or not) will directly affect your work opportunities such as publishing, establishing networks at conferences, taking leadership of working groups, contributing to lab meetings (there is nothing worse than feeling left out of a conversation because of language limitations), and so forth.

But when it comes to language skills, not everyone is created equal, because those skills mostly depend on a person’s background (e.g., learning English as a child or later in life), cultural reluctance, fear of making mistakes, lack of confidence, or simply brain design — this last component might offend some, but it appears that some people just happen to have the specific neuronal pathways to learn languages better than others. Whatever the reason, the process of becoming a good scientist is made more difficult if you happen not to have that specific set of neuronal pathways, even though not being a native English speaker does not prevent one from being academically successful.

Read the rest of this entry »




Journal ranks 2017

27 08 2018


A few years ago we wrote a bibliometric paper describing a new way to rank journals, and I still think it is one of the better ways to rank them based on a composite index of relative citation-based metrics. I apologise for taking so long to do the analysis this year, but it took Google Scholar a while to post their 2017 data.

So, here are the 2017 ranks for (i) 88 ecology, conservation and multidisciplinary journals, and a subset of (ii) 55 ‘ecology’ journals and (iii) 24 ‘conservation’ journals. Also this year, I’ve included two new categories — (iv) 38 ‘sustainability’ journals (with general and energy-focussed journals included), and (v) 19 ‘marine & freshwater’ journals for you watery types.

See also the previous years’ rankings (2016, 2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008).

Read the rest of this entry »





Why do they take so long?

4 05 2018

This is probably more of an act of self-therapy on a Friday afternoon to alleviate some frustration, but it is an important issue all the same.

An Open Letter to academic publishers:

Why, oh why, do some of you take so bloody long to publish our papers online after acceptance?

I have been known to complain about how the general academic-publishing industry makes a sickening amount of profit on the backs of our essentially free labour, and I suppose this is just another whinge along those lines. Should it take weeks to months to publish our papers online once they are accepted?

No, it shouldn’t.

I’m fully aware that most publishing companies these days outsource the actual publishing side of things to subcontracting agencies (and I’ve noticed more and more that these tend to be in developing nations, probably because the labour is cheaper), and that it can take someone some time to work through the backlog of Word or Latex documents and produce nice, polished PDFs. Read the rest of this entry »





Prioritising your academic tasks

18 04 2018

The following is an abridged version of one of the chapters in my recent book, The Effective Scientist, regarding how to prioritise your tasks in academia. For a more complete treatise of the issue, access the full book here.


Splitting tasks. © René Campbell renecampbellart.com

How the hell do you balance all the requirements of an academic life in science? From actually doing the science, analysing the data, writing papers, reviewing, and writing grants, to mentoring students — not to mention trying to have a modicum of a life outside of the lab — you can quickly end up feeling a little daunted. While there is no empirical formula that will make you run your academic life efficiently all the time, I can offer a few suggestions that might make your life just a little less chaotic.

Priority 1: Revise articles submitted to high-ranked journals

Barring a family emergency, my top priority is always revising an article that has been sent back to me from a high-ranking journal for revisions. Spend whatever time it takes to complete those revisions properly.

Priority 2: Revise articles submitted to lower-ranked journals

I could have lumped this priority with the previous, but I think it is necessary to distinguish the two should you find yourself in the fortunate position of having to do more than one revision at a time.

Priority 3: Experimentation and field work

Most of us need data before we can write papers, so this is high on my personal priority list. If field work is required, then obviously this will be your dominant preoccupation, sometimes for extended periods. Many experiments can also be highly time-consuming, while others can be done in stages or run in the background while you complete other tasks.

Priority 4: Databasing

This one could easily be forgotten, but it is a task that can take up a disproportionate amount of your time if you do not deliberately fit it into your schedule. Well-organised, abundantly meta-tagged, intuitive, and backed-up databases are essential for effective scientific analysis; good data are useless if you cannot find them or understand to what they refer. Read the rest of this entry »