Prioritising your academic tasks

18 04 2018

The following is an abridged version of one of the chapters in my recent book, The Effective Scientist, regarding how to prioritise your tasks in academia. For a more complete treatment of the issue, access the full book here.


Splitting tasks. © René Campbell renecampbellart.com

How the hell do you balance all the requirements of an academic life in science? From actually doing the science, analysing the data, writing papers, reviewing, and writing grants, to mentoring students — not to mention trying to have a modicum of a life outside the lab — you can quickly end up feeling a little daunted. While there is no empirical formula that will make your academic life run efficiently all the time, I can offer a few suggestions that might make it just a little less chaotic.

Priority 1: Revise articles submitted to high-ranked journals

Barring a family emergency, my top priority is always revising an article that a high-ranking journal has sent back to me for revisions. Spend whatever time it takes to complete those revisions properly.

Priority 2: Revise articles submitted to lower-ranked journals

I could have lumped this priority with the previous one, but I think it is necessary to distinguish the two should you find yourself in the fortunate position of having more than one revision to do at a time.

Priority 3: Experimentation and field work

Most of us need data before we can write papers, so this is high on my personal priority list. If field work is required, then obviously it will be your dominant preoccupation, sometimes for extended periods. Many experiments can also be highly time-consuming, while others can be done in stages or run in the background while you complete other tasks.

Priority 4: Databasing

This one could be easily forgotten, but it is a task that can take up a disproportionate amount of your time if you do not deliberately fit it into your schedule. Well-organised, abundantly meta-tagged, intuitive, and backed-up databases are essential for effective scientific analysis; good data are useless if you cannot find them or understand to what they refer. Read the rest of this entry »





The Effective Scientist

22 03 2018

What is an effective scientist?

The more I have tried to answer this question, the more it has eluded me. Before I even venture an attempt, it is necessary to distinguish the more esoteric term ‘effective’ from the more pedestrian term ‘success’. Even ‘success’ can be defined and quantified in many different ways. Is the most successful scientist the one who publishes the most papers, gains the most citations, earns the most grant money, gives the most keynote addresses, lectures the most undergraduate students, supervises the most PhD students, appears on the most television shows, or the one whose results improve the most lives? The unfortunate and wholly unsatisfying answer to each of those components is ‘yes’, but neither is the answer restricted to the superlative of any one of those. What I mean here is that you need to do reasonably well (i.e., relative to your peers, at any rate) in most of these things if you want to be considered ‘successful’. The relative contribution of your performance in these components will vary from person to person, and from discipline to discipline, but most undeniably ‘successful’ scientists do well in many or most of these areas.

That’s the opening paragraph of my new book, which was finally released for sale today in the United Kingdom and Europe (the Australasian release is scheduled for 7 April, and the North American release for 30 April). Published by Cambridge University Press, The Effective Scientist: A Handy Guide to a Successful Academic Career is the culmination of many years of work on all the things an academic scientist today needs to know, but was never taught formally.

Several people have asked me why I decided to write this book, so a little history of its genesis is in order. I suppose my over-arching drive was to create something that I sincerely wish had existed when I was a young scientist just starting out on the academic career path. I was focussed on learning my science, and didn’t necessarily have any formal instruction in all the other varied duties I’d eventually be expected to do well: how to write papers efficiently, how to review properly, how to manage my grant money, how to organise and store my data, how to run a lab smoothly, how to get the most out of a conference, how to deal with the media, and how to engage in social media effectively (even though the latter didn’t really exist at the time). All of these so-called ‘extra-curricular’ activities associated with an academic career were things I would eventually just have to learn as I went along. I’m sure you’ll agree that there has to be a better way than muddling through one’s career picking up haphazard experience. Read the rest of this entry »





Two new postdoctoral positions in ecological network & vegetation modelling announced

21 07 2017


With the official start of the new ARC Centre of Excellence for Australian Biodiversity and Heritage (CABAH) in July, I am pleased to announce two new CABAH-funded postdoctoral positions (a.k.a. Research Associates) in my global ecology lab at Flinders University in Adelaide (Flinders Modelling Node).

One of these positions is a little different, and represents something of an experiment: the Research Associate in Palaeo-Vegetation Modelling is restricted to women candidates; in other words, we’re only accepting applications from women for this one. It is a step in the right direction in the quest to improve the gender balance in my lab, and in universities in general.

The project itself is not overly prescribed, but we would like something along the following lines of inquiry: Read the rest of this entry »





Sensitive numbers

22 03 2016

A sensitive parameter. © toondoo.com

You couldn’t really do ecology if you didn’t know how to construct even the most basic mathematical model — even a simple regression is a model (the non-random relationship of some variable to another). The good thing about these simple models is that it is fairly straightforward to interpret the ‘strength’ of the relationship, in other words, how much variation in one thing can be explained by variation in another. Provided the relationship is real (not random), and provided there is at least some indirect causation implied (i.e., it is not just a spurious coincidence), there are many simple statistics that quantify this strength — in the case of our simple regression, the coefficient of determination (R²) is usually a good approximation of this.
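To make the idea concrete, here is a minimal sketch in Python of fitting a simple linear regression and reading off R² as the proportion of variance explained. The data and variable names are simulated purely for illustration; nothing here comes from a real analysis.

```python
# Minimal sketch: quantifying the strength of a simple linear relationship via R².
# The data are simulated purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 50)              # e.g., some environmental predictor
y = 2.5 * x + rng.normal(0, 3, 50)      # a response that partly depends on x

slope, intercept, r, p_value, stderr = stats.linregress(x, y)
print(f"R² = {r**2:.2f}  (proportion of variation in y explained by x)")
print(f"p  = {p_value:.3g}  (evidence that the relationship is non-random)")
```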

For more complex multivariate correlative models, the coefficient of determination is sometimes insufficient, in which case you might need to rely on statistics such as the proportion of deviance explained, or the marginal and/or conditional variance explained.
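As a rough illustration of the ‘proportion of deviance explained’ for a non-Gaussian model, here is a sketch using a Poisson generalised linear model; the data are invented and the model choice is mine, so treat it as an assumption-laden example rather than a recommendation (the marginal/conditional decomposition for mixed models is not shown).

```python
# Sketch: proportion of deviance explained by a Poisson GLM (invented count data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
x = rng.uniform(0, 10, 200)
y = rng.poisson(np.exp(0.3 + 0.2 * x))   # simulated count response

X = sm.add_constant(x)
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()

# GLM analogue of R²: 1 - (residual deviance / null deviance)
prop_deviance = 1 - fit.deviance / fit.null_deviance
print(f"proportion of deviance explained = {prop_deviance:.2f}")
```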

When you go beyond this correlative model approach and start constructing more mechanistic models that emulate ecological phenomena from the bottom-up, things get a little more complicated when it comes to quantifying the strength of relationships. Perhaps the most well-known category of such mechanistic models is the humble population viability analysis, abbreviated to PVA§.

Let’s take the simple case of a four-parameter population model we could use to project population size over the next 10 years for an endangered species that we’re introducing to a new habitat. We’ll assume that we have the following information: the size of the founding (introduced) population (n), the juvenile survival rate (Sj, the proportion of juveniles surviving from birth to their first year), the adult survival rate (Sa, the annual survival rate of adults from year 1 to maximum longevity), and the fertility rate of mature females (m, the number of offspring born per female per reproductive cycle). Each one of these parameters has an associated uncertainty (ε) that combines both measurement error and environmental variation.

If we just took the mean value of each of these three demographic rates (survivals and fertility) and projected a founding population of n = 10 individuals for 10 years into the future, we would have a single, deterministic estimate of the average outcome of introducing 10 individuals. As we already know, however, the variability, or stochasticity, is more important than the average outcome, because uncertainty in the parameter values (ε) means that a non-negligible number of model iterations will result in the extinction of the introduced population. This is something that most conservationists will obviously want to minimise.
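For illustration only, a deterministic projection of this kind might look like the following sketch. The mean rate values, the two-stage bookkeeping of adults and recruits, and the 50:50 sex ratio are all invented assumptions, not parameters from any real analysis.

```python
# Deterministic sketch: project n = 10 founders for 10 years using mean rates only.
# All rate values are hypothetical.
n0 = 10      # founding population size (n)
Sj = 0.5     # mean juvenile survival, birth to year 1 (assumed)
Sa = 0.8     # mean annual adult survival (assumed)
m = 1.2      # mean offspring per female per reproductive cycle (assumed)

N = float(n0)
for year in range(1, 11):
    births = (N / 2) * m        # assume a 50:50 sex ratio
    N = N * Sa + births * Sj    # surviving adults plus surviving recruits
    print(f"year {year}: projected N = {N:.1f}")
```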

So each time we run an iteration of the model, and generally for each breeding interval (most often 1 year at a time), we choose (based on some random-sampling regime) a different value for each parameter. This gives us a distribution of outcomes after the 10-year projection. Let’s say we ran 1000 iterations like this; the proportion of those iterations in which the population went extinct would provide an estimate of the population’s extinction probability over that interval. Of course, we would probably also vary the size of the founding population (say, between 10 and 100) to see at what point the extinction probability became acceptably low for managers (i.e., as close to zero as possible), without requiring so many individuals that the introduction became too laborious or expensive. Read the rest of this entry »
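A hedged sketch of that stochastic version follows: each year of each iteration re-draws the three rates from assumed distributions, and the proportion of iterations ending in extinction approximates the extinction probability. The distributions, ε values, and demographic bookkeeping are all illustrative assumptions rather than the workings of any real PVA software.

```python
# Stochastic sketch: Monte Carlo projection with re-sampled demographic rates.
# All means, uncertainties (ε), and distributional choices are hypothetical.
import numpy as np

rng = np.random.default_rng(2016)
iterations, years, n0 = 1000, 10, 10
Sj_mu, Sa_mu, m_mu = 0.5, 0.8, 1.2       # assumed mean rates
Sj_sd, Sa_sd, m_sd = 0.10, 0.05, 0.30    # assumed uncertainties (ε)

extinctions = 0
for _ in range(iterations):
    N = n0
    for _ in range(years):
        # re-sample each demographic rate for this breeding interval
        Sj = float(np.clip(rng.normal(Sj_mu, Sj_sd), 0, 1))
        Sa = float(np.clip(rng.normal(Sa_mu, Sa_sd), 0, 1))
        m = max(rng.normal(m_mu, m_sd), 0.0)

        births = int(round((N / 2) * m))          # assume half the population is female
        N = rng.binomial(N, Sa) + rng.binomial(births, Sj)
        if N < 1:
            extinctions += 1
            break

print(f"estimated extinction probability ≈ {extinctions / iterations:.3f}")
```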





Ice Age? No. Abrupt warmings and hunting together polished off Holarctic megafauna

24 07 2015

Oh shit oh shit oh shit …

Did ice ages cause the Pleistocene megafauna to go extinct? Contrary to popular opinion, no, they didn’t. But climate change did have something to do with their demise; it just happened to be global warming events instead.

Just out today in Science, our long-time-coming paper (9 years in total, if you count the time from the original idea to today) ‘Abrupt warmings drove Late Pleistocene Holarctic megafaunal turnover’, led by Alan Cooper of the Australian Centre for Ancient DNA and Chris Turney of the UNSW Climate Change Research Centre, demonstrates for the first time that abrupt warming periods over the last 60,000 years were at least partially responsible for the collapse of the megafauna in Eurasia and North America.

You might recall that I’ve been a bit sceptical of claims that climate changes had much to do with megafauna extinctions during the Late Pleistocene and early Holocene, mainly because of the overwhelming evidence that humans had a big part to play in their demise (surprise, surprise). What I rejected, though, was never the idea that climate played a role in the extinctions; rather, I took issue with claims that climate change was the dominant driver. I’ve also had problems with blanket claims that it was ‘always this’ or ‘always that’, when the complexity of biogeography and community dynamics means that it was most assuredly more complicated than most people think.

I’m happy to say that our latest paper indeed demonstrates the complexity of megafauna extinctions, and that it took a heap of fairly complex datasets and analyses to demonstrate. Not only were the data varied – the combination of scientists involved was just as eclectic, with ancient DNA specialists, palaeo-climatologists and ecological modellers (including yours truly) assembled to make sense of the complicated story that the data ultimately revealed. Read the rest of this entry »





School finishers and undergraduates ill-prepared for research careers

22 05 2014

Having been for years now at the pointy end of the educational pathway training the next generation of scientists, I’d like to share some of my observations regarding how well we’re doing. At least in Australia, my realistic assessment of science education is: not well at all.

I’ve been thinking about this for some time, but only now decided to put my thoughts into words as the train wreck of our current government lurches toward a future guaranteeing an even stupider society. Charging postgraduate students to do PhDs for the first time, encouraging a US-style system of wealth-based educational privilege, slashing education budgets and de-investing in science while promoting the belief in invisible spaghetti monsters from space, are all the latest in the Fiberal future nightmare that will change our motto to “Australia – the stupid country”.

As you can appreciate, I’m not filled with a lot of hope that the worrying trends I’ve observed over the past 10 years or so are going to get any better any time soon. To be fair though, the problems go beyond the latest stupidities of the Fiberal government.

My realisation that there was a problem has crystallised only recently as I began to notice that most of my lab members were not Australian. In fact, the percentage of Australian PhD students and post-doctoral fellows in the lab usually hovers around 20%. Another sign of a problem was that even when we advertised for several well-paid postdoctoral positions, not a single Australian made the interview list (in fact, few Australians applied at all). I’ve also talked to many of my colleagues around Australia in the field of quantitative ecology, and many lament the same general trend.

Is it just poor mathematical training? Yes and no. Australian universities have generally lowered their entry-level requirements for basic maths, thereby perpetuating the already poor skill base of school leavers. Why? Bums (that pay) on seats. This means that people like me struggle to find Australian candidates that can do the quantitative research we need done. We are therefore forced to look overseas. Read the rest of this entry »





Putting the ‘science’ in citizen science

30 04 2014

How to tell if a koala has been in your garden. © Great Koala Count

When I was in Finland last year, I had the pleasure of meeting Tomas Roslin and hearing him describe his Finland-wide citizen-science project on dung beetles. What impressed me most was that it completely flipped my general opinion about citizen science and showed me that the process can be useful.

I’m not trying to sound arrogant or scientifically elitist here – I’m merely stating that it was my opinion that most citizen-science endeavours fail to provide truly novel, useful and rigorous data for scientific hypothesis testing. Well, I must admit that I still believe that description fits ‘most’ citizen-science efforts (although there are exceptions – see here for an example), but Tomas’ success showed me just how good they can be.

So what’s the problem with citizen science? Nothing, in principle; in fact, it’s a great idea. Convince keen amateur naturalists over a wide area to observe some ecological phenomenon or function as objectively as possible, record the data, and submit them to a scientist to test some brilliant hypothesis. If it works, chances are the data are of much broader coverage and more intensively sampled than could ever be achieved (or afforded) by a single scientific team alone. So why don’t we do this all the time?

If you’re a scientist, I don’t need to tell you how difficult it is to design a good experimental sampling regime, how even more difficult it is to ensure objectivity and precision when sampling, and the fastidiousness with which the data must be recorded and organised digitally for final analysis. And that’s just for trained scientists! Imagine an army of well-intentioned but largely inexperienced samplers, and you can quickly visualise how errors might accumulate exponentially in a dataset until it eventually becomes too unreliable for any real scientific application.

So for these reasons, I’ve been largely reluctant to engage with large-scale citizen-science endeavours. However, I’m proud to say that I have now published my first paper based entirely on citizen-science data! Call me a hypocrite (or a slow learner). Read the rest of this entry »