Two new postdoctoral positions in ecological network & vegetation modelling announced

21 07 2017


With the official start of the new ARC Centre of Excellence for Australian Biodiversity and Heritage (CABAH) in July, I am pleased to announce two new CABAH-funded postdoctoral positions (a.k.a. Research Associates) in my global ecology lab at Flinders University in Adelaide (Flinders Modelling Node).

One of these positions is a little different, and represents something of an experiment: the Research Associate in Palaeo-Vegetation Modelling is restricted to women candidates; in other words, we’re only accepting applications from women for this one. It’s a step in the right direction in our quest to improve the gender balance in my lab, and in universities in general.

The project itself is not overly prescribed, but we would like something along the following lines of inquiry: Read the rest of this entry »





Sensitive numbers

22 03 2016
A sensitive parameter (image: toondoo.com)

You couldn’t really do ecology if you didn’t know how to construct even the most basic mathematical model — even a simple regression is a model (the non-random relationship of one variable to another). The good thing about these simple models is that it is fairly straightforward to interpret the ‘strength’ of the relationship — in other words, how much variation in one thing can be explained by variation in another. Provided the relationship is real (not random), and provided there is at least some indirect causation implied (i.e., it is not just a spurious coincidence), there are many simple statistics that quantify this strength; in the case of our simple regression, the coefficient of determination (R²) is usually a good approximation.
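To make that concrete, here’s a minimal sketch (in Python, with invented data) of fitting a simple linear regression and computing R² as the proportion of variance explained:

```python
# A minimal illustration: even a simple regression is a model, and R^2
# quantifies how much variation it explains. Data are invented for demonstration.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 2, size=x.size)  # linear signal plus noise

slope, intercept = np.polyfit(x, y, 1)             # fit the 'model'
resid = y - (slope * x + intercept)
r2 = 1 - np.sum(resid**2) / np.sum((y - y.mean())**2)
print(f"R^2 = {r2:.2f}")                           # proportion of variance explained
```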

In the case of more complex multivariate correlation models, the coefficient of determination is sometimes insufficient, in which case you might need to rely on statistics such as the proportion of deviance explained, or the marginal and/or conditional variance explained.

When you go beyond this correlative approach and start constructing more mechanistic models that emulate ecological phenomena from the bottom up, things get a little more complicated when it comes to quantifying the strength of relationships. Perhaps the best-known category of such mechanistic models is the humble population viability analysis, abbreviated to PVA.

Let’s take the simple case of a four-parameter population model we could use to project population size over the next 10 years for an endangered species that we’re introducing to a new habitat. We’ll assume that we have the following information: the size of the founding (introduced) population (n), the juvenile survival rate (Sj, the proportion of juveniles surviving from birth to their first year), the adult survival rate (Sa, the annual rate of adult survival from year 1 to maximum longevity), and the fertility rate of mature females (m, the number of offspring born per female per reproductive cycle). Each of these parameters has an associated uncertainty (ε) that combines both measurement error and environmental variation.

If we just took the mean value of each of the three demographic rates (survivals and fertility) and projected a founding population of n = 10 individuals for 10 years into the future, we would have a single, deterministic estimate of the average outcome of introducing 10 individuals. As we already know, however, the variability, or stochasticity, is more important than the average outcome, because uncertainty in the parameter values (ε) means that a non-negligible number of model iterations will result in the extinction of the introduced population. This is something that most conservationists will obviously want to minimise.
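For concreteness, here’s what that single deterministic projection might look like in Python, using simple two-stage (juvenile/adult) bookkeeping of the model above; all rate values are hypothetical placeholders, and the 50:50 sex ratio is an added assumption:

```python
# Deterministic version: project with mean vital rates only (no uncertainty).
# All parameter values are hypothetical placeholders.
adults, juveniles = 10, 0.0
for _ in range(10):
    births = 1.20 * adults / 2          # mean fertility m, 50:50 sex ratio
    adults = 0.85 * adults + 0.40 * juveniles  # mean Sa and Sj
    juveniles = births
print(f"expected population after 10 years ≈ {adults + juveniles:.1f}")
```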

So each time we run an iteration of the model, and generally for each breeding interval (most often one year at a time), we choose (based on some random-sampling regime) a different value for each parameter. This gives us a distribution of outcomes after the 10-year projection. Let’s say we did 1000 iterations like this; the proportion of iterations in which the population went extinct provides an estimate of the population’s extinction probability over that interval. Of course, we would probably also vary the size of the founding population (say, between 10 and 100) to see at what point the extinction probability becomes acceptably low for managers (i.e., as close to zero as possible), without the required founding population being so large that introducing that many individuals becomes too laborious or expensive. Read the rest of this entry »
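To make the procedure concrete, here’s a minimal sketch of such a stochastic projection in Python/NumPy, using the same hypothetical rates as the deterministic sketch above plus invented uncertainties (ε); none of these values are estimates for any real species:

```python
# A minimal stochastic PVA sketch: resample vital rates each year, project the
# population, and estimate extinction probability across iterations.
# All parameter values below are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(42)

years, iters = 10, 1000
n0 = 10                       # founding population size (n)
Sj_mu, Sj_sd = 0.40, 0.10     # juvenile survival (Sj) and its uncertainty (ε)
Sa_mu, Sa_sd = 0.85, 0.05     # adult survival (Sa) and its uncertainty (ε)
m_mu, m_sd = 1.20, 0.30       # fertility (m) and its uncertainty (ε)

extinct = 0
for _ in range(iters):
    adults, juveniles = n0, 0
    for _ in range(years):
        # draw a new value for each vital rate every breeding interval
        Sj = np.clip(rng.normal(Sj_mu, Sj_sd), 0, 1)
        Sa = np.clip(rng.normal(Sa_mu, Sa_sd), 0, 1)
        m = max(rng.normal(m_mu, m_sd), 0)
        births = rng.poisson(m * adults / 2)   # assumes a 50:50 sex ratio
        adults = rng.binomial(adults, Sa) + rng.binomial(juveniles, Sj)
        juveniles = births
    if adults + juveniles == 0:
        extinct += 1

print(f"extinction probability ≈ {extinct / iters:.3f}")
```

Varying the founding population between 10 and 100, as described above, is then just an outer loop over this routine.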





Ice Age? No. Abrupt warmings and hunting together polished off Holarctic megafauna

24 07 2015

Oh shit oh shit oh shit …

Did ice ages cause the Pleistocene megafauna to go extinct? Contrary to popular opinion, no, they didn’t. But climate change did have something to do with their demise; only it was abrupt global warming events, not cooling, that did the damage.

Just out today in Science, our long-time-coming paper (nine years in total, if you count from the original idea to today) ‘Abrupt warmings drove Late Pleistocene Holarctic megafaunal turnover’, led by Alan Cooper of the Australian Centre for Ancient DNA and Chris Turney of the UNSW Climate Change Research Centre, demonstrates for the first time that abrupt warming periods over the last 60,000 years were at least partially responsible for the collapse of the megafauna in Eurasia and North America.

You might recall that I’ve been a bit sceptical of claims that climate change had much to do with megafauna extinctions during the Late Pleistocene and early Holocene, mainly because of the overwhelming evidence that humans had a big part to play in their demise (surprise, surprise). To be clear, I’ve never claimed that climate had nothing to do with the extinctions; rather, I took issue with claims that climate change was the dominant driver. I’ve also had problems with blanket claims that it was ‘always this’ or ‘always that’, when the complexity of biogeography and community dynamics means that it was most assuredly more complicated than most people think.

I’m happy to say that our latest paper indeed demonstrates the complexity of megafauna extinctions, and it took a heap of fairly complex datasets and analyses to do so. Not only were the data varied – the combination of scientists involved was just as eclectic, with ancient-DNA specialists, palaeo-climatologists and ecological modellers (including yours truly) assembled to make sense of the complicated story that the data ultimately revealed. Read the rest of this entry »





School finishers and undergraduates ill-prepared for research careers

22 05 2014

Having been for years now at the pointy end of the educational pathway, training the next generation of scientists, I’d like to share some of my observations regarding how well we’re doing. At least in Australia, my realistic assessment of science education is: not well at all.

I’ve been thinking about this for some time, but only now decided to put my thoughts into words as the train wreck of our current government lurches toward a future guaranteeing an even stupider society. Charging postgraduate students fees to do PhDs for the first time, encouraging a US-style system of wealth-based educational privilege, slashing education budgets, and disinvesting in science while promoting belief in invisible spaghetti monsters from space are just the latest instalments in the Fiberal nightmare that will change our motto to “Australia – the stupid country”.

As you can appreciate, I’m not filled with a lot of hope that the worrying trends I’ve observed over the past 10 years or so are going to get any better any time soon. To be fair though, the problems go beyond the latest stupidities of the Fiberal government.

My realisation that there was a problem has crystallised only recently as I began to notice that most of my lab members were not Australian. In fact, the percentage of Australian PhD students and post-doctoral fellows in the lab usually hovers around 20%. Another sign of a problem was that even when we advertised for several well-paid postdoctoral positions, not a single Australian made the interview list (in fact, few Australians applied at all). I’ve also talked to many of my colleagues around Australia in the field of quantitative ecology, and many lament the same general trend.

Is it just poor mathematical training? Yes and no. Australian universities have generally lowered their entry-level requirements for basic maths, thereby perpetuating the already poor skill base of school leavers. Why? Bums (that pay) on seats. This means that people like me struggle to find Australian candidates who can do the quantitative research we need done. We are therefore forced to look overseas. Read the rest of this entry »





Putting the ‘science’ in citizen science

30 04 2014

How to tell if a koala has been in your garden. © Great Koala Count

When I was in Finland last year, I had the pleasure of meeting Tomas Roslin and hearing him describe his Finland-wide citizen-science project on dung beetles. What impressed me most was that it completely flipped my general opinion about citizen science and showed me that the process can be useful.

I’m not trying to sound arrogant or scientifically elitist here – I’m merely stating that it was my opinion that most citizen-science endeavours fail to provide truly novel, useful and rigorous data for scientific hypothesis testing. Well, I must admit that I still believe ‘most’ citizen-science data fit that description (although there are exceptions – see here for an example), but Tomas’ success showed me just how good such data can be.

So what’s the problem with citizen science? Nothing, in principle; in fact, it’s a great idea. Convince keen amateur naturalists over a wide area to observe (as objectively as possible) some ecological phenomenon or function, record the data, and submit them to a scientist to test some brilliant hypothesis. If it works, chances are the data are of much broader coverage and more intensively sampled than could ever be achieved (or afforded) by a single scientific team alone. So why don’t we do this all the time?

If you’re a scientist, I don’t need to tell you how difficult it is to design a good experimental sampling regime, how much more difficult it is to ensure objectivity and precision when sampling, and the fastidiousness with which the data must be recorded and organised digitally for final analysis. And that’s just for trained scientists! Imagine an army of well-intentioned but largely inexperienced samplers, and you can quickly visualise how errors might accumulate exponentially in a dataset, so that it eventually becomes too unreliable for any real scientific application.

So for these reasons, I’ve been largely reluctant to engage with large-scale citizen-science endeavours. However, I’m proud to say that I have now published my first paper based entirely on citizen science data! Call me a hypocrite (or a slow learner). Read the rest of this entry »





Cleaning up the rubbish: Australian megafauna extinctions

15 11 2013

A few weeks ago I wrote a post about how to run the perfect scientific workshop, which most of you thought was a good set of tips (bizarrely, one person was quite upset with the message; I saved him the embarrassment of looking stupid online and refrained from publishing his comment).

As I mentioned at the end of that post, the stimulus for the topic was a particularly wonderful workshop that 12 of us attended at the beautiful Linnaeus Estate on the northern coast of New South Wales (see Point 5 in the ‘workshop tips’ post).

But why did a group of ecological modellers (me, Barry Brook, Salvador Herrando-Pérez, Fréd Saltré, Chris Johnson, Nick Beeton), ancient-DNA specialists (Alan Cooper), palaeontologists (Gav Prideaux), fossil-dating specialists (Dizzy Gillespie, Bert Roberts, Zenobia Jacobs) and palaeo-climatologists (Michael Bird, Chris Turney [in absentia]) get together in the first place? Hint: it wasn’t just for the beautiful beach and good wine.

I hate to say it – mainly because it deserves as little attention as possible – but the main reason is that we needed to clean up a bit of rubbish. The rubbish in question being the latest bit of excrescence growing on that accumulating heap produced by a certain team of palaeontologists promulgating their ‘it’s all about the climate or nothing’ broken record.

Read the rest of this entry »





Biogeography comes of age

22 08 2013

This week has been all about biogeography for me. While I wouldn’t call myself a ‘biogeographer’, I certainly do apply a lot of the discipline’s techniques.

This week I’m attending the 2013 joint Congress of Ecology of the International Association for Ecology (INTECOL) and the British Ecological Society in London, and I have purposefully sought out more of the biogeographical talks than pretty much anything else, because the speakers were engaging and the topics fascinating. As it happens, even my own presentation had a strong biogeographical flavour this year.

Although the species-area relationship (SAR) is only one small aspect of biogeography, I’ve been slightly amazed that, more than 50 years after MacArthur & Wilson’s famous book, our discipline is still obsessed with the SAR.

I’ve blogged about SAR issues before – what makes it so engaging and controversial is that the SAR is the principal tool used to estimate overall extinction rates, even though it is perhaps one of the bluntest tools in the ecological toolbox. I suppose its popularity stems from its superficial simplicity – as the area of a (classically oceanic) island increases, so too does the total number of species it can hold. The controversies surrounding such a basic relationship centre on describing the rate at which species richness increases with area – in other words, just how nonlinear the SAR itself is.

Even a cursory understanding of maths reveals the importance of estimating this curve correctly. As the area of an ‘island’ (habitat fragment) decreases due to human disturbance, estimating how many species end up going extinct as a result depends entirely on the shape of the SAR. Get the SAR wrong, and you can over- or under-estimate the extinction rate. This was the crux of the palaver over Fangliang He (not attending INTECOL) & Stephen Hubbell’s (attending INTECOL) paper in Nature in 2011.
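To see how much the answer hinges on the curve’s shape, consider the standard back-of-the-envelope calculation under a power-law SAR (S = cA^z): the expected proportion of species lost when habitat area shrinks is 1 − (A_new/A_old)^z. A minimal sketch in Python, with hypothetical values of the exponent z:

```python
# How the predicted species loss from habitat loss depends on the SAR exponent z,
# assuming a power-law SAR (S = c * A**z). The z values are hypothetical.
area_remaining = 0.5                   # half the original habitat is left
for z in (0.15, 0.25, 0.35):           # a plausible range of SAR exponents
    loss = 1 - area_remaining**z       # predicted fraction of species lost
    print(f"z = {z:.2f} -> predicted species loss ≈ {loss:.1%}")
```

Halving the habitat area yields predicted losses ranging from roughly 10% to over 20% across this range of exponents, which is exactly why getting the curve wrong translates into over- or under-estimated extinction rates.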

The first real engagement with the SAR came in John Harte’s maximum-entropy talk in the process-macroecology session on Tuesday. What was notable to me was his adamant claim that the power-law form of the SAR should never be used, despite its commonness in the literature. I took this with a grain of salt, because I know all about how messy area-richness data can be, and why one needs to consider alternative models (see an example here). But then yesterday I listened to one of the greats of biogeography – Robert Whittaker – who said pretty much the complete opposite of Harte’s contention. Whittaker showed results from one of his papers from last year indicating that the power law was in fact the most commonly supported SAR across many datasets (granted, there was substantial variability in overall model performance). My conclusion remains firm – make sure you fit multiple models to each individual dataset and try to infer the SAR from model-averaging; a sketch of what that can look like follows. Read the rest of this entry »
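For the curious, here’s a minimal sketch of multi-model SAR inference with AIC-based model averaging (Python/SciPy). The island areas and richness values are invented for illustration, and only two candidate models (the power law and Gleason’s logarithmic form) are compared; a real analysis would include more:

```python
# Fit two candidate SAR models to (invented) island data, weight them by AIC,
# and make a model-averaged richness prediction for a new island.
import numpy as np
from scipy.optimize import curve_fit

area = np.array([1, 5, 10, 50, 100, 500, 1000], dtype=float)  # km^2 (hypothetical)
rich = np.array([8, 15, 19, 32, 40, 61, 75], dtype=float)     # species (hypothetical)

def power(A, c, z):        # S = c * A**z
    return c * A**z

def logarithmic(A, a, b):  # S = a + b * log(A)
    return a + b * np.log(A)

def aic(y, yhat, k):       # least-squares AIC: n*log(RSS/n) + 2k
    n, rss = len(y), np.sum((y - yhat)**2)
    return n * np.log(rss / n) + 2 * k

fits = {}
for name, f, p0 in [("power", power, (5.0, 0.25)), ("log", logarithmic, (5.0, 10.0))]:
    popt, _ = curve_fit(f, area, rich, p0=p0)
    fits[name] = (f, popt, aic(rich, f(area, *popt), k=len(popt)))

# Akaike weights, then a model-averaged prediction for a 250-km^2 island
aics = np.array([v[2] for v in fits.values()])
w = np.exp(-0.5 * (aics - aics.min()))
w /= w.sum()
pred = sum(wi * f(250.0, *popt) for wi, (f, popt, _) in zip(w, fits.values()))
print(dict(zip(fits, w.round(3))), f"| model-averaged S(250 km^2) ≈ {pred:.1f}")
```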