Two new postdoctoral positions in ecological network & vegetation modelling announced

21 07 2017


With the official start of the new ARC Centre of Excellence for Australian Biodiversity and Heritage (CABAH) in July, I am pleased to announce two new CABAH-funded postdoctoral positions (a.k.a. Research Associates) in my global ecology lab at Flinders University in Adelaide (Flinders Modelling Node).

One of these positions is a little different, and represents something of an experiment. The Research Associate in Palaeo-Vegetation Modelling is restricted to women candidates; in other words, we’re only accepting applications from women for this one. It is a deliberate step toward improving the gender balance in my lab, and in universities in general.

The project itself is not overly prescribed, but we would like something along the following lines of inquiry: Read the rest of this entry »

Sensitive numbers

22 03 2016

A sensitive parameter

You couldn’t really do ecology if you didn’t know how to construct even the most basic mathematical model — even a simple regression is a model (the non-random relationship of some variable to another). The good thing about even these simple models is that it is fairly straightforward to interpret the ‘strength’ of the relationship, in other words, how much variation in one thing can be explained by variation in another. Provided the relationship is real (not random), and provided there is at least some indirect causation implied (i.e., it is not just a spurious coincidence), then there are many simple statistics that quantify this strength — in the case of our simple regression, the coefficient of determination (R²) statistic is usually a good approximation of this.
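For a simple least-squares regression, R² is just one minus the ratio of the residual to the total sum of squares — the proportion of variation in one variable explained by the other. A minimal sketch with entirely made-up numbers (the data and variable names are illustrative only):

```python
import numpy as np

# Made-up example data: some predictor x and response y
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 12.3])

# Least-squares fit of y = a + b*x
b, a = np.polyfit(x, y, 1)
y_hat = a + b * x

# R^2 = 1 - SS_residual / SS_total: the proportion of variation
# in y explained by variation in x
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(round(r2, 3))
```

With near-linear toy data like these, R² comes out close to 1; noisier data would push it toward 0.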

In the case of more complex multivariate correlation models, then sometimes the coefficient of determination is insufficient, in which case you might need to rely on statistics such as the proportion of deviance explained, or the marginal and/or conditional variance explained.

When you go beyond this correlative model approach and start constructing more mechanistic models that emulate ecological phenomena from the bottom-up, things get a little more complicated when it comes to quantifying the strength of relationships. Perhaps the most well-known category of such mechanistic models is the humble population viability analysis, abbreviated to PVA.

Let’s take the simple case of a four-parameter population model we could use to project population size over the next 10 years for an endangered species that we’re introducing to a new habitat. We’ll assume that we have the following information: the size of the founding (introduced) population (n), the juvenile survival rate (Sj, the proportion of juveniles surviving from birth to the first year), the adult survival rate (Sa, the annual survival rate of adults from year 1 to maximum longevity), and the fertility rate of mature females (m, the number of offspring born per female per reproductive cycle). Each one of these parameters has an associated uncertainty (ε) that combines both measurement error and environmental variation.

If we just took the mean value of each of these three demographic rates (survivals and fertility) and projected a founding population of n = 10 individuals for 10 years into the future, we would have a single, deterministic estimate of the average outcome of introducing 10 individuals. As we already know, however, the variability, or stochasticity, is more important than the average outcome, because uncertainty in the parameter values (ε) will mean that a non-negligible number of model iterations will result in the extinction of the introduced population. This is something that most conservationists will obviously want to minimise.

So each time we run an iteration of the model, and generally for each breeding interval (most often 1 year at a time), we choose (based on some random-sampling regime) a different value for each parameter. This will give us a distribution of outcomes after the 10-year projection. Let’s say we did 1000 iterations like this; the proportion of those iterations in which the population went extinct would provide us with an estimate of the population’s extinction probability over that interval. Of course, we would probably also vary the size of the founding population (say, between 10 and 100) to see at what point the extinction probability became acceptably low for managers (i.e., as close to zero as possible), without the founding population being so large that introducing that many individuals would be too laborious or expensive. Read the rest of this entry »
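The procedure just described can be sketched as a simple Monte Carlo simulation. The code below is a deliberately minimal illustration, not the full PVA machinery used in practice: the vital rates are hypothetical, parameter uncertainty is approximated as normal noise around the means, age structure is not carried between years, and the function name is my own invention.

```python
import numpy as np

rng = np.random.default_rng(42)

def pva_extinction_prob(n0, s_j, s_a, m, eps=0.1, years=10, iterations=1000):
    """Fraction of Monte Carlo iterations in which the population
    hit zero within `years` annual breeding cycles.

    Each year, juvenile survival (s_j), adult survival (s_a) and
    fertility (m) are resampled around their means with standard
    deviation `eps`, mimicking parameter uncertainty."""
    extinct = 0
    for _ in range(iterations):
        n = n0
        for _ in range(years):
            sj = float(np.clip(rng.normal(s_j, eps), 0.0, 1.0))
            sa = float(np.clip(rng.normal(s_a, eps), 0.0, 1.0))
            f = max(rng.normal(m, eps), 0.0)
            births = rng.poisson(f * n / 2)      # assume a 1:1 sex ratio
            recruits = rng.binomial(births, sj)  # newborns surviving year 1
            survivors = rng.binomial(n, sa)      # adults surviving the year
            n = survivors + recruits
            if n == 0:
                extinct += 1
                break
    return extinct / iterations

# vary the founding population to see where extinction risk levels off
for n0 in (10, 25, 50, 100):
    print(n0, pva_extinction_prob(n0, s_j=0.5, s_a=0.8, m=1.2))
```

Real PVAs track age or stage classes, density dependence, and catastrophes, and are typically built in dedicated tools; the point here is only the logic of resampling parameters each year and counting extinctions across iterations.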

Outright bans of trophy hunting could do more harm than good

5 01 2016

In July 2015 an American dentist shot and killed a male lion called ‘Cecil’ with a hunting bow and arrow, an act that sparked a storm of social media outrage. Cecil was a favourite of tourists visiting Hwange National Park in Zimbabwe, and so the allegation that he was lured out of the Park to neighbouring farmland added considerable fuel to the flames of condemnation. Several other aspects of the hunt, such as baiting close to national park boundaries, were allegedly done illegally and against the spirit and ethical norms of a managed trophy hunt.

In May 2015, a Texan legally shot a critically endangered black rhino in Namibia, which also generated considerable online ire. The backlash ensued even though the male rhino was considered ‘surplus’ to Namibia’s black rhino populations, and the US$350,000 generated from the managed hunt was to be re-invested in conservation. Together, these two incidents have triggered vociferous appeals to ban trophy hunting throughout Africa.

These highly politicized events are but a small component of a large industry in Africa worth > US$215 million per year that ‘sells’ iconic animals to (mainly foreign) hunters as a means of generating otherwise scarce funds. While to most people this might seem like an abhorrent way to generate money, we argue in a new paper that sustainable-use activities, such as trophy hunting, can be an important tool in the conservationist’s toolbox. Conserving biodiversity can be expensive, so generating money is a central preoccupation of many environmental NGOs, conservation-minded individuals, government agencies and scientists. Making money for conservation in Africa is even more challenging, and so we argue that trophy hunting should and could fill some of that gap. Read the rest of this entry »

Avoiding genetic rescue not justified on genetic grounds

12 03 2015
Genetics to the rescue!

I had the pleasure today of reading a new paper by one of the greatest living conservation geneticists, Dick Frankham. As some CB readers might remember, I’ve also published some papers with Dick over the last few years, the most recent challenging the very basis of the IUCN Red List category thresholds (i.e., in general, they’re too small).

Dick’s latest paper in Molecular Ecology is a meta-analysis designed to test whether there are any genetic grounds for NOT attempting genetic rescue for inbreeding-depressed populations. I suppose a few definitions are in order here. Genetic rescue is the process, either natural or facilitated, where inbred populations (i.e., in a conservation sense, those comprising too many individuals bonking their close relatives because the population in question is small) receive genes from another population such that their overall genetic diversity increases. In the context of conservation genetics, ‘inbreeding depression’ simply means reduced biological fitness (fertility, survival, longevity, etc.) resulting from parents being too closely related.

Seems like an important thing to avoid, so why not attempt to facilitate gene flow among populations such that those with inbreeding depression can be ‘rescued’? In applied conservation, there are many reasons given for not attempting genetic rescue: Read the rest of this entry »

We generally ignore the big issues

11 08 2014

I’ve had a good week at Stanford University with Paul Ehrlich where we’ve been putting the final touches on our book. It’s been taking a while to put together, but we’re both pretty happy with the result, which should be published by The University of Chicago Press within the first quarter of 2015.

It has indeed been a pleasure and a privilege to work with one of the greatest thinkers of our age, and let me tell you that at 82, he’s still a force with which to be reckoned. While I won’t divulge much of our discussions here given they’ll appear soon-ish in the book, I did want to raise one subject that I think we all need to think about a little more.

The issue is what we, as ecologists (I’m including conservation scientists here), choose to study and contemplate in our professional life.

I’m just as guilty as most of the rest of you, but I argue that our discipline is caught in a rut of irrelevancy on the grander scale. We spend a lot of time refining the basics of what we essentially already know pretty well. While there will be an eternity of processes to understand, species to describe, and relationships to measure, can our discipline really afford to avoid the biggest issues while biodiversity (and our society with it) is flushed down the drain?

Read the rest of this entry »

50/500 or 100/1000 debate not about time frame

26 06 2014

Not enough individuals

As you might recall, Dick Frankham, Barry Brook and I recently wrote a review in Biological Conservation challenging the status quo regarding the famous 50/500 ‘rule’ in conservation management (effective population size [Ne] = 50 to avoid inbreeding depression in the short term, and Ne = 500 to retain the ability to evolve in perpetuity). Well, it inevitably led to some comments arising in the same journal, but we were only permitted by Biological Conservation to respond to one of them. In our opinion, the other comment was just as problematic, and only further muddied the waters, so it too required a response. In a first for me, we have therefore decided to publish our response on the arXiv pre-print server, as well as here:

50/500 or 100/1000 debate is not about the time frame – Reply to Rosenfeld

cite as: Frankham R, Bradshaw CJA, Brook BW. 2014. 50/500 or 100/1000 debate is not about the time frame – Reply to Rosenfeld. arXiv: 1406.6424 [q-bio.PE] 25 June 2014.

The Letter from Rosenfeld (2014) in response to Jamieson and Allendorf (2012) and Frankham et al. (2014) and related papers is misleading in places and requires clarification and correction, as follows: Read the rest of this entry »

We’re sorry, but 50/500 is still too few

28 01 2014

too few

Some of you who are familiar with my colleagues’ and my work will know that we have been investigating the minimum viable population size concept for years (see references at the end of this post). Little did I know when I started this line of scientific inquiry that it would end up creating more than a few adversaries.

It might be a philosophical perspective that people adopt when refusing to believe that there is any such thing as a ‘minimum’ number of individuals in a population required to guarantee a high (i.e., almost assured) probability of persistence. I’m not sure. For whatever reason though, there have been some fierce opponents to the concept, or any application of it.

Yet a sizeable chunk of quantitative conservation ecology develops – in various forms – population viability analyses to estimate the probability that a population (or entire species) will go extinct. When the probability is unacceptably high, then various management approaches can be employed (and modelled) to improve the population’s fate. The flip side of such an analysis is, of course, seeing at what population size the probability of extinction becomes negligible.

‘Negligible’ is a subjective term in itself, just as the word ‘very’ can mean different things to different people. This is why we looked into standardising the criteria for ‘negligible’ for minimum viable population sizes, much as the near-universally accepted IUCN Red List attempts to do with its categorical extinction-risk thresholds.

But most reasonable people are likely to agree that a < 1 % chance of going extinct over many generations (40, in the case of our suggestion) is an acceptable target. I’d feel pretty safe personally if my own family’s probability of persisting were > 99 % over the next 40 generations.
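The arithmetic behind that target is worth spelling out. If each generation carries an extinction probability p, and (a simplification real populations violate) those risks are independent across generations, then persistence over 40 generations is (1 − p)⁴⁰, and the largest tolerable per-generation risk follows directly:

```python
# Target: >= 99% probability of persistence over 40 generations.
# Assuming an independent extinction probability p per generation,
# persistence is (1 - p)**40, so the largest tolerable p solves
# (1 - p)**40 = 0.99.
p_max = 1.0 - 0.99 ** (1.0 / 40.0)
print(round(p_max, 5))  # about 0.00025
```

In other words, the per-generation risk has to be kept to roughly 2.5 in 10,000 for the 40-generation target to hold.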

Some people, however, baulk at the notion of making generalisations in ecology (funny – I was always under the impression that was exactly what we were supposed to be doing as scientists – finding how things worked in most situations, such that the mechanisms become clearer and clearer – call me a dreamer).

So when we were attacked in several high-profile journals, it came as something of a surprise. The latest lashing came in the form of a Trends in Ecology and Evolution article. We wrote a (necessarily short) response to that article, identifying its inaccuracies and contradictions, but we were unable to expand completely on the inadequacies of that article. However, I’m happy to say that now we have, and we have expanded our commentary on that paper into a broader review. Read the rest of this entry »