Rich and stable communities most vulnerable to change

16 08 2016

I’ve just read an interesting new study that was sent to me by the lead author, Giovanni Strona. Published the other day in Nature Communications, Strona & Lafferty’s article entitled Environmental change makes robust ecological networks fragile describes how ecological communities (≈ networks) become more susceptible to rapid environmental changes depending on how long they’ve had to evolve and develop under stable conditions.

Using the Avida Digital Evolution Platform (a free, open-source scientific software platform for doing virtual experiments with self-replicating and evolving computer programs), they programmed evolving host-parasite pairs in a virtual community to examine how co-extinction rate (i.e., extinctions arising in dependent species — in this case, parasites living off of hosts) varied as a function of the complexity of the interactions between species.

Starting from a single ancestral digital organism, the authors let several artificial-life communities evolve for hundreds of thousands of generations under different, stable environmental settings. These communities included both free-living digital organisms and ‘parasite’ programs capable of stealing their hosts’ memory. Over the generations, both hosts and parasites diversified, and their interactions became more complex.

Sensitive numbers

22 03 2016

A sensitive parameter

You couldn’t really do ecology if you didn’t know how to construct even the most basic mathematical model — even a simple regression is a model (the non-random relationship of some variable to another). The good thing about even these simple models is that it is fairly straightforward to interpret the ‘strength’ of the relationship, in other words, how much variation in one thing can be explained by variation in another. Provided the relationship is real (not random), and provided there is at least some indirect causation implied (i.e., it is not just a spurious coincidence), then there are many simple statistics that quantify this strength — in the case of our simple regression, the coefficient of determination (R2) statistic is usually a good approximation of this.
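To make that concrete, here is a minimal sketch (with entirely made-up rainfall and biomass numbers, purely for illustration) of computing R2 for a simple regression from the residual and total sums of squares:

```python
import numpy as np

# Hypothetical data: does annual rainfall explain plant biomass?
rain = np.array([200, 350, 400, 550, 600, 720, 800])      # mm per year
biomass = np.array([1.1, 2.0, 2.3, 3.1, 3.2, 4.0, 4.6])   # kg per m^2

# Fit a simple linear regression (ordinary least squares)
slope, intercept = np.polyfit(rain, biomass, 1)
predicted = slope * rain + intercept

# Coefficient of determination: proportion of variance explained
ss_res = np.sum((biomass - predicted) ** 2)   # residual sum of squares
ss_tot = np.sum((biomass - biomass.mean()) ** 2)  # total sum of squares
r2 = 1 - ss_res / ss_tot
print(round(r2, 3))
```

For these invented data the fit is very tight, so R2 comes out close to 1 — most of the variation in biomass is ‘explained’ by rainfall.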

In the case of more complex multivariate correlation models, then sometimes the coefficient of determination is insufficient, in which case you might need to rely on statistics such as the proportion of deviance explained, or the marginal and/or conditional variance explained.

When you go beyond this correlative model approach and start constructing more mechanistic models that emulate ecological phenomena from the bottom-up, things get a little more complicated when it comes to quantifying the strength of relationships. Perhaps the most well-known category of such mechanistic models is the humble population viability analysis, abbreviated to PVA§.

Let’s take the simple case of a four-parameter population model we could use to project population size over the next 10 years for an endangered species that we’re introducing to a new habitat. We’ll assume that we have the following information: the size of the founding (introduced) population (n), the juvenile survival rate (Sj, the proportion of juveniles surviving from birth to the first year), the adult survival rate (Sa, the annual rate of adult survival from year 1 to maximum longevity), and the fertility rate of mature females (m, the number of offspring born per female per reproductive cycle). Each one of these parameters has an associated uncertainty (ε) that combines both measurement error and environmental variation.

If we just took the mean value of each of these three demographic rates (survivals and fertility) and projected a founding population of n = 10 individuals for 10 years into the future, we would have a single, deterministic estimate of the average outcome of introducing 10 individuals. As we already know, however, the variability, or stochasticity, is more important than the average outcome, because uncertainty in the parameter values (ε) will mean that a non-negligible number of model iterations will result in the extinction of the introduced population. This is something that most conservationists will obviously want to minimise.
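A toy version of that deterministic projection might look like this (all rates are invented for the example, and I’m assuming a 50:50 sex ratio and that juveniles recruit to the adult class after their first year):

```python
# Illustrative mean demographic rates (not from any real species)
sj = 0.35   # juvenile survival, birth to year 1
sa = 0.80   # annual adult survival
m = 1.2     # offspring per female per breeding cycle

n = 10.0    # founding population
for year in range(10):
    # survivors from last year, plus this year's recruits
    n = n * sa + (m * n / 2) * sj
print(round(n, 1))
```

With these particular numbers the annual growth multiplier is 0.80 + 0.6 × 0.35 = 1.01, so the average projection creeps upward — which tells you nothing about how often random bad years would wipe out such a small population.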

So each time we run an iteration of the model, and generally for each breeding interval (most often 1 year at a time), we choose (based on some random-sampling regime) a different value for each parameter. This will give us a distribution of outcomes after the 10-year projection. Let’s say we did 1000 iterations like this; the proportion of iterations in which the population went extinct would provide us with an estimate of the population’s extinction probability over that interval. Of course, we would probably also vary the size of the founding population (say, between 10 and 100) to see at what point the extinction probability became acceptably low for managers (i.e., as close to zero as possible), without requiring so many individuals that the introduction became too laborious or expensive.
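The whole stochastic scheme can be sketched in a few lines. This is only an illustration of the idea, not anyone’s actual PVA: the demographic rates and their uncertainties (ε) are invented, parameter uncertainty is drawn from normal distributions, and demographic stochasticity comes from Poisson births and binomial survival:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical mean rates and uncertainties (epsilon); illustrative only
sj_mean, sj_sd = 0.35, 0.05   # juvenile survival (birth to year 1)
sa_mean, sa_sd = 0.80, 0.05   # annual adult survival
m_mean, m_sd = 1.2, 0.2       # offspring per female per year

def project(n0, years=10, iterations=1000):
    """Stochastic projection; returns the estimated extinction probability."""
    extinct = 0
    for _ in range(iterations):
        adults = n0
        for _ in range(years):
            # resample each rate every breeding interval (the epsilon term)
            sj = np.clip(rng.normal(sj_mean, sj_sd), 0, 1)
            sa = np.clip(rng.normal(sa_mean, sa_sd), 0, 1)
            m = max(rng.normal(m_mean, m_sd), 0)
            births = rng.poisson(m * adults / 2)   # assume 50:50 sex ratio
            recruits = rng.binomial(births, sj)    # juveniles reaching year 1
            adults = rng.binomial(adults, sa) + recruits
            if adults == 0:
                extinct += 1
                break
    return extinct / iterations

# Vary the founding population to find an acceptable extinction risk
for n0 in (10, 50, 100):
    print(n0, project(n0))
```

Running this shows the general pattern described above: the extinction probability shrinks as the founding population grows, and a manager can read off roughly how many individuals buy an acceptably small risk.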

What makes all that biodiversity possible?

23 09 2015


You can either stop reading now because that’s the answer to the question, or you can continue and find out a little more detail.

I’ve just had an extremely pleasant experience reading John Terborgh’s latest Perspective in PNAS. You know the kind of paper you read that (a) makes you feel smart, (b) confirms what you already think, yet informs you nonetheless, and (c) doesn’t take three days to digest? That’s one of those.

Toward a trophic theory of species diversity is not only all of those things, it’s also bloody well-written and comes at the question of ‘Why are there so many species on the planet when ecological theory can’t seem to explain how?’ with elegance, style and a lifetime of experience. I just might have to update my essential-ecology-papers list. If I had to introduce someone to 60 years of ecological theory on biodiversity, there’s no better place to start.
