Sensitive numbers

22 03 2016

A sensitive parameter

You couldn’t really do ecology if you didn’t know how to construct even the most basic mathematical model — even a simple regression is a model (the non-random relationship of some variable to another). The good thing about even these simple models is that it is fairly straightforward to interpret the ‘strength’ of the relationship, in other words, how much variation in one thing can be explained by variation in another. Provided the relationship is real (not random), and provided there is at least some indirect causation implied (i.e., it is not just a spurious coincidence), then there are many simple statistics that quantify this strength — in the case of our simple regression, the coefficient of determination (R²) statistic is usually a good approximation of this.
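As a quick illustration, here is a minimal sketch in R (with simulated data and arbitrary coefficients, purely for demonstration) of how the coefficient of determination falls out of a simple linear regression:

```r
# simulate a noisy linear relationship and extract R² from the fitted model
set.seed(1)
x <- runif(100, 0, 10)                # predictor (e.g., some habitat variable)
y <- 2 + 0.5 * x + rnorm(100, 0, 1)   # response with random noise
fit <- lm(y ~ x)                      # simple linear regression
summary(fit)$r.squared                # proportion of variance explained
```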

In the case of more complex multivariate correlation models, the coefficient of determination is sometimes insufficient, in which case you might need to rely on statistics such as the proportion of deviance explained, or the marginal and/or conditional variance explained.
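For mixed-effects models, one common way to obtain those last two quantities (a sketch only, not necessarily the method used in any particular study) is the marginal and conditional R² of Nakagawa & Schielzeth, as implemented in the MuMIn package; the example below uses the sleepstudy data that ship with lme4:

```r
# marginal R² (fixed effects only) and conditional R² (fixed + random effects)
library(lme4)
library(MuMIn)
fit <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)
r.squaredGLMM(fit)   # returns R2m (marginal) and R2c (conditional)
```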

When you go beyond this correlative model approach and start constructing more mechanistic models that emulate ecological phenomena from the bottom-up, things get a little more complicated when it comes to quantifying the strength of relationships. Perhaps the most well-known category of such mechanistic models is the humble population viability analysis, abbreviated to PVA.

Let’s take the simple case of a four-parameter population model we could use to project population size over the next 10 years for an endangered species that we’re introducing to a new habitat. We’ll assume that we have the following information: the size of the founding (introduced) population (n), the juvenile survival rate (Sj, the proportion of juveniles surviving from birth to the first year), the adult survival rate (Sa, the annual survival rate of adults from year 1 to maximum longevity), and the fertility rate of mature females (m, the number of offspring born per female per reproductive cycle). Each one of these parameters has an associated uncertainty (ε) that combines both measurement error and environmental variation.

If we just took the mean value of each of these three demographic rates (survivals and fertility) and projected a founding population of n = 10 individuals for 10 years into the future, we would have a single, deterministic estimate of the average outcome of introducing 10 individuals. As we already know, however, the variability, or stochasticity, is more important than the average outcome, because uncertainty in the parameter values (ε) will mean that a non-negligible number of model iterations will result in the extinction of the introduced population. This is something that most conservationists will obviously want to minimise.
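A deterministic version of that projection is only a few lines of R (all parameter values below are hypothetical, chosen just to make the sketch run):

```r
# deterministic projection using mean demographic rates only
n0 <- 10; years <- 10
Sj <- 0.4; Sa <- 0.8; m <- 1.5     # hypothetical mean rates
N <- n0
for (t in 1:years) {
  N <- N * Sa + (N / 2) * m * Sj   # surviving adults + recruited juveniles
}
N                                  # single estimate after 10 years
```

Note that this yields one number and says nothing about the risk of extinction, which is exactly the limitation described above.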

So each time we run an iteration of the model, and generally for each breeding interval (most often 1 year at a time), we choose (based on some random-sampling regime) a different value for each parameter. This will give us a distribution of outcomes after the 10-year projection. Let’s say we did 1000 iterations like this; the proportion of those iterations in which the population went extinct would provide us with an estimate of the population’s extinction probability over that interval. Of course, we would probably also vary the size of the founding population (say, between 10 and 100) to see at what point the extinction probability became acceptably low for managers (i.e., as close to zero as possible), without requiring so many individuals that the introduction would become too laborious or expensive.
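Here is a minimal stochastic sketch of that procedure in R (again, all parameter values and the normal error model for ε are hypothetical assumptions for illustration, not values from any real species):

```r
# stochastic PVA sketch: resample demographic rates each year, count extinctions
sim.extinction <- function(n0, years = 10, iters = 1000,
                           Sj.mu = 0.4, Sj.sd = 0.10,   # juvenile survival
                           Sa.mu = 0.8, Sa.sd = 0.05,   # adult survival
                           m.mu  = 1.5, m.sd  = 0.30) { # fertility
  extinct <- logical(iters)
  for (i in 1:iters) {
    N <- n0
    for (t in 1:years) {
      # draw a new value of each rate every breeding interval (the ε term)
      Sj <- min(max(rnorm(1, Sj.mu, Sj.sd), 0), 1)
      Sa <- min(max(rnorm(1, Sa.mu, Sa.sd), 0), 1)
      m  <- max(rnorm(1, m.mu, m.sd), 0)
      births    <- rpois(1, (N / 2) * m)  # assumes a 1:1 sex ratio
      survivors <- rbinom(1, N, Sa)       # adults surviving the year
      recruits  <- rbinom(1, births, Sj)  # juveniles reaching year 1
      N <- survivors + recruits
      if (N == 0) break                   # population extinct
    }
    extinct[i] <- (N == 0)
  }
  mean(extinct)                           # extinction-probability estimate
}

set.seed(42)
sim.extinction(10)                        # founding population of 10

# sweep founding sizes to see where the risk becomes acceptably low
sapply(seq(10, 100, by = 10), sim.extinction)
```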





We’re sorry, but 50/500 is still too few

28 01 2014

Some of you who are familiar with my colleagues’ and my work will know that we have been investigating the minimum viable population size concept for years (see references at the end of this post). Little did I know when I started this line of scientific inquiry that it would end up creating more than a few adversaries.

It might be a philosophical perspective that people adopt when refusing to believe that there is any such thing as a ‘minimum’ number of individuals in a population required to guarantee a high (i.e., almost assured) probability of persistence. I’m not sure. For whatever reason though, there have been some fierce opponents to the concept, or any application of it.

Yet a sizeable chunk of quantitative conservation ecology develops – in various forms – population viability analyses to estimate the probability that a population (or entire species) will go extinct. When the probability is unacceptably high, then various management approaches can be employed (and modelled) to improve the population’s fate. The flip side of such an analysis is, of course, seeing at what population size the probability of extinction becomes negligible.

‘Negligible’ is a subjective term in itself, just like the word ‘very‘ can mean different things to different people. This is why we looked into standardising the criteria for ‘negligible’ for minimum viable population sizes, much as the near-universally accepted IUCN Red List attempts to do with its categorical extinction-risk rankings.

But most reasonable people are likely to agree that a < 1 % chance of going extinct over many generations (40, in the case of our suggestion) is an acceptable target. I’d feel pretty safe personally if my own family’s probability of surviving were > 99 % over the next 40 generations.

Some people, however, baulk at the notion of making generalisations in ecology (funny – I was always under the impression that was exactly what we were supposed to be doing as scientists – finding how things worked in most situations, such that the mechanisms become clearer and clearer – call me a dreamer).

So when we were attacked in several high-profile journals, it came as something of a surprise. The latest lashing came in the form of a Trends in Ecology and Evolution article. We wrote a (necessarily short) response identifying its inaccuracies and contradictions, but we were unable to expand completely on its inadequacies. However, I’m happy to say that now we have: we have expanded our commentary on that paper into a broader review.





Software tools for conservation biologists

8 04 2013

Given the popularity of certain prescriptive posts on ConservationBytes.com, I thought it prudent to compile a list of software that my lab and I have found particularly useful over the years. This list is not meant to be comprehensive, but it will give you a taste for what’s out there. I don’t list the plethora of conservation genetics software that is available (generally given my lack of experience with it), but if this is your chosen area, I’d suggest starting with Dick Frankham‘s excellent book, An Introduction to Conservation Genetics.

1. R: If you haven’t yet loaded the open-source R programming language on your machine, do it now. It is the single most useful bit of statistical and programming software available to anyone anywhere in the sciences. Don’t worry if you’re not a fully fledged programmer – there are now enough people using and developing sophisticated ‘libraries’ (packages of functions) that there’s pretty much an application for everything these days. We tend to use R to the exclusion of almost any other statistical software because it makes you learn the technique rather than just blindly pressing the ‘go’ button. You could also stop right here – with R, you can do pretty much everything else that the software listed below does; however, you have to be an exceedingly clever programmer and have a lot of spare time. R can also get bogged down when too much data fill the RAM, in which case other languages such as Python or C# can be useful.
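For example, installing and loading one of those contributed packages takes only a couple of lines (‘vegan’, a community-ecology package, is just one of the thousands available on CRAN):

```r
install.packages("vegan")   # one-off download from CRAN
library(vegan)              # attach it for the current session
ls("package:vegan")[1:10]   # peek at a few of the functions it provides
```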

2. VORTEX/OUTBREAK/META-MODEL MANAGER, etc.: This suite of individual-based projection software was initially designed by Bob Lacy & Phil Miller to determine the viability of small (usually captive) populations. The original VORTEX has grown into a multi-purpose, powerful and sophisticated population viability analysis package that now links to its cousin applications like OUTBREAK (the only off-the-shelf epidemiological software in existence) via the ‘command centre’ META-MODEL MANAGER (see examples here and here from our lab). There are other add-ons that make almost any population projection and hindcasting application possible. And it’s all free! (warning: currently unavailable for Mac, although I’ve been pestering Bob to do a Mac version).

3. RAMAS: RAMAS is the go-to application for spatial population modelling. Developed by the extremely clever Resit Akçakaya, this is one of the few tools that incorporate spatial meta-population aspects with formal, cohort-based demographic models. It’s also very useful in a climate-change context when you have projections of changing habitat suitability as the base layer onto which meta-population dynamics can be modelled. It’s not free, but it’s worth purchasing.





Science immortalised in cartoon

1 02 2013

Well, this is a first for me (us).

I’ve never had a paper of ours turned into a cartoon. The illustrious and brilliant ‘First Dog on the Moon‘ (a.k.a. Andrew Marlton), who is chief cartoonist for Australia’s irreverent ‘Crikey‘ online news magazine, just parodied our Journal of Animal Ecology paper No need for disease: testing extinction hypotheses for the thylacine using multispecies metamodels that I wrote about last month here on ConservationBytes.com.

Needless to say, I’m chuffed as a chuffed thing.

Enjoy!

Stripey





Conservation catastrophes

22 02 2012

David Reed

The title of this post serves two functions: (1) to introduce the concept of ecological catastrophes in population viability modelling, and (2) to acknowledge the passing of the bloke who came up with a clever way of dealing with that uncertainty.

I’ll start with the latter. It came to my attention late last year that a fellow conservation biologist colleague, Dr. David Reed, died unexpectedly from congestive heart failure. I did not really mourn his passing, for I had never met him in person (I believe it is disingenuous, discourteous, and slightly egocentric to mourn someone whom you do not really know personally – but that’s just my opinion), but I did think at the time that the conservation community had lost another clever progenitor of good conservation science. As many CB readers already know, we lost a great conservation thinker and doer last year, Professor Navjot Sodhi (and that, I did take personally). Coincidentally, both Navjot and David died at about the same age (49 and 48, respectively). I hope that being in one’s late 40s isn’t a particularly dangerous age for people in my line of business!

My friend, colleague and lab co-director, Professor Barry Brook, did, however, work a little with David, and together they published some pretty cool stuff (see References below). David was particularly good at looking for cross-taxa generalities in conservation phenomena, such as minimum viable population sizes, effects of inbreeding depression, applications of population viability analysis and extinction risk. But more on some of that below.







