You know you’re screwed when the insects disappear

31 10 2017

dead cicada

Last Friday, ABC 891 here in Adelaide asked me to comment on a conservation paper doing the news rounds last week. While it has been covered extensively in the media (e.g., The Guardian, CNN, and Science), I think it’s probably going to be one of those things that people unfortunately start to forget right away. But this is decidedly something that no one should be forgetting.

While you can listen to me chat about this with the lovely Sonya Feldhoff on the ABC (I start chin-wagging around the 14:30 mark), I thought it prudent to remind CB.com readers just how devastatingly important this study is.

While anyone with a modicum of conservation science under her belt will know that the Earth’s biodiversity is not doing well, the true extent of the ecological tragedy unfolding before our very eyes really came home to us back in 2014 with the publication of WWF’s Living Planet Report. According to a meta-analysis of 10,380 population trends from over 3000 species of birds, reptiles, amphibians, mammals, and fish, the report concluded that the Earth has lost over 50% of the individuals in vertebrate populations since 1970. Subsequent revisions (and more population trends from more species) place the decline at over 60% by 2020 (that’s only a little over two years away). You can also listen to me speak about this on another radio show.

If that little bit of pleasant news didn’t make the pit of your stomach gurgle and a cold sweat break out on the back of your neck, you’re probably not human. But hang on, boys and girls — it gets so much worse! The publication in PLoS One on 18 October about Germany’s insect declines might be enough to tip you over the edge and into the crevasse of mental instability. Read the rest of this entry »





Four decades of fragmentation

27 09 2017

fragmented

I’ve recently read perhaps the most comprehensive treatise on forest fragmentation research ever compiled, and I personally view this rather readable and succinct review by Bill Laurance and colleagues as something every ecology and conservation student should read.

The ‘Biological Dynamics of Forest Fragments Project’ (BDFFP) is unquestionably one of the most important landscape-scale experiments ever conceived and implemented, now having run 38 years since its inception in 1979. Indeed, it was way ahead of its time.

Experimental studies in ecology are comparatively rare, mainly because it is difficult, expensive, and challenging in the extreme to manipulate entire ecosystems to test specific hypotheses relating to the response of biodiversity to environmental change. Thus, we ecologists tend to rely more on mensurative designs that use existing variation in the landscape (or over time) to infer mechanisms of community change. Of course, such experiments have to be large to be meaningful, which is one reason why the 1000-km² BDFFP has been so successful as the gold standard for determining the effects of forest fragmentation on biodiversity.

And successful it has been. A quick search for ‘BDFFP’ in the Web of Knowledge database identifies > 40 peer-reviewed articles and a slew of books and book chapters arising from the project, some of which are highly cited classics in conservation ecology (e.g., doi:10.1046/j.1523-1739.2002.01025.x cited > 900 times; doi:10.1073/pnas.2336195100 cited > 200 times; doi:10.1016/j.biocon.2010.09.021 cited > 400 times; and doi:10.1111/j.1461-0248.2009.01294.x cited nearly 600 times). In fact, if we are to claim any ecological ‘laws’ at all, our understanding of the effects of fragmentation on biodiversity could be labelled as one of the few, thanks principally to the BDFFP. Read the rest of this entry »





Seeing the wood for the trees

11 07 2016

The Forest Synopsis: Photo of the Anamalai Tiger Reserve, India, by Claire Wordley

From the towering kapoks of South America to the sprawling banyans of South Asia, from misty cloud forests to ice-covered pines, forests are some of the most diverse and important ecosystems on Earth. However, as conservationists and foresters try to manage, conserve and restore forests across the world, they often rely on scanty and scattered information to inform their decisions, or indeed, no information at all. This could all change.

This week sees the launch of the Forest Synopsis from Conservation Evidence, a free resource collating global scientific evidence on a wide range of conservation-related actions. The synopsis aims to cover all interventions that conservationists and foresters are likely to use, such as changing fire regimes, legally protecting forests or encouraging seed-dispersing birds into degraded forests.

Making conservation work

“We hear a lot about how important it is to do evidence-based conservation”, says Professor Bill Sutherland at the University of Cambridge, UK, “but in reality getting a handle on what works is not easy. That’s why we set up Conservation Evidence, to break down the barriers between conservationists and the scientific evidence that they need to do their jobs.” Read the rest of this entry »





Sensitive numbers

22 03 2016
toondoo.com

A sensitive parameter

You couldn’t really do ecology if you didn’t know how to construct even the most basic mathematical model — even a simple regression is a model (the non-random relationship of some variable to another). The good thing about even these simple models is that it is fairly straightforward to interpret the ‘strength’ of the relationship, in other words, how much variation in one thing can be explained by variation in another. Provided the relationship is real (not random), and provided there is at least some indirect causation implied (i.e., it is not just a spurious coincidence), then there are many simple statistics that quantify this strength — in the case of our simple regression, the coefficient of determination (R²) statistic is usually a good approximation of this.
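For concreteness, here is a minimal Python sketch of that idea: the data are simulated (purely an assumption for illustration), and R² is read off as the proportion of variance in one variable explained by the other.

```python
# Minimal sketch: quantifying the 'strength' of a simple regression with R².
# The data here are synthetic and exist only to illustrate the calculation.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(42)
x = rng.uniform(0, 10, 100)            # some predictor (e.g., an environmental variable)
y = 2.5 * x + rng.normal(0, 3, 100)    # a response related to x, plus noise

fit = linregress(x, y)
r_squared = fit.rvalue ** 2            # proportion of variation in y explained by x
print(f"slope = {fit.slope:.2f}, R² = {r_squared:.2f}")
```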

For more complex multivariate correlation models, the coefficient of determination is sometimes insufficient, in which case you might need to rely on statistics such as the proportion of deviance explained, or the marginal and/or conditional variance explained.
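As a hypothetical illustration of the first of those, the sketch below fits a generalised linear model to simulated count data and computes the proportion of deviance explained (1 minus the ratio of residual to null deviance); the model and data are inventions for demonstration only.

```python
# Sketch: 'proportion of deviance explained' for a GLM, the analogue of R²
# for models where variance explained is not well defined. Data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.uniform(0, 2, 200)
counts = rng.poisson(np.exp(0.5 + 1.2 * x))   # synthetic count response

X = sm.add_constant(x)
model = sm.GLM(counts, X, family=sm.families.Poisson()).fit()

# share of the null deviance accounted for by the fitted model
deviance_explained = 1 - model.deviance / model.null_deviance
print(f"proportion of deviance explained = {deviance_explained:.2f}")
```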

When you go beyond this correlative model approach and start constructing more mechanistic models that emulate ecological phenomena from the bottom-up, things get a little more complicated when it comes to quantifying the strength of relationships. Perhaps the most well-known category of such mechanistic models is the humble population viability analysis, abbreviated to PVA§.

Let’s take the simple case of a four-parameter population model we could use to project population size over the next 10 years for an endangered species that we’re introducing to a new habitat. We’ll assume that we have the following information: the size of the founding (introduced) population (n), the juvenile survival rate (Sj, the proportion of juveniles surviving from birth to the first year), the adult survival rate (Sa, the annual rate of adult survival from year 1 to maximum longevity), and the fertility rate of mature females (m, the number of offspring born per female per reproductive cycle). Each one of these parameters has an associated uncertainty (ε) that combines both measurement error and environmental variation.

If we just took the mean value of each of these three demographic rates (survivals and fertility) and projected a founding population of n = 10 individuals for 10 years into the future, we would have a single, deterministic estimate of the average outcome of introducing 10 individuals. As we already know, however, the variability, or stochasticity, is more important than the average outcome, because uncertainty in the parameter values (ε) will mean that a non-negligible number of model iterations will result in the extinction of the introduced population. This is something that most conservationists will obviously want to minimise.
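To make the deterministic case concrete, here is a rough Python sketch of that single ‘average-outcome’ projection. The rate values, the assumption that all ten founders are mature, the application of fertility to all adults (ignoring sex ratio), and the absence of any density feedback are all simplifications invented for illustration, not features of any particular species or PVA package.

```python
# Deterministic sketch of the four-parameter projection described above.
# All parameter values are invented; a real PVA would use estimated rates.
def project_deterministic(n0=10, s_juv=0.5, s_ad=0.8, fert=1.2, years=10):
    """Project total population size using only the mean demographic rates."""
    adults = float(n0)    # simplifying assumption: all founders are mature
    juveniles = 0.0
    for _ in range(years):
        new_juveniles = adults * fert                  # offspring produced this breeding cycle
        adults = adults * s_ad + juveniles * s_juv     # adults survive; last cycle's juveniles recruit
        juveniles = new_juveniles
    return adults + juveniles

print(round(project_deterministic(), 1))  # one deterministic estimate of the average outcome
```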

So each time we run an iteration of the model, and generally for each breeding interval (most often 1 year at a time), we choose (based on some random-sampling regime) a different value for each parameter. This will give us a distribution of outcomes after the 10-year projection. Let’s say we did 1000 iterations like this; the number of times the population went extinct across those iterations provides an estimate of the population’s extinction probability over that interval. Of course, we would probably also vary the size of the founding population (say, between 10 and 100) to see at what point the extinction probability becomes acceptably low for managers (i.e., as close to zero as possible), without requiring so many individuals that the introduction becomes too laborious or expensive. Read the rest of this entry »
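A stochastic version of the same sketch follows: the demographic rates are re-drawn every year, and the fraction of iterations that hit zero estimates the extinction probability. The sampling distributions and their spreads are assumptions for illustration only, not estimates for any real species.

```python
# Stochastic sketch: re-draw the demographic rates each year (mimicking ε) and
# tally the fraction of iterations ending in extinction. All distributions are invented.
import numpy as np

rng = np.random.default_rng(2017)

def extinction_probability(n0, years=10, iters=1000):
    extinct = 0
    for _ in range(iters):
        adults, juveniles = float(n0), 0.0
        for _ in range(years):
            s_juv = np.clip(rng.normal(0.5, 0.10), 0, 1)   # sampled juvenile survival
            s_ad = np.clip(rng.normal(0.8, 0.05), 0, 1)    # sampled adult survival
            fert = max(rng.normal(1.2, 0.3), 0)            # sampled fertility
            new_juveniles = rng.poisson(adults * fert)     # demographic stochasticity in births
            adults = rng.binomial(int(adults), s_ad) + rng.binomial(int(juveniles), s_juv)
            juveniles = new_juveniles
        if adults + juveniles < 1:
            extinct += 1
    return extinct / iters

# vary the founding population size to see where extinction risk becomes acceptably low
for n0 in (10, 25, 50, 100):
    print(n0, extinction_probability(n0))
```

Looping over founding sizes like this reproduces the trade-off described above: extinction probability drops as the founding population grows, but so does the effort and expense of sourcing and releasing that many individuals.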





To spare or to share, that is a muddled question

9 10 2015

Unfortunately, it ain’t this simple (from doi:10.1016/j.foodpol.2010.11.008)

Certain research trends in any field are inevitable, because once a seductive can of research-question worms is opened, it’s difficult to resist the temptation to start hooking in. Of course, I’m not against popular trends in research per se if they lead to a productive, empirical evaluation of the complexities involved, but it can sometimes result in a lot of wasted time. For example, in conservation ecology we’ve had to suffer 15 years of wasted effort on disproving neutral theory, we’ve bashed heads unnecessarily regarding the infamous SLOSS (‘Single Large Or Several Small’ reserves) debates of the 1970s and 1980s, and we’ve pilfered precious years arguing about whether density feedback actually exists (answer: it does).

The latest populist research trend in conservation seems to be the ‘land sparing versus land sharing’ debate, which, I (and others) argue, is largely an overly simplistic waste of time, money and intellectual advancement to the detriment of both biodiversity and human well-being.

Land sparing is generally used in reference to agricultural practices (although in theory, it could apply to any human endeavour where native vegetation cover is required to be removed or degraded, such as for electricity production) that are purposely made to be high-yielding so that they require the smallest amount of land. At the other extreme (and the ‘two extremes’ of a continuum concept is half the bloody problem here), land sharing requires a larger land footprint because it relies on lower-yielding, biodiversity-friendly (agricultural) practices. Proponents of land sparing argue that only by amalgamating patches of remnant native vegetation can we avoid massive fragmentation and the consequent loss of biodiversity, whereas those pushing for land sharing argue that the matrix between the big undeveloped bits must be exploited in a more biodiversity-friendly way to allow species to persist.

As it turns out, they’re both right (but their single-minded, extremist positions are not). Read the rest of this entry »