One-two carbon punch of defaunation

30 04 2016

I’ve just read a well-planned and lateral-thinking paper in Nature Communications that I think readers of CB.com ought to appreciate. The study is a simulation of a complex ecosystem service that would be nigh impossible to examine experimentally. Being a self-diagnosed fanatic of simulation studies for just such purposes, I took particular delight in the results.

In many ways, the results of the paper by Osuri and colleagues are intuitive, but that should never be a reason to avoid empirical demonstration of a suspected phenomenon because intuition rarely equals fact. The idea itself is straightforward, but takes more than a few logical steps to describe: Read the rest of this entry »





How to find fossils

30 03 2016

Many palaeontologists and archaeologists might be a little put out by the mere suggestion that they can be told by ecologists how to do their job better. That is certainly not our intention.

Like fossil-hunting scientists, ecologists regularly search for things (individuals of species) that are rare and difficult to find, because surveying the big wide world for biodiversity is a challenge that we have faced since the dawn of our discipline. In fact, much of the mathematical development of ecology stems from this probabilistic challenge — for example, species distribution models are an increasingly important component of both observational and predictive ecology.

But the palaeo types generally don’t rely on mathematical models to ‘predict’ where fossils might be hiding just under the surface. Even I’ve done what most do when trying to find a fossil: go to a place where fossils have already been found and start fossicking. I’ve now done this with very experienced sedimentary geologists in the Flinders Ranges looking for 550-million-year-old Ediacaran fossils, and most recently while searching for Jurassic fossils (mainly ammonites) on the southern coast of England (Devon’s Jurassic Coast). My prized ammonite find is shown in the photo to the left.

If you’ve read anything on this blog before, you’ll probably know that I’m getting increasingly excited about palaeo-ecology, with particular emphasis on Australia’s late Pleistocene and early Holocene mass extinction of megafauna. So with a beautiful, brand-new, shiny, and quality-rated megafauna dataset1, we cheekily decided to take fossil hunting to the next level by throwing mathematics at the problem.

I’m happy to announce our newest paper, just published2 in PLoS ONE, entitled Where to dig for fossils: combining climate-envelope, taphonomy and discovery models.

Of course, we couldn’t just treat fossil predictions like ecological ones — there are a few more steps involved because we are dealing with long-dead specimens. Our approach therefore involved three steps: Read the rest of this entry »
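As a purely illustrative sketch of the general idea (and emphatically not the actual models in the paper), one could imagine multiplying, for each map cell, a climate-envelope probability, a preservation (taphonomy) probability and a discovery probability to rank where digging is most likely to pay off. The combination rule and all numbers below are invented for demonstration only.

```python
# Hypothetical sketch only: combining three per-grid-cell probability layers
# (climate suitability, preservation/taphonomy, discovery) into a single
# 'where to dig' surface. The multiplication rule and all values are invented
# for illustration and are not the method used in the paper.
import numpy as np

rng = np.random.default_rng(0)
shape = (5, 5)  # a toy 5 x 5 grid of map cells

p_climate = rng.uniform(0, 1, shape)    # P(species present | past climate envelope)
p_taphonomy = rng.uniform(0, 1, shape)  # P(remains preserved | depositional setting)
p_discovery = rng.uniform(0, 1, shape)  # P(preserved remains found | exposure, access)

# Treating the layers as independent, the joint probability is their product
p_dig_here = p_climate * p_taphonomy * p_discovery

best_cell = np.unravel_index(np.argmax(p_dig_here), shape)
print("most promising cell:", best_cell, "score:", round(float(p_dig_here[best_cell]), 3))
```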





Sensitive numbers

22 03 2016
A sensitive parameter (image: toondoo.com)

You couldn’t really do ecology if you didn’t know how to construct even the most basic mathematical model — even a simple regression is a model (the non-random relationship of some variable to another). The good thing about even these simple models is that it is fairly straightforward to interpret the ‘strength’ of the relationship, in other words, how much variation in one thing can be explained by variation in another. Provided the relationship is real (not random), and provided there is at least some indirect causation implied (i.e., it is not just a spurious coincidence), then there are many simple statistics that quantify this strength — in the case of our simple regression, the coefficient of determination (R2) statistic is usually a good approximation of this.

In the case of more complex multivariate correlative models, the coefficient of determination is sometimes insufficient, in which case you might need to rely on statistics such as the proportion of deviance explained, or the marginal and/or conditional variance explained.
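For readers who like to see the arithmetic, here is a minimal sketch of the simple-regression case with made-up data, showing that R2 is just one minus the ratio of residual to total sums of squares.

```python
# A minimal sketch (not from the post) of how R^2 summarises the strength of a
# simple linear regression; the variables and data are invented for illustration.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical data: body mass (x) partly explains clutch size (y)
x = rng.uniform(1, 10, size=100)
y = 2.0 + 0.5 * x + rng.normal(0, 1, size=100)   # true slope 0.5 plus noise

# Ordinary least-squares fit (slope and intercept)
slope, intercept = np.polyfit(x, y, deg=1)
y_hat = intercept + slope * x

# Coefficient of determination: 1 - (residual SS / total SS)
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - np.mean(y)) ** 2)
r_squared = 1 - ss_res / ss_tot
print(f"R^2 = {r_squared:.2f}")  # proportion of variation in y explained by x
```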

When you go beyond this correlative model approach and start constructing more mechanistic models that emulate ecological phenomena from the bottom-up, things get a little more complicated when it comes to quantifying the strength of relationships. Perhaps the most well-known category of such mechanistic models is the humble population viability analysis, abbreviated to PVA§.

Let’s take the simple case of a four-parameter population model we could use to project population size over the next 10 years for an endangered species that we’re introducing to a new habitat. We’ll assume that we have the following information: the size of the founding (introduced) population (n), the juvenile survival rate (Sj, the proportion of juveniles surviving from birth to their first year), the adult survival rate (Sa, the annual proportion of adults surviving from year 1 to maximum longevity), and the fertility rate of mature females (m, the number of offspring born per female per reproductive cycle). Each one of these parameters has an associated uncertainty (ε) that combines both measurement error and environmental variation.

If we just took the mean value of each of these three demographic rates (survivals and fertility) and projected a founding population of n = 10 individuals for 10 years into the future, we would have a single, deterministic estimate of the average outcome of introducing 10 individuals. As we already know, however, the variability, or stochasticity, is more important than the average outcome, because uncertainty in the parameter values (ε) means that a non-negligible number of model iterations will result in the extinction of the introduced population. This is something that most conservationists will obviously want to minimise.

So each time we run an iteration of the model, and generally for each breeding interval (most often 1 year at a time), we choose (based on some random-sampling regime) a different value for each parameter. This gives us a distribution of outcomes after the 10-year projection. Let’s say we did 1000 iterations like this; the number of times the population went extinct across those iterations would provide an estimate of the population’s extinction probability over that interval. Of course, we would probably also vary the size of the founding population (say, between 10 and 100) to see at what point the extinction probability becomes acceptably low for managers (i.e., as close to zero as possible), without requiring a founding population so large that it would be too laborious or expensive to introduce that many individuals. Read the rest of this entry »
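To make that concrete, here is a minimal, hypothetical sketch of this kind of stochastic projection. The demographic values, the chosen distributions and the assumption that half the population is female are all illustrative choices, not parameters from any real PVA.

```python
# A minimal, hypothetical sketch (not the post's actual model) of a stochastic
# projection like the one described above: four parameters (n, Sj, Sa, m), with
# the rates resampled every year to mimic uncertainty, and extinction
# probability estimated from many iterations. All values are invented.
import numpy as np

rng = np.random.default_rng(1)

YEARS = 10          # projection horizon
ITERATIONS = 1000   # number of stochastic runs

# Mean demographic rates and their (assumed) standard deviations
SJ_MEAN, SJ_SD = 0.40, 0.10   # juvenile survival (birth to year 1)
SA_MEAN, SA_SD = 0.85, 0.05   # adult annual survival
M_MEAN,  M_SD  = 1.20, 0.30   # offspring per female per year

def extinction_probability(n_founders: int) -> float:
    """Proportion of iterations in which the population hits zero within YEARS."""
    extinctions = 0
    for _ in range(ITERATIONS):
        n = n_founders
        for _ in range(YEARS):
            # draw this year's rates (clipped to sensible ranges)
            sj = np.clip(rng.normal(SJ_MEAN, SJ_SD), 0, 1)
            sa = np.clip(rng.normal(SA_MEAN, SA_SD), 0, 1)
            m = max(rng.normal(M_MEAN, M_SD), 0)
            births = rng.poisson(m * n / 2)        # assume half the adults are female
            recruits = rng.binomial(births, sj)    # juveniles surviving to year 1
            survivors = rng.binomial(n, sa)        # adults surviving the year
            n = survivors + recruits
            if n == 0:
                extinctions += 1
                break
    return extinctions / ITERATIONS

# Vary founding population size to find an acceptable risk level
for founders in (10, 20, 50, 100):
    print(founders, extinction_probability(founders))
```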





No evidence climate change is to blame for Australian megafauna extinctions

29 01 2016

Last July I wrote about a Science paper of ours demonstrating that there was a climate-change signal in the overall extinction pattern of megafauna across the Northern Hemisphere between about 50,000 and 10,000 years ago. In that case, it didn’t have anything to do with ice ages (sorry, Blue Sky Studios); rather, it was abrupt warming periods that exacerbated the extinction pulse instigated by human hunting.

Contrary to some appallingly researched media reports, we never claimed that these extinctions arose only from warming, because the evidence is more than clear that humans were the dominant drivers across North America, Europe and northern Asia; we simply demonstrated that warming periods had a role to play too.

A cursory glance at the title of this post without appreciating the complexity of how extinctions happen might lead you to think that we’re all over the shop with the role of climate change. Nothing could be farther from the truth.

We simply report what the evidence actually says, instead of making up stories to suit our preconceptions.

So it is with great pleasure that I report our new paper just out in Nature Communications, led by my affable French postdoc, Dr Frédérik Saltré: Climate change not to blame for late Quaternary megafauna extinctions in Australia.

Of course, it was a huge collaborative effort by a crack team of ecologists, palaeontologists, geochronologists, palaeo-climatologists, archaeologists and geneticists. Only by combining the efforts of this diverse and transdisciplinary team could we have hoped to achieve what we did. Read the rest of this entry »





Ice Age? No. Abrupt warmings and hunting together polished off Holarctic megafauna

24 07 2015

Oh shit oh shit oh shit …

Did ice ages cause the Pleistocene megafauna to go extinct? Contrary to popular opinion, no, they didn’t. But climate change did have something to do with their demise; only it was global warming events instead.

Just out today in Science, our long-time-coming (9 years in total if you count the time from the original idea to today) paper ‘Abrupt warmings drove Late Pleistocene Holarctic megafaunal turnover’, led by Alan Cooper of the Australian Centre for Ancient DNA and Chris Turney of the UNSW Climate Change Research Centre, demonstrates for the first time that abrupt warming periods over the last 60,000 years were at least partially responsible for the collapse of the megafauna in Eurasia and North America.

You might recall that I’ve been a bit sceptical of claims that climate changes had much to do with megafauna extinctions during the Late Pleistocene and early Holocene, mainly because of the overwhelming evidence that humans had a big part to play in their demise (surprise, surprise). What I rejected, though, wasn’t the idea that climate had anything at all to do with the extinctions; rather, I took issue with claims that climate change was the dominant driver. I’ve also had problems with blanket claims that it was ‘always this’ or ‘always that’, when the complexity of biogeography and community dynamics means that it was most assuredly more complicated than most people think.

I’m happy to say that our latest paper indeed demonstrates the complexity of megafauna extinctions, and that it took a heap of fairly complex datasets and analyses to demonstrate. Not only were the data varied – the combination of scientists involved was just as eclectic, with ancient DNA specialists, palaeo-climatologists and ecological modellers (including yours truly) assembled to make sense of the complicated story that the data ultimately revealed. Read the rest of this entry »





An appeal to extinction chronologists

2 06 2015

Extinction is forever, right? Yes, it’s true that once the last individual of a species dies (apart from insane notions that de-extinction will do anything to resurrect a species in perpetuity), the species is extinct. However, the answer can also be ‘no’ when you are limited by poor sampling. In other words, when you think something went extinct when in reality you just missed it.

Most of you are familiar with the concept of Lazarus1 species: when something we had long thought extinct suddenly gets re-discovered by a wandering naturalist or a wayward fisher. In palaeontological (and modern conservation biological) terms, the problem is formally described as the ‘Signor-Lipps’ effect, named2 after two American palaeontologists, Phil Signor3 and Jere Lipps. It’s a fairly simple concept, but it’s unfortunately ignored in most palaeontological and, to a lesser extent, conservation studies.

The Signor-Lipps effect arises because the last (or first) evidence (fossil or sighting) of a species’ presence has a nearly zero chance of heralding its actual timing of extinction (or appearance). In palaeontological terms, it’s easy to see why. Fossilisation is in fact a nearly impossible phenomenon: all the right conditions have to be in place for a once-living organism to be fossilised. It has to be buried quickly, in a place where nothing can decompose it (usually an anoxic environment), and then turned to rock by the process of mineral replacement. It then has to avoid being transformed by metamorphosis (e.g., through vulcanism, extensive crushing, etc.). For more recent specimens, preservation can occur without the mineralisation process itself (e.g., bones or flesh in an anoxic bog). Then the bloody things have to be found by a diligent geologist or palaeontologist! In other words, the chances that any one organism is preserved as a fossil after it dies are extremely small. In more modern terms, individuals can go undetected if they are extremely rare or remote, such that sighting records alone are usually insufficient to establish the true timing of extinction. The dodo is a great example of this problem. Remember too that all this works in reverse: the first fossil or observation is very unlikely to mark the first time that the species was there. Read the rest of this entry »
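To see how quickly the problem bites, here is a small, self-contained simulation (an illustration only, with invented numbers) showing that when only a handful of records are preserved at random from a species’ true temporal range, the youngest record systematically pre-dates the true extinction.

```python
# A minimal sketch (not from the post) of the Signor-Lipps effect: if fossils or
# sightings are rare, random samples from a species' true temporal range, the
# last record almost always falls before the true extinction time.
# All numbers are invented for demonstration.
import numpy as np

rng = np.random.default_rng(7)

TRUE_EXTINCTION = 10_000   # years before present when the species really vanished
TRUE_APPEARANCE = 50_000   # ... and when it first appeared
N_RECORDS = 15             # how many dated fossils/sightings we happen to have
N_TRIALS = 10_000

gaps = []
for _ in range(N_TRIALS):
    # records fall uniformly at random within the species' true temporal range
    records = rng.uniform(TRUE_EXTINCTION, TRUE_APPEARANCE, size=N_RECORDS)
    youngest = records.min()                    # the 'last' (most recent) record
    gaps.append(youngest - TRUE_EXTINCTION)     # how much it misses the truth by

print(f"mean gap between last record and true extinction: {np.mean(gaps):.0f} years")
# With only 15 records over a 40,000-year range, the last record typically
# falls ~2,500 years before the true extinction.
```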





What’s in a name? The dingo’s sorry saga

30 01 2015

The more I delve into the science of predator management, the more I realise that the science itself takes a distant back seat to the politics. It would be naïve to think that the management of dingoes in Australia is any more politically charged than elsewhere, but once you start scratching beneath the surface, you quickly realise that there’s something rotten in Dubbo.

My latest contribution to this saga is a co-authored paper led by Dale Nimmo of Deakin University (along with Simon Watson of La Trobe and Dave Forsyth of the Arthur Rylah Institute) that came out just the other day. It was a response to a rather dismissive paper by Matt Hayward and Nicky Marlow claiming that all the accumulated evidence demonstrating that dingoes benefit native biodiversity was somehow incorrect.

Their two arguments were: (1) dingoes don’t eradicate the main culprits of biodiversity decline in Australia (cats and foxes), so they cannot benefit native species; and (2) proxy indices of relative dingo abundance are flawed and unrelated to actual abundance, so all the previous experiments and surveys are wrong.

Some strong accusations, for sure. Unfortunately, they hold no water at all. Read the rest of this entry »







