I suppose the foray from historical/modern ecology into prehistory was inevitable, given (a) my long-term association with brain-the-size-of-a-planet Barry Brook (who, incidentally, has reinvented his research career many times) and (b) the lack of any logical argument that palaeo extinction patterns differ in any meaningful way from modern biodiversity extinctions (except, of course, that the latter are caused mainly by human endeavour).
So while the last, fleeting days of my holiday break accelerate worryingly toward office-incarceration next week, I take this moment to present a brand-new paper of ours that has just come out online in (wait for it) Quaternary Science Reviews entitled Robust estimates of extinction time in the geological record.
Let me explain my reasons for this strange departure.
It all started after a few drinks (doesn’t it always) with Alan Cooper, Chris Turney and Barry Brook when we were discussing the uncertainties associated with the timing of megafauna extinctions – you might be aware that traditionally there have been two schools of thought on late-Pleistocene extinction pulses: (1) those who think these were caused mainly by massive climate shifts not too dissimilar to what we are experiencing now and (2) those who believe that the arrival of humans into naïve regions led to a ‘blitzkrieg’ of hunting and overkill. Rarely do adherents of each stance agree (and sometimes, the ‘debate’ can get ugly given the political incorrectness of inferring that prehistoric peoples were as destructive as we are today – cf. the concept of the ‘noble savage’).
As most readers of CB might appreciate, I generally do not subscribe to the ‘one equation fits all’ hypothesis when it comes to extinctions. Close inspection of the historical record generally supports the conclusion that most extinctions arise from a perverse synergy of drivers that increase kill rates beyond the mere sum of their individual effects. Thus, there is no real theoretical justification for why human overkill and a series of large climate shifts could not have ‘worked’ in unison to drive the major extinction events recorded in the fossil record over the last 100,000 years or so; yet many engaged in the debate adhere exclusively to one view or the other. To us, this is clearly a gross over-simplification.
But as I am wont to do, I digress (there will be much more of this in the next few months as Alan, Chris, Barry and I finalise a few analyses on this subject for the Holarctic, late-Pleistocene megafauna extinctions). The issue about which I am writing today (and the subject of the paper in question) is the precursor to all this debate, for how can you possibly determine the contribution of possible drivers if you don’t really know when species x went extinct?
You can see where I’m going with this if you know a little about fossils. As you can appreciate, most dead things don’t fossilise, and even if they do, the rate and extent at which fossilisation occurs can be extremely variable. Plus, there’s the added complication of finding the bloody things (we haven’t yet dug up the entire surface of the planet). So the probability of an animal dying in the right place, having the right conditions for fossilisation, persisting through time in some state of preservation and being found by one of those strange people who like digging for the fossilised remains of long-dead creatures (that bizarre breed of human known as a ‘palaeontologist’) is mind-numbingly small.
Thus, trying to figure out the ‘last’ time extinct species x walked the planet isn’t as straightforward as simply dating the most recent fossil in a series. Who’s to say the ‘most recent’ is indeed that? Then, of course, there’s the added uncertainty in the dating method itself; radiocarbon methods used to date fossils from several thousand to about 60,000+ years ago have a certain margin of error that increases the farther back in time you go.
You might be beginning to get the picture – fossil records generally are pretty crap for inferring extinction times.
Now, several clever people have attempted to incorporate all this uncertainty together in fairly sophisticated statistical models to estimate the time that a species actually went extinct. One of the most famous was the application by Solow and Roberts of the Weibull distribution to a series of known dodo sightings prior to its demise (although in this case, Solow and Roberts assumed that there was no uncertainty in the dates); Solow and colleagues went on to modify the approach to incorporate radiocarbon dating uncertainty. And there are others.
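To see the basic logic shared by this family of estimators, here is a minimal sketch (my own illustration, not the code from any of the papers mentioned) of the simplest version of the idea: if sightings are spread roughly evenly over a species’ existence, the gaps between known sightings tell you how far beyond the last sighting the true endpoint probably lies. The function name and the sighting record are hypothetical.

```python
def naive_extinction_estimate(sightings):
    """Estimate the endpoint of a sighting record (times in years, increasing).

    Assumes sightings are uniformly distributed over the species'
    lifespan -- exactly the kind of restrictive distributional
    assumption the more sophisticated methods try to relax.
    """
    s = sorted(sightings)
    n = len(s)
    span = s[-1] - s[0]
    # With n uniform sightings, the expected overshoot beyond the
    # last one is roughly one average inter-sighting gap: span / (n - 1)
    return s[-1] + span / (n - 1)

# Hypothetical sighting record (years after an arbitrary origin):
print(naive_extinction_estimate([100, 340, 610, 800, 950]))  # → 1162.5
```

The point of the sketch is the weakness it exposes: real fossil records are neither uniform nor error-free, which is where the more elaborate approaches come in.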
However, all approaches developed to date make certain assumptions about the underlying distribution of the probability of fossilisation and discovery, and few make any attempt to correct for sampling artefacts in the time series themselves (i.e., how many fossil records are there?). Enter us.
Our paper describes a new method built on one constructed by McInerney and colleagues that incorporates most of the uncertainty, as well as making no assumptions about the underlying distribution. We call it the ‘GRIWM’ method (‘Gaussian-resampled inverse-weighted McInerney’ – I know, a clumsy mouthful, but the acronym helps) because it resamples the dates within their radiocarbon confidence bounds, and it weights the most recent fossils more heavily than older ones to account for sample-size differences among series. The McInerney method itself is based on the sighting interval (time between fossil discoveries) to predict the probability of another one being found.
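The combination of resampling, inverse weighting and sighting intervals can be sketched in a few lines. This is only my rough reading of the logic described above, not the published GRIWM code: the weighting scheme, the exponential model for sighting intervals, the parameter names and the fossil record below are all assumptions for illustration.

```python
import math
import random

def griwm_sketch(dates, sds, alpha=0.05, n_iter=10000, seed=1):
    """Illustrative GRIWM-style estimate of extinction time.

    dates: fossil ages in years BP (youngest first)
    sds:   1-sigma radiocarbon errors for each date
    """
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_iter):
        # 1. Gaussian resampling of each date within its dating uncertainty
        d = sorted(rng.gauss(m, s) for m, s in zip(dates, sds))
        youngest = d[0]
        # 2. Sighting intervals, weighted inversely by distance from the
        #    youngest fossil, so recent intervals dominate the rate estimate
        gaps = [d[i + 1] - d[i] for i in range(len(d) - 1)]
        weights = [1.0 / (d[i + 1] - youngest) for i in range(len(d) - 1)]
        mean_gap = sum(w * g for w, g in zip(weights, gaps)) / sum(weights)
        # 3. Project the terminal gap beyond the youngest fossil at which
        #    a no-sighting run becomes improbable (< alpha), assuming
        #    exponentially distributed sighting intervals
        estimates.append(youngest - mean_gap * (-math.log(alpha)))
    estimates.sort()
    return estimates[n_iter // 2]  # median over resampling iterations

# Hypothetical fossil series (years BP) with radiocarbon errors:
est = griwm_sketch([11000, 11500, 12300, 13000, 14200],
                   [100, 120, 150, 150, 200])
```

Repeating the calculation over many resampled date series is what turns the point estimate into a confidence interval: the spread of the `estimates` list reflects the dating uncertainty directly, with no assumed distribution for fossilisation or discovery.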
While the maths might be a little impenetrable for some, it’s really a rather straightforward approach that I hope will get a lot of use. The links to the modern biodiversity crisis are manifold – if we can decipher the set of conditions leading to some of the biggest extinction events in the history of the Earth, we should be better placed to prevent some of the worst ravages in the future.