All (fisheries) models are wrong, but some are useful (to indigenous people)

1 08 2015

Another post from Alejandro Frid. (Note: title modified from George Box's most excellent quote.)

As an ecologist working for indigenous people of coastal British Columbia, western Canada, I live at the interface of two worlds. On the one hand, I know that computer models can be important management tools. On the other hand, my job constantly reminds me that whether a model actually improves fishery management depends, fundamentally, on the worldview that shapes the model’s objectives. To explore why, I will first review some general concepts about what models can and cannot do. After that, I will summarize a recent model of herring populations and then pull it all together in a way that matters to indigenous people who rely on marine resources for cultural integrity and food security.

Models do a great job of distilling the essence of how an ecosystem might respond to external forces—such as fisheries—but only under the specific conditions that the modeller assumes to be true in the ‘world’ of the model. Sometimes these assumptions are well-grounded in reality. Sometimes they are blatant but necessary simplifications; otherwise, it would be difficult to ask questions about how major forces for which we have no historical precedent—such as the combined effects of industrial fisheries, ocean acidification and climate change—might be altering the ocean. For instance, due to our greenhouse gas emissions, the ocean is warming and contains less dissolved oxygen. These stressful conditions hamper the capacity of fish to grow, and appear to be on their way to shrinking the body sizes of entire fish communities [1]. If you want even to begin to comprehend what the ocean will look like in the long term under these effects of climate change, it makes sense to assume, in the ‘world’ of your model, that fishing does not exist, even though you know it does. Of course, you would then acknowledge that climate change probably exacerbates the effects of fisheries, which highlights that you still have to examine the combination of these effects. And that is exactly what an excellent team of modellers did [1]. Read the rest of this entry »





Écologie en France

27 07 2015

This is just a quick post to update ConservationBytes.com followers about a few things I’ll be up to over the next 5 months. While I can guarantee that posts will continue more or less as frequently as before, some of the subject material might shift slightly given my new geographic focus.

I’m most fortunate to have been invited to spend the rest of 2015 in Franck Courchamp‘s Systematic Ecology & Evolution lab at Université Paris-Sud (also check out Franck’s blog here), and I’ll be leaving for France tomorrow. Franck is a long-time mate and colleague who not only hosted me briefly in his home in France previously, but whose family also put me up in Los Angeles earlier this year (both he and his partner Muriel are themselves on sabbatical at UCLA until the end of August 2015). Franck was also kind enough to visit Adelaide last year, where he gave some rather kick-arse seminars.

So what will I be doing during my mini-‘sabbatical’ with Franck? Franck is known for many things, not least of which is his reputation for being ‘King Allee Effect‘, but the main focus of my work there will be on the economic impacts of invasive insects in Europe as the climate continues to warm over the coming century. The project is financed by a large French bank (BNP-Paribas) and is known as InvaCost:

InvaCost will look at the impact of invasive insects when climate change allows them to invade regions that are now too cold for them, but that will warm up in the coming decades. These include the red imported fire ant, the predatory Asian wasp, the disease-carrying tiger mosquito, and many others that are among the worst invaders worldwide.

Of course, that’s just the main topic. Franck is a little like me in that he’s a jack of many ecological trades, so we also plan to work on a few things like the global impacts of feral cats, some more conservation-based things, and perhaps a review or two. Lots planned for five months! Read the rest of this entry »





Ice Age? No. Abrupt warmings and hunting together polished off Holarctic megafauna

24 07 2015
Oh shit oh shit oh shit …

Did ice ages cause the Pleistocene megafauna to go extinct? Contrary to popular opinion, no, they didn’t. But climate change did have something to do with their demise; only it was abrupt warming events, not cold snaps, that did the damage.

Just out today in Science, our long-time-coming (9 years in total if you count the time from the original idea to today) paper ‘Abrupt warmings drove Late Pleistocene Holarctic megafaunal turnover‘ led by Alan Cooper of the Australian Centre for Ancient DNA and Chris Turney of the UNSW Climate Change Research Centre demonstrates for the first time that abrupt warming periods over the last 60,000 years were at least partially responsible for the collapse of the megafauna in Eurasia and North America.

You might recall that I’ve been a bit sceptical of claims that climate changes had much to do with megafauna extinctions during the Late Pleistocene and early Holocene, mainly because of the overwhelming evidence that humans had a big part to play in their demise (surprise, surprise). What I rejected, though, wasn’t the idea that climate had anything to do with the extinctions; rather, I took issue with claims that climate change was the dominant driver. I’ve also had problems with blanket claims that it was ‘always this’ or ‘always that’, when the complexity of biogeography and community dynamics means that it was most assuredly more complicated than most people think.

I’m happy to say that our latest paper indeed demonstrates the complexity of megafauna extinctions, and that it took a heap of fairly complex datasets and analyses to demonstrate it. Not only were the data varied – the combination of scientists involved was just as eclectic, with ancient DNA specialists, palaeo-climatologists and ecological modellers (including yours truly) assembled to make sense of the complicated story that the data ultimately revealed. Read the rest of this entry »





Can we save biodiversity? Not as long as ‘democracy’ is for sale

16 07 2015
© Bill Day

Like you, I’m tired of the constant battle with ill-informed politicians who claim all sorts of nonsense reasons for the bad environmental decisions they make in the name of so-called ‘democracy’. The flesh of my right hand is sore from the constant fist-bashing of tables as I let loose yet another diatribe about why our politicians are corrupt whores for sale to the highest bidder. My teeth are becoming worn from nights of grinding as I lie awake contemplating why we as a society are taking more steps backward than forward.

Yes, we have politicians today claiming that “coal is good for humanity” and that climate change is a “hoax” designed by communists to disrupt society. They spew all sorts of nonsense in public about how they are making their decisions to approve yet another coal mine, limit renewable energy investments or allow continued deforestation because “it’s good for the economy”. All this despite the overwhelming evidence to the contrary.

I used to take comfort in the intellectually superior notion that these (mostly male) politicians were merely stupid, and that as a democracy, we cater to the lowest intelligence denominator of civil society (i.e., we get the politicians we deserve). However, that excuse is about as stupid as the label we give politicians who make decisions that fly in the face of all evidence. Yes, there are stupid people who have been elected to represent us, but I submit that truly stupid politicians are probably quite rare.

No. Ironically, stupidity cannot explain these moronic and generationally bankrupt decisions. Only money can. Read the rest of this entry »





National commitment to conservation brings biodiversity benefits

16 06 2015

What makes some conservation endeavours successful where so many fail to protect biodiversity? Or, how long is a piece of string?

Yes, it’s a difficult question because it’s not just about the biology – such as resilience and area relationships – in fact, it’s probably more about the socio-economic setting that will ultimately dictate how the biodiversity in any particular area fares in response to disturbance.

In the case of protected areas (which I’ll just refer to as ‘reserves’ for the remainder of this post), there’s been a lot of work done on the things that make them ‘work’ (or not) in terms of biodiversity preservation. Yes, we can measure investment, how much the community supports and is involved with the reserve, how much emphasis is put on enforcement, the types of management done within (and outside of) the reserves, et cetera, et cetera. All of these things can be (and to some extent have been) correlated with indices of the fate of the biodiversity within reserves, such as rates and patterns of deforestation, the amount of illegal hunting, and the survival probability of particular taxa.

But the problem with these indices is that they are just indices – they probably do not encapsulate the overall ‘health’ of the biodiversity within a reserve (be that trends in the overall abundance of organisms, the resilience of the community as a whole to future disturbances, or the combined phylogenetic diversity of the ecosystem). This is because there are few long-term monitoring programmes of sufficient taxonomic and temporal breadth to summarise these components of complex ecosystems (i.e., ecology is complex). That’s no real surprise, and even though we should put a lot more emphasis on targeted, efficient, long-term biodiversity monitoring inside and outside of all major biodiversity reserves, the cold, hard truth is that we’ll never manage to get the required systems in place. Humanity just doesn’t value it enough. Read the rest of this entry »





An appeal to extinction chronologists

2 06 2015

Extinction is forever, right? Yes, it’s true that once the last individual of a species dies (apart from insane notions that de-extinction will do anything to resurrect a species in perpetuity), the species is extinct. However, the answer can also be ‘no’ when you are limited by poor sampling – in other words, when you think something went extinct but in reality you just missed it.

Most of you are familiar with the concept of Lazarus [1] species – when something we had long thought extinct suddenly gets re-discovered by a wandering naturalist or a wayward fisher. In palaeontological (and modern conservation biological) terms, the problem is formally described as the ‘Signor-Lipps’ effect, named [2] after two American palaeontologists, Phil Signor [3] and Jere Lipps. It’s a fairly simple concept, but it’s unfortunately ignored in most palaeontological, and to a lesser extent, conservation studies.

The Signor-Lipps effect arises because the last (or first) evidence (fossil or sighting) of a species’ presence has a nearly zero chance of heralding its actual timing of extinction (or appearance). In palaeontological terms, it’s easy to see why. Fossilisation is in fact a nearly impossible phenomenon – all the right conditions have to be in place for a once-living organism to be fossilised: it has to be buried quickly, in a place where nothing can decompose it (usually an anoxic environment), and then turned to rock by the process of mineral replacement. It then has to resist transformation by not undergoing metamorphism (e.g., vulcanism, extensive crushing, etc.). For more recent specimens, preservation can occur without the mineralisation process itself (e.g., bones or flesh in an anoxic bog). Then the bloody things have to be found by a diligent geologist or palaeontologist! In other words, the chances that any one organism is preserved as a fossil after it dies are extremely small. In more modern terms, individuals can go undetected if they are extremely rare or remote, such that sighting records alone are usually insufficient to establish the true timing of extinction. The dodo is a great example of this problem. Remember too that all this works in reverse – the first fossil or observation is very unlikely to mark the first time the species was there. Read the rest of this entry »
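To make this concrete, here is a minimal sketch of one classic correction for the Signor-Lipps effect (a Strauss & Sadler-style range extension; my own illustration, not an analysis from the post). It assumes sightings are spread uniformly through the species’ true existence interval, and the sighting dates below are entirely hypothetical:

```python
import numpy as np

def estimated_extinction_time(sightings):
    """Point estimate of the true extinction time from dated sightings.

    Assumes sightings fall uniformly over the species' true temporal
    range. Because the last sighting t_n almost surely precedes true
    extinction (the Signor-Lipps effect), we extend the record by the
    average gap between sightings: t_n + (t_n - t_1) / (n - 1).
    """
    t = np.sort(np.asarray(sightings, dtype=float))
    n = len(t)
    if n < 2:
        raise ValueError("need at least two dated sightings")
    return t[-1] + (t[-1] - t[0]) / (n - 1)

# Hypothetical sighting years for a dodo-like species
sightings = [1598, 1602, 1611, 1628, 1631, 1638, 1662]
print(estimated_extinction_time(sightings))  # ~1672.7, later than the last sighting
```

The point is simply that the best estimate of extinction sits beyond the last record, and the sparser the record, the further beyond it sits.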





Statistical explainer: average temperature increases can be deceiving

12 05 2015

Over the years I’ve used a simple graphic from the IPCC 2007 Report to explain to people without a strong background in statistics just why average temperature increases can be deceiving. If you’re not well-versed in probability theory (i.e., most people), it’s perhaps understandable why so few of us appear to be up-in-arms about climate change. I posit that if people had a better appreciation of mathematics, there would be far less inertia in dealing with the problem.

Instead of using the same image, I’ve done up a few basic graphs that explain why average increases in temperature can be deceiving; in other words, I explain why focussing on the ‘average’ projected increases will not enable you to appreciate the most dangerous aspect of a disrupted climate – the frequency of extreme events. Please forgive me if you find this little explainer too basic – if you have a modicum of probability theory tucked away in your educational past, then this will offer little new insight. However, you may wish to use these graphs to explain the problem to others who are less up-to-speed than you.

Let’s take, for example, all the maximum daily temperature data from a single location compiled over the last 100 years. We’ll assume for the moment that there has been no upward trend in the data over this time. If you plot the frequency of these temperatures in, say, 2-degree bins over those 100 years, you might get something like this:

[Figure: histogram of 100 years of daily maximum temperatures in 2-degree bins]

This is simply an illustration, but here the long-term average of the daily maxima is 25 degrees Celsius, and the standard deviation is 5 degrees. In other words, over those 100 years, the average daily maximum temperature is 25 degrees, but there were a few days when the maximum was < 10 degrees, and a few others where it was > 40 degrees. This could represent a lot of different places in the world.
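If you want to generate something like this yourself, here is a minimal sketch (my own illustration, not the code behind the actual figures) that simulates 100 years of daily maxima from a normal distribution with a mean of 25 and a standard deviation of 5, then counts them in 2-degree bins:

```python
import numpy as np

rng = np.random.default_rng(42)

# ~100 years of daily maximum temperatures (degrees C),
# assuming no long-term trend: normal with mean 25, sd 5
temps = rng.normal(loc=25.0, scale=5.0, size=100 * 365)

# frequencies in 2-degree bins, as in the histogram above
# (the rare values outside 5-45 degrees are simply not counted)
edges = np.arange(5, 47, 2)
counts, edges = np.histogram(temps, bins=edges)
for lo, hi, c in zip(edges[:-1], edges[1:], counts):
    print(f"{lo:4.0f}-{hi:<4.0f}: {c}")
```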

We can now fit what’s known as a ‘probability density function’ to this histogram to obtain a curve of expected probability of any temperature within that range:

[Figure: normal probability density function fitted to the temperature histogram]

If you’ve got some background in statistics, then you’ll know that this is simply a normal (Gaussian) distribution. With this density function, we can now calculate the probability of any particular day’s maximum temperature being above or below any threshold we choose. In the case of the mean (25 degrees), we know that exactly half (p = 0.50) of the days will have a maximum temperature below it, and exactly half above it. In other words, each such probability is simply the corresponding area under the density function (the total area under the entire curve = 1). Read the rest of this entry »
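As a quick numerical illustration (again my own sketch; the 40-degree threshold and the 2-degree shift are arbitrary choices, not values from the post), the cumulative distribution function gives these areas directly, and it shows why a seemingly modest increase in the average makes extreme days disproportionately more frequent:

```python
from scipy.stats import norm

dist_now = norm(loc=25, scale=5)    # current climate: mean 25, sd 5
dist_warm = norm(loc=27, scale=5)   # same variability, mean shifted up by 2 degrees

print(dist_now.cdf(25))             # 0.5: half of all days fall below the mean

threshold = 40                      # an arbitrary 'extreme heat' threshold
p_now = dist_now.sf(threshold)      # sf(x) = 1 - cdf(x): the upper-tail area
p_warm = dist_warm.sf(threshold)

print(p_now)             # ~0.0013 (roughly one day every two years)
print(p_warm)            # ~0.0047
print(p_warm / p_now)    # ~3.5: extremes become several times more frequent
```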







