Quantity, but not quality – slow recovery of disturbed tropical forests

8 11 2013

It is a sobering statistic that most of the world’s tropical forests are not ‘primary’ – that is, forests that have never suffered human alteration or disturbance (previous logging, clearing for agriculture, burning, etc.).

Today I highlight a really cool paper that confirms this, plus adds some juicy (and disturbing – pun intended) detail. The paper by Phil Martin and colleagues just published in Proceedings of the Royal Society B came to my attention through various channels – not least of which was their citation of one of our previous papers ;-), as well as a blog post by Phil himself. I was so impressed with it that I made my first Faculty of 1000 Prime recommendation of the paper (which should appear shortly).

As we did in 2011 (to which Phil refers as our “soon-to-be-classic work” – thanks!), Martin and colleagues amassed a stunning number of papers investigating the species composition of disturbed and primary forests from around the tropics. Using meta-analysis, they matched disturbed and undisturbed sites and recorded a consistent set of comparative statistics.





Too small to avoid catastrophic biodiversity meltdown

27 09 2013
Chiew Larn

Chiew Larn Reservoir is surrounded by Khlong Saeng Wildlife Sanctuary and Khao Sok National Park, which together make up part of the largest block of rainforest habitat in southern Thailand (> 3500 km²). Photo: Antony Lynam

One of the perennial and probably most controversial topics in conservation ecology is when something is ‘too small’. By ‘something’ I mean many things, including population abundance and patch size. We’ve certainly written about the former on many occasions (see here, here, here and here for our work on minimum viable population size), with the associated controversy it elicited.

Now I (sadly) report on the tragedy of the second issue – when is a habitat fragment too small to be of much good to biodiversity?

Published today in the journal Science, Luke Gibson (of No substitute for primary forest fame) and a group of us report disturbing results about the ecological meltdown that has occurred on islands created when the Chiew Larn Reservoir of southern Thailand was flooded nearly 30 years ago by a hydroelectric dam.

As is the case in many parts of the world (e.g., Three Gorges Dam, China), hydroelectric dams can cause major ecological problems merely by flooding vast areas. In the case of Chiew Larn, co-author Tony Lynam of the Wildlife Conservation Society passed along to me a bit of poignant and emotive history about the local struggle to prevent the disaster.

“As the waters behind the dam were rising in 1987, Seub Nakasathien, the Superintendent of the Khlong Saeng Wildlife Sanctuary, his staff and conservationist friends, mounted an operation to capture and release animals that were caught in the flood waters.

It turned out to be a distressing experience for all involved, as you can see from the clips here, with the rescuers having only nets and longtail boats, and many animals dying. Ultimately most of the larger mammals disappeared quickly from the islands, leaving just the smaller fauna.

Later Seub moved to Huai Kha Khaeng Wildlife Sanctuary and fought an unsuccessful battle with poachers and loggers, which ended with him taking his own life in despair in 1990. It is a sad story; his friend, a famous folk singer called Aed Carabao, wrote a song about Seub, the music of which plays in the video.”





Biogeography comes of age

22 08 2013

This week has been all about biogeography for me. While I wouldn’t call myself a ‘biogeographer’, I certainly do apply a lot of the discipline’s techniques.

This week I’m attending the 2013 joint Congress of Ecology of the International Association for Ecology (INTECOL) and the British Ecological Society in London, and I have purposefully sought out more of the biogeographical talks than pretty much anything else because the speakers were engaging and the topics fascinating. As it happens, even my own presentation had a strong biogeographical flavour this year.

Although the species-area relationship (SAR) is only one small aspect of biogeography, I’ve been slightly amazed that, nearly 50 years after MacArthur & Wilson’s famous book, our discipline is still obsessed with the SAR.

I’ve blogged about SAR issues before – what makes the SAR so engaging and controversial is that it is the principal tool for estimating overall extinction rates, even though it is perhaps one of the bluntest tools in the ecological toolbox. I suppose its popularity stems from its superficial simplicity – as the area of a (classically oceanic) island increases, so too does the total number of species it can hold. The controversies surrounding such a basic relationship centre on describing the rate at which species richness increases with area – in other words, just how nonlinear the SAR itself is.

Even a cursory understanding of maths reveals the importance of estimating this curve correctly. As the area of an ‘island’ (habitat fragment) decreases due to human disturbance, estimating how many species end up going extinct as a result depends entirely on the shape of the SAR. Get the SAR wrong, and you can over- or under-estimate the extinction rate. This was the crux of the palaver over Fangliang He (not attending INTECOL) & Stephen Hubbell’s (attending INTECOL) paper in Nature in 2011.
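To make that sensitivity concrete, here is a minimal back-of-the-envelope sketch in Python. The numbers are entirely hypothetical; it only shows how strongly the predicted extinction fraction depends on the power-law exponent z:

```python
# Under a power-law SAR, S = c * A^z, the number of species surviving
# after habitat shrinks from area A to A' is S'/S = (A'/A)^z, so the
# predicted extinction fraction depends entirely on the exponent z.

def fraction_lost(area_remaining: float, z: float) -> float:
    """Fraction of species predicted to go extinct when a proportion
    `area_remaining` of the original habitat area is left."""
    return 1.0 - area_remaining ** z

# Hypothetical scenario: 90% of the habitat has been cleared.
for z in (0.15, 0.25, 0.35):
    print(f"z = {z:.2f}: {fraction_lost(0.10, z):.1%} of species predicted lost")
```

With these purely illustrative exponents, the same 90% habitat loss predicts anywhere from about 29% to 55% of species lost – exactly why getting the curve’s shape right matters so much.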

The first real engagement with the SAR happened in John Harte’s maximum entropy talk in the process macroecology session on Tuesday. What was notable to me was his adamant claim that the power-law form of the SAR should never be used, despite its commonness in the literature. I took this with a grain of salt because I know all about how messy area-richness data can be, and why one needs to consider alternative models (see an example here). But then yesterday I listened to one of the greats of biogeography – Robert Whittaker – who said pretty much the complete opposite of Harte’s contention. Whittaker showed results from one of his papers last year demonstrating that the power law was in fact the most commonly supported SAR among many datasets (granted, there was substantial variability in overall model performance). My conclusion remains firm – make sure you fit multiple models to each individual dataset and infer the SAR from model-averaging.
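That take-home message can be sketched in a few lines of Python. The area-richness data below are made up, and only two candidate models are fitted (a real analysis would use a larger candidate set and Akaike-weighted model-averaging), but it shows the mechanics of comparing SAR forms with AIC:

```python
import math

# Two common SAR forms fitted to hypothetical island data:
#   power law:    S = c * A^z   (linear on log-log axes)
#   logarithmic:  S = b0 + b1 * ln(A)
# ranked by AIC computed from residual sums of squares.

areas    = [1, 5, 10, 50, 100, 500, 1000]   # km^2 (invented)
richness = [10, 18, 22, 35, 41, 60, 70]     # species counts (invented)

def linfit(x, y):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def aic(obs, pred, k):
    """Gaussian AIC from the residual sum of squares with k parameters."""
    n = len(obs)
    rss = sum((o - p) ** 2 for o, p in zip(obs, pred))
    return n * math.log(rss / n) + 2 * k

log_a = [math.log(a) for a in areas]

# Power law: fit on log-log axes, back-transform the predictions.
log_c, z = linfit(log_a, [math.log(s) for s in richness])
pred_pow = [math.exp(log_c) * a ** z for a in areas]

# Logarithmic (semi-log) model.
b0, b1 = linfit(log_a, richness)
pred_log = [b0 + b1 * la for la in log_a]

print(f"power law:   z = {z:.3f}, AIC = {aic(richness, pred_pow, 2):.2f}")
print(f"logarithmic: AIC = {aic(richness, pred_log, 2):.2f}")
```

One caveat: comparing a model fitted on log-transformed richness against one fitted on the raw scale glosses over a change-of-variables correction in the likelihood – another reason the SAR model-selection literature rewards careful reading.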





Hades, fossilised fat-parrot shit and threatened bats

4 10 2012

WTF? © P. Bendle

Sounds like a Monty Python sketch, doesn’t it? But no, it’s about the wonderful complexity of ecology.

An interesting, and very weird paper just came out in Conservation Biology co-authored by my friend and colleague, Prof. Alan Cooper at the Australian Centre for Ancient DNA (ACAD).

Here’s what they have to say about it.

Ancient dung from a cave in the South Island of New Zealand has revealed a previously unsuspected relationship between two of the country’s most unusual threatened species.

Fossilised dung (coprolites) of a now rare parrot, the nocturnal flightless kakapo, contained large amounts of pollen from a rare parasitic plant, Dactylanthus, which lives underground and itself has no roots or leaves. The pollen suggests the kakapo was formerly an important pollinator of the threatened species, known as the Hades flower or wood rose. Researchers from the Australian Centre for Ancient DNA at The University of Adelaide, and Landcare Research and the Department of Conservation in New Zealand, report the discovery in the journal Conservation Biology.






No substitute for primary forest

15 09 2011

© Romulo Fotos http://goo.gl/CrAsE

A little over five years ago, a controversial and spectacularly erroneous paper appeared in the tropical ecology journal Biotropica, the flagship journal of the Association for Tropical Biology and Conservation. Now, I’m normally a fan of Biotropica (I have both published there several times and acted as a Subject Editor for several years), but we couldn’t let that paper’s conclusions go unchallenged.

That paper was ‘The future of tropical forest species‘ by Joseph Wright and Helene Muller-Landau, which essentially concluded that the severe deforestation and degradation of tropical forests was not as big a deal as nearly all the rest of the conservation biology community had concluded (remind you of climate change at all?), and that regenerating, degraded and secondary forests would suffice to preserve the enormity and majority of dependent tropical biodiversity.

What rubbish.

Our response, and those of many others (including those from Toby Gardner and colleagues, and from William Laurance), were fast and furious, essentially destroying the argument so utterly that I think most people merely moved on. We know for a fact that tropical biodiversity is waning rapidly, and in many parts of the world, it is absolutely [insert expletive here]. However, the argument has reared its ugly head again and again over the intervening years, so it’s high time we bury this particular nonsense once and for all.

In fact, a few anecdotes are worthy of mention here. Navjot once told me a story about the time both he and Wright were invited to the same symposium around the time of the initial dust-up in Biotropica. Being Navjot, he tore strips off Wright in public for his outrageous and unsubstantiated claims – something to which Wright didn’t take too kindly. On the way home, the two shared the same flight, and apparently Wright refused to acknowledge Navjot’s existence and would only glare with looks that could kill (hang on – maybe that had something to do with Navjot’s recent and untimely death? Who knows?). Similar public stoushes have been chronicled between Wright and Bill Laurance.

Back to the story. I recall a particular coffee discussion at the National University of Singapore between Navjot Sodhi (may his legacy endure), Barry Brook and me some time later where we planned the idea of a large meta-analysis to compare degraded and ‘primary’ (not overly disturbed) forests. The ideas were fairly fuzzy back then, but Navjot didn’t drop the ball for a moment. He immediately went out and got Tien Ming Lee and his new PhD student, Luke Gibson, to start compiling the necessary studies. It was a thankless job that took several years.

However, the fruits of that labour have now just been published in Nature: ‘Primary forests are irreplaceable for sustaining tropical biodiversity‘, led by Luke and Tien Ming, along with Lian Pin Koh, Barry Brook, Toby Gardner, Jos Barlow, Carlos Peres, me, Bill Laurance, Tom Lovejoy and, of course, Navjot Sodhi [side note: Navjot died during the review process and never got to hear the good news that the paper was finally accepted].

Using data from 138 studies from Asia, South America and Africa comprising 2220 pair-wise comparisons of biodiversity ‘values’ between forests that had undergone some sort of disturbance (everything from selective logging through to regenerating pasture) and adjacent primary forests, we can now hammer the final nails into the coffin containing the putrid remains of Wright and Muller-Landau’s assertion – there is no substitute for primary forest.





The biodiversity extinction numbers game

4 01 2010

© Ferahgo the Assassin

Not an easy task, measuring extinction. For the most part, we must use techniques to estimate extinction rates because, well, it’s just bloody difficult to observe when (and where) the last few individuals in a population finally kark it. Even Fagan & Holmes’ exhaustive search of extinction time series only came up with 12 populations – not really a lot to go on. It’s also nearly impossible to observe species going extinct if they haven’t even been identified yet (and yes, probably still the majority of the world’s species – mainly small, microscopic or subsurface species – have yet to be identified).

So conservation biologists do other things to get a handle on the rates, relying mainly on the species-area relationship (SAR), projecting from threatened species lists, modelling co-extinctions (if a ‘host’ species goes extinct, then its obligate symbiont must also) or projecting declining species distributions from climate envelope models.

But of course, these are all estimates that are difficult to validate. Enter a nice little review article recently published online in Biodiversity and Conservation by Nigel Stork entitled Re-assessing current extinction rates, which looks at the state of the art and how the predictions mesh with the empirical data. Suffice it to say, there is a mismatch.

Stork writes that the ‘average’ estimate of losing about 100 species per day has hardly any empirical support (not surprising); only about 1200 extinctions have been recorded in the last 400 years. So why is this the case?

As mentioned above, it’s difficult to observe true extinction because of the sampling issue (the rarer the individuals, the more difficult it is to find them). He does cite some other problems too – the ‘living dead‘ concept where species linger on for decades, perhaps longer, even though their essential habitat has been destroyed, forest regrowth buffering some species that would have otherwise been predicted to go extinct under SAR models, and differing extinction proneness among species (I’ve blogged on this before).

Of course, we could just all be just a pack of doomsday wankers vainly predicting the end of the world ;-)

Well, I think not – if anything, Stork concludes that it’s all probably worse than we currently predict because of extinction synergies (see previous post about this concept) and the mounting impact of rapid global climate change. If anything, the “100 species/day” estimate could look like a utopian ideal in a few hundred years. I do disagree with Stork on one issue, though – he claims that deforestation probably isn’t as bad as we make it out to be. I’d say the opposite (see here, here & here) – we know so little about how tropical forests in particular function that I dare say we’ve only just started measuring the tip of the iceberg.

CJA Bradshaw


This post was chosen as an Editor's Selection for ResearchBlogging.org

Stork, N. (2009). Re-assessing current extinction rates. Biodiversity and Conservation. DOI: 10.1007/s10531-009-9761-9





Raise targets to prevent extinction

12 11 2009

I know I’ve blogged recently about this, but The Adelaidean ran a nice little article that I thought I’d reproduce here. The source can be found here.

Adelaidean story Nov 2009





Managing for extinction

9 10 2009

Ah, it doesn’t go away, does it? Or at least, we won’t let it.

That concept of ‘how many is enough?’ in conservation biology, the so-called ‘minimum viable population size‘, is enough to drive some conservation practitioners batty.

How many times have we heard the (para)phrase: “It’s simply impractical to bring populations of critically endangered species up into the thousands”?

Well, my friends, if you’re not talking thousands, you’re wasting everyone’s time and money. You are essentially managing for extinction.

Our new paper out online in Biological Conservation entitled Pragmatic population viability targets in a rapidly changing world (Traill et al.) shows that populations of endangered species are unlikely to persist in the face of global climate change and habitat loss unless they number around 5000 mature individuals or more.

With several meta-analytic, time series-based and genetic estimates of the magic minimum number all agreeing, we can now be fairly certain that if a population numbers much less than several thousand individuals (median = 5000), its likelihood of persisting in the long run in the face of normal random variation is pretty small.

We conclude essentially that many conservation biologists routinely underestimate or ignore the number of animals or plants required to prevent extinction. In fact, aiming to maintain tens or hundreds of individuals when thousands are actually needed simply wastes precious and finite conservation resources. Thus, if it is deemed unrealistic to attain such numbers, we essentially advise that in most cases conservation triage should be invoked and the species in question be abandoned for better prospects.

A long-standing idea in species restoration programs is the so-called ‘50/500’ rule; this states that at least 50 adults are required to avoid the damaging effects of inbreeding, and 500 to avoid extinctions due to the inability to evolve to cope with environmental change. Our research suggests that the 50/500 rule is at least an order of magnitude too small to stave off extinction.

This does not necessarily imply that populations smaller than 5000 are doomed. But it does highlight the challenge that small populations face in adapting to a rapidly changing world.
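To see why raw numbers matter so much, here is a toy simulation in Python – emphatically not the analyses behind Traill et al., and with invented noise parameters – of populations buffeted by environmental variation. Each population starts at its ceiling, and we record how often it falls below two individuals within a century:

```python
import math
import random

# Toy viability model: multiplicative environmental noise with a
# ceiling at the starting size; quasi-extinction when N < 2.
# All parameter values are invented for illustration only.

def extinction_probability(n0, years=100, reps=500, sd=0.3, seed=42):
    rng = random.Random(seed)
    extinct = 0
    for _ in range(reps):
        n = float(n0)
        for _ in range(years):
            # lognormal annual growth shock, capped at the ceiling K = n0
            n = min(float(n0), n * math.exp(rng.gauss(0.0, sd)))
            if n < 2.0:          # quasi-extinct
                extinct += 1
                break
    return extinct / reps

probs = {n0: extinction_probability(n0) for n0 in (50, 500, 5000)}
for n0, p in probs.items():
    print(f"N0 = {n0:>4}: P(quasi-extinction within 100 years) = {p:.2f}")
```

Even under these made-up settings the risk drops steeply with starting size; the real estimates behind the ~5000 figure rest on far richer demographic, genetic and time-series evidence.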

We are battling to prevent a mass extinction event in the face of a growing human population and its associated impact on the planet, but the bar needs to be set a lot higher. However, we shouldn’t necessarily give up on critically endangered species numbering a few hundred individuals in the wild. Accepting that more needs to be done if we are to stop ‘managing for extinction’ should force decision makers to be more explicit about what they are aiming for, and what they are willing to trade off, when allocating conservation funds.

CJA Bradshaw

(with thanks to Lochran Traill, Barry Brook and Dick Frankham)


This post was chosen as an Editor's Selection for ResearchBlogging.org

Traill, L.W., Brook, B.W., Frankham, R.R., & Bradshaw, C.J.A. (2009). Pragmatic population viability targets in a rapidly changing world. Biological Conservation. DOI: 10.1016/j.biocon.2009.09.001





Classics: Fragmentation

3 10 2008

Figure 2 from Brook et al. (2008): Synergies among threatening processes relative to habitat loss and fragmentation. a) A large population within unmodified, contiguous habitat occupies all available niches so that long-term abundance fluctuates near full carrying capacity (K). b) When habitat is reduced (e.g. 50 % area loss), total abundance declines accordingly. c) However, this simple habitat-abundance relationship is complicated by the spatial configuration of habitat loss. In this example, all remaining fragmented subpopulations might fall below their minimum viable population (MVP) sizes even though total abundance is the same proportion of K as in panel B. As such, limited connectivity between subpopulations implies much greater extinction risk than that predicted for the same habitat loss in less fragmented landscapes. Further synergies (positive feedbacks among threatening processes; black arrows) might accompany high fragmentation, such as enhanced penetration of predators, invasive species or wildfire, micro-habitat edge effects, and reduced resistance to drought with climate change.

This is, perhaps, one of the most important concepts that the field of conservation biology has identified as a major driver of extinction. It may appear on the surface a rather simple notion that the more ‘habitat’ you remove, the fewer species (and individuals) there will be (see MacArthur & Wilson’s Classic contribution: The Theory of Island Biogeography), but it took us decades (yes, embarrassingly – decades) to work out that fragmentation is bad (very, very bad).

Habitat fragmentation occurs when a large expanse of a particular, broadly defined habitat ‘type’ is reduced to smaller patches that are isolated by surrounding, different habitat. The surrounding habitat is typically termed the ‘matrix’, and in the case of forest fragmentation, generally means ‘degraded’ habitat (fewer native species, urban/rural/agricultural development, etc.).

Fragmentation is bad for many reasons: it (1) reduces patch area, (2) increases isolation among populations associated with fragments, and (3) creates ‘edges’ where unmodified habitat abuts matrix habitat. Each of these has dire implications for species, for we now know that (1) the smaller an area, the fewer individuals and species it can contain, (2) the more isolated a population, the less chance immigrants will ‘rescue’ it from catastrophes, and (3) edges allow the invasion of alien species, make the microclimate intolerable, increase access to bad humans and lead to cascading ecological events (e.g., fire penetration). Make no mistake, the more fragmented an environment, the worse will be the extinction rates of species therein.
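Point (3) is easy to quantify with simple geometry. The Python sketch below (the 100-m edge-penetration depth is an arbitrary illustrative value) computes how much ‘core’ habitat survives when a fixed total area is split into ever more square fragments:

```python
import math

# Core habitat (area further than `edge_depth_m` from any edge) when
# `total_ha` hectares are divided into n equal square fragments.
# Edge depth of 100 m is an illustrative assumption, not a measured value.

def core_area_ha(total_ha: float, n_patches: int, edge_depth_m: float) -> float:
    side = math.sqrt(total_ha / n_patches * 10_000)   # fragment side length (m)
    core_side = max(0.0, side - 2 * edge_depth_m)     # side of the edge-free core
    return n_patches * core_side ** 2 / 10_000        # back to hectares

for n in (1, 4, 10, 25):
    print(f"{n:>2} fragments: {core_area_ha(100, n, 100):5.1f} ha of core habitat")
```

The same 100 ha yields 64 ha of core habitat as a single block, 36 ha as four fragments, about 13.5 ha as ten, and none at all as 25 – even though the total habitat area never changed.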

What’s particularly sad about all this is that fragmentation was actually seen as a potentially GOOD thing by conservation biologists for many long years. The so-called SLOSS (Single Large or Several Small) debate pervaded the early days of the conservation literature. The debate was essentially whether several small reserves, by providing more types of habitat juxtaposition and more different species complexes, would support higher overall diversity (species richness) than one large reserve. It was an interesting, if somewhat deluded, intellectual debate, because both sides presented some rather clever theoretical and empirical arguments. Part of the attraction of the ‘Several Small’ idea was that it was generally easier to find a series of small habitat fragments to preserve than one giant no-go area.

However, we now know that the ‘Several Small’ idea is completely inferior because of the myriad synergistic effects of fragmentation. It actually took Bruce Wilcox and Dennis Murphy until 1985, in their classic paper The effects of fragmentation on extinction, to bring this to everyone’s attention and show how silly the SLOSS debate really was. It wasn’t, however, until the mid- to late 1990s that people finally accepted that fragmentation really was one of the biggest conservation evils. Subsequent work (that I’ll showcase soon on ConservationBytes.com) finally put the nail in the SLOSS debate’s coffin, and indeed, we haven’t heard a whisper of it for over a decade.

For more general information, I invite you to read the third chapter in our book Tropical Conservation Biology entitled Broken homes: tropical biotas in fragmented landscapes, and our recent paper in Trends in Ecology and Evolution entitled Synergies among extinction drivers under global change.

CJA Bradshaw






Classics: The Living Dead

30 08 2008

‘Classics’ is a category of posts highlighting research that has made a real difference to biodiversity conservation. All posts in this category will be permanently displayed on the Classics page of ConservationBytes.com

© M. Baysan

Tilman, D., May, R.M., Lehman, C.L. & Nowak, M.A. (1994). Habitat destruction and the extinction debt. Nature 371, 65-66

In my opinion, this is truly a conservation classic because it shatters the optimistic notion that extinction is only rarely a consequence of human activities (see relevant post here). The concept of ‘extinction debt‘ is pretty simple – as habitats become increasingly fragmented, long-lived species that are reproductively isolated from conspecifics may take generations to die off (e.g., large trees in forest fragments). This gives rise to a higher number of species than would otherwise be expected for the size of the fragment, and the false impression that many species can persist in habitat patches that are too small to sustain minimum viable populations.

These ‘living dead‘ or ‘zombie‘ species are therefore committed to extinction regardless of whether habitat loss is arrested or reversed. Only by assisted dispersal and/or reproduction can such species survive (an extremely rare event).

Why has this been important? Well, neglecting the extinction debt is one reason why some people have over-estimated the value of fragmented and secondary forests in guarding species against extinction (see relevant example here for the tropics and Brook et al. 2006). It basically means that biological communities are much less resilient to fragmentation than would otherwise be expected given data on species presence collected shortly after the main habitat degradation or destruction event. To appreciate fully the extent of expected extinctions may take generations (e.g., hundreds of years) to come to light, giving us yet another tool in the quest to minimise habitat loss and fragmentation.
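The lag is easy to see in the simplest version of the underlying theory. The sketch below uses a single-species Levins-style metapopulation model – a stripped-down cousin of the multispecies competition-colonisation framework in Tilman et al., with invented parameter values: once the destroyed habitat fraction D exceeds 1 - e/c, extinction is certain, yet patch occupancy decays only slowly:

```python
# Levins metapopulation model with habitat destruction:
#   dp/dt = c * p * (1 - D - p) - e * p
# The species persists only if D < 1 - e/c; beyond that threshold it is
# part of the extinction debt: doomed, but declining slowly.
# Parameter values here are invented for illustration.

def occupancy_trajectory(p0=0.5, c=0.2, e=0.1, D=0.6, dt=0.1, years=200):
    """Euler integration; returns occupancy sampled once per year."""
    p, traj = p0, []
    steps_per_year = int(1 / dt)
    for step in range(int(years / dt)):
        p += dt * (c * p * (1.0 - D - p) - e * p)
        p = max(p, 0.0)
        if step % steps_per_year == 0:
            traj.append(p)
    return traj

# Here 1 - e/c = 0.5, so D = 0.6 guarantees eventual extinction.
traj = occupancy_trajectory()
for year in (0, 50, 100, 150):
    print(f"year {year:>3}: patch occupancy = {traj[year]:.3f}")
```

Occupancy is still positive after a century and a half even though the species was doomed the moment D crossed the threshold – that lingering tail is the extinction debt.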

CJA Bradshaw








