Wobbling to extinction

31 08 2009

I’ve been meaning to highlight for a while a paper that I’m finding more and more pertinent as a citation in my own work. The general theme is concerned with estimating the extinction risk of a particular population, species (or even ecosystem), and more and more we’re finding that different drivers of population decline and eventual extinction often act synergistically to drive populations to that point of no return.

In other words, the whole is greater than the sum of its parts.

In other, other words, extinction risk is usually much higher than we generally appreciate.

This might seem at odds with my previous post about the tendency of the stochastic exponential growth model to over-estimate extinction risk using abundance time series, but it’s really more of a reflection of our under-appreciation of the complexity of the extinction process.

In the early days of ConservationBytes.com I highlighted a paper by Fagan & Holmes that described some of the few time series of population abundances recorded right up to the point of extinction – the reason these datasets are so rare is that it gets bloody hard to find the last few individuals before extinction can be confirmed. Most recently, in a paper entitled Extinction risk depends strongly on factors contributing to stochasticity, published in Nature last year, Melbourne & Hastings described how an under-appreciated component of variation in abundance leads to under-estimation of extinction risk.

‘Demographic stochasticity’ is a fancy term for variation in the probability of births and deaths at the individual level. Basically this means that all sorts of complicating factors will move any individual in a population away from its expected (mean) probability of dying or reproducing. When averaged over a lot of individuals, it has generally been assumed that demographic stochasticity is washed out by other forms of variation in mean (population-level) birth and death probability resulting from vagaries of the environmental context (e.g., droughts, fires, floods, etc.).

‘No, no, no’, say Melbourne & Hastings. Using some relatively simple laboratory experiments where environmental stochasticity was tightly controlled, they showed that demographic stochasticity dominated the overall variance and that environmental variation took a back seat. The upshot of all these experiments and mathematical models is that for most species of conservation concern (i.e., populations already reduced below their minimum viable population size), not factoring in the appropriate measures of demographic wobble means that most people are under-estimating extinction risk.
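For the programmatically inclined, the essence of why demographic stochasticity matters can be sketched in a few lines of Python. This is a toy model of my own construction (not the Melbourne & Hastings formulation): give every individual an independent Poisson draw of offspring, with no environmental variation at all, and watch how often small populations blink out even at replacement-level mean growth.

```python
import numpy as np

def extinction_prob(n0, mean_offspring=1.0, generations=100, trials=2000, seed=1):
    """Fraction of replicate populations that hit zero under pure demographic
    stochasticity: each individual gets an independent Poisson offspring draw,
    with no environmental variation at all."""
    rng = np.random.default_rng(seed)
    extinct = 0
    for _ in range(trials):
        n = n0
        for _ in range(generations):
            n = int(rng.poisson(mean_offspring, size=n).sum())
            if n == 0:
                extinct += 1
                break
            n = min(n, 10_000)  # crude ceiling just to keep the demo fast
    return extinct / trials
```

Run it with a founding population of 2 versus 50 and the point makes itself: even though the mean growth rate is exactly replacement, the small population goes extinct far more often, purely from individual-level wobble.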

Bloody hell – we’ve been saying this for years; a few hundred individuals in any population is a ridiculous conservation target. People must instead focus on getting their favourite endangered species to number at least in the several thousands if the species is to have any hope of persisting (this is foreshadowing a paper we have coming out shortly in Biological Conservation; stay tuned for a post thereupon).

Melbourne & Hastings have done a grand job in reminding us how truly susceptible small populations are to wobbling over the line and disappearing forever.

CJA Bradshaw






Not-so-scary maths and extinction risk

27 08 2009
© P. Horn

Population viability analysis (PVA) and its cousin, minimum viable population (MVP) size estimation, are two generic categories of mathematical assessment of a population’s risk of extinction under particular environmental scenarios (e.g., harvest regimes, habitat loss, etc.). (A personal plug here: for a good overview of general techniques in mathematical conservation ecology, check out our new chapter entitled ‘The Conservation Biologist’s Toolbox…’ in Sodhi & Ehrlich‘s edited book Conservation Biology for All from Oxford University Press [due out later this year].) A long-standing technique used to estimate extinction risk when the only available data for a population are in the form of population counts (abundance estimates) is the stochastic exponential growth model (SEG). Surprisingly, this little beauty is relatively good at predicting risk even though it doesn’t account for density feedback, age structure, spatial complexity or demographic stochasticity.

So, how does it work? Well, it essentially calculates the mean and variance of the population growth rate, which is just the logarithm of the ratio of an abundance estimate in one year to the abundance estimate in the previous year. These two parameters are then resampled many times to estimate the probability that abundance drops below a certain small threshold (often set arbitrarily low to something like < 50 females, etc.).
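That recipe translates almost line-for-line into code. Here is a minimal Python sketch (a bootstrap-style variant of the diffusion approximation; the function name, toy counts and parameter choices are mine, not from any of the papers cited):

```python
import numpy as np

def seg_extinction_risk(counts, threshold, horizon=50, n_sims=5000, seed=1):
    """Count-based PVA via the stochastic exponential growth model: estimate
    the probability that abundance falls below `threshold` within `horizon`
    years by resampling the observed log growth rates."""
    counts = np.asarray(counts, dtype=float)
    log_r = np.log(counts[1:] / counts[:-1])      # yearly log growth rates
    mu, sigma = log_r.mean(), log_r.std(ddof=1)   # the two SEG parameters
    rng = np.random.default_rng(seed)
    # simulate log-abundance as a random walk with drift mu and sd sigma
    steps = rng.normal(mu, sigma, size=(n_sims, horizon))
    log_n = np.log(counts[-1]) + np.cumsum(steps, axis=1)
    hit = (log_n < np.log(threshold)).any(axis=1)  # quasi-extinct in any year
    return hit.mean()
```

Feed it a steadily declining count series and it returns a risk near 1; feed it a stable series and the risk collapses toward 0, which is exactly the behaviour the two estimated parameters (mean and variance of the log growth rate) encode.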

It is simple (funny how maths can become so straightforward to some people when you couch it in words rather than mathematical symbols), and rather effective. This is why a lot of people use it to prescribe conservation management interventions. You don’t have to be a modeller to use it (check out Morris & Doak’s book Quantitative Conservation Biology for a good recipe-like description).

But (there’s always a but), a new paper just published online in Conservation Letters by Bruce Kendall entitled The diffusion approximation overestimates extinction risk for count-based PVA questions its robustness when the species of interest breeds seasonally. You see, the diffusion approximation (the method used to estimate the extinction risk described above) generally assumes continuous breeding (i.e., there are always some females producing offspring). Using some very clever mathematics, simulation and a bloody good presentation, Kendall shows quite clearly that the diffusion-approximation SEG over-estimates extinction risk when breeding is in fact seasonal (and it frequently is in nature). He also offers a new simulation method to get around the problem.

Who cares, apart from some geeky maths types (I include myself in that group)? Well, considering the technique is used so frequently, is easy to apply and has major implications for species threat listings (e.g., IUCN Red List), it’s important we estimate these things as correctly as we can. Kendall shows how several species have already been misclassified for threat risk based on the old technique.

So, once again mathematics has the spotlight. Thanks, Bruce, for demonstrating how sound mathematical science can pave the way for better conservation management.

CJA Bradshaw






Hot inbreeding

22 07 2009

© R. Ballen

Sounds really disgusting, or at least a little rude, doesn’t it? Well, if you think losing species because of successive bottlenecks from harvesting, habitat loss and genetic deterioration is rude, then the title of this post is appropriate.

I’m highlighting today a paper recently published in Conservation Biology by Kristensen and colleagues entitled Linking inbreeding effects in captive populations with fitness in the wild: release of replicated Drosophila melanogaster lines under different temperatures.

The debate has been around for years – do inbred populations have lower fitness (e.g., reproductive success, survival, dispersal, etc.) than their ‘outbred’ counterparts? Is one of the reasons small populations (below their minimum viable population size) have a high risk of extinction because genetic deterioration erodes fitness?

While there are many species that seem to defy this assumption, the increasing prevalence of Allee effects, and the demonstration that threatened species have lower genetic diversity than non-threatened species, all seem to support the idea. Kristensen & colleagues’ paper uses that cornerstone of genetic guinea pigs, the Drosophila fruit fly, not only to demonstrate inbreeding depression in the lab, but also the subsequent fate of inbred individuals released into the wild.

What they found was quite amazing. Released inbred flies did poorly relative to outbred flies (i.e., they were recaptured less frequently, suggesting they were less successful at finding food and perished) only when the temperature was warm (daytime releases). Cold (i.e., night-time) releases showed no difference between inbred and outbred flies.

Basically this means that the environment interacts strongly with the genes underlying particular aspects of performance. When the going is tough (and if you’re an ectothermic fly, extreme heat can be the killer), then genetically compromised individuals do badly. Another reason to be worried about runaway global climate warming.

Another important point was that the indices of performance didn’t translate universally to the field conditions, so lab-only results might very well give us some incorrect predictions of animal performance when populations reach small sizes and become inbred.

CJA Bradshaw





Vortex of travel to RAMAStan

9 06 2009




Just a short post to say that the frequency of posts might decline somewhat over the coming weeks. I’m currently travelling in the US on a mixture of leave and work.

From the work side of things, I’ll be heading shortly to Harvard University in Boston to spend some time with colleague Navjot Sodhi of the National University of Singapore who’s finishing up a year-long Hrdy Fellowship there. We’ll be joined by my close friend and colleague, Barry Brook, and Resit Akçakaya of RAMAS fame. We’ll be working on a few ideas regarding extinction dynamics, modelling and climate change projections for species distributions and risk.

We’ll be heading next to visit Bob Lacy of VORTEX fame at the Chicago Zoological Society. We’ll be joined by Phil Miller of the IUCN‘s Species Survival Commission (SSC) Conservation Breeding Specialist Group, JP Pollak of Cornell University, and maybe Jon Ballou of the Smithsonian National Zoological Park. We’re hoping to help take the next generation of species vulnerability software into a more realistic framework that accounts for the complexities of climate change.

I’m looking forward to the trip and meeting new colleagues.

CJA Bradshaw





Classics: Ecological Triage

27 03 2009

It is a truism that when times are tough, only the strongest pull through. This isn’t a happy concept, but in our age of burgeoning biodiversity loss (and economic belt-tightening), we have to make some difficult decisions. In this regard, I suggest Brian Walker’s 1992 paper Biodiversity and ecological redundancy makes the Classics list.

Ecological triage is, of course, taken from the medical term triage used in emergency or wartime situations. Ecological triage refers to the conservation prioritisation of species that provide unique or necessary functions to ecosystems, and the abandonment of those that do not have unique ecosystem roles or that face almost certain extinction given they fall well below their minimum viable population size (Walker 1992). Financial resources such as investment in recovery programmes, purchase of remaining habitats for preservation, habitat restoration, etc. are allocated accordingly; the species that contribute the most to ecosystem function and have the highest probability of persisting are earmarked for conservation, and the others are left to their own devices (Hobbs & Kristjanson 2003).

This emotionally empty, accounting-type conservation can be controversial because public favourites like pandas, kakapo and some dolphin species just don’t make the list in many circumstances. As I’ve stated before, it makes no long-term conservation or economic sense to waste money on the doomed and ecologically redundant. Many in the conservation business apply ecological triage without being fully aware of it. Finite pools of money (generally the paltry left-overs from some green-guilty corporation or under-funded government initiative) for conservation mean that we have to set priorities – priority-setting is an entire discipline in its own right within conservation biology. Reserve design is just one example of this sacrifice-the-doomed-for-the-good-of-the-ecosystem approach.

Walker (1992) advocated that we should endeavour to maintain ecosystem function first, and recommended that we abandon programmes to restore functionally ‘redundant’ species (i.e., some species are more ecologically important than others, e.g., pollinators, prey). But how do you make the choice? The wrong selection might mean an extinction cascade (Noss 1990; Walker 1992) whereby tightly linked species (e.g., parasites-hosts, pollinators-plants, predators-prey) will necessarily go extinct if one partner in the mutualism disappears (see Koh et al. 2004 on co-extinctions). Ecological redundancy is a terribly difficult thing to determine, especially given that we still understand relatively little about how complex ecological systems really work (Marris 2007).

The more common (and easier, if not theoretically weaker) approach is to prioritise areas and not species (e.g., biodiversity hotspots), but even the criteria used for area prioritisation can be somewhat arbitrary and may not necessarily guarantee the most important functional groups are maintained (Orme et al. 2005; Brooks et al. 2006). There are many different ways of establishing ‘priority’, and it depends partially on your predilections.

More recent mathematical approaches such as cost-benefit analyses (Possingham et al. 2002; Murdoch et al. 2007) advocate conservation like a CEO would run a profitable business. In this case the ‘currency’ is biodiversity, and so a fixed financial investment must maximise long-term biodiversity gains (Possingham et al. 2002). This essentially estimates the potential biodiversity saved per dollar invested, and allocates funds accordingly (Wilson et al. 2007). Where the costs outweigh the benefits, conservationists move on to more beneficial goals. Perhaps the biggest drawback with this approach is that it’s particularly data-hungry. When ecosystems are poorly measured, then the investment curve is unlikely to be very realistic.
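The core logic of that cost-benefit allocation reduces to a greedy, knapsack-style ranking, which can be sketched as follows (all project names and numbers are hypothetical, and real analyses of the Possingham kind use far richer optimisation than this simple greedy rule, which is not guaranteed to be optimal):

```python
def allocate_budget(projects, budget):
    """Greedy cost-benefit triage: fund projects in descending order of
    expected species saved per dollar until the budget runs out.
    `projects` is a list of (name, cost, expected_species_saved) tuples."""
    ranked = sorted(projects, key=lambda p: p[2] / p[1], reverse=True)
    funded, remaining = [], budget
    for name, cost, benefit in ranked:
        if cost <= remaining:   # skip anything we can no longer afford
            funded.append(name)
            remaining -= cost
    return funded
```

With a budget of 350 and three invented projects, the high-benefit-per-dollar reef and wetland options get funded while the expensive, low-return flagship project misses out; that is triage in a nutshell.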

CJA Bradshaw


(Many thanks to Lochran Traill and Barry Brook for co-developing these ideas with me)





Cloning for conservation – stupid and wasteful

5 02 2009
© J. F. Jaramillo

© J. F. Jaramillo

I couldn’t have invented a better example of a Toothless conservation concept.

I just saw an article in the Independent (UK) about cloning for conservation that has rehashed the old idea yet again – while some interesting thoughts were discussed, let’s just be clear how stupidly inappropriate and wasteful the mere concept of cloning for biodiversity conservation really is.

1. Never mind the incredible inefficiency, the lack of success to date, and the welfare issues of bringing something into existence only to suffer a short and likely painful life; the principal reason we should not even consider the technology from a conservation perspective (I have no problem considering it for other uses if developed responsibly) is that it does not address the real problem – namely, the reason for the extinction or endangerment in the first place. Even if you could address all the other problems (see below), if you’ve got no place to put these new individuals, the effort expended is an utter waste of time and money. Habitat loss is THE principal driver of extinction and endangerment. If we don’t stop and reverse this now, all other avenues are effectively closed. Cloning won’t create new forests or coral reefs, for example.

I may as well stop here, because all other arguments are minor in comparison to (1), but let’s continue just to show how many different layers of stupidity envelop this issue.

2. The loss of genetic diversity leading to inbreeding depression is a major issue that cloning cannot even begin to address. Without sufficient genetic variability, a population is almost certainly more susceptible to disease, reductions in fitness, weather extremes and over-exploitation. A paper published a few years ago by Spielman and colleagues (Most species are not driven to extinction before genetic factors impact them) showed convincingly that genetic diversity is lower in threatened than in comparable non-threatened species, and there is growing evidence on how serious Allee effects are in determining extinction risk. Populations need to number in the 1000s of genetically distinct individuals to have any chance of persisting. To postulate, even for a moment, that cloning can artificially recreate genetic diversity essential for population persistence is stupidly arrogant and irresponsible.

3. The cost. Cloning is an incredibly costly business – upwards of several millions of dollars for a single animal (see example here). Like the costs associated with most captive breeding programmes, this is a ridiculous waste of finite funds (all in the name of fabricated ‘conservation’). Think of what we could do with that money for real conservation and restoration efforts (buying conservation easements, securing rain forest property, habitat restoration, etc.). Even if we get the costs down over time, cloning will ALWAYS be more expensive than the equivalent investment in habitat restoration and protection. It’s wasteful and irresponsible to consider it otherwise.

So, if you ever read another painfully naïve article about the pros and cons of cloning endangered species, remember the above three points. I’m appalled that this continues to be taken seriously!

CJA Bradshaw






Classics: the Allee effect

22 12 2008

As humanity plunders its only home and continues destroying the very life that sustains our ‘success’, certain concepts in ecology, evolution and conservation biology are being examined in greater detail in an attempt to apply them to restoring at least some elements of our ravaged biodiversity.

One of these concepts has been largely overlooked in the last 30 years, but is making a conceptual comeback as the processes of extinction become better quantified. The so-called Allee effect can be broadly defined as a “…positive relationship between any component of individual fitness and either numbers or density of conspecifics” (Stephens et al. 1999, Oikos 87:185-190) and is attributed to Warder Clyde Allee, an American ecologist from the early half of the 20th century, although he himself did not coin the term. Odum referred to it as “Allee’s principle”, and over time, the concept morphed into what we now generally call ‘Allee effects’.

Nonetheless, I’m using Allee’s original 1931 book Animal Aggregations: A Study in General Sociology (University of Chicago Press) as the Classics citation here. In his book, Allee discussed the evidence for the effects of crowding on demographic and life history traits of populations, which he subsequently redefined as “inverse density dependence” (Allee 1941, American Naturalist 75:473-487).

What does all this have to do with conservation biology? Well, broadly speaking, when populations become small, many different processes may operate to make an individual’s average ‘fitness’ (measured in many ways, such as survival probability, reproductive rate, growth rate, et cetera) decline. The many and varied types of Allee effects can work together to drive populations even faster toward extinction than expected by chance alone because of self-reinforcing feedbacks (see also previous post on the small population paradigm). Thus, ignorance of potential Allee effects can bias everything from minimum viable population size estimates, restoration attempts and predictions of extinction risk.

A recent paper in the journal Trends in Ecology and Evolution by Berec and colleagues entitled Multiple Allee effects and population management gives a more specific breakdown of Allee effects in a series of definitions I reproduce here for your convenience:

Allee threshold: critical population size or density below which the per capita population growth rate becomes negative.

Anthropogenic Allee effect: mechanism relying on human activity, by which exploitation rates increase with decreasing population size or density: values associated with rarity of the exploited species exceed the costs of exploitation at small population sizes or low densities (see related post).

Component Allee effect: positive relationship between any measurable component of individual fitness and population size or density.

Demographic Allee effect: positive relationship between total individual fitness, usually quantified by the per capita population growth rate, and population size or density.

Dormant Allee effect: component Allee effect that either does not result in a demographic Allee effect or results in a weak Allee effect and which, if interacting with a strong Allee effect, causes the overall Allee threshold to be higher than the Allee threshold of the strong Allee effect alone.

Double dormancy: two component Allee effects, neither of which singly result in a demographic Allee effect, or result only in a weak Allee effect, which jointly produce an Allee threshold (i.e. the double Allee effect becomes strong).

Genetic Allee effect: genetic-level mechanism resulting in a positive relationship between any measurable fitness component and population size or density.

Human-induced Allee effect: any component Allee effect induced by a human activity.

Multiple Allee effects: any situation in which two or more component Allee effects work simultaneously in the same population.

Nonadditive Allee effects: multiple Allee effects that give rise to a demographic Allee effect with an Allee threshold greater or smaller than the algebraic sum of Allee thresholds owing to single Allee effects.

Predation-driven Allee effect: a general term for any component Allee effect in survival caused by one or multiple predators whereby the per capita predation-driven mortality rate of prey increases as prey numbers or density decline.

Strong Allee effect: demographic Allee effect with an Allee threshold.

Subadditive Allee effects: multiple Allee effects that give rise to a demographic Allee effect with an Allee threshold smaller than the algebraic sum of Allee thresholds owing to single Allee effects.

Superadditive Allee effects: multiple Allee effects that give rise to a demographic Allee effect with an Allee threshold greater than the algebraic sum of Allee thresholds owing to single Allee effects.

Weak Allee effect: demographic Allee effect without an Allee threshold.
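To make the threshold idea concrete, here is a minimal Python sketch of one common phenomenological model of a strong Allee effect (the functional form is a textbook illustration, not drawn from Berec and colleagues; the parameter values are purely illustrative):

```python
def per_capita_growth(n, r0=0.5, K=1000, A=50):
    """Per capita growth rate under a strong Allee effect:
    negative below the Allee threshold A, positive between A and the
    carrying capacity K, and zero exactly at A and K."""
    return r0 * (1 - n / K) * (n / A - 1)
```

Evaluate it below the threshold (say n = 25) and the per capita growth rate is negative – the population is being sucked toward extinction; evaluate it between A and K (say n = 500) and growth is positive. A weak Allee effect, by contrast, would have growth that is depressed but still positive at low density, so no threshold exists.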

For even more detail, I suggest you obtain the 2008 book by Courchamp and colleagues entitled Allee Effects in Ecology and Conservation (Oxford University Press).

CJA Bradshaw


(Many thanks to Salvador Herrando-Pérez for his insight on terminology)





Classics: The Living Dead

30 08 2008

‘Classics’ is a category of posts highlighting research that has made a real difference to biodiversity conservation. All posts in this category will be permanently displayed on the Classics page of ConservationBytes.com

Tilman, D., May, R.M., Lehman, C.L., Nowak, M.A. (1994) Habitat destruction and the extinction debt. Nature 371, 65-66

In my opinion, this is truly a conservation classic because it shatters optimistic notions that extinction is something only rarely the consequence of human activities (see relevant post here). The concept of ‘extinction debt‘ is pretty simple – as habitats become increasingly fragmented, long-lived species that are reproductively isolated from conspecifics may take generations to die off (e.g., large trees in forest fragments). This gives rise to a higher number of species than would be otherwise expected for the size of the fragment, and the false impression that many species can persist in habitat patches that are too small to sustain minimum viable populations.

These ‘living dead‘ or ‘zombie‘ species are therefore committed to extinction regardless of whether habitat loss is arrested or reversed. Only by assisted dispersal and/or reproduction can such species survive (an extremely rare event).

Why has this been important? Well, neglecting the extinction debt is one reason why some people have over-estimated the value of fragmented and secondary forests in guarding species against extinction (see relevant example here for the tropics and Brook et al. 2006). It basically means that biological communities are much less resilient to fragmentation than would otherwise be expected from data on species presence collected shortly after the main habitat degradation or destruction event. The full extent of expected extinctions may take generations (e.g., hundreds of years) to come to light, giving us yet another tool in the quest to minimise habitat loss and fragmentation.

CJA Bradshaw






The extinction vortex

25 08 2008

One for the Potential list:

First coined by Gilpin & Soulé in 1986, the extinction vortex is the term used to describe the process that declining populations undergo when “a mutual reinforcement occurs among biotic and abiotic processes that drives population size downward to extinction” (Brook, Sodhi & Bradshaw 2008).

Although several types of ‘vortices’ were labelled by Gilpin & Soulé, the concept was subsequently simplified by Caughley (1994) in his famous paper on the declining and small population paradigms, but only truly quantified for the first time by Fagan & Holmes (2006) in their Ecology Letters paper entitled Quantifying the extinction vortex.

Fagan and Holmes compiled a small time-series database of ten vertebrate species (two mammals, five birds, two reptiles and a fish) whose final extinction was witnessed via monitoring. They confirmed that the time to extinction scales with the logarithm of population size. In other words, as populations decline, the time elapsing before extinction becomes rapidly (exponentially) smaller. They also found greater rates of population decline nearer to the time of extinction than earlier in each population’s history, confirming the expectation that genetic deterioration contributes to a general corrosion of individual performance (fitness). Finally, they found that variability in abundance was highest as populations approached extinction, irrespective of population size, thus demonstrating indirectly that random environmental fluctuations take over to cause the final extinction, regardless of what caused the population to decline in the first place.
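That logarithmic scaling is easy to reproduce in a toy simulation (a subcritical branching process of my own construction, not Fagan & Holmes’ data): multiplying initial population size tenfold adds only a roughly constant increment to the mean time to extinction, rather than multiplying it tenfold.

```python
import numpy as np

def mean_extinction_time(n0, mean_offspring=0.8, trials=500, seed=1):
    """Average number of generations until a declining population hits zero.
    Each individual gets an independent Poisson offspring draw with mean < 1,
    so every replicate is guaranteed to go extinct eventually."""
    rng = np.random.default_rng(seed)
    total = 0
    for _ in range(trials):
        n, t = n0, 0
        while n > 0:
            n = int(rng.poisson(mean_offspring, size=n).sum())
            t += 1
        total += t
    return total / trials
```

Comparing starting sizes of 10, 100 and 1000 shows mean extinction times that climb in roughly equal steps, not by factors of ten – the signature of log scaling.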

What does this mean for conservation efforts? It was fundamentally the first empirical demonstration that the theory of accelerating extinction proneness occurs as populations decline, meaning that all attempts must be made to ensure large population sizes if there is any chance of maintaining long-term persistence. This relates to the minimum viable population size concept that should underscore each and every recovery and target set or desired for any population in trouble or under conservation scrutiny.

CJA Bradshaw






Synergies among extinction drivers

24 08 2008

Hopefully one for the Potential list:

© J. Hance

Brook, BW, NS Sodhi, CJA Bradshaw. (2008) Synergies among extinction drivers under global change. Trends in Ecology and Evolution 23, 453-460

A review my colleagues, Barry Brook and Navjot Sodhi, and I have just published in Trends in Ecology and Evolution demonstrates how separate drivers of extinction (e.g., habitat loss, over-exploitation [hunting, fishing, etc.], climate change, invasive species, etc.) tend to work together to heighten the extinction probability of the species they affect more than the simple sum of the individual effects alone.

In what we termed ‘synergies’, the review compiles evidence from observational, experimental and meta-analytic research demonstrating the positive and self-reinforcing actions of multiple drivers of population decline and eventual extinction. Examples include experimental evidence that wild radishes experiencing inbreeding depression have lower fitness than expected from simple population reduction (Elam et al. 2007), that inter-tidal polychaetes succumb to pollution much more readily at low densities than when populations are abundant (Hollows et al. 2007), and that habitat fragmentation, harvest and simulated climate warming together increase rotifer extinction risk up to 50 times more than expected from the additive effects of the threatening processes (Mora et al. 2007).

We argued that conservation actions targeting only single drivers will more than likely be inadequate because of the cascading effects caused by unmanaged synergies. Climate change will also interact with and accelerate ongoing threats to biodiversity, so the importance of accounting for these interactions cannot be overstated.

CJA Bradshaw






Classics: Declining and small population paradigms

23 08 2008

‘Classics’ is a category of posts highlighting research that has made a real difference to biodiversity conservation. All posts in this category will be permanently displayed on the Classics page of ConservationBytes.com

Caughley, G. (1994). Directions in conservation biology. Journal of Animal Ecology, 63, 215-244.

Cited around 800 times according to Google Scholar, this classic paper demonstrated the essential difference between the two major paradigms dominating the discipline of conservation biology: (1) the ‘declining’ population paradigm, and (2) the ‘small’ population paradigm. The declining population paradigm is the identification and management of the processes that depress the demographic rate of a species and cause its populations to decline deterministically, whereas the small population paradigm is the study of the dynamics of small populations that have declined owing to some (deterministic) perturbation and which are more susceptible to extinction via chance (stochastic) events. Put simply, the forces that drive populations into decline aren’t necessarily those that drive the final nail into a species’ coffin – we must manage for both types of processes simultaneously, and the synergies between them, if we want to reduce the likelihood of species going extinct.

CJA Bradshaw






Classics: Red List of Threatened Species

22 08 2008

‘Classics’ is a category of posts highlighting research that has made a real difference to biodiversity conservation. All posts in this category will be permanently displayed on the Classics page of ConservationBytes.com

Mace, G.M. & Lande, R. (1991). Assessing extinction threats: toward a re-evaluation of IUCN threatened species categories. Conservation Biology, 5, 148-157.

I was recently fortunate enough to have the chance to speak with Georgina Mace, current president of the Society for Conservation Biology, to ask her which was the defining paper behind the hugely influential IUCN Red List of Threatened Species. There is little doubt that the Red List has been one of the most influential conservation policy tools constructed. Used as the global standard for the assessment of threat (i.e., extinction risk) for now > 40000 species, the Red List is the main tool by which most people judge the status, extinction risk, and recovery potential of threatened species worldwide. Far from complete (e.g., it covers about 2 % of described species), the Red List is an evolving and improving assessment by the world’s best experts. It has become very much more than just a ‘list’.

Indeed, it is used often in the conservation ecology literature as a proxy for extinction risk (although see post on Minimum Viable Population size for some counter-arguments to that idea). We’ve used it that way ourselves in several recent papers (see below), and there are plenty of other examples. From extinction theory to policy implementation, Mace & Lande’s contribution to biodiversity conservation via the Red List was a major step forward.

See also:

CJA Bradshaw






Classics: Minimum Viable Population size

21 08 2008

‘Classics’ is a category of posts highlighting research that has made a real difference to biodiversity conservation. All posts in this category will be permanently displayed on the Classics page of ConservationBytes.com

Shaffer, M.L. (1981). Minimum population sizes for species conservation. BioScience, 31, 131-134.

Small and isolated populations are particularly vulnerable to extinction through random variation in birth and death rates, variation in resource or habitat availability, predation, competitive interactions and single-event catastrophes, and inbreeding. Enter the concept of the Minimum Viable Population (MVP) size, which was originally defined as the smallest number of individuals required for an isolated population to persist (at some predefined ‘high’ probability) for some ‘long’ time into the future. In other words, the MVP size is the number of individuals in the population that is needed to withstand normal (expected) variation in all the things that affect individual persistence through time. Drop below your MVP size, and suddenly your population’s risk of extinction sky-rockets. In some ways, MVP size can be considered the threshold dividing the ‘small’ and ‘declining’ population paradigms (see Caughley 1994), so that different management strategies can be applied to populations depending on their relative distance to (population-specific) MVP size.
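An MVP-style threshold can be estimated by simulation: sweep the initial population size and find the smallest one whose persistence probability clears a chosen bar. A rough sketch follows; the model structure, parameter values and the persistence criterion are all hypothetical choices for illustration, not Shaffer's method:

```python
import math
import random

def poisson(rng, lam):
    """Poisson draw: Knuth's method for small lam, normal approximation otherwise."""
    if lam > 30:
        return max(0, round(rng.gauss(lam, math.sqrt(lam))))
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def persistence_prob(n0, years=100, reps=200, r=0.1, K=500, env_sd=0.3, seed=7):
    """Fraction of replicate populations still extant after `years`, under a
    Ricker model with environmental (lognormal growth shocks) and demographic
    (Poisson births/deaths) stochasticity."""
    rng = random.Random(seed)
    extant = 0
    for _ in range(reps):
        n = n0
        for _ in range(years):
            if n == 0:
                break
            expected = n * math.exp(r * (1 - n / K) + rng.gauss(0, env_sd))
            n = poisson(rng, expected)  # integer individuals, so zero is reachable
        if n > 0:
            extant += 1
    return extant / reps

def minimum_viable_population(target=0.95, **kwargs):
    """Smallest initial size whose simulated persistence probability meets the
    target: an illustrative criterion only, not a published standard."""
    for n0 in range(5, 500, 25):
        if persistence_prob(n0, **kwargs) >= target:
            return n0
    return None
```

The sharp rise in persistence probability with initial size in such simulations is exactly the ‘sky-rocketing’ extinction risk below the MVP threshold described above.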

This wonderfully simple yet fundamental concept of extinction dynamics provides the target for species recovery, minimum reserve size and sustainable harvest, if calculated correctly. Indeed, it is a concept underlying threatened species lists worldwide, including the most well-known (the IUCN Red List of Threatened Species). While there are a host of methods issues, genetic considerations and policy implementation problems, Shaffer’s original paper spawned an entire generation of research and mathematical techniques in conservation biology, and set the stage for tangible, mathematically based conservation targets.

Want more information? We have published some papers and articles on the subject that elaborate more on the methods, expected ranges, subtleties and implications of the MVP concept that you can access below.

CJA Bradshaw






Captive breeding for conservation

7 08 2008

My first attempt at this potentially rather controversial section of ConservationBytes.com. Inspired by my latest post (30/07/2008), I must comment on what I believe is one of the biggest wasters of finite conservation (financial) resources – captive breeding for population recovery. The first laureate of the Toothless category goes to the seven authors (Snyder et al.) who I believe deserve at least a round of beers for their bold paper published way back in 1996 in Conservation Biology – Limitations of captive breeding in endangered species recovery.

The paper basically argues that, in most situations, captive breeding for population recovery is ill-conceived, badly planned, overly expensive and done without any notion of the particular species’ minimum viable population size (the population size required to provide a high probability of persistence over a long period). Examples abound of ridiculous cloning experiments done in the name of ‘conservation’ (one example with which I am familiar is the case of the SE Asian banteng cloning experiment – these conservation-challenged scientists actually claimed “We hope that the birth of these animals will open the way for a new strategy to help maintain valuable biodiversity and to respond to the challenge of large-scale extinctions ahead” after spending amounts that would make Bill Gates blush). Come on! Minimum viable population sizes number in the thousands to tens of thousands (e.g., Brook et al. 2006; Traill et al. 2007), not to mention the genetic diversity necessary for persistence that captive populations generally lack (see Frankham et al. 2004).

In the spirit of ecological triage, we must focus on conservation efforts that have a high probability of changing the extinction risk of species. Wasting millions of dollars to save a handful of inbred individuals (insert your favourite example here) WILL NOT, in most cases, make any difference to population viability. Good on Snyder et al. (1996) for their analysis and conclusions, but zoos, laboratories and other captive-rearing organisations around the world continue to throw away millions using the ‘conservation’ rationale to justify their actions. Rubbish. I’m afraid there is little evidence that the Snyder et al. paper changed anything. (Post originally published in Toothless 31/07/2008.)

CJA Bradshaw
