Managing for extinction

9 10 2009

Ah, it doesn’t go away, does it? Or at least, we won’t let it.

That concept of ‘how many is enough?’ in conservation biology, the so-called ‘minimum viable population size’, is enough to drive some conservation practitioners batty.

How many times have we heard the (para-) phrase: “It’s simply impractical to bring populations of critically endangered species up into the thousands”?

Well, my friends, if you’re not talking thousands, you’re wasting everyone’s time and money. You are essentially managing for extinction.

Our new paper out online in Biological Conservation entitled Pragmatic population viability targets in a rapidly changing world (Traill et al.) shows that populations of endangered species are unlikely to persist in the face of global climate change and habitat loss unless they number around 5000 mature individuals or more.

With several meta-analytic, time series-based and genetic estimates of the magic minimum number all agreeing, we can now be fairly certain that if a population numbers much fewer than several thousand individuals (median = 5000), its likelihood of persisting in the long run in the face of normal random variation is pretty small.

We conclude essentially that many conservation biologists routinely underestimate or ignore the number of animals or plants required to prevent extinction. In fact, programmes that aim to maintain tens or hundreds of individuals, when thousands are actually needed, simply waste precious and finite conservation resources. Thus, if it is deemed unrealistic to attain such numbers, we essentially advise that in most cases conservation triage should be invoked and the species in question abandoned for better prospects.

A long-standing idea in species restoration programs is the so-called ‘50/500’ rule; this states that at least 50 adults are required to avoid the damaging effects of inbreeding, and 500 to avoid extinctions due to the inability to evolve to cope with environmental change. Our research suggests that the 50/500 rule is at least an order of magnitude too small to stave off extinction.

This does not necessarily imply that populations smaller than 5000 are doomed. But it does highlight the challenge that small populations face in adapting to a rapidly changing world.

We are battling to prevent a mass extinction event in the face of a growing human population and its associated impact on the planet, so the bar needs to be set a lot higher. However, we shouldn’t necessarily give up on critically endangered species numbering a few hundred individuals in the wild. Acceptance that more needs to be done if we are to stop ‘managing for extinction’ should force decision makers to be more explicit about what they are aiming for, and what they are willing to trade off, when allocating conservation funds.

CJA Bradshaw

(with thanks to Lochran Traill, Barry Brook and Dick Frankham)


This post was chosen as an Editor's Selection for ResearchBlogging.org

Traill, L.W., Brook, B.W., Frankham, R., & Bradshaw, C.J.A. (2009). Pragmatic population viability targets in a rapidly changing world. Biological Conservation. DOI: 10.1016/j.biocon.2009.09.001





Wobbling to extinction

31 08 2009

I’ve been meaning to highlight for a while a paper that I’m finding more and more pertinent as a citation in my own work. The general theme concerns estimating the extinction risk of a particular population, species (or even ecosystem), and increasingly we’re finding that different drivers of population decline and eventual extinction often act synergistically to drive populations to that point of no return.

In other words, the whole is greater than the sum of its parts.

In other, other words, extinction risk is usually much higher than we generally appreciate.

This might seem at odds with my previous post about the tendency of the stochastic exponential growth model to over-estimate extinction risk using abundance time series, but it’s really more of a reflection of our under-appreciation of the complexity of the extinction process.

In the early days of ConservationBytes.com I highlighted a paper by Fagan & Holmes that described some of the few time series of population abundances right up until the point of extinction – the reason these datasets are so rare is that it gets bloody hard to find the last few individuals before extinction can be confirmed. Most recently, in a paper entitled Extinction risk depends strongly on factors contributing to stochasticity published in Nature last year, Melbourne & Hastings described how an under-appreciated component of variation in abundance leads to under-estimation of extinction risk.

‘Demographic stochasticity’ is a fancy term for variation in the probability of births and deaths at the individual level. Basically this means that there will be all sorts of complicating factors that move any individual in a population away from its expected (mean) probability of dying or reproducing. When averaged over a lot of individuals, it has generally been assumed that demographic stochasticity is washed out by other forms of variation in mean (population-level) birth and death probability resulting from vagaries of the environmental context (e.g., droughts, fires, floods, etc.).

‘No, no, no’, say Melbourne & Hastings. Using some relatively simple laboratory experiments in which environmental stochasticity was tightly controlled, they showed that demographic stochasticity dominated the overall variance and that environmental variation took a back seat. The upshot of all these experiments and mathematical models is that for most species of conservation concern (i.e., populations already reduced to below their minimum viable population size), not factoring in the appropriate measures of demographic wobble means that most people are under-estimating extinction risk.
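To see why demographic wobble refuses to wash out in small populations, consider a toy model (a sketch of my own, not Melbourne & Hastings’ actual experiments or code): each individual independently leaves 0, 1 or 2 offspring with equal probability, so the mean growth rate is exactly 1 and all the variation is demographic.

```python
import random
import statistics

def next_generation(n, rng):
    """Pure demographic stochasticity: each of the n individuals
    independently leaves 0, 1 or 2 offspring (mean = 1 offspring),
    with no environmental variation at all."""
    return sum(rng.choice((0, 1, 2)) for _ in range(n))

def growth_cv(n, trials=2000, seed=1):
    """Relative variability (coefficient of variation) of next
    year's population size, starting from n individuals."""
    rng = random.Random(seed)
    sizes = [next_generation(n, rng) for _ in range(trials)]
    return statistics.stdev(sizes) / statistics.mean(sizes)
```

Because the individual draws are independent, the relative wobble shrinks roughly as 1/√n: `growth_cv(10)` comes out around ten times larger than `growth_cv(1000)`. For an abundant species the demographic term is negligible, but for a population already near its minimum viable size it can dominate the variance – which is exactly why ignoring it under-estimates extinction risk.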

Bloody hell – we’ve been saying this for years; a few hundred individuals in any population is a ridiculous conservation target. People must instead focus on getting their favourite endangered species to number at least in the several thousands if the species is to have any hope of persisting (this is foreshadowing a paper we have coming out shortly in Biological Conservation – stay tuned for a post thereupon).

Melbourne & Hastings have done a grand job in reminding us how truly susceptible small populations are to wobbling over the line and disappearing forever.

CJA Bradshaw






Not-so-scary maths and extinction risk

27 08 2009
© P. Horn

Population viability analysis (PVA) and its cousin, minimum viable population (MVP) size estimation, are two generic categories for mathematically assessing a population’s risk of extinction under particular environmental scenarios (e.g., harvest regimes, habitat loss, etc.) (a personal plug here, for a good overview of general techniques in mathematical conservation ecology, check out our new chapter entitled ‘The Conservation Biologist’s Toolbox…’ in Sodhi & Ehrlich‘s edited book Conservation Biology for All by Oxford University Press [due out later this year]). A long-standing technique used to estimate extinction risk when the only available data for a population are in the form of population counts (abundance estimates) is the stochastic exponential growth model (SEG). Surprisingly, this little beauty is relatively good at predicting risk even though it doesn’t account for density feedback, age structure, spatial complexity or demographic stochasticity.

So, how does it work? Well, it essentially calculates the mean and variance of the population growth rate, which is just the logarithm of the ratio of an abundance estimate in one year to the abundance estimate in the previous year. These two parameters are then resampled many times to estimate the probability that abundance drops below a certain small threshold (often set arbitrarily low to something like < 50 females, etc.).
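As a rough sketch of the recipe just described (my own illustration, not code from Morris & Doak or any of the papers discussed here), the whole thing fits in a few lines: estimate the mean and variance of the log growth rate from the counts, then simulate forward to see how often the population crosses the quasi-extinction threshold.

```python
import math
import random

def seg_extinction_risk(counts, threshold=50, horizon=50,
                        n_sims=10_000, seed=1):
    """Count-based PVA with the stochastic exponential growth model:
    estimated probability of dropping below `threshold` individuals
    within `horizon` years, given a series of abundance counts."""
    # log growth rates: r_t = ln(N_{t+1} / N_t)
    rates = [math.log(b / a) for a, b in zip(counts, counts[1:])]
    mu = sum(rates) / len(rates)
    sd = math.sqrt(sum((r - mu) ** 2 for r in rates) / (len(rates) - 1))

    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        log_n = math.log(counts[-1])      # start from the last count
        for _ in range(horizon):
            log_n += rng.gauss(mu, sd)    # one year of stochastic growth
            if log_n < math.log(threshold):
                hits += 1
                break
    return hits / n_sims
```

Feed it a declining count series (e.g., `[800, 700, 650, 500, 430, 390, 300]`) and the estimated risk approaches 1; a growing series gives a risk near 0. (Strictly, the diffusion approximation yields this probability analytically; the resampling version above is just easier to follow.)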

It is simple (funny how maths can become so straightforward to some people when you couch them in words rather than mathematical symbols), and rather effective. This is why a lot of people use it to prescribe conservation management interventions. You don’t have to be a modeller to use it (check out Morris & Doak’s book Quantitative Conservation Biology for a good recipe-like description).

But (there’s always a but), a new paper just published online in Conservation Letters by Bruce Kendall entitled The diffusion approximation overestimates extinction risk for count-based PVA questions the robustness of the approach when the species of interest breeds seasonally. You see, the diffusion approximation (the method used to estimate extinction risk described above) generally assumes continuous breeding (i.e., there are always some females producing offspring). Using some very clever mathematics, simulation and a bloody good presentation, Kendall shows quite clearly that the diffusion-approximation SEG over-estimates extinction risk when breeding is seasonal (and it is, frequently, in nature). He also offers a new simulation method to get around the problem.

Who cares, apart from some geeky maths types (I include myself in that group)? Well, considering it’s used so frequently, is easy to apply, and has major implications for species threat listings (e.g., IUCN Red List), it’s important we estimate these things as correctly as we can. Kendall shows how several species have already been misclassified for threat risk based on the old technique.

So, once again mathematics has the spotlight. Thanks, Bruce, for demonstrating how sound mathematical science can pave the way for better conservation management.

CJA Bradshaw






Hot inbreeding

22 07 2009

© R. Ballen

Sounds really disgusting and a little rude, doesn’t it? Well, if you think losing species because of successive bottlenecks from harvesting, habitat loss and genetic deterioration is rude, then the title of this post is appropriate.

I’m highlighting today a paper recently published in Conservation Biology by Kristensen and colleagues entitled Linking inbreeding effects in captive populations with fitness in the wild: release of replicated Drosophila melanogaster lines under different temperatures.

The debate has been around for years – do inbred populations have lower fitness (e.g., reproductive success, survival, dispersal, etc.) than their ‘outbred’ counterparts? Is one of the reasons small populations (below their minimum viable population size) have a high risk of extinction because genetic deterioration erodes fitness?

While there are many species that seem to defy this assumption, the increasing prevalence of Allee effects, and the demonstration that threatened species have lower genetic diversity than non-threatened species, all seem to support the idea. Kristensen & colleagues’ paper uses that cornerstone of genetic guinea pigs, the Drosophila fruit fly, not only to demonstrate inbreeding depression in the lab, but also the subsequent fate of inbred individuals released into the wild.

What they found was quite amazing. Released inbred flies only did poorly (i.e., weren’t caught as frequently meaning that they probably were less successful in finding food and perished) relative to outbred flies when the temperature was warm (daytime). Cold (i.e., night) releases failed to show any difference between inbred and outbred flies.

Basically this means that the environment interacts strongly with the genes underlying particular aspects of performance. When the going is tough (and if you’re an ectothermic fly, extreme heat can be the killer), then genetically compromised individuals do badly. Another reason to be worried about runaway global climate warming.

Another important point was that the indices of performance didn’t translate universally to the field conditions, so lab-only results might very well give us some incorrect predictions of animal performance when populations reach small sizes and become inbred.

CJA Bradshaw





Classics: Ecological Triage

27 03 2009

It is a truism that when times are tough, only the strongest pull through. This isn’t a happy concept, but in our age of burgeoning biodiversity loss (and economic belt-tightening), we have to make some difficult decisions. In this regard, I suggest Brian Walker’s 1992 paper Biodiversity and ecological redundancy makes the Classics list.

Ecological triage is, of course, taken from the medical term triage used in emergency or wartime situations. Ecological triage refers to the conservation prioritisation of species that provide unique or necessary functions to ecosystems, and the abandonment of those that do not have unique ecosystem roles or that face almost certain extinction because they fall well below their minimum viable population size (Walker 1992). Financial resources such as investment in recovery programmes, purchase of remaining habitats for preservation, habitat restoration, etc. are allocated accordingly; the species that contribute the most to ecosystem function and have the highest probability of persisting are earmarked for conservation, and the others are left to their own devices (Hobbs & Kristjanson 2003).

This emotionally empty, accounting-style conservation can be controversial because public favourites like pandas, kakapo and some dolphin species just don’t make the list in many circumstances. As I’ve stated before, it makes no long-term conservation or economic sense to waste money on the doomed and ecologically redundant. Many in the conservation business apply ecological triage without being fully aware of it. Finite pools of money (generally the paltry left-overs from some green-guilty corporation or under-funded government initiative) for conservation mean that we have to set priorities – this is an entire discipline in its own right in conservation biology. Reserve design is just one example of this sacrifice-the-doomed-for-the-good-of-the-ecosystem approach.

Walker (1992) advocated that we should endeavour to maintain ecosystem function first, and recommended that we abandon programmes to restore functionally ‘redundant’ species (i.e., some species are more ecologically important than others, e.g., pollinators, prey). But how do you make the choice? The wrong selection might mean an extinction cascade (Noss 1990; Walker 1992) whereby tightly linked species (e.g., parasites-hosts, pollinators-plants, predators-prey) necessarily go extinct when one partner in the relationship disappears (see Koh et al. 2004 on co-extinctions). Ecological redundancy is a terribly difficult thing to determine, especially given that we still understand relatively little about how complex ecological systems really work (Marris 2007).

The more common (and easier, if not theoretically weaker) approach is to prioritise areas and not species (e.g., biodiversity hotspots), but even the criteria used for area prioritisation can be somewhat arbitrary and may not necessarily guarantee the most important functional groups are maintained (Orme et al. 2005; Brooks et al. 2006). There are many different ways of establishing ‘priority’, and it depends partially on your predilections.

More recent mathematical approaches such as cost-benefit analyses (Possingham et al. 2002; Murdoch et al. 2007) advocate conservation like a CEO would run a profitable business. In this case the ‘currency’ is biodiversity, and so a fixed financial investment must maximise long-term biodiversity gains (Possingham et al. 2002). This essentially estimates the potential biodiversity saved per dollar invested, and allocates funds accordingly (Wilson et al. 2007). Where the costs outweigh the benefits, conservationists move on to more beneficial goals. Perhaps the biggest drawback with this approach is that it’s particularly data-hungry. When ecosystems are poorly measured, then the investment curve is unlikely to be very realistic.
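In its crudest form, that investment logic is just a benefit-to-cost ranking. The sketch below is entirely my own illustration – the project names and numbers are made up, and the real analyses cited above use far more sophisticated optimisation – but it captures the core idea: fund projects in order of expected biodiversity benefit per dollar until the budget runs out.

```python
def allocate(budget, projects):
    """Greedy cost-benefit triage: rank projects by expected
    biodiversity benefit per dollar, then fund down the list
    while the budget allows. `projects` maps a name to a
    (cost, expected_benefit) pair."""
    ranked = sorted(projects.items(),
                    key=lambda kv: kv[1][1] / kv[1][0], reverse=True)
    funded, remaining = [], budget
    for name, (cost, benefit) in ranked:
        if cost <= remaining:
            funded.append(name)
            remaining -= cost
    return funded, remaining

# Illustrative projects only (hypothetical costs and benefit scores)
funded, left = allocate(100, {
    "restore habitat A": (40, 12.0),   # ratio 0.30
    "captive breeding B": (80, 6.0),   # ratio 0.075
    "purchase reserve C": (50, 10.0),  # ratio 0.20
})
```

With a budget of 100, the two habitat projects are funded and the expensive captive-breeding project misses out. Note that greedy ranking is not guaranteed to be optimal under a hard budget constraint – it is the simplest possible caricature of the approach, and it is just as data-hungry in spirit: garbage benefit estimates in, garbage priorities out.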

CJA Bradshaw


(Many thanks to Lochran Traill and Barry Brook for co-developing these ideas with me)





Cloning for conservation – stupid and wasteful

5 02 2009
© J. F. Jaramillo

© J. F. Jaramillo

I couldn’t have invented a better example of a Toothless conservation concept.

I just saw an article in the Independent (UK) about cloning for conservation that has rehashed the old idea yet again – while there were some interesting thoughts discussed, let’s be clear just how stupidly inappropriate and wasteful the mere concept of cloning for biodiversity conservation really is.

1. Never mind the incredible inefficiency, the lack of success to date and the welfare issues of bringing something into existence only to suffer a short and likely painful life – the principal reason we should not even consider the technology from a conservation perspective (I have no problem considering it for other uses if developed responsibly) is that it does not address the real problem: namely, the reason for extinction/endangerment in the first place. Even if you could fix all the other problems (see below), if you’ve got no place to put these new individuals, the effort and money expended are utterly wasted. Habitat loss is THE principal driver of extinction and endangerment. If we don’t stop and reverse this now, all other avenues are effectively closed. Cloning won’t create new forests or coral reefs, for example.

I may as well stop here, because all other arguments are minor in comparison to (1), but let’s continue just to show how many different layers of stupidity envelop this issue.

2. The loss of genetic diversity leading to inbreeding depression is a major issue that cloning cannot even begin to address. Without sufficient genetic variability, a population is almost certainly more susceptible to disease, reductions in fitness, weather extremes and over-exploitation. A paper published a few years ago by Spielman and colleagues (Most species are not driven to extinction before genetic factors impact them) showed convincingly that genetic diversity is lower in threatened than in comparable non-threatened species, and there is growing evidence on how serious Allee effects are in determining extinction risk. Populations need to number in the 1000s of genetically distinct individuals to have any chance of persisting. To postulate, even for a moment, that cloning can artificially recreate genetic diversity essential for population persistence is stupidly arrogant and irresponsible.

3. The cost. Cloning is an incredibly costly business – upwards of several millions of dollars for a single animal (see example here). Like the costs associated with most captive breeding programmes, this is a ridiculous waste of finite funds (all in the name of fabricated ‘conservation’). Think of what we could do with that money for real conservation and restoration efforts (buying conservation easements, securing rain forest property, habitat restoration, etc.). Even if we get the costs down over time, cloning will ALWAYS be more expensive than the equivalent investment in habitat restoration and protection. It’s wasteful and irresponsible to consider it otherwise.

So, if you ever read another painfully naïve article about the pros and cons of cloning endangered species, remember the above three points. I’m appalled that this continues to be taken seriously!

CJA Bradshaw






Classics: The Living Dead

30 08 2008

‘Classics’ is a category of posts highlighting research that has made a real difference to biodiversity conservation. All posts in this category will be permanently displayed on the Classics page of ConservationBytes.com

Tilman, D., May, R.M., Lehman, C.L., Nowak, M.A. (1994). Habitat destruction and the extinction debt. Nature 371, 65-66

In my opinion, this is truly a conservation classic because it shatters the optimistic notion that extinction is only rarely a consequence of human activities (see relevant post here). The concept of ‘extinction debt’ is pretty simple – as habitats become increasingly fragmented, long-lived species that are reproductively isolated from conspecifics may take generations to die off (e.g., large trees in forest fragments). This gives rise to a higher number of species than would otherwise be expected for the size of the fragment, and the false impression that many species can persist in habitat patches that are too small to sustain minimum viable populations.

These ‘living dead‘ or ‘zombie‘ species are therefore committed to extinction regardless of whether habitat loss is arrested or reversed. Only by assisted dispersal and/or reproduction can such species survive (an extremely rare event).

Why has this been important? Well, neglecting the extinction debt is one reason why some people have over-estimated the value of fragmented and secondary forests in guarding species against extinction (see relevant example here for the tropics and Brook et al. 2006). It basically means that biological communities are much less resilient to fragmentation than would otherwise be expected given data on species presence collected shortly after the main habitat degradation or destruction event. To appreciate fully the extent of expected extinctions may take generations (e.g., hundreds of years) to come to light, giving us yet another tool in the quest to minimise habitat loss and fragmentation.

CJA Bradshaw






The extinction vortex

25 08 2008

One for the Potential list:

First coined by Gilpin & Soulé in 1986, the extinction vortex is the term used to describe the process that declining populations undergo when “a mutual reinforcement occurs among biotic and abiotic processes that drives population size downward to extinction” (Brook, Sodhi & Bradshaw 2008).

Although several types of ‘vortices’ were labelled by Gilpin & Soulé, the concept was subsequently simplified by Caughley (1994) in his famous paper on the declining and small population paradigms, but only truly quantified for the first time by Fagan & Holmes (2006) in their Ecology Letters paper entitled Quantifying the extinction vortex.

Fagan and Holmes compiled a small time-series database of ten vertebrate species (two mammals, five birds, two reptiles and a fish) whose final extinction was witnessed via monitoring. They confirmed that the time to extinction scales to the logarithm of population size. In other words, as populations decline, the time elapsing before extinction occurs becomes rapidly (exponentially) smaller and smaller. They also found greater rates of population decline nearer to the time of extinction than earlier in the population’s history, confirming the expectation that genetic deterioration contributes to a general corrosion of individual performance (fitness). Finally, they found that the variability in abundance was also highest as populations approached extinction, irrespective of population size, thus demonstrating indirectly that random environmental fluctuations take over to cause the final extinction regardless of what caused the population to decline in the first place.
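That logarithmic scaling is easy to reproduce with a toy declining population (a sketch of my own, not Fagan & Holmes’ data or methods): each individual leaves 0, 1 or 2 offspring with probabilities 0.4, 0.4 and 0.2, so the population shrinks by about 20% per generation on average.

```python
import random

def mean_extinction_time(n0, trials=200, seed=0):
    """Mean number of generations until a declining population
    (mean growth rate 0.8) dwindles from n0 individuals to zero."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        n, t = n0, 0
        while n > 0:
            # each individual leaves 0, 1 or 2 offspring (mean 0.8)
            n = sum(rng.choices((0, 1, 2), weights=(2, 2, 1), k=n))
            t += 1
        total += t
    return total / trials
```

Increasing the starting population 100-fold (from 10 to 1000) adds only a modest number of extra generations before extinction – time to extinction grows with ln(N), not N, which is precisely why a declining population offers far less of a buffer than its raw abundance suggests.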

What does this mean for conservation efforts? It was fundamentally the first empirical demonstration that the theory of accelerating extinction proneness occurs as populations decline, meaning that all attempts must be made to ensure large population sizes if there is any chance of maintaining long-term persistence. This relates to the minimum viable population size concept that should underscore each and every recovery and target set or desired for any population in trouble or under conservation scrutiny.

CJA Bradshaw






Classics: Declining and small population paradigms

23 08 2008


Caughley, G. (1994). Directions in conservation biology. Journal of Animal Ecology, 63, 215-244.

Cited around 800 times according to Google Scholar, this classic paper demonstrated the essential difference between the two major paradigms dominating the discipline of conservation biology: (1) the ‘declining’ population paradigm, and (2) the ‘small’ population paradigm. The declining population paradigm is the identification and management of the processes that depress the demographic rate of a species and cause its populations to decline deterministically, whereas the small population paradigm is the study of the dynamics of small populations that have declined owing to some (deterministic) perturbation and which are more susceptible to extinction via chance (stochastic) events. Put simply, the forces that drive populations into decline aren’t necessarily those that drive the final nail into a species’ coffin – we must manage for both types of processes simultaneously, and the synergies between them, if we want to reduce the likelihood of species going extinct.

CJA Bradshaw






Classics: Red List of Threatened Species

22 08 2008


Mace, G.M. & Lande, R. (1991). Assessing extinction threats: toward a re-evaluation of IUCN threatened species categories. Conservation Biology, 5, 148-157.

I was recently fortunate enough to have the chance to speak with Georgina Mace, current president of the Society for Conservation Biology, to ask her which was the defining paper behind the hugely influential IUCN Red List of Threatened Species. There is little doubt that the Red List has been one of the most influential conservation policy tools ever constructed. Used as the global standard for the assessment of threat (i.e., extinction risk) for > 40,000 species, the Red List is the main tool by which most people judge the status, extinction risk, and recovery potential of threatened species worldwide. Far from complete (e.g., it covers about 2% of described species), the Red List is an evolving and improving assessment by the world’s best experts. It has become very much more than just a ‘list’.

Indeed, it is used often in the conservation ecology literature as a proxy for extinction risk (although see post on Minimum Viable Population size for some counter-arguments to that idea). We’ve used it that way ourselves in several recent papers (see below), and there are plenty of other examples. From extinction theory to policy implementation, Mace & Lande’s contribution to biodiversity conservation via the Red List was a major step forward.

See also:

CJA Bradshaw






Classics: Minimum Viable Population size

21 08 2008


Shaffer, M.L. (1981). Minimum population sizes for species conservation. BioScience 31, 131–134

Small and isolated populations are particularly vulnerable to extinction through random variation in birth and death rates, variation in resource or habitat availability, predation, competitive interactions and single-event catastrophes, and inbreeding. Enter the concept of the Minimum Viable Population (MVP) size, which was originally defined as the smallest number of individuals required for an isolated population to persist (at some predefined ‘high’ probability) for some ‘long’ time into the future. In other words, the MVP size is the number of individuals in the population that is needed to withstand normal (expected) variation in all the things that affect individual persistence through time. Drop below your MVP size, and suddenly your population’s risk of extinction sky-rockets. In some ways, MVP size can be considered the threshold dividing the ‘small’ and ‘declining’ population paradigms (see Caughley 1994), so that different management strategies can be applied to populations depending on their relative distance to (population-specific) MVP size.
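Operationally, an MVP size is estimated by simulation: pick a persistence standard (say, a 95% probability of surviving 100 years) and search for the smallest starting population that meets it. The sketch below is my own minimal caricature, not Shaffer’s method or any real PVA package (real software models age structure, inbreeding, and catastrophes, none of which appear here); it simply combines environmental stochasticity (good and bad years shift the mean growth rate) with demographic stochasticity (variance proportional to population size).

```python
import math
import random

def yearly_step(n, rng, env_sd=0.15):
    """One year of growth: environmental stochasticity shifts the
    mean growth rate, and demographic stochasticity adds noise with
    variance proportional to n (normal approximation)."""
    if n <= 0:
        return 0
    growth = max(0.0, rng.gauss(1.0, env_sd))                # environmental
    return max(0, round(rng.gauss(growth * n, math.sqrt(n))))  # demographic

def persistence_prob(n0, years=100, trials=500, seed=42):
    """Fraction of simulated populations still extant after `years`."""
    rng = random.Random(seed)
    alive = 0
    for _ in range(trials):
        n = n0
        for _ in range(years):
            n = yearly_step(n, rng)
            if n == 0:
                break
        alive += n > 0
    return alive / trials

def mvp_size(target=0.95, **kwargs):
    """Doubling search for the smallest starting size whose
    persistence probability reaches the target."""
    n = 10
    while n < 100_000 and persistence_prob(n, **kwargs) < target:
        n *= 2
    return n
```

Because the demographic term scales as √N while the population scales as N, small populations are hit relatively hard, and the doubling search climbs until persistence stabilises. Every parameter here (growth rate, environmental SD, the persistence standard) is illustrative only – the point is the shape of the procedure, not the numbers it returns.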

This wonderfully simple, yet fundamental concept of extinction dynamics provides the target for species recovery, minimum reserve size and sustainable harvest, if calculated correctly. Indeed, it is a concept underlying threatened species lists worldwide, including the most well-known (IUCN Red List of Threatened Species). While there are a host of methods issues, genetic considerations and policy implementation problems, Shaffer’s original paper spawned an entire generation of research and mathematical techniques in conservation biology, and set the stage for tangible, mathematically based conservation targets.

Want more information? We have published some papers and articles on the subject that elaborate more on the methods, expected ranges, subtleties and implications of the MVP concept that you can access below.

CJA Bradshaw






Captive breeding for conservation

7 08 2008

My first attempt at this potentially rather controversial section of ConservationBytes.com. Inspired by my latest post (30/07/2008), I must comment on what I believe is one of the biggest wasters of finite conservation (financial) resources – captive breeding for population recovery. The first laureate of the Toothless category goes to the seven authors (Snyder et al.) who I believe deserve at least a round of beers for their bold paper published way back in 1996 in Conservation Biology – Limitations of captive breeding in endangered species recovery.

The paper basically describes how, in most situations, captive breeding for population recovery is ill-conceived, badly planned, overly expensive and done without any notion of the particular species’ minimum viable population size (the population size required to provide a high probability of persistence over a long period). Examples of ridiculous cloning experiments done in the name of ‘conservation’ abound (one example with which I am familiar is the case of the SE Asian banteng cloning experiment – these conservation-challenged scientists actually claimed “We hope that the birth of these animals will open the way for a new strategy to help maintain valuable biodiversity and to respond to the challenge of large-scale extinctions ahead” after spending amounts that would make Bill Gates blush). Come on! Minimum viable population sizes number in the thousands to tens of thousands (e.g., Brook et al. 2006; Traill et al. 2007), not to mention the genetic diversity necessary for persistence that captive populations generally lack (see Frankham et al. 2004).

In the spirit of ecological triage, we must focus on conservation efforts that have a high probability of changing the extinction risk of species. Wasting millions of dollars to save a handful of inbred individuals (insert your favourite example here) WILL NOT, in most cases, make any difference to population viability. Good on Snyder et al. (1996) for their analysis and conclusions, but zoos, laboratories and other captive-rearing organisations around the world continue to throw away millions using the ‘conservation’ rationale to justify their actions. Rubbish. I’m afraid there is little evidence that the Snyder et al. paper changed anything. (Post originally published in Toothless 31/07/2008.)

CJA Bradshaw
