Biodiversity conservation is about prioritisation – making difficult choices.
With limited money and so many habitats and species in need of protection, deciding where not to expend resources is as important as deciding where to act. Saying ‘no’ will be crucial for ensuring the persistence of biodiversity and ecosystem services, simply because as individuals who value conservation, we will always be tempted to try and save everything.
In the words of Frederick the Great: “He who defends everything, defends nothing.”
As a result, much recent conservation planning research has focused on offering managers general and flexible tools for deciding which conservation features should be the highest priority. Intuitively, we should direct our resources towards areas that have high biodiversity values, and that are likely to be lost if the forces of conservation do not intervene (the most ‘vulnerable’ land parcels). This approach is known as the ‘minimize loss’ approach. Imagine we are worried about the loss of rare native vegetation in the face of ongoing urban expansion (e.g., Melbourne’s western grasslands). To minimize loss, managers would pre-emptively protect sites that are most likely to be developed. But is this decision to race the bulldozers always the best idea? How much does this choice depend on our assumptions about how land is protected, how land developers behave, and the accuracy of our future predictions?
For example, what would happen if protecting vulnerable land doesn’t stop the bulldozers, it simply diverts them towards other unprotected land nearby? What if our model of urban sprawl is incorrect, and people actually will choose to live next to the Werribee waste treatment plant? Or if a new State Government abandons Melbourne’s Urban Growth Boundary? Essentially, given how little we understand the drivers and dynamics of conservation threats, why not throw away our low-quality predictions about vulnerability, and set priorities using just biodiversity value and cost?
In a recent study that we published in Conservation Letters we asked all these questions, and used simulation models of conservation landscapes to investigate when vulnerability information is worth including. Specifically, we investigated how to choose between these three alternative decisions:
- Take conservation actions based on available vulnerability data;
- Improve our information about vulnerability; or
- Discard our vulnerability data altogether.
Our simulations considered how four different factors might affect our feelings about these alternative decisions:
- The correlation between biodiversity value and vulnerability. Do threatening processes target the most biodiverse land?
- The effect of protection on threatening processes. Does protecting the most vulnerable land make the threat disappear, or go elsewhere?
- The degree of variability in land parcel vulnerability. Is all land equally threatened?
- Uncertainty in our vulnerability estimates. How good are our models of future conservation threats?
We varied the first factor by populating an imaginary landscape with species that were more common in either vulnerable or safe habitats. We also simulated landscapes that differed in the spatial variation of vulnerability. We then simulated different effects of habitat protection on the threatening process. In one instance, we simulated a displacement effect: protected areas stop the threatening process locally but not regionally, so habitat loss continues at the same pace, eventually destroying everything that is not protected. Agricultural expansion and timber harvesting are real examples of these dynamics in many parts of the world; both land-use practices are generally displaced by protected areas rather than stopped outright.
In another instance, we simulated an inhibition effect, in which protection stops the threat locally and also reduces its rate of spread. In the real world, this is what happens when animals affected by an infectious disease are treated. With inhibition, once the first vulnerable sites are either lost or protected, the rate of loss slows and eventually fades to zero.
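The difference between the two threat responses can be illustrated with a toy simulation (this is a minimal sketch under assumed dynamics, not the study's actual model: parcel counts, loss rates, and the linear inhibition rule are all invented for illustration):

```python
import random

def simulate_threat(n_parcels=100, annual_loss=5, years=10,
                    protected=None, mode="displacement", seed=0):
    """Toy model of habitat loss under two responses to protection.

    mode="displacement": protection blocks loss locally, but the same
    number of parcels is lost elsewhere each year.
    mode="inhibition": each protected parcel also slows the threat,
    shrinking the annual loss rate.
    """
    rng = random.Random(seed)
    protected = set(protected or [])
    # assign each parcel a random 'true' vulnerability
    vulnerability = {i: rng.random() for i in range(n_parcels)}
    lost = set()
    for _ in range(years):
        if mode == "inhibition":
            # loss rate shrinks with the fraction of land protected
            rate = round(annual_loss * (1 - len(protected) / n_parcels))
        else:
            rate = annual_loss  # displacement: pressure is merely diverted
        # the threat consumes the most vulnerable unprotected parcels
        at_risk = sorted((i for i in range(n_parcels)
                          if i not in protected and i not in lost),
                         key=lambda i: vulnerability[i], reverse=True)
        lost.update(at_risk[:rate])
    return lost
```

With the same reserve network, the displacement scenario keeps destroying habitat at full pace, while the inhibition scenario loses fewer parcels over the same period.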
Finally, we simulated the effect of uncertainty in the vulnerability estimate. That is, we provided our hypothetical conservation managers with vulnerability maps that were incrementally altered until, in the extreme case, the ‘true’ vulnerability that determined biodiversity loss was totally uncorrelated with the estimate used to set priorities.
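One simple way to picture this degradation is to blend the true vulnerability with noise until the two are unrelated (the blending scheme and correlation check below are an illustrative sketch, not the procedure used in the paper):

```python
import random

def degrade_estimate(true_vuln, uncertainty, seed=1):
    """Blend 'true' vulnerability with noise; at uncertainty=1.0 the
    estimate a manager sees is uncorrelated with reality."""
    rng = random.Random(seed)
    return [(1 - uncertainty) * v + uncertainty * rng.random()
            for v in true_vuln]

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

As `uncertainty` grows, the correlation between the map managers use and the vulnerability that actually drives habitat loss falls away.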
Based on these different conservation simulations, we compared the achievements of two different managers. One was applying a ‘minimise loss’ approach, using their estimates of vulnerability to protect areas with high expected loss of biodiversity in the future; the other had a ‘maximise gain’ attitude, choosing areas based only on the biodiversity-per-dollar they could protect. We measured how much of the initial biodiversity value in the landscape remained after 10 simulated years of conservation prioritization.
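The two managers' decision rules amount to ranking parcels by different scores and protecting greedily until the budget runs out. The sketch below makes that concrete; the field names and the per-dollar scoring are illustrative assumptions, not the paper's exact formulation:

```python
def prioritise(parcels, budget, strategy):
    """Greedily protect the highest-ranked parcels within a budget.

    parcels: dicts with 'value', 'cost' and an (estimated)
    'vulnerability' in [0, 1] -- illustrative names, not the paper's.
    """
    if strategy == "minimise_loss":
        # expected biodiversity loss averted per dollar
        key = lambda p: p["value"] * p["vulnerability"] / p["cost"]
    else:  # "maximise_gain": ignore vulnerability entirely
        key = lambda p: p["value"] / p["cost"]
    chosen, spent = [], 0
    for p in sorted(parcels, key=key, reverse=True):
        if spent + p["cost"] <= budget:
            chosen.append(p)
            spent += p["cost"]
    return chosen
```

Given a highly vulnerable parcel and a slightly more valuable but safe one, the minimise-loss manager races the bulldozers to the first while the maximise-gain manager buys the second.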
We found that it is best to use existing information on vulnerability only when uncertainty is below 20–30%. Once our uncertainty gets larger than this, the risk of error becomes so high that the data must be either ignored or improved. Whether we want to invest scarce resources in this latter option depends on how variable vulnerability is across the conservation landscape. If variability is high, any reduction in uncertainty causes an important gain in information about expected biodiversity loss and therefore improves the effectiveness of conservation actions. When it is low, each habitat patch has a similar chance of being lost, so what really matters is how valuable each patch is, and vulnerability can be ignored altogether.
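That finding can be compressed into a simple decision rule (this is only an illustrative reading of the headline result; the single 25% cut-off simplifies the paper's reported 20–30% band, and the real decision space is continuous):

```python
def vulnerability_decision(uncertainty, variability_high):
    """Illustrative rule of thumb distilled from the study's result.

    uncertainty: fraction in [0, 1] describing how unreliable the
    vulnerability map is; 0.25 stands in for the 20-30% band.
    variability_high: whether vulnerability varies a lot across parcels.
    """
    if uncertainty <= 0.25:
        return "use existing vulnerability data"
    if variability_high:
        return "improve the data"
    return "ignore vulnerability; prioritise on value and cost"
```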
So what should a conservation organization do when it is time to set priorities? First of all, it should choose meaningful, quantitative conservation objectives; without objectives there is no smart conservation decision-making. Once that is done, conservation managers need a feeling for how much the threatening process they are trying to stop varies spatially. For example, if the threat is urbanization, a statistical model of urban sprawl, based on satellite images from two different times together with cadastral data and GIS layers of road and building densities, can be used to assess the spatial variation in the probability that a new development occurs. If variability is noticeable, it is worth considering vulnerability in the decision-making process. An independent map of recently developed areas and development plans can then be used to validate the model and assess the level of uncertainty (e.g., a confidence interval) around the estimates.
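A model of that kind typically takes a logistic form: a handful of GIS-derived predictors mapped onto a probability of development. The predictors and coefficients below are entirely made up for illustration; in practice they would be fitted to the change detected between the two satellite images:

```python
import math

def development_probability(road_density, building_density, dist_to_cbd,
                            coef=(-1.0, 0.8, 1.2, -0.5)):
    """Toy logistic model of the chance a parcel is developed.

    All predictor names and coefficients are hypothetical; a real
    model would estimate them from observed land-use change.
    """
    b0, b1, b2, b3 = coef
    z = b0 + b1 * road_density + b2 * building_density + b3 * dist_to_cbd
    return 1 / (1 + math.exp(-z))  # logistic link keeps output in (0, 1)
```

Mapping this probability over every parcel gives the spatial variation in vulnerability that the decision framework needs.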
This would allow managers to locate their conservation problem in the decision space of Figure 1. But which panel should one pick? It doesn’t seem to make much of a difference in terms of the choice made (colour-coding in panels c, d); however, it does make a difference in terms of the actual conservation benefit (panels a, b). Racing the bulldozers pays off more if you manage to cancel some development plans (inhibition effects in Figure 1a) rather than re-allocate them somewhere else (displacement effects in Figure 1b). This requires understanding the nature of the threatening process and the future consequences of today’s decisions.
Conservation threats are constantly moving and evolving. Effective prioritization – the key to using our scarce resources wisely – will depend on how well we can predict each location’s vulnerability. At the moment, the decision-support tools we have developed for conservation planning assume some knowledge about future threats. The take-home message is that we need to consider the accuracy of our underlying predictions, as well as the efficacy of our tools.
Visconti, P., Pressey, R., Bode, M., & Segan, D. (2010). Habitat vulnerability in conservation planning: when it matters and how much. Conservation Letters 3: 404–414. DOI: 10.1111/j.1755-263X.2010.00130.x