There’s nothing like a bit of good, intelligent and respectful debate in science.
After the publication in Nature of our paper on tropical protected areas (Averting biodiversity collapse in tropical forest protected areas), an interesting discussion has ensued regarding some of our assumptions and validations.
As is their wont, Nature declined to publish these comments (and our responses) in the journal itself, but the new commenting feature at Nature.com allowed the exchange to be published online with the paper. Cognisant that probably few people will read this exchange, Bill Laurance and I decided to reproduce them here in full for your intellectual pleasure. Any further comments? We’d be keen to hear them.
In this paper, Laurance and co-authors have tapped the expert opinions of ‘veteran field biologists and environmental scientists’ to understand the health of protected areas in the tropics worldwide. This is a novel and interesting approach and the dataset they have gathered is very impressive. Given that expert opinion can be subject to all kinds of biases and errors, it is crucial to demonstrate that expert opinion matches empirical reality. While the authors have tried to do this by comparing their results with empirical time-series datasets, I argue that their comparison does not serve the purpose of an independent validation.
Using 59 available time-series datasets from 37 sources (journal papers, books, reports, etc.), the authors find a fairly good match between expert opinion and empirical data (in 51 of 59 cases, expert opinion matched the empirically derived trend). For this comparison to serve as an independent validation, it is crucial that the experts were unaware of the empirical trends at the time of the interviews. However, this is unlikely to be true because, in most cases, the experts themselves were involved in collecting the time-series datasets (at least 43 of 59 to my knowledge, from a scan of the references in Supplementary Table 1). In other words, the same experts whose opinions were being validated were involved in collecting the data used to validate them.
Sridhar raises a relevant point but one that, on careful examination, does not weaken our validation analysis.