Thought that would get your attention ;-)
“More scientists need to be trained in quantitative synthesis, visualization and other software tools.” D. Peters (2010)
In fact, that is part of the title of today’s focus paper in Trends in Ecology and Evolution by D. Peters – Accessible ecology: synthesis of the long, deep, and broad.
As a ‘quantitative’ ecologist (modeller, numerate, etc.) whose career has been based to a large degree on the analysis of large ecological datasets, I am certainly singing Peters’ tune. However, it’s much deeper and more important than my career – good (long, deep, broad – see definitions below) ecological data are ESSENTIAL to avoid some of the worst ravages of biodiversity loss over the coming decades and centuries. Unfortunately, investment in long-term ecological studies is poor in most countries (Australia is no exception), and it’s not improving.
But why are long-term ecological data essential? Let’s take a notable example. Climate change (mainly temperature increases) measured over the last century or so (depending on the area) has been determined mainly through the analysis of long-term records. This, one of the world’s most important (yet sadly, not yet even remotely acted upon) issues today, derives from relatively simple long-term datasets. Another good example is the waning of the world’s forests (see posts here, here and here for examples) and our increasing political attention on what this means for human society. These trends can only be determined from long-term datasets.
For a long time the dirty word ‘monitoring’ was considered the bastion of the uncreative and amateur – ‘real’ scientists performed complicated experiments, whereas ‘monitoring’ was viewed mainly as a form of low-intellect showcasing to please someone somewhere that at least something was being done. I’ll admit, there are many monitoring programmes producing data that aren’t worth the paper they’re printed on (see a good discussion of this issue in ‘Monitoring does not always count’), but I think the value of good monitoring data has been mostly vindicated. You see, many ecological systems are far too complex to manipulate easily, or are too broad and interactive to determine much with only a few years of data; only by examining over the ‘long’ term do patterns (and the effect of extremes) sometimes become clear.
But as you’ll see, it’s not just the ‘long’ that is required to determine which land- and sea-use decisions will best minimise biodiversity loss – we also need the ‘deep’ and the ‘broad’. But first, the ‘long’.

Peters states: “Long-term data are needed to assess the rate and direction of change, to distinguish directional trends from short-term variability, and to determine effects of infrequent, yet extreme events and time lags in response” and “Comparisons of trends in drivers with ecological responses can infer causal relationships.” (I’ll cite my own work there on the inferred effect of deforestation on flood risk and severity as an example). What exactly constitutes ‘long’ is entirely case-specific. If rates of change are rapid, ‘long’ can be a few short years. Generally speaking, however, a long-term dataset will cover many generations of the target organisms to be meaningful.
The limitations of long-term data inference include:
- difficulty in determining process (as opposed to phenomenon)
- sampling frequency and spatial scale can change through time, making standardisation difficult
- data formats can change over time
‘Deep’ ecological studies are, according to Peters, “Place-based research conducted at one site or within one ecosystem type [that] can provide deep understanding of processes underlying observed patterns.” Often these involve manipulations at the landscape scale, such as adjusting fire frequency and intensity, water availability and food provision. Limitations of ‘deep’ studies include:
- limited generality
- usually insufficient to determine how ecosystems are connected
‘Broad’ ecological studies are “Observation networks of sites collecting similar data across broad spatial extents.” One good example is the ‘space-for-time’ concept where spatial gradients (e.g., latitude) are used as a proxy for temporal changes in temperature predicted under climate change. Another good example is the broad-scale taxonomic information available across the species within the Global Population Dynamics Database (which my colleagues and I have used extensively – e.g., Brook & Bradshaw 2006; Brook et al. 2006; Traill et al. 2007; Clark et al. 2010). Limitations include:
- comparison across different observational networks can be challenging
- limited forecasting ability without long-term data for validation
But Peters’ take-home message isn’t that any particular form of ecological investigation is more important than the rest – it’s the combination that really brings understanding and prediction into full light (indeed, this is the mainstay of any major scientific discipline). She therefore presents a framework for synthesis that incorporates the following steps:
- Data collected from different sources should be assembled into digital formats where they are available to others
- Source data need to be standardised for integration into a common database
- Source data need to be condensed into simplified formats using aggregations in time and space
- Derived data products should be blended with other knowledge sources, new technologies, and approaches
- New interpretations can inform policies, practices and actions
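The first three steps (digitise, standardise, condense via aggregation) can be sketched concretely – the datasets, field names and units below are entirely invented for illustration:

```python
from collections import defaultdict

# Two hypothetical source datasets in incompatible formats:
# one records monthly counts, the other quarterly biomass in kg.
source_counts = [
    {"date": "2001-03", "site": "wetland", "count": 14},
    {"date": "2001-07", "site": "wetland", "count": 9},
    {"date": "2002-05", "site": "wetland", "count": 21},
]
source_biomass = [
    {"year": 2001, "quarter": 2, "site": "wetland", "kg": 3.2},
    {"year": 2002, "quarter": 1, "site": "wetland", "kg": 4.7},
]

# Standardise both sources into common records:
# (year, site, variable, value).
standard = []
for rec in source_counts:
    standard.append((int(rec["date"][:4]), rec["site"], "count", rec["count"]))
for rec in source_biomass:
    standard.append((rec["year"], rec["site"], "biomass_kg", rec["kg"]))

# Condense via aggregation in time (annual totals per site and variable).
annual = defaultdict(float)
for year, site, variable, value in standard:
    annual[(year, site, variable)] += value

for key in sorted(annual):
    print(key, annual[key])
```

The resulting derived products (annual totals in a common format) are what can then be blended with other knowledge sources in the later steps of the framework.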
Peters concludes that “The need for an understanding of scientific data by the public and decision-makers is critical if solutions to environmental problems are to find general acceptance.” Thus, linking different types of ecological studies via smart synthesis is a good step toward that goal, and will hopefully help the politicians who represent us see that we cannot ignore the evidence any longer.
Peters, D. (2010). Accessible ecology: synthesis of the long, deep, and broad. Trends in Ecology & Evolution. DOI: 10.1016/j.tree.2010.07.005