Putting the ‘science’ in citizen science

30 04 2014
How to tell if a koala has been in your garden. © Great Koala Count

When I was in Finland last year, I had the pleasure of meeting Tomas Roslin and hearing him describe his Finland-wide citizen-science project on dung beetles. What impressed me most was that it completely flipped my general opinion about citizen science and showed me that the process can be useful.

I’m not trying to sound arrogant or scientifically elitist here – I’m merely stating that it was my opinion that most citizen-science endeavours fail to provide truly novel, useful and rigorous data for scientific hypothesis testing. Well, I must admit that I still believe most citizen-science data fit that description (although there are exceptions – see here for an example), but Tomas’ success showed me just how good such data can be.

So what’s the problem with citizen science? Nothing, in principle; in fact, it’s a great idea. Convince keen amateur naturalists over a wide area to observe some ecological phenomenon or function as objectively as possible, record the data, and submit them to a scientist to test some brilliant hypothesis. If it works, chances are the data are of much broader coverage, and are more intensively sampled, than a single scientific team could ever manage (or afford) alone. So why don’t we do this all the time?

If you’re a scientist, I don’t need to tell you how difficult it is to design a good experimental sampling regime, how much more difficult it is to ensure objectivity and precision when sampling, and how fastidiously the data must be recorded and organised digitally for final analysis. And that’s just for trained scientists! Now imagine an army of well-intentioned but largely inexperienced samplers, and you can quickly visualise how errors might accumulate in a dataset until it becomes too unreliable for any real scientific application.
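
To see how quickly observer error can swamp a real signal, here’s a toy simulation in R (my own illustration – the error rates are invented assumptions, not estimates from any real citizen-science project):

```r
# Toy illustration: how observer error distorts occupancy estimates
set.seed(42)
n.sites <- 200
true.presence <- rbinom(n.sites, 1, 0.3)  # species truly present at 30% of sites

# simulate imperfect observers with given false-positive & false-negative rates
observe <- function(truth, fpos, fneg) {
  ifelse(truth == 1,
         rbinom(length(truth), 1, 1 - fneg),  # detection if truly present
         rbinom(length(truth), 1, fpos))      # false alarm if truly absent
}

pro.obs <- observe(true.presence, fpos = 0.02, fneg = 0.05)  # trained scientist
vol.obs <- observe(true.presence, fpos = 0.25, fneg = 0.10)  # over-eager volunteer

mean(true.presence)  # true occupancy (~0.30)
mean(pro.obs)        # close to the truth
mean(vol.obs)        # inflated by false positives – and without validation
                     # data, you cannot tell by how much
```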

So for these reasons, I’ve been largely reluctant to engage with large-scale citizen-science endeavours. However, I’m proud to say that I have now published my first paper based entirely on citizen-science data! Call me a hypocrite (or a slow learner).

Cleaning up the rubbish: Australian megafauna extinctions

15 11 2013

A few weeks ago I wrote a post about how to run the perfect scientific workshop, which most of you thought was a good set of tips (bizarrely, one person was quite upset with the message; I saved him the embarrassment of looking stupid online and refrained from publishing his comment).

As I mentioned at the end of that post, the stimulus for the topic was a particularly wonderful workshop 12 of us attended at beautiful Linnaeus Estate on the northern coast of New South Wales (see Point 5 in the ‘workshop tips’ post).

But why did a group of ecological modellers (me, Barry Brook, Salvador Herrando-Pérez, Fréd Saltré, Chris Johnson, Nick Beeton), ancient-DNA specialists (Alan Cooper), palaeontologists (Gav Prideaux), fossil-dating specialists (Dizzy Gillespie, Bert Roberts, Zenobia Jacobs) and palaeo-climatologists (Michael Bird, Chris Turney [in absentia]) get together in the first place? Hint: it wasn’t just for the beautiful beach and good wine.

I hate to say it – mainly because it deserves as little attention as possible – but the main reason is that we needed to clean up a bit of rubbish. The rubbish in question is the latest bit of excrescence growing on the accumulating heap produced by a certain team of palaeontologists promulgating their ‘it’s all about the climate or nothing’ broken record.


Biogeography comes of age

22 08 2013

This week has been all about biogeography for me. While I wouldn’t call myself a ‘biogeographer’, I certainly do apply a lot of the discipline’s techniques.

This week I’m attending the joint 2013 Congress of the International Association for Ecology (INTECOL) and the British Ecological Society in London, and I have purposefully sought out more of the biogeographical talks than pretty much anything else because the speakers were engaging and the topics fascinating. As it happens, even my own presentation had a strong biogeographical flavour this year.

Although the species-area relationship (SAR) is only one small aspect of biogeography, I’ve been slightly amazed that, half a century after MacArthur & Wilson’s famous book, our discipline is still obsessed with SAR.

I’ve blogged about SAR issues before – what makes the SAR so engaging and controversial is that it is the principal tool used to estimate overall extinction rates, even though it is perhaps one of the bluntest tools in the ecological toolbox. I suppose its popularity stems from its superficial simplicity – as the area of a (classically oceanic) island increases, so too does the total number of species it can hold. The controversies surrounding such a basic relationship centre on describing the rate at which species richness increases with area – in other words, just how nonlinear the SAR itself is.

Even a cursory understanding of maths reveals the importance of estimating this curve correctly. As the area of an ‘island’ (habitat fragment) decreases due to human disturbance, the estimate of how many species end up going extinct as a result depends entirely on the shape of the SAR. Get the SAR wrong, and you can over- or under-estimate the extinction rate. This was the crux of the palaver over Fangliang He (not attending INTECOL) & Stephen Hubbell’s (attending INTECOL) 2011 paper in Nature.
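
To make that sensitivity concrete, here’s a back-of-the-envelope sketch in R using the classic power-law form of the SAR, S = cA^z (all numbers hypothetical, purely for illustration):

```r
# Proportion of species predicted lost after halving habitat area, under
# three plausible values of the power-law exponent z (S = c * A^z; the
# constant c cancels out of the before/after ratio)
A0 <- 1000        # original habitat area (hypothetical units)
A1 <- 0.5 * A0    # half the habitat destroyed

for (z in c(0.15, 0.25, 0.35)) {
  prop.lost <- 1 - (A1 / A0)^z
  cat(sprintf("z = %.2f: %4.1f%% of species predicted lost\n",
              z, 100 * prop.lost))
}
```

More than doubling z from 0.15 to 0.35 roughly doubles the predicted loss (about 10% versus 22% of species), which is exactly why the shape of the curve matters so much.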

The first real engagement with SAR came in John Harte’s maximum-entropy talk in the process-macroecology session on Tuesday. What was most notable to me was his adamant claim that the power-law form of the SAR should never be used, despite its commonness in the literature. I took this with a grain of salt because I know all about how messy area-richness data can be, and why one needs to consider alternative models (see an example here). But then yesterday I listened to one of the greats of biogeography – Robert Whittaker – who said pretty much the complete opposite of Harte’s contention. Whittaker showed results from one of his papers published last year indicating that the power law was in fact the most commonly supported SAR among many datasets (granted, there was substantial variability in overall model performance). My conclusion remains firm – make sure you fit multiple models to each individual dataset and try to infer the SAR from model-averaging.
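
For anyone wanting to try that multi-model approach, here’s a minimal sketch in R using simulated island data (a real analysis would fit a much larger candidate set of SAR forms and use the weights for model-averaged prediction):

```r
# Fit three candidate SAR forms to (simulated) island data and compare
# their support using Akaike weights
set.seed(1)
A <- sort(runif(30, 1, 1000))                      # island areas
S <- round(15 * A^0.25 * exp(rnorm(30, 0, 0.15)))  # noisy power-law 'truth'

m.pow <- nls(S ~ k * A^z, start = list(k = 10, z = 0.3))  # power law
m.sem <- lm(S ~ log(A))                                   # semi-log (Gleason)
m.lin <- lm(S ~ A)                                        # untransformed linear

aics <- c(power = AIC(m.pow), semilog = AIC(m.sem), linear = AIC(m.lin))
w <- exp(-0.5 * (aics - min(aics)))
round(w / sum(w), 3)  # Akaike weights = weights for model-averaging
```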

Don’t blame it on the dingo

21 08 2013

Our postdoc, Tom Prowse, has just received one of the slickest sets of reviews I’ve ever seen, followed by a quick acceptance of what I think is a pretty sexy paper. Earlier this year his paper in Journal of Animal Ecology showed that the thylacine (the badly named ‘Tasmanian tiger‘) was most likely not the victim of some unobserved mystery disease, but instead succumbed to the same agent many large predators have (or will): human beings. His latest effort, now online in Ecology, shows that the thylacine and devil extinctions on the Australian mainland were similarly the result of humans, and not the scapegoat dingo. But I’ll let him explain:

‘Regime shifts’ can occur in ecosystems when even a single component is added or changed. Such additions (say, a new predator) or changes (say, a rise in temperature) can fundamentally alter core ecosystem functions and processes, causing the ecosystem to switch to some alternative stable state.

Some of the most striking examples of ecological regime shifts are the mass extinctions of large mammals (‘megafauna’) during human prehistory. In Australia, human arrival and subsequent hunting pressure are implicated in the rapid extinction of about 50 mammal species by around 45 thousand years ago. The ensuing alternative stable state comprised a reduced diversity of predators, dominated by humans and two native marsupial predators ‑ the thylacine (also known as the marsupial ‘tiger’ or ‘wolf’) and the devil (now restricted to Tasmania and threatened by a debilitating, infectious cancer).

Both thylacines and devils lasted on mainland Australia for over 40 thousand years following the arrival of humans. However, a second regime shift resulted in the extinction of both these predators by about 3 thousand years ago, which was coincidentally just after dingoes were introduced to Australia. Dingoes are descended from early domestic dogs and were introduced to northern Australia from Asia by ancient traders approximately 4 thousand years ago. Today, they are Australia’s only top predator remaining, other than invasive European foxes and feral cats. Since the earliest days of European settlement, dingoes have been persecuted because they prey on livestock. During the 1880s, 5614 km of ‘dingo fence’ was constructed to protect south-east Australia’s grazing rangelands from dingo incursions. The fence is maintained to this day, and dingoes are poisoned and shot both inside and outside this barrier, despite mounting evidence that these predators play a key role in maintaining native ecosystems, largely by suppressing invasive predators.

Perhaps because the public perception of dingoes as ‘sheep-killers’ is so firmly entrenched, it has been commonly assumed that dingoes killed off the thylacines and devils on mainland Australia. People who support this view also point out that thylacines and devils persisted on the island of Tasmania, which was never colonised by dingoes (although thylacines went extinct there too in the early 1900s). To date, most discussion of the mainland thylacine and devil extinctions has focused on the possibility that dingoes disrupted the system through ‘exploitation competition’ (eating the same prey), ‘interference competition’ (wasting the native predators’ precious munching time) or ‘direct predation’ (dingoes actually eating devils and thylacines).

Guilty until proven innocent

18 07 2013

The precautionary principle – the idea that one should adopt an approach that minimises risk – is so ingrained in the mind of the conservation scientist that we often forget what it really means, or the reality of its implementation in management and policy. Indeed, it has been written about extensively in the peer-reviewed conservation literature for at least 20 years (some examples here, here, here and here).

From a purely probabilistic viewpoint, the concept is flawlessly logical for most conservation questions. For example, if a particular by-catch of a threatened species is predicted [from a model] to result in a long-term rate of instantaneous population change (r) of -0.02 to 0.01 [uniform distribution], then even though that interval envelops r = 0, one can see that reducing the harvest rate a little more, until the lower bound exceeds zero, is a good idea to avoid potentially pushing the population down even further. In this way, our modelling results would recommend a policy that formally incorporates the uncertainty of our predictions, without trying to make our classically black-and-white laws legislate uncertainty directly.
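
A quick R sketch of that numerical example (my own illustration of the arithmetic; the starting abundance is hypothetical):

```r
# r drawn uniformly on [-0.02, 0.01], as in the example above
set.seed(123)
r <- runif(10000, min = -0.02, max = 0.01)

mean(r < 0)               # probability of decline: 0.02/0.03 = ~0.67
N0 <- 1000                # hypothetical starting abundance
N50 <- N0 * exp(r * 50)   # projected abundance after 50 years (N_t = N_0 e^(rt))
round(quantile(N50, c(0.025, 0.5, 0.975)))
```

Even though the interval includes positive growth, two-thirds of the probability mass sits below r = 0 – exactly the asymmetry the precautionary principle tells us to act on.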

Ecology: the most important science of our times

12 07 2013

The title of this post is deliberately intended to be provocative, but stay with me – I do have an important point to make.

I’m sure almost every scientist in almost any discipline feels that her or his particular knowledge quest is “the most important”. Admittedly, some branches of science are more applied than others – I have yet to be convinced, for example, that string theory has an immediate human application, whereas medical science certainly does provide answers to useful questions regarding human health. But the passion for one’s own particular discipline likely engenders a sort of tunnel vision about its intrinsic importance.

So it comes down to how one defines ‘important’. I’m not advocating in any way that application or practicality should be the only yardstick to ascertain importance. I think superficially impractical, ‘blue-skies’ theoretical endeavours are essential precursors to all so-called applied sciences. I’ll even go so far as to say that there is fundamentally no such thing as a completely unapplied science discipline or question. As I’ve said many times before, ‘science’ is a brick wall of evidence, where individual studies increase the strength of the wall to a point where we can call it a ‘theory’. Occasionally a study comes along and smashes the wall (paradigm shift), at which point we begin to build a new one.

Software tools for conservation biologists

8 04 2013

Given the popularity of certain prescriptive posts on ConservationBytes.com, I thought it prudent to compile a list of software that my lab and I have found particularly useful over the years. This list is not meant to be comprehensive, but it will give you a taste of what’s out there. I don’t list the plethora of conservation-genetics software available (mainly because of my lack of experience with it), but if this is your chosen area, I’d suggest starting with Dick Frankham‘s excellent book, An Introduction to Conservation Genetics.

1. R: If you haven’t yet loaded the open-source R programming language on your machine, do it now. It is the single most useful bit of statistical and programming software available to anyone anywhere in the sciences. Don’t worry if you’re not a fully fledged programmer – there are now enough people using and developing sophisticated ‘libraries’ (packages of functions) that there’s pretty much an application for everything these days. We tend to use R to the exclusion of almost any other statistical software because it makes you learn the technique rather than just blindly pressing the ‘go’ button. You could also stop right here – with R, you can do pretty much everything that the software listed below does; however, you would have to be an exceedingly clever programmer with a lot of spare time. R can also get bogged down when a dataset fills too much RAM, in which case other languages such as Python and C# can be useful.
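
If you’ve never used it, here’s the flavour – a Poisson regression of species counts against island area in a few lines of base R (toy numbers, obviously):

```r
# toy data: species counts on islands of increasing area
counts <- c(2, 5, 9, 14, 22)
area <- c(10, 50, 100, 500, 1000)

# generalised linear model: log-linear richness-area relationship
summary(glm(counts ~ log(area), family = poisson))
```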

2. VORTEX/OUTBREAK/META-MODEL MANAGER, etc.: This suite of individual-based projection software was designed by Bob Lacy & Phil Miller, initially to determine the viability of small (usually captive) populations. The original VORTEX has grown into a multi-purpose, powerful and sophisticated population viability analysis (PVA) package that now links to cousin applications like OUTBREAK (the only off-the-shelf epidemiological software of its kind) via the ‘command centre’ META-MODEL MANAGER (see examples here and here from our lab). There are other add-ons that make almost any population projection and hindcasting application possible. And it’s all free! (Warning: currently unavailable for Mac, although I’ve been pestering Bob to do a Mac version.)
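
This is not VORTEX code (VORTEX is a standalone application, and individual-based at that), but a deliberately simplified, population-level stochastic projection in R gives the flavour of what PVA software estimates – the probability that a small population blinks out (all parameter values here are invented):

```r
# Crude PVA sketch: extinction probability under environmental and
# demographic stochasticity with a ceiling carrying capacity
set.seed(7)
pva <- function(N0, years = 100, reps = 1000,
                r.mean = 0.01, r.sd = 0.15, K = 500) {
  extinct <- logical(reps)
  for (i in seq_len(reps)) {
    N <- N0
    for (t in seq_len(years)) {
      r <- rnorm(1, r.mean, r.sd)         # environmental stochasticity
      N <- min(K, rpois(1, N * exp(r)))   # demographic stochasticity + ceiling
      if (N == 0) break
    }
    extinct[i] <- (N == 0)
  }
  mean(extinct)  # proportion of replicate populations that went extinct
}

pva(N0 = 20)   # small founding population: appreciable extinction risk
pva(N0 = 200)  # larger population: much lower risk
```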

3. RAMAS: RAMAS is the go-to application for spatial population modelling. Developed by the extremely clever Resit Akçakaya, it is one of the only tools that incorporate spatial meta-population dynamics into formal, cohort-based demographic models. It’s also very useful in a climate-change context, where projections of changing habitat suitability form the base layer onto which meta-population dynamics can be modelled. It’s not free, but it’s worth purchasing.
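
Again, not RAMAS itself, but its core idea – patch-level, stage-structured matrix models coupled by dispersal – can be sketched in a few lines of R (matrix entries and the dispersal rate are invented):

```r
# Two patches sharing the same 2-stage Leslie-type matrix, linked by dispersal
L.mat <- matrix(c(0.0, 1.2,    # stage-specific fecundities
                  0.5, 0.7),   # survival / transition rates
                nrow = 2, byrow = TRUE)
disperse <- 0.05               # per-step fraction exchanged between patches

N <- list(p1 = c(50, 30), p2 = c(10, 5))   # initial stage abundances
for (t in 1:20) {
  N <- lapply(N, function(n) L.mat %*% n)  # local demography in each patch
  moved <- disperse * (N$p2 - N$p1)        # net movement down the density gradient
  N$p1 <- N$p1 + moved
  N$p2 <- N$p2 - moved
}
sapply(N, sum)   # total abundance per patch after 20 time steps
```

RAMAS layers habitat-suitability maps, correlated environmental stochasticity and many other refinements on top of exactly this skeleton.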