A fairer way to rank a researcher’s relative citation performance?

23 04 2020

I do a lot of grant assessments for various funding agencies, including two years on the Royal Society of New Zealand’s Marsden Fund Panel (Ecology, Evolution, and Behaviour), and currently as an Australian Research Council College Expert (not to mention assessing a heap of other grant applications).

Sometimes this means I have to read hundreds of proposals involving even more researchers, all of whom I’m meant to assess for their scientific performance within a short period (sometimes only a few weeks). It’s a hard job, and I doubt very much that there’s a completely fair way to rank a researcher’s ‘performance’ quickly and efficiently.

It’s for this reason that I’ve tried to find ways to rank people in the most objective way possible. This of course does not discount reading a person’s full CV and profile, and certainly taking into consideration career breaks, opportunities, and other extenuating circumstances. But I’ve tended to do a first pass based primarily on citation indices, and then adjust those according to the extenuating circumstances.

But the ‘first pass’ part of the equation has always bothered me. We know that different fields have different rates of citation accumulation, that citations accumulate with age (including the much heralded h-index), and that there are gender (and other) biases in citations that aren’t easily corrected.

I’ve generally relied on the ‘m-index’, which is simply one’s h-index divided by the number of years one has been publishing. While this acts as a sort of age correction, it’s still unsatisfactory, essentially because I’ve noticed that it tends to penalise early career researchers in particular. I’ve tried to account for this by comparing people roughly within the same phase of career, but it’s still a subjective exercise.
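
To make the arithmetic explicit, here is a minimal sketch (my own illustration, using a made-up citation record rather than anyone’s real one) of how the h-index and the m-index are computed:

```python
def h_index(citations):
    """h = the largest number h such that at least h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def m_index(citations, years_publishing):
    """m = h-index divided by the number of years since the first publication."""
    return h_index(citations) / years_publishing

# hypothetical researcher: 8 papers, first paper published 5 years ago
paper_citations = [25, 18, 12, 7, 5, 3, 1, 0]
print(h_index(paper_citations))     # 5
print(m_index(paper_citations, 5))  # 1.0
```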

I’ve recently been playing with an alternative that I think might be a way forward. Bear with me here, for it takes a bit of explaining. Read the rest of this entry »





Did people or climate kill off the megafauna? Actually, it was both

10 12 2019

When freshwater dried up, so did many megafauna species.
Centre of Excellence for Australian Biodiversity and Heritage, Author provided

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Earth is now firmly in the grips of its sixth “mass extinction event”, and it’s mainly our fault. But the modern era is definitely not the first time humans have been implicated in the extinction of a wide range of species.

In fact, starting about 60,000 years ago, many of the world’s largest animals disappeared forever. These “megafauna” were first lost in Sahul, the supercontinent formed by Australia and New Guinea during periods of low sea level.

The causes of these extinctions have been debated for decades. Possible culprits include climate change, hunting or habitat modification by the ancestors of Aboriginal people, or a combination of the two.


Read more: What is a ‘mass extinction’ and are we in one now?


The main way to investigate this question is to build timelines of major events: when species went extinct, when people arrived, and when the climate changed. This approach relies on using dated fossils from extinct species to estimate when they went extinct, and archaeological evidence to determine when people arrived.


Read more: An incredible journey: the first people to arrive in Australia came in large numbers, and on purpose


Comparing these timelines allows us to deduce the likely windows of coexistence between megafauna and people.

We can also compare this window of coexistence to long-term models of climate variation, to see whether the extinctions coincided with or shortly followed abrupt climate shifts.

Data drought

One problem with this approach is the scarcity of reliable data due to the extreme rarity of a dead animal being fossilised, and the low probability of archaeological evidence being preserved in Australia’s harsh conditions. Read the rest of this entry »





First Australians arrived in large groups using complex technologies

18 06 2019


One of the most ancient peopling events of the great diaspora of anatomically modern humans out of Africa more than 50,000 years ago — human arrival in the great continent of Sahul (New Guinea, mainland Australia & Tasmania joined during periods of low sea level) — remains mysterious. The entry routes taken, whether migration was directed or accidental, and just how many people were needed to ensure population viability are shrouded by the mists of time. This prompted us to build stochastic, age-structured human population-dynamics models incorporating hunter-gatherer demographic rates and palaeoecological reconstructions of environmental carrying capacity to predict the founding population necessary to survive the initial peopling of late-Pleistocene Sahul.

As ecological modellers, we are often asked by other scientists to attempt to render the highly complex mechanisms of entire ecosystems tractable for virtual manipulation and hypothesis testing through the inevitable simplification that is ‘a model’. When we work with scientists studying long-since-disappeared ecosystems, the challenges multiply.

Add some multidisciplinary data and concepts into the mix, and the complexity can quickly escalate.

We do have, however, some powerful tools in our modelling toolbox, so as the Modelling Node for the Australian Research Council Centre of Excellence for Australian Biodiversity and Heritage (CABAH), our role is to link disparate fields like palaeontology, archaeology, geochronology, climatology, and genetics together with mathematical ‘glue’ to answer the big questions regarding Australia’s ancient past.

This is how we tackled one of these big questions: just how did the first anatomically modern Homo sapiens make it to the continent and survive?

At that time, Australia was part of the giant continent of Sahul that connected New Guinea, mainland Australia, and Tasmania at times of lower sea level. In fact, throughout most of the last ~ 126,000 years (late Pleistocene and much of the Holocene), Sahul was the dominant landmass in the region (see this handy online tool for how the coastline of Sahul changed over this period).

Read the rest of this entry »





Legacy of human migration on the diversity of languages in the Americas

12 09 2018

This might seem a little left-of-centre for CB.com subject matter, but hang in there, this does have some pretty important conservation implications.

In our quest to be as transdisciplinary as possible, I’ve teamed up with a few people outside my discipline to put together a PhD modelling project that could really help us understand how human colonisation shaped not only ancient ecosystems, but also our own ancient cultures.

Thanks largely to the efforts of Dr Frédérik Saltré here in the Global Ecology Laboratory, at Flinders University, and in collaboration with Dr Bastien Llamas (Australian Centre for Ancient DNA), Joshua Birchall (Museu Paraense Emílio Goeldi, Brazil), and Lars Fehren-Schmitz (University of California at Santa Cruz, USA), I think the student could break down a few disciplinary boundaries here and provide real insights into the causes and consequences of human expansion into novel environments.

Interested? See below for more details.

Languages are ‘documents of history’ and historical linguists have developed comparative methods to infer patterns of human prehistory and cultural evolution. The Americas present a more substantive diversity of indigenous language stock than any other continent; however, whether such a diversity arose from initial human migration pathways across the continent is still unknown, because the primary proxy used (i.e., archaeological evidence) to study modern human migration is both too incomplete and biased to inform any regional inference of colonisation trajectories. Read the rest of this entry »





Prioritising your academic tasks

18 04 2018

The following is an abridged version of one of the chapters in my recent book, The Effective Scientist, regarding how to prioritise your tasks in academia. For a more complete treatise of the issue, access the full book here.


Splitting tasks. © René Campbell renecampbellart.com

How the hell do you balance all the requirements of an academic life in science? From actually doing the science, analysing the data, writing papers, reviewing, writing grants, to mentoring students — not to mention trying to have a modicum of a life outside of the lab — you can quickly end up feeling a little daunted. While there is no empirical formula that makes you run your academic life efficiently all the time, I can offer a few suggestions that might make your life just a little less chaotic.

Priority 1: Revise articles submitted to high-ranked journals

Barring a family emergency, my top priority is always revising an article that has been sent back to me from a high-ranking journal for revisions. Spend whatever time is needed to complete the requested revisions.

Priority 2: Revise articles submitted to lower-ranked journals

I could have lumped this priority with the previous, but I think it is necessary to distinguish the two should you find yourself in the fortunate position of having to do more than one revision at a time.

Priority 3: Experimentation and field work

Most of us need data before we can write papers, so this is high on my personal priority list. If field work is required, then obviously this will be your dominant preoccupation for sometimes extended periods. Many experiments can also be highly time-consuming, while others can be done in stages or run in the background while you complete other tasks.

Priority 4: Databasing

This one could be easily forgotten, but it is a task that can take up a disproportionate amount of your time if you do not deliberately fit it into your schedule. Well-organised, abundantly meta-tagged, intuitive, and backed-up databases are essential for effective scientific analysis; good data are useless if you cannot find them or understand to what they refer. Read the rest of this entry »





The Effective Scientist

22 03 2018

What is an effective scientist?

The more I have tried to answer this question, the more it has eluded me. Before I even venture an attempt, it is necessary to distinguish the more esoteric term ‘effective’ from the more pedestrian term ‘success’. Even ‘success’ can be defined and quantified in many different ways. Is the most successful scientist the one who publishes the most papers, gains the most citations, earns the most grant money, gives the most keynote addresses, lectures the most undergraduate students, supervises the most PhD students, appears on the most television shows, or the one whose results improve the most lives? The unfortunate and wholly unsatisfying answer to each of those components is ‘yes’, but neither is the answer restricted to the superlative of any one of those. What I mean here is that you need to do reasonably well (i.e., relative to your peers, at any rate) in most of these things if you want to be considered ‘successful’. The relative contribution of your performance in these components will vary from person to person, and from discipline to discipline, but most undeniably ‘successful’ scientists do well in many or most of these areas.

That’s the opening paragraph for my new book, which was finally released for sale today in the United Kingdom and Europe (the Australasian release is scheduled for 7 April, and 30 April for North America). Published by Cambridge University Press, The Effective Scientist: A Handy Guide to a Successful Academic Career is the culmination of many years of work on all the things an academic scientist today needs to know, but was never taught formally.

Several people have asked me why I decided to write this book, so a little history of its genesis is in order. I suppose my over-arching drive was to create something that I sincerely wish had existed when I was a young scientist just starting out on the academic career path. I was focussed on learning my science, and didn’t necessarily have any formal instruction in all the other varied duties I’d eventually be expected to do well, from how to write papers efficiently, to how to review properly, how to manage my grant money, how to organise and store my data, how to run a lab smoothly, how to get the most out of a conference, how to deal with the media, to how to engage in social media effectively (even though the latter didn’t really exist yet at the time) — all of these so-called ‘extra-curricular’ activities associated with an academic career were things I would eventually just have to learn as I went along. I’m sure you’ll agree, there has to be a better way than just muddling through one’s career picking up haphazard experience. Read the rest of this entry »





Two new postdoctoral positions in ecological network & vegetation modelling announced

21 07 2017


With the official start of the new ARC Centre of Excellence for Australian Biodiversity and Heritage (CABAH) in July, I am pleased to announce two new CABAH-funded postdoctoral positions (a.k.a. Research Associates) in my global ecology lab at Flinders University in Adelaide (Flinders Modelling Node).

One of these positions is a little different, and represents something of an experiment. The Research Associate in Palaeo-Vegetation Modelling is restricted to women candidates; in other words, we’re only accepting applications from women for this one. It’s a step in the right direction in our quest to improve the gender balance in my lab, and in universities in general.

The project itself is not overly prescribed, but we would like something along the following lines of inquiry: Read the rest of this entry »





Sensitive numbers

22 03 2016
A sensitive parameter. © toondoo.com

You couldn’t really do ecology if you didn’t know how to construct even the most basic mathematical model — even a simple regression is a model (the non-random relationship of some variable to another). The good thing about even these simple models is that it is fairly straightforward to interpret the ‘strength’ of the relationship, in other words, how much variation in one thing can be explained by variation in another. Provided the relationship is real (not random), and provided there is at least some indirect causation implied (i.e., it is not just a spurious coincidence), then there are many simple statistics that quantify this strength — in the case of our simple regression, the coefficient of determination (R²) statistic is usually a good approximation of this.
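
As a toy illustration of that last point (the data below are invented, not from any real analysis), here is how the coefficient of determination falls out of a simple least-squares fit:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 2.0, size=x.size)  # a linear signal plus noise

slope, intercept = np.polyfit(x, y, 1)                # ordinary least-squares fit
y_hat = slope * x + intercept

ss_res = np.sum((y - y_hat) ** 2)                     # residual sum of squares
ss_tot = np.sum((y - y.mean()) ** 2)                  # total sum of squares
r_squared = 1 - ss_res / ss_tot                       # proportion of variance explained

print(round(float(r_squared), 2))
```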

In the case of more complex multivariate correlation models, the coefficient of determination is sometimes insufficient, in which case you might need to rely on statistics such as the proportion of deviance explained, or the marginal and/or conditional variance explained.

When you go beyond this correlative model approach and start constructing more mechanistic models that emulate ecological phenomena from the bottom-up, things get a little more complicated when it comes to quantifying the strength of relationships. Perhaps the most well-known category of such mechanistic models is the humble population viability analysis, abbreviated to PVA.

Let’s take the simple case of a four-parameter population model we could use to project population size over the next 10 years for an endangered species that we’re introducing to a new habitat. We’ll assume that we have the following information: the size of the founding (introduced) population (n), the juvenile survival rate (Sj, the proportion of juveniles surviving from birth to their first year), the adult survival rate (Sa, the annual survival rate of adults from year 1 to maximum longevity), and the fertility rate of mature females (m, the number of offspring born per female per reproductive cycle). Each one of these parameters has an associated uncertainty (ε) that combines both measurement error and environmental variation.

If we just took the mean value of each of these three demographic rates (survivals and fertility) and projected a founding population of n = 10 individuals for 10 years into the future, we would have a single, deterministic estimate of the average outcome of introducing 10 individuals. As we already know, however, the variability, or stochasticity, is more important than the average outcome, because uncertainty in the parameter values (ε) means that a non-negligible number of model iterations will result in the extinction of the introduced population. This is something that most conservationists will obviously want to minimise.

So each time we run an iteration of the model, and generally for each breeding interval (most often 1 year at a time), we choose (based on some random-sampling regime) a different value for each parameter. This gives us a distribution of outcomes after the 10-year projection. Let’s say we did 1000 iterations like this; the proportion of those iterations in which the population went extinct provides an estimate of the population’s extinction probability over that interval. Of course, we would probably also vary the size of the founding population (say, between 10 and 100) to see at what point the extinction probability becomes acceptably low for managers (i.e., as close to zero as possible), without requiring so many founding individuals that the introduction becomes too laborious or expensive. Read the rest of this entry »
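
For those who like to see the machinery, here is a bare-bones sketch of that kind of stochastic projection. The two-stage structure and all parameter values are mine, purely for illustration; a real PVA would use estimates for the species in question (and usually purpose-built software).

```python
import numpy as np

rng = np.random.default_rng(42)

def extinction_probability(n0, years=10, iters=1000):
    """Proportion of stochastic iterations in which the population hits zero."""
    extinct = 0
    for _ in range(iters):
        adults = n0
        for _ in range(years):
            # redraw each demographic rate every year to represent its uncertainty (epsilon)
            sj = float(np.clip(rng.normal(0.50, 0.10), 0, 1))  # juvenile survival, birth to year 1
            sa = float(np.clip(rng.normal(0.80, 0.05), 0, 1))  # adult survival
            m = max(rng.normal(1.2, 0.3), 0.0)                  # offspring per female per cycle
            births = rng.poisson(m * adults / 2)                # assume half the adults are breeding females
            adults = rng.binomial(adults, sa) + rng.binomial(births, sj)
            if adults == 0:
                extinct += 1
                break
    return extinct / iters

# vary the founding population to see where extinction risk becomes acceptably low
for n0 in (10, 25, 50, 100):
    print(n0, extinction_probability(n0))
```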





Ice Age? No. Abrupt warmings and hunting together polished off Holarctic megafauna

24 07 2015

Oh shit oh shit oh shit …

Did ice ages cause the Pleistocene megafauna to go extinct? Contrary to popular opinion, no, they didn’t. But climate change did have something to do with the extinctions; only it was abrupt warming events, not the cold, that did the damage.

Just out today in Science, our long-time-coming (9 years in total if you count the time from the original idea to today) paper ‘Abrupt warmings drove Late Pleistocene Holarctic megafaunal turnover‘ demonstrates for the first time that abrupt warming periods over the last 60,000 years were at least partially responsible for the collapse of the megafauna in Eurasia and North America.

You might recall that I’ve been a bit sceptical of claims that climate changes had much to do with megafauna extinctions during the Late Pleistocene and early Holocene, mainly because of the overwhelming evidence that humans had a big part to play in their demise (surprise, surprise). What I rejected, though, was never the idea that climate played some role in the extinctions; rather, I took issue with claims that climate change was the dominant driver. I’ve also had problems with blanket claims that it was ‘always this’ or ‘always that’, when the complexity of biogeography and community dynamics means that it was most assuredly more complicated than most people think.

I’m happy to say that our latest paper indeed demonstrates the complexity of megafauna extinctions, and that it took a heap of fairly complex datasets and analyses to demonstrate. Not only were the data varied – the combination of scientists involved was just as eclectic, with ancient DNA specialists, palaeo-climatologists and ecological modellers (including yours truly) assembled to make sense of the complicated story that the data ultimately revealed. Read the rest of this entry »





School finishers and undergraduates ill-prepared for research careers

22 05 2014

Having been for years now at the pointy end of the educational pathway training the next generation of scientists, I’d like to share some of my observations regarding how well we’re doing. At least in Australia, my realistic assessment of science education is: not well at all.

I’ve been thinking about this for some time, but only now decided to put my thoughts into words as the train wreck of our current government lurches toward a future guaranteeing an even stupider society. Charging postgraduate students to do PhDs for the first time, encouraging a US-style system of wealth-based educational privilege, slashing education budgets and de-investing in science while promoting the belief in invisible spaghetti monsters from space, are all the latest in the Fiberal future nightmare that will change our motto to “Australia – the stupid country”.

As you can appreciate, I’m not filled with a lot of hope that the worrying trends I’ve observed over the past 10 years or so are going to get any better any time soon. To be fair though, the problems go beyond the latest stupidities of the Fiberal government.

My realisation that there was a problem has crystallised only recently as I began to notice that most of my lab members were not Australian. In fact, the percentage of Australian PhD students and post-doctoral fellows in the lab usually hovers around 20%. Another sign of a problem was that even when we advertised for several well-paid postdoctoral positions, not a single Australian made the interview list (in fact, few Australians applied at all). I’ve also talked to many of my colleagues around Australia in the field of quantitative ecology, and many lament the same general trend.

Is it just poor mathematical training? Yes and no. Australian universities have generally lowered their entry-level requirements for basic maths, thereby perpetuating the already poor skill base of school leavers. Why? Bums (that pay) on seats. This means that people like me struggle to find Australian candidates that can do the quantitative research we need done. We are therefore forced to look overseas. Read the rest of this entry »





Putting the ‘science’ in citizen science

30 04 2014

How to tell if a koala has been in your garden. © Great Koala Count

When I was in Finland last year, I had the pleasure of meeting Tomas Roslin and hearing him describe his Finland-wide citizen-science project on dung beetles. What impressed me most was that it completely flipped my general opinion about citizen science and showed me that the process can be useful.

I’m not trying to sound arrogant or scientifically elitist here – I’m merely stating that it was my opinion that most citizen-science endeavours fail to provide truly novel, useful and rigorous data for scientific hypothesis testing. Well, I must admit that I still believe that ‘most’ citizen-science data meet that description (although there are exceptions – see here for an example), but Tomas’ success showed me just how good they can be.

So what’s the problem with citizen science? Nothing, in principle; in fact, it’s a great idea. Convince keen amateur naturalists over a wide area to observe (as objectively as possible) some ecological phenomenon or function, record the data, and submit them to a scientist to test some brilliant hypothesis. If it works, chances are the data are of much broader coverage and more intensively sampled than could ever be done (or afforded) by a single scientific team alone. So why don’t we do this all the time?

If you’re a scientist, I don’t need to tell you how difficult it is to design a good experimental sampling regime, how even more difficult it is to ensure objectivity and precision when sampling, and the fastidiousness with which the data must be recorded and organised digitally for final analysis. And that’s just for trained scientists! Imagine an army of well-intentioned, but largely inexperienced, samplers, and you can quickly visualise how the errors might accumulate exponentially in a dataset so that it eventually becomes too unreliable for any real scientific application.

So for these reasons, I’ve been largely reluctant to engage with large-scale citizen-science endeavours. However, I’m proud to say that I have now published my first paper based entirely on citizen science data! Call me a hypocrite (or a slow learner). Read the rest of this entry »





Cleaning up the rubbish: Australian megafauna extinctions

15 11 2013

A few weeks ago I wrote a post about how to run the perfect scientific workshop, which most of you thought was a good set of tips (bizarrely, one person was quite upset with the message; I saved him the embarrassment of looking stupid online and refrained from publishing his comment).

As I mentioned at the end of that post, the stimulus for the topic was a particularly wonderful workshop 12 of us attended at beautiful Linnaeus Estate on the northern coast of New South Wales (see Point 5 in the ‘workshop tips’ post).

But why did a group of ecological modellers (me, Barry Brook, Salvador Herrando-Pérez, Fréd Saltré, Chris Johnson, Nick Beeton), geneticists, palaeontologists (Gav Prideaux), fossil dating specialists (Dizzy Gillespie, Bert Roberts, Zenobia Jacobs) and palaeo-climatologists (Michael Bird, Chris Turney [in absentia]) get together in the first place? Hint: it wasn’t just for the beautiful beach and good wine.

I hate to say it – mainly because it deserves as little attention as possible – but the main reason is that we needed to clean up a bit of rubbish. The rubbish in question is the latest bit of excrescence growing on that accumulating heap produced by a certain team of palaeontologists promulgating their ‘it’s all about the climate or nothing’ broken record.

Read the rest of this entry »





Biogeography comes of age

22 08 2013

This week has been all about biogeography for me. While I wouldn’t call myself a ‘biogeographer’, I certainly do apply a lot of the discipline’s techniques.

This week I’m attending the 2013 joint Congress of Ecology of the International Association for Ecology (INTECOL) and the British Ecological Society in London, and I have purposefully sought out more of the biogeographical talks than pretty much anything else because the speakers were engaging and the topics fascinating. As it happens, even my own presentation had a strong biogeographical flavour this year.

Although the species-area relationship (SAR) is only one small aspect of biogeography, I’ve been slightly amazed that, more than 50 years after MacArthur & Wilson’s famous book, our discipline is still obsessed with SAR.

I’ve blogged about SAR issues before – what makes it so engaging and controversial is that SAR is the principal tool to estimate overall extinction rates, even though it is perhaps one of the bluntest tools in the ecological toolbox. I suppose its popularity stems from its superficial simplicity – as the area of a (classically oceanic) island increases, so too does the total number of species it can hold. The controversies surrounding such a basic relationship centre on describing the rate of that species-richness increase with area – in other words, just how nonlinear the SAR itself is.

Even a cursory understanding of maths reveals the importance of estimating this curve correctly. As the area of an ‘island’ (habitat fragment) decreases due to human disturbance, estimating how many species end up going extinct as a result depends entirely on the shape of the SAR. Get the SAR wrong, and you can over- or under-estimate the extinction rate. This was the crux of the palaver over Fangliang He (not attending INTECOL) & Stephen Hubbell’s (attending INTECOL) paper in Nature in 2011.

The first real engagement with SAR came in John Harte’s maximum-entropy talk in the process macroecology session on Tuesday. What was notable to me was his adamant claim that the power-law form of SAR should never be used, despite its commonness in the literature. I took this with a grain of salt because I know all about how messy area-richness data can be, and why one needs to consider alternative models (see an example here). But then yesterday I listened to one of the greats of biogeography – Robert Whittaker – who said pretty much the complete opposite of Harte’s contention. Whittaker showed results from one of his papers published last year indicating that the power law was in fact the most commonly supported SAR across many datasets (granted, there was substantial variability in overall model performance). My conclusion remains firm – make sure you use multiple models for each individual dataset and try to infer the SAR from model-averaging. Read the rest of this entry »
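
In that spirit, here is a rough sketch of what comparing two common SAR forms with an information criterion looks like in practice. The richness data are invented, and a real analysis would fit a larger candidate set and model-average the predictions:

```python
import numpy as np

area = np.array([1, 5, 10, 50, 100, 500, 1000], dtype=float)    # island or fragment areas
richness = np.array([4, 9, 12, 22, 27, 42, 50], dtype=float)    # species counts (invented)

def aic_ls(y, y_hat, k):
    """Approximate AIC for a least-squares fit with k estimated parameters (+1 for the error variance)."""
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / n) + 2 * (k + 1)

# power law: S = c * A^z, fitted as log(S) = log(c) + z*log(A), then back-transformed
z_pow, log_c = np.polyfit(np.log(area), np.log(richness), 1)
pred_pow = np.exp(log_c) * area ** z_pow

# semi-log: S = c + z*log(A)
z_log, c_log = np.polyfit(np.log(area), richness, 1)
pred_log = c_log + z_log * np.log(area)

print("power-law AIC:", round(float(aic_ls(richness, pred_pow, 2)), 1))
print("semi-log AIC: ", round(float(aic_ls(richness, pred_log, 2)), 1))
```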





Don’t blame it on the dingo

21 08 2013

Our postdoc, Tom Prowse, has just had one of the slickest sets of reviews I’ve ever seen, followed by a quick acceptance of what I think is a pretty sexy paper. Earlier this year, his paper in Journal of Animal Ecology showed that the thylacine (the badly named ‘Tasmanian tiger’) was most likely not the victim of some unobserved mystery disease, but instead succumbed to what many large predators have (and will): human beings. His latest effort, now online in Ecology, shows that the thylacine and devil extinctions on the Australian mainland were similarly the result of humans and not the scapegoat dingo. But I’ll let him explain:

‘Regime shifts’ can occur in ecosystems when even a single component is added or changed. Such additions, of say a new predator, or changes such as a rise in temperature, can fundamentally alter core ecosystem functions and processes, causing the ecosystem to switch to some alternative stable state.

Some of the most striking examples of ecological regime shifts are the mass extinctions of large mammals (‘megafauna’) during human prehistory. In Australia, human arrival and subsequent hunting pressure is implicated in the rapid extinction of about 50 mammal species by around 45 thousand years ago. The ensuing alternative stable state was comprised of a reduced diversity of predators, dominated by humans and two native marsupial predators ‑ the thylacine (also known as the marsupial ‘tiger’ or ‘wolf’) and the devil (which is now restricted to Tasmania and threatened by a debilitating, infectious cancer).

Both thylacines and devils lasted on mainland Australia for over 40 thousand years following the arrival of humans. However, a second regime shift resulted in the extinction of both these predators by about 3 thousand years ago, which was coincidentally just after dingoes were introduced to Australia. Dingoes are descended from early domestic dogs and were introduced to northern Australia from Asia by ancient traders approximately 4 thousand years ago. Today, they are Australia’s only top predator remaining, other than invasive European foxes and feral cats. Since the earliest days of European settlement, dingoes have been persecuted because they prey on livestock. During the 1880s, 5614 km of ‘dingo fence’ was constructed to protect south-east Australia’s grazing rangelands from dingo incursions. The fence is maintained to this day, and dingoes are poisoned and shot both inside and outside this barrier, despite mounting evidence that these predators play a key role in maintaining native ecosystems, largely by suppressing invasive predators.

Perhaps because the public perception of dingoes as ‘sheep-killers’ is so firmly entrenched, it has been commonly assumed that dingoes killed off the thylacines and devils on mainland Australia. People who support this view also point out that thylacines and devils persisted on the island of Tasmania, which was never colonised by dingoes (although thylacines went extinct there too in the early 1900s). To date, most discussion of the mainland thylacine and devil extinctions has focused on the possibility that dingoes disrupted the system by ‘exploitation competition’ (eating the same prey), ‘interference competition’ (wasting the native predators’ precious munching time), as well as ‘direct predation’ (dingoes actually eating devils and thylacines). Read the rest of this entry »





Guilty until proven innocent

18 07 2013

The precautionary principle – the idea that one should adopt an approach that minimises risk – is so ingrained in the mind of the conservation scientist that we often forget what it really means, or the reality of its implementation in management and policy. Indeed, it has been written about extensively in the peer-reviewed conservation literature for over 20 years at least (some examples here, here, here and here).

From a purely probabilistic viewpoint, the concept is flawlessly logical in most conservation questions. For example, if a particular by-catch of a threatened species is predicted [from a model] to result in a long-term rate of instantaneous population change (r) of -0.02 to 0.01 [uniform distribution], then even though that interval envelops r = 0, one can see that reducing the harvest rate a little more until the lower bound is greater than zero is a good idea to avoid potentially pushing down the population even more. In this way, our modelling results would recommend a policy that formally incorporates the uncertainty of our predictions without actually trying to make our classically black-and-white laws try to legislate uncertainty directly. Read the rest of this entry »
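
To put rough numbers on that example (the interval for r comes from the text above; everything else is my own back-of-the-envelope illustration), here is how quickly the risk becomes apparent:

```python
import numpy as np

rng = np.random.default_rng(0)
r = rng.uniform(-0.02, 0.01, size=100_000)  # instantaneous rate of population change

p_decline = np.mean(r < 0)                  # fraction of plausible r values implying decline
print(round(float(p_decline), 2))           # ~0.67: decline is twice as likely as growth

# at the lower bound of the interval, the expected population after 20 years
print(round(np.exp(-0.02 * 20), 2))         # ~0.67 of the starting size
```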





Ecology: the most important science of our times

12 07 2013

The title of this post is deliberately intended to be provocative, but stay with me – I do have an important point to make.

I’m sure most every scientist in almost any discipline feels that her or his particular knowledge quest is “the most important”. Admittedly, there are some branches of science that are more applied than others – I have yet to be convinced, for example, that string theory has an immediate human application, whereas medical science certainly does provide answers to useful questions regarding human health. But the passion for one’s own particular science discipline likely engenders a sort of tunnel vision about its intrinsic importance.

So it comes down to how one defines ‘important’. I’m not advocating in any way that application or practicality should be the only yardstick to ascertain importance. I think superficially impractical, ‘blue-skies’ theoretical endeavours are essential precursors to all so-called applied sciences. I’ll even go so far as to say that there is fundamentally no such thing as a completely unapplied science discipline or question. As I’ve said many times before, ‘science’ is a brick wall of evidence, where individual studies increase the strength of the wall to a point where we can call it a ‘theory’. Occasionally a study comes along and smashes the wall (paradigm shift), at which point we begin to build a new one. Read the rest of this entry »





Software tools for conservation biologists

8 04 2013

Given the popularity of certain prescriptive posts on ConservationBytes.com, I thought it prudent to compile a list of software that my lab and I have found particularly useful over the years. This list is not meant to be comprehensive, but it will give you a taste for what’s out there. I don’t list the plethora of conservation genetics software that is available (generally given my lack of experience with it), but if this is your chosen area, I’d suggest starting with Dick Frankham‘s excellent book, An Introduction to Conservation Genetics.

1. R: If you haven’t yet loaded the open-source R programming language on your machine, do it now. It is the single most useful bit of statistical and programming software available to anyone anywhere in the sciences. Don’t worry if you’re not a fully fledged programmer – there are now enough people using and developing sophisticated ‘libraries’ (packages of functions) that there’s pretty much an application for everything these days. We tend to use R to the exclusion of almost any other statistical software because it makes you learn the technique rather than just blindly pressing the ‘go’ button. You could also stop right here – with R, you can do pretty much everything else that the software listed below does; however, you have to be an exceedingly clever programmer and have a lot of spare time. R can also sometimes get bogged down when RAM fills up, in which case other languages such as Python, or compiled languages such as C#, are useful.

2. VORTEX/OUTBREAK/META-MODEL MANAGER, etc.: This suite of individual-based projection software was designed by Bob Lacy & Phil Miller initially to determine the viability of small (usually captive) populations. The original VORTEX has grown into a multi-purpose, powerful and sophisticated population viability analysis package that now links to its cousin applications like OUTBREAK (the only off-the-shelf epidemiological software in existence) via the ‘command centre’ META-MODEL MANAGER (see examples here and here from our lab). There are other add-ons that make almost any population projection and hindcasting application possible. And it’s all free! (warning: currently unavailable for Mac, although I’ve been pestering Bob to do a Mac version).

3. RAMAS: RAMAS is the go-to application for spatial population modelling. Developed by the extremely clever Resit Akçakaya, this is one of the only tools that incorporates spatial meta-population aspects with formal, cohort-based demographic models. It’s also very useful in a climate-change context when you have projections of changing habitat suitability as the base layer onto which meta-population dynamics can be modelled. It’s not free, but it’s worth purchasing. Read the rest of this entry »





Ecology is a Tower of Babel

17 09 2012

The term ‘ecology’ in 16 different languages overlaid on the oil on board ‘The Tower of Babel’ by Flemish Renaissance painter Pieter Bruegel the Elder (1563).

In his song ‘Balada de Babel’, the Spanish artist Luis Eduardo Aute sings several lyrics in unison with the same melody. The effect is a wonderful mess. This is what the scientific literature sounds like when authors generate synonymies (equivalent meaning) and polysemies (multiple meanings), or coin terms to show a point of view. In our recent paper published in Oecologia, we illustrate this problem with regard to ‘density dependence’: a key ecological concept. While the biblical reference is somewhat galling to our atheist dispositions, the analogy is certainly appropriate.

A giant shoal of herring zigzagging in response to a predator; a swarm of social bees tending the multitudinous offspring of their queen; a dense pine forest depriving its own seedlings of light; an over-harvested population of lobsters where individuals can hardly find reproductive mates; pioneering strands of a seaweed colonising a foreign sea after a transoceanic trip attached to the hull of a boat; respiratory parasites spreading in a herd of caribou; or malaria protozoans making their way between mosquitoes and humans – these are all examples of population processes that operate under a density check. The number of individuals within those groups of organisms determines their chances for reproduction, survival or dispersal, which we (ecologists) measure as ‘demographic rates’ (e.g., number of births per mother, number of deaths between consecutive years, or number of immigrants per hectare).

In ecology, the causal relationship between the size of a population and a demographic rate is known as ‘density dependence’ (DD hereafter). This relationship captures the pace at which a demographic rate changes as population size varies in time and/or space. We use DD measurements to infer the operation of social and trophic interactions (cannibalism, competition, cooperation, disease, herbivory, mutualism, parasitism, parasitoidism, predation, reproductive behaviour and the like) between individuals within a population [1, 2], because the intensity of these interactions varies with population size. Thus, as a population of caribou expands, respiratory parasites will have an easier job to disperse from one animal to another. As the booming parasites breed, increased infestations will kill the weakest caribou or reduce the fertility of females investing too much energy to counteract the infection (yes, immunity is energetically costly, which is why you get sick when you are run down). In turn, as the caribou population decreases, so does the population of parasites [3]. In cybernetics, such a toing-and-froing is known as ‘feedback’ (a system that controls itself, like a thermostat controls the temperature of a room) – a ‘density feedback’ (Figure 1) is the kind we are highlighting here. Read the rest of this entry »
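
For readers who prefer equations to caribou, here is a compact sketch of a density feedback in action. It uses the classic Ricker form, in which per-capita growth shrinks as numbers approach a carrying capacity; the parameter values are illustrative only:

```python
import numpy as np

r_max = 0.5   # maximum intrinsic growth rate (at very low density)
K = 1000.0    # carrying capacity
n = 50.0      # starting population size

trajectory = [n]
for _ in range(30):
    n = n * np.exp(r_max * (1 - n / K))  # the feedback: realised growth falls as n approaches K
    trajectory.append(n)

print([round(x) for x in trajectory[::5]])  # numbers climb quickly, then level off near K
```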





Global Ecology postgraduate opportunities

12 08 2012

I should have published these ages ago, but like many things I should have done earlier, I didn’t.

I also apologise for a bit of silence over the past week. After coming back from the ESP Conference in Portland, I’m now back at Stanford University working with Paul Ehrlich trying to finish our book (no sneak peeks yet, I’m afraid). I have to report that we’ve completed about 75% of it, and I’m starting to feel like the end is in sight. We hope to have it published early in 2013.

So here they are – the latest 9 PhD offerings from us at the Global Ecology Laboratory. If you want more information, contact the first supervisor listed at the end of each project’s description.

1. Optimal survey and harvest models for South Australian macropods (I’ve advertised this before, but so far, no takers):

The South Australia Department of Environment, Water and Natural Resources (DEWNR) is custodian of a long-term macropod database derived from the State’s management of the commercial kangaroo harvest industry. The dataset comprises aerial survey data for most of the State from 1978 to the present, along with annual population estimates, quotas and harvests for three species: red kangaroo (Macropus rufus), western grey kangaroo (Macropus fuliginosus), and the euro (Macropus robustus erubescens).

DEWNR wishes to improve the efficiency of surveys and increase the precision of population estimates, as well as provide a more quantitative basis for setting harvest quotas.

We envisage that the PhD candidate will design and construct population models:

  • to predict population size/densities with associated uncertainty, linking fluctuations to environmental variability (including future climate change projections)
  • to evaluate the efficiency of spatially explicit aerial surveys
  • to estimate demographic parameters (e.g., survival rate) from life tables and
  • to estimate spatially explicit sustainable harvest quotas

 Supervisors: me, A/Prof. Phill Cassey, Dr Damien Fordham, Dr Brad Page (DEWNR), Professor Michelle Waycott (DEWNR).

2. Correcting for the Signor-Lipps effect

The ‘Signor-Lipps effect’ in palaeontology is the notion that the last organism of a given species will never be recorded as a fossil given the incomplete nature of the fossil record (the mirror problem is the ‘Jaanusson effect’, where the first occurrence is delayed past the true time of origination). This problem makes inference about the timing and speed of mass extinctions (and evolutionary diversification events) elusive. The problem is further complicated by the concept known as the ‘pull of the recent’, which states that the more time since an event occurred, the greater the probability that evidence of that event will have disappeared (e.g., erased by erosion, hidden by deep burial, etc.).
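
To see why this matters, here is a toy simulation (entirely my own illustration, with arbitrary numbers) of the Signor-Lipps effect: because fossilisation is rare, the youngest dated fossil almost always pre-dates the true extinction, so a species appears to disappear earlier than it really did:

```python
import numpy as np

rng = np.random.default_rng(7)

origin = 120_000              # species present from 120,000 years before present ...
true_extinction = 40_000      # ... until its true extinction at 40,000 years before present
preservation_rate = 1 / 5000  # expected preserved-and-recovered fossils per year (very rare)

n_fossils = rng.poisson(preservation_rate * (origin - true_extinction))
fossil_ages = rng.uniform(true_extinction, origin, size=n_fossils)

if n_fossils == 0:
    print("no fossils recovered at all in this run")
else:
    print("true extinction (years BP):", true_extinction)
    print("youngest fossil (years BP):", int(fossil_ages.min()))  # typically thousands of years too old
```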

In a deep-time context, these problems confound the patterns of mass extinctions – i.e., the abruptness of extinction and the dynamics of recovery and speciation. This PhD project will apply a simulation approach to marine fossil time series (for genera and families, and some individual species) covering the Phanerozoic Aeon, as well as other taxa straddling the K-T boundary (the end-Cretaceous mass extinction). The project will seek to correct for taphonomic biases and assess the degree to which extinction events for different major taxa were synchronous.

The results will also have implications for the famous Sepkoski curve, which describes the apparent logistic increase in marine species diversity over geological time with an approximate ‘carrying capacity’ reached during the Cenozoic. Despite recent demonstration that this increase is partially a taphonomic artefact, a far greater development and validation/sensitivity analysis of underlying statistical models is needed to resolve the true patterns of extinction and speciation over this period.

The approach will be to develop a series of models describing the interaction of the processes of speciation, local extinction and taphonomic ‘erasure’ (pull of the recent) to simulate how these processes interact to create the appearance of growth in numbers of taxa over time (Sepkoski curve) and the abruptness of mass extinction events. The candidate will estimate key parameters in the model to test whether the taphonomic effect is strong enough to be the sole explanation of the apparent temporal increase in species diversity, or whether true diversification accounts for this.

Supervisors: me, Prof. Barry Brook

3. Genotypic relationships of Australian rabbit populations and consequences for disease dynamics

Historical evidence suggests that there were multiple introduction events of European rabbits into Australia. In non-animal model weed systems it is clear that biocontrol efficacy is strongly influenced by the degree of genetic diversity and number of breed variants in the population.

The PhD candidate will build phylogenetic relationships for Australian rabbit populations and develop landscape genetic models for exploring the influence of myxomatosis and rabbit haemorrhagic disease virus (RHDV) on rabbit vital rates (survival, reproduction and dispersal) at regional and local scales. Multi-model synthesis will be used to quantify the relative roles of environment (including climate) and genotype on disease prevalence and virulence in rabbit populations.

Supervisors: A/Prof Phill Cassey, Dr Damien Fordham, Prof Barry Brook Read the rest of this entry »





Parts a whole do not make

17 02 2012

I’m particularly proud of our latest paper for three main reasons:  (1) Salva Herrando-Pérez, lead author and contributor-extraordinaire to CB, has worked extremely hard to get this one out; (2) it is published in a really good journal; and most importantly, (3) it’s the very first empirical demonstration over hundreds of species that just because you have a density effect on some vital rate (e.g., survival, fertility, dispersal), this in no way means you have any evidence at all for density dependence at the population level. Let us explain.

Quantifying variation in population size is an important element for explaining and predicting population dynamics. In models where a vital (demographic) rate responds to change in population size, those ‘density-dependent’ relationships are ecologically understood as being demographic signals of trophic and social interactions, such as parasitism, predation or competition for shelter, because the intensity of those interactions varies with population size.

In fact, density-dependent effects reflect the theoretical capacity of populations to adjust growth and rebound from low or high numbers – and so this concept has become an important metric in population management and conservation  (Eberhardt et al. 2008). Read the rest of this entry »