It’s a tough time for young conservation scientists

24 08 2021

Sure, it's a tough time for everyone, isn't it? But it's a lot worse for the already disadvantaged, and it's only going to go downhill from here. I suppose most people who read this blog can think of myriad ways in which they are, in fact, still privileged and very fortunate (I know that I am).

Nonetheless, I suspect quite a few of us are rather ground down by the onslaught of bad news, some of which I've been responsible for perpetuating myself. Add lockdowns, dwindling job security, and the prospect of dying tragically of a lung infection, and it's no surprise that many of us have become exasperated.

I once wrote that being a conservation scientist is a particularly depressing job, because in our case, knowledge is a source of despair. But as I've shifted my focus from 'preventing disaster' to trying to lessen the degree of future shittiness, I find it easier to get out of bed in the morning.

What can we do in addition to shifting our focus to making the future a little less shitty than it could otherwise be? I have a few tips that you might find useful:

Read the rest of this entry »




… some (models) are useful

8 06 2021

As someone who writes a lot of models, many of them for applied questions in conservation management (e.g., harvest quotas, eradication targets, minimum viable population sizes), and who supervises people writing even more of them, I've had many different experiences with their uptake and implementation by management authorities.

Some of those experiences have involved catastrophic failures to influence any management or policy. One particularly painful memory relates to a model we wrote to assist with optimising approaches to eradicate (or at least, reduce the densities of) feral animals in Kakadu National Park. We even wrote the bloody thing in Visual Basic (a horrible coding language) so people could run the model in Excel. As far as I'm aware, no one ever used it.

Others have been accepted more readily. A shark-harvest model, for example, has (I think, although I have no evidence to support this) been used to justify fishing quotas, and one we completed recently for the eradication of feral pigs on Kangaroo Island (as yet unpublished) has led directly to increased funding for the agency responsible for the programme.

According to Altmetrics (and the online tool I developed to get paper-level Altmetric information quickly), only 3 of the 16 papers that I'd call my most 'applied modelling' papers have been cited in policy documents:

Read the rest of this entry »




Mapping the ‘super-highways’ the First Australians used to cross the ancient land

4 05 2021

Author provided/The Conversation


There are many hypotheses about where the Indigenous ancestors first settled in Australia tens of thousands of years ago, but evidence is scarce.

Few archaeological sites date to these early times. Sea levels were much lower and Australia was connected to New Guinea and Tasmania in a land known as Sahul that was 30% bigger than Australia is today.

Our latest research advances our knowledge about the most likely routes those early Australians travelled as they peopled this giant continent.


Read more: The First Australians grew to a population of millions, much more than previous estimates


We are beginning to get a picture not only of where those first people landed in Sahul, but how they moved throughout the continent.

Navigating the landscape

Modelling human movement requires understanding how people navigate new terrain. Computers make building such models easier, but the models are still far from simple. We reasoned we needed four pieces of information: (1) topography; (2) the visibility of tall landscape features; (3) the presence of freshwater; and (4) the demographics of the travellers.
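
To give a flavour of the kind of computation involved, here is a minimal least-cost-path sketch in R using the raster and gdistance packages. The toy elevation surface, the conductance function, and the start and end coordinates are all illustrative assumptions on my part; they are not the data, rules, or parameters we used in the study.

```r
## Minimal least-cost-path sketch (illustrative only; not the study's model)
library(raster)     # gridded landscape data
library(gdistance)  # transition matrices and least-cost paths

# toy 'topography': a smooth elevation surface on a 100 x 100 grid
elev <- raster(nrows = 100, ncols = 100, xmn = 0, xmx = 100, ymn = 0, ymx = 100)
values(elev) <- as.vector(outer(1:100, 1:100,
                    function(i, j) 50 + 20 * sin(i / 10) * cos(j / 15)))

# conductance between neighbouring cells declines with elevation difference,
# i.e., steeper terrain is more costly to cross (an assumed functional form)
tr <- transition(elev, transitionFunction = function(x) 1 / (1 + abs(x[2] - x[1])),
                 directions = 8)
tr <- geoCorrection(tr, type = "c")   # correct for longer diagonal neighbour distances

origin <- cbind(5, 95)   # hypothetical entry point
goal   <- cbind(95, 5)   # hypothetical destination

costDistance(tr, origin, goal)                            # accumulated travel cost
path <- shortestPath(tr, origin, goal, output = "SpatialLines")
plot(elev); lines(path, lwd = 2)                          # the least-cost route
```

In the real analysis, visibility of prominent landmarks and distance to freshwater would also feed into the cell-to-cell costs; the sketch only captures the topographic ingredient.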

We think people navigated in new territories — much as people do today — by focusing on prominent land features protruding above the relative flatness of the Australian continent. Read the rest of this entry »





The biggest and slowest don’t always bite it first

13 04 2021

For many years I’ve been interested in modelling the extinction dynamics of megafauna. Apart from co-authoring a few demographically simplified (or largely demographically free) models about how megafauna species could have gone extinct, I have never really tried to capture the full nuances of long-extinct species within a fully structured demographic framework.

That is, until now.

But how do you get the life-history data of an extinct animal that was never directly measured? Surely, things like survival, reproductive output, longevity, and even environmental carrying capacity are impossible to discern, and aren't these necessary for a stage-structured demographic model?

Thylacine mum & joey. Nellie Pease & CABAH

The answer to the first part of that question is 'it's possible', and to the second, it's 'yes'. The most important bit of information we palaeo modellers need to construct something that's ecologically plausible for an extinct species is an estimate of body mass. Thankfully, palaeontologists are very good at estimating the mass of the things they dig up (with the associated caveats, of course). From such estimates, we can reconstruct everything from equilibrium densities and maximum rates of population growth to age at first breeding and longevity.

But it’s more complicated than that, of course. In Australia anyway, we’re largely dealing with marsupials (and some monotremes), and they have a rather different life-history mode than most placentals. We therefore have to ‘correct’ the life-history estimates derived from living placental species. Thankfully, evolutionary biologists and ecologists have ways to do that too.
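
To give a flavour of the logic (and only the logic), here's a minimal R sketch that turns a body-mass estimate into a few demographic quantities via generic allometric power laws of the form y = aM^b, with a crude 'marsupial correction' applied at the end. Every coefficient below is a placeholder I've invented for illustration; a real analysis would use published, taxon-appropriate relationships.

```r
## Illustrative allometric reconstruction (placeholder coefficients, not the study's values)
reconstruct_life_history <- function(mass_kg, marsupial = TRUE) {
  # generic power-law form: trait = a * mass^b (all a and b values assumed)
  rmax      <- 0.6 * mass_kg^-0.27   # maximum annual population growth rate
  density   <- 50  * mass_kg^-0.75   # equilibrium density (individuals per km^2)
  alpha     <- 1.2 * mass_kg^0.25    # age at first breeding (years)
  longevity <- 5   * mass_kg^0.20    # maximum longevity (years)

  if (marsupial) {
    # crude 'correction' for marsupial life histories relative to placentals
    # (assumed multipliers; real corrections come from comparative analyses)
    rmax  <- rmax * 0.8
    alpha <- alpha * 1.1
  }
  data.frame(mass_kg, rmax, density, alpha, longevity)
}

# e.g., a very large (~2500 kg) herbivore versus a thylacine-sized (~25 kg) predator
reconstruct_life_history(c(2500, 25))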

The Pleistocene kangaroo Procoptodon goliah, the largest and most heavily built of the short-faced kangaroos, was the largest and most heavily built kangaroo known. It had an unusually short, flat face and forwardly directed eyes, with a single large toe on each foot (reduced from the more normal count of four). Each forelimb had two long, clawed fingers that would have been used to bring leafy branches within reach.

So with a battery of ecological, demographic, and evolutionary tools, we can now create reasonable stochastic-demographic models for long-gone species, like wombat-like creatures as big as cars, birds more than two metres tall, and lizards more than seven metres long that once roamed the Australian continent. 

Ancient clues, in the shape of fossils and archaeological evidence of varying quality scattered across Australia, have formed the basis of several hypotheses about the fate of megafauna that vanished during a peak about 42,000 years ago from the ancient continent of Sahul, comprising mainland Australia, Tasmania, New Guinea and neighbouring islands.

There is a growing consensus that multiple factors were at play, including climate change, the impact of people on the environment, and access to freshwater sources.

Just published in the open-access journal eLife, our latest CABAH paper applies these approaches to assess how susceptible different species were to extinction – and what it means for the survival of species today. 

Using various characteristics such as body size, weight, lifespan, survival rate, and fertility, we (Chris Johnson, John Llewelyn, Vera Weisbecker, Giovanni Strona, Frédérik Saltré & me) created population simulation models to predict the likelihood of these species surviving under different types of environmental disturbance.

Simulations included everything from increasing droughts to increasing hunting pressure to see which species of 13 extinct megafauna (genera: Diprotodon, Palorchestes, Zygomaturus, Phascolonus, Procoptodon, Sthenurus, Protemnodon, Simosthenurus, Metasthenurus, Genyornis, Thylacoleo, Thylacinus, Megalibgwilia), as well as 8 comparative species still alive today (Vombatus, Osphranter, Notamacropus, Dromaius, Alectura, Sarcophilus, Dasyurus, Tachyglossus), had the highest chances of surviving.

We compared the results to what we know about the timing of extinction for different megafauna species derived from dated fossil records. We expected to confirm that the most extinction-prone species were the first species to go extinct – but that wasn’t necessarily the case.

While we did find that slower-growing species with lower fertility, like the rhino-sized wombat relative Diprotodon, were generally more susceptible to extinction than more-fecund species like the marsupial ‘tiger’ thylacine, the relative susceptibility rank across species did not match the timing of their extinctions recorded in the fossil record.

Indeed, we found no clear relationship between a species’ inherent vulnerability to extinction — such as being slower and heavier and/or slower to reproduce — and the timing of its extinction in the fossil record.

In fact, we found that most of the living species used for comparison — such as short-beaked echidnas, emus, brush turkeys, and common wombats — were more susceptible on average than their now-extinct counterparts.

Read the rest of this entry »




Need to predict population trends, but can’t code? No problem

2 12 2020

Yes, yes. I know. Another R Shiny app.

However, this time I’ve strayed from my recent bibliometric musings and developed something that’s more compatible with the core of my main research and interests.

Welcome to LeslieMatrixShiny!

Over the years I've taught many students the basics of population modelling, with cohort-based approaches dominating the curriculum. Of these, the simpler 'Leslie' (age-classified) matrix models are both the easiest to understand and the ones for which data can most often be obtained without too many dramas.

But unless you’re willing to sit down and learn the code, they can be daunting to the novice.
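
To show what that code actually involves, here's a bare-bones (deterministic) example in R of the kind of calculation that underlies such models; the vital rates and starting numbers are made up for illustration.

```r
## A bare-bones Leslie (age-classified) matrix projection (illustrative vital rates)
fertility <- c(0.0, 1.2, 1.8, 1.5)   # offspring per female in each age class
survival  <- c(0.4, 0.7, 0.6)        # probability of surviving to the next age class

# assemble the Leslie matrix: fertilities on the top row, survivals on the sub-diagonal
A <- matrix(0, nrow = 4, ncol = 4)
A[1, ] <- fertility
A[cbind(2:4, 1:3)] <- survival

n0 <- c(50, 20, 10, 5)               # starting numbers in each age class
years <- 25
N <- matrix(NA, nrow = 4, ncol = years + 1)
N[, 1] <- n0
for (t in 1:years) N[, t + 1] <- A %*% N[, t]   # project one year at a time

lambda <- Re(eigen(A)$values[1])     # dominant eigenvalue = long-term growth rate
lambda
plot(colSums(N), type = "l", xlab = "year", ylab = "total population size")
```

The app essentially wraps this sort of projection, plus stochastic resampling of the vital rates, behind point-and-click inputs.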

Sure, there are plenty of software alternatives out there, such as Bob Lacy‘s Vortex (a free individual-based model available for PCs only), Resit Akçakaya & co’s RAMAS Metapop ($; PC only), Stéphane Legendre‘s Unified Life Models (ULM; open-source; all platforms), and Charles Todd‘s Essential (open-source; PC only) to name a few. If you’re already an avid R user and already into population modelling, you might be familiar with the population-modelling packages popdemo, OptiPopd, or sPop. I’m sure there are still other good resources out there of which I’m not aware.

But, even to install the relevant software or invoke particular packages in R takes a bit of time and learning. It’s probably safe to assume that many people find the prospect daunting.

It’s for this reason that I turned my newly acquired R Shiny skills to matrix population models so that even complete coding novices can run their own stochastic population models.

I call the app LeslieMatrixShiny.

Read the rest of this entry »




Collect and analyse your Altmetric data

17 11 2020

Last week I reported that I had finally delved into the world of R Shiny to create an app that calculates relative citation-based ranks for researchers.

I'm almost embarrassed to say that Shiny was so addictive that I ended up making another app.

This new app takes any list of user-supplied digital object identifiers (DOIs) and fetches their Altmetric data for you.

Why might you be interested in a paper’s Altmetric data? Citations are only one measure of an article’s impact on the research community, whereas Altmetrics tend to indicate the penetration of the article’s findings to a much broader audience.

Altmetric is probably the leading way to gauge the ‘impact’ (attention) an article has commanded across all online sources, including news articles, tweets, Facebook entries, blogs, Wikipedia mentions and others.

And for those of us interested in influencing policy with our work, Altmetrics also collate citations arising from policy documents.
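
If you'd rather roll your own, the guts of such an app boil down to querying Altmetric's public API for each DOI. Below is a minimal sketch using the httr and jsonlite packages; the endpoint pattern and field names are my assumptions about the free (rate-limited) tier, and this is not the app's actual code.

```r
## Minimal sketch: fetch Altmetric data for a vector of DOIs (assumed endpoint & fields)
library(httr)      # HTTP requests
library(jsonlite)  # JSON parsing

fetch_altmetric <- function(doi) {
  url <- paste0("https://api.altmetric.com/v1/doi/", doi)  # assumed public endpoint pattern
  resp <- GET(url)
  if (status_code(resp) != 200) return(NULL)               # DOI not tracked (or rate-limited)
  dat <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))
  grab <- function(x) if (is.null(x)) 0 else x             # missing fields -> 0
  data.frame(doi    = doi,
             score  = grab(dat$score),                     # assumed field names
             news   = grab(dat$cited_by_msm_count),
             tweets = grab(dat$cited_by_tweeters_count),
             policy = grab(dat$cited_by_policies_count))
}

dois <- c("10.1000/exampledoi1", "10.1000/exampledoi2")    # replace with your own DOIs
results <- do.call(rbind, lapply(dois, function(d) { Sys.sleep(1); fetch_altmetric(d) }))
results
```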

Read the rest of this entry »




The ε-index app: a fairer way to rank researchers with citation data

9 11 2020

Back in April I blogged about an idea I had to provide a more discipline-, gender-, and career stage-balanced way of ranking researchers using citation data.

Most of you are of course aware of the ubiquitous h-index, and its experience-corrected variant, the m-quotient (h-index ÷ years publishing), but I expect that you haven’t heard of the battery of other citation-based indices on offer that attempt to correct various flaws in the h-index. While many of them are major improvements, almost no one uses them.

Why aren't they used? Most likely because they aren't easy to calculate, or because they require trawling through open-access and/or subscription-based databases to get the information necessary to calculate them.

Hence, the h-index still rules despite its many flaws, such as under-emphasising a researcher's entire body of work, being subject to gender biases, and favouring people who have simply been at it longer. The h-index is also provided free of charge by Google Scholar, so it's the easiest metric to default to.
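
For the uninitiated, both the h-index and the m-quotient are trivial to compute once you have a vector of per-paper citation counts; the numbers below are invented for illustration.

```r
## h-index and m-quotient from a vector of citation counts (invented example data)
citations <- c(112, 87, 60, 44, 38, 25, 19, 12, 9, 5, 3, 1, 0)  # one entry per paper
years_publishing <- 12

# h = the largest h such that h papers each have at least h citations
h_index <- sum(sort(citations, decreasing = TRUE) >= seq_along(citations))
m_quotient <- h_index / years_publishing   # h-index corrected for career length

c(h = h_index, m = round(m_quotient, 2))
```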

So, how does one correct for at least some of these biases while still being able to calculate an index quickly? I think we have the answer.

Since that blog post back in April, seven other scientists and I, spanning eight different science disciplines (archaeology, chemistry, ecology, evolution & development, geology, microbiology, ophthalmology, and palaeontology), refined the technique I reported back then and have submitted a paper describing how what we call the 'ε-index' (epsilon index) performs.

Read the rest of this entry »




A fairer way to rank a researcher’s relative citation performance?

23 04 2020

I do a lot of grant assessments for various funding agencies, including two years on the Royal Society of New Zealand's Marsden Fund Panel (Ecology, Evolution, and Behaviour), and currently as an Australian Research Council College Expert (not to mention assessing a heap of other grant applications).

Sometimes this means I have to read hundreds of proposals involving even more researchers, all of whom I'm meant to assess for their scientific performance within a short period (sometimes only a few weeks). It's a hard job, and I doubt very much that there's a completely fair way to rank a researcher's 'performance' quickly and efficiently.

It’s for this reason that I’ve tried to find ways to rank people in the most objective way possible. This of course does not discount reading a person’s full CV and profile, and certainly taking into consideration career breaks, opportunities, and other extenuating circumstances. But I’ve tended to do a first pass based primarily on citation indices, and then adjust those according to the extenuating circumstances.

But the ‘first pass’ part of the equation has always bothered me. We know that different fields have different rates of citation accumulation, that citations accumulate with age (including the much heralded h-index), and that there are gender (and other) biases in citations that aren’t easily corrected.

I’ve generally relied on the ‘m-index’, which is simply one’s h-index divided by the number of years one has been publishing. While this acts as a sort of age correction, it’s still unsatisfactory, essentially because I’ve noticed that it tends to penalise early career researchers in particular. I’ve tried to account for this by comparing people roughly within the same phase of career, but it’s still a subjective exercise.

I’ve recently been playing with an alternative that I think might be a way forward. Bear with me here, for it takes a bit of explaining. Read the rest of this entry »





Did people or climate kill off the megafauna? Actually, it was both

10 12 2019

When freshwater dried up, so did many megafauna species.
Centre of Excellence for Australian Biodiversity and Heritage, Author provided

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Earth is now firmly in the grips of its sixth “mass extinction event”, and it’s mainly our fault. But the modern era is definitely not the first time humans have been implicated in the extinction of a wide range of species.

In fact, starting about 60,000 years ago, many of the world’s largest animals disappeared forever. These “megafauna” were first lost in Sahul, the supercontinent formed by Australia and New Guinea during periods of low sea level.

The causes of these extinctions have been debated for decades. Possible culprits include climate change, hunting or habitat modification by the ancestors of Aboriginal people, or a combination of the two.


Read more: What is a ‘mass extinction’ and are we in one now?


The main way to investigate this question is to build timelines of major events: when species went extinct, when people arrived, and when the climate changed. This approach relies on using dated fossils from extinct species to estimate when they went extinct, and archaeological evidence to determine when people arrived.


Read more: An incredible journey: the first people to arrive in Australia came in large numbers, and on purpose


Comparing these timelines allows us to deduce the likely windows of coexistence between megafauna and people.

We can also compare this window of coexistence to long-term models of climate variation, to see whether the extinctions coincided with or shortly followed abrupt climate shifts.

Data drought

One problem with this approach is the scarcity of reliable data due to the extreme rarity of a dead animal being fossilised, and the low probability of archaeological evidence being preserved in Australia’s harsh conditions. Read the rest of this entry »





First Australians arrived in large groups using complex technologies

18 06 2019


One of the most ancient peopling events of the great diaspora of anatomically modern humans out of Africa more than 50,000 years ago — human arrival in the great continent of Sahul (New Guinea, mainland Australia & Tasmania joined during periods of low sea level) — remains mysterious. The entry routes taken, whether migration was directed or accidental, and just how many people were needed to ensure population viability are shrouded by the mists of time. This prompted us to build stochastic, age-structured human population-dynamics models incorporating hunter-gatherer demographic rates and palaeoecological reconstructions of environmental carrying capacity to predict the founding population necessary to survive the initial peopling of late-Pleistocene Sahul.

As ecological modellers, we are often asked by other scientists to attempt to render the highly complex mechanisms of entire ecosystems tractable for virtual manipulation and hypothesis testing through the inevitable simplification that is ‘a model’. When we work with scientists studying long-since-disappeared ecosystems, the challenges multiply.

Add some multidisciplinary data and concepts into the mix, and the complexity can quickly escalate.

We do have, however, some powerful tools in our modelling toolbox, so as the Modelling Node for the Australian Research Council Centre of Excellence for Australian Biodiversity and Heritage (CABAH), our role is to link disparate fields like palaeontology, archaeology, geochronology, climatology, and genetics together with mathematical ‘glue’ to answer the big questions regarding Australia’s ancient past.

This is how we tackled one of these big questions: just how did the first anatomically modern Homo sapiens make it to the continent and survive?

At that time, Australia was part of the giant continent of Sahul that connected New Guinea, mainland Australia, and Tasmania at times of lower sea level. In fact, throughout most of the last ~126,000 years (the late Pleistocene and much of the Holocene), Sahul was the dominant landmass in the region (see this handy online tool for how the coastline of Sahul changed over this period).

Read the rest of this entry »





Legacy of human migration on the diversity of languages in the Americas

12 09 2018

This might seem a little left-of-centre for CB.com subject matter, but hang in there, because this does have some pretty important conservation implications.

In our quest to be as transdisciplinary as possible, I've teamed up with a few people outside my discipline to put together a PhD modelling project that could really help us understand how human colonisation shaped not only ancient ecosystems, but also our own ancient cultures.

Thanks largely to the efforts of Dr Frédérik Saltré here in the Global Ecology Laboratory, at Flinders University, and in collaboration with Dr Bastien Llamas (Australian Centre for Ancient DNA), Joshua Birchall (Museu Paraense Emílio Goeldi, Brazil), and Lars Fehren-Schmitz (University of California at Santa Cruz, USA), I think the student could break down a few disciplinary boundaries here and provide real insights into the causes and consequences of human expansion into novel environments.

Interested? See below for more details.

Languages are 'documents of history', and historical linguists have developed comparative methods to infer patterns of human prehistory and cultural evolution. The Americas present a greater diversity of indigenous language stock than any other continent; however, whether such diversity arose from the initial human migration pathways across the continent is still unknown, because the primary proxy used to study modern human migration (i.e., archaeological evidence) is both too incomplete and too biased to inform any regional inference of colonisation trajectories. Read the rest of this entry »





Prioritising your academic tasks

18 04 2018

The following is an abridged version of one of the chapters in my recent book, The Effective Scientist, regarding how to prioritise your tasks in academia. For a more complete treatise of the issue, access the full book here.


Splitting tasks. © René Campbell renecampbellart.com

How the hell do you balance all the requirements of an academic life in science? From actually doing the science, analysing the data, writing papers, reviewing, writing grants, to mentoring students — not to mention trying to have a modicum of a life outside of the lab — you can quickly end up feeling a little daunted. While there is no empirical formula that will make you run your academic life efficiently all the time, I can offer a few suggestions that might make your life just a little less chaotic.

Priority 1: Revise articles submitted to high-ranked journals

Barring a family emergency, my top priority is always revising an article that has been sent back to me from a high-ranking journal for revisions. Spend whatever time is needed to complete the revisions properly.

Priority 2: Revise articles submitted to lower-ranked journals

I could have lumped this priority with the previous, but I think it is necessary to distinguish the two should you find yourself in the fortunate position of having to do more than one revision at a time.

Priority 3: Experimentation and field work

Most of us need data before we can write papers, so this is high on my personal priority list. If field work is required, then obviously this will be your dominant preoccupation for sometimes extended periods. Many experiments can also be highly time-consuming, while others can be done in stages or run in the background while you complete other tasks.

Priority 4: Databasing

This one could easily be forgotten, but it is a task that can take up a disproportionate amount of your time if you do not deliberately fit it into your schedule. Well-organised, abundantly meta-tagged, intuitive, and backed-up databases are essential for effective scientific analysis; good data are useless if you cannot find them or understand to what they refer. Read the rest of this entry »





The Effective Scientist

22 03 2018

What is an effective scientist?

The more I have tried to answer this question, the more it has eluded me. Before I even venture an attempt, it is necessary to distinguish the more esoteric term 'effective' from the more pedestrian term 'success'. Even 'success' can be defined and quantified in many different ways. Is the most successful scientist the one who publishes the most papers, gains the most citations, earns the most grant money, gives the most keynote addresses, lectures the most undergraduate students, supervises the most PhD students, appears on the most television shows, or the one whose results improve the most lives? The unfortunate and wholly unsatisfying answer to each of those components is 'yes', but nor is the answer restricted to the superlative of any one of those. What I mean here is that you need to do reasonably well (i.e., relative to your peers, at any rate) in most of these things if you want to be considered 'successful'. The relative contribution of your performance in these components will vary from person to person, and from discipline to discipline, but most undeniably 'successful' scientists do well in many or most of these areas.

That's the opening paragraph of my new book, which has finally been released for sale today in the United Kingdom and Europe (the Australasian release is scheduled for 7 April, and 30 April for North America). Published by Cambridge University Press, The Effective Scientist: A Handy Guide to a Successful Academic Career is the culmination of many years of work on all the things an academic scientist today needs to know, but was never taught formally.

Several people have asked me why I decided to write this book, so a little history of its genesis is in order. I suppose my over-arching drive was to create something that I sincerely wish had existed when I was a young scientist just starting out on the academic career path. I was focussed on learning my science, and didn’t necessarily have any formal instruction in all the other varied duties I’d eventually be expected to do well, from how to write papers efficiently, to how to review properly, how to manage my grant money, how to organise and store my data, how to run a lab smoothly, how to get the most out of a conference, how to deal with the media, to how to engage in social media effectively (even though the latter didn’t really exist yet at the time) — all of these so-called ‘extra-curricular’ activities associated with an academic career were things I would eventually just have to learn as I went along. I’m sure you’ll agree, there has to be a better way than just muddling through one’s career picking up haphazard experience. Read the rest of this entry »





Two new postdoctoral positions in ecological network & vegetation modelling announced

21 07 2017


With the official start of the new ARC Centre of Excellence for Australian Biodiversity and Heritage (CABAH) in July, I am pleased to announce two new CABAH-funded postdoctoral positions (a.k.a. Research Associates) in my global ecology lab at Flinders University in Adelaide (Flinders Modelling Node).

One of these positions is a little different, and represents something of an experiment. The Research Associate in Palaeo-Vegetation Modelling is being restricted to women candidates; in other words, we’re only accepting applications from women for this one. In a quest to improve the gender balance in my lab and in universities in general, this is a step in the right direction.

The project itself is not overly prescribed, but we would like something along the following lines of inquiry: Read the rest of this entry »





Sensitive numbers

22 03 2016

A sensitive parameter. © toondoo.com

You couldn't really do ecology if you didn't know how to construct even the most basic mathematical model — even a simple regression is a model (the non-random relationship of some variable to another). The good thing about even these simple models is that it is fairly straightforward to interpret the 'strength' of the relationship, in other words, how much variation in one thing can be explained by variation in another. Provided the relationship is real (not random), and provided there is at least some indirect causation implied (i.e., it is not just a spurious coincidence), then there are many simple statistics that quantify this strength — in the case of our simple regression, the coefficient of determination (R²) statistic is usually a good approximation of this.

In the case of more complex multivariate correlation models, the coefficient of determination is sometimes insufficient, in which case you might need to rely on statistics such as the proportion of deviance explained, or the marginal and/or conditional variance explained.
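
If you want to see what that 'strength' statistic looks like in practice, here's a 30-second example in R with invented data:

```r
## Toy example: the 'strength' of a simple linear relationship (invented data)
set.seed(3)
x <- runif(50, 0, 10)
y <- 2 + 0.8 * x + rnorm(50, 0, 1.5)  # a real (non-random) relationship plus noise

fit <- lm(y ~ x)                      # the simple regression 'model'
summary(fit)$r.squared                # R²: proportion of variation in y explained by x
```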

When you go beyond this correlative model approach and start constructing more mechanistic models that emulate ecological phenomena from the bottom-up, things get a little more complicated when it comes to quantifying the strength of relationships. Perhaps the most well-known category of such mechanistic models is the humble population viability analysis, abbreviated to PVA§.

Let's take the simple case of a four-parameter population model we could use to project population size over the next 10 years for an endangered species that we're introducing to a new habitat. We'll assume that we have the following information: the size of the founding (introduced) population (n), the juvenile survival rate (Sj, the proportion of juveniles surviving from birth to their first year), the adult survival rate (Sa, the annual survival rate of adults from year 1 to maximum longevity), and the fertility rate of mature females (m, the number of offspring born per female per reproductive cycle). Each one of these parameters has an associated uncertainty (ε) that combines both measurement error and environmental variation.

If we just took the mean value of each of these three demographic rates (survivals and fertility) and projected a founding population of n = 10 individuals for 10 years into the future, we would have a single, deterministic estimate of the average outcome of introducing 10 individuals. As we already know, however, the variability, or stochasticity, is more important than the average outcome, because uncertainty in the parameter values (ε) will mean that a non-negligible number of model iterations will result in the extinction of the introduced population. This is something that most conservationists will obviously want to minimise.

So each time we run an iteration of the model, and generally for each breeding interval (most often one year at a time), we choose (based on some random-sampling regime) a different value for each parameter. This will give us a distribution of outcomes after the 10-year projection. Let's say we did 1000 iterations like this; the number of times the population went extinct across those iterations provides an estimate of the population's extinction probability over that interval. Of course, we would probably also vary the size of the founding population (say, between 10 and 100) to see at what point the extinction probability became acceptably low for managers (i.e., as close to zero as possible), without the founding population being so large that it would be too laborious or expensive to introduce that many individuals. Read the rest of this entry »
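
Here's a stripped-down sketch in R of exactly that kind of stochastic projection: the four parameters (n, Sj, Sa & m), each resampled every time step from an assumed distribution, with the extinction probability tallied across 1000 iterations. All the parameter values, uncertainties, and distributional choices are invented for illustration.

```r
## Stripped-down stochastic projection of extinction probability (invented parameter values)
set.seed(1)
iters <- 1000; years <- 10
n0 <- 10                       # founding population size (n)
Sj.mu <- 0.35; Sj.sd <- 0.05   # juvenile survival: mean and uncertainty (ε)
Sa.mu <- 0.80; Sa.sd <- 0.05   # adult survival
m.mu  <- 1.2;  m.sd  <- 0.2    # offspring per female per year

extinct <- logical(iters)
for (i in 1:iters) {
  adults <- n0; juvs <- 0
  for (t in 1:years) {
    # resample each rate to reflect measurement error plus environmental variation
    Sj <- min(max(rnorm(1, Sj.mu, Sj.sd), 0), 1)
    Sa <- min(max(rnorm(1, Sa.mu, Sa.sd), 0), 1)
    m  <- max(rnorm(1, m.mu, m.sd), 0)
    # demographic stochasticity: binomial survival, Poisson births (50:50 sex ratio)
    new_adults <- rbinom(1, adults, Sa) + rbinom(1, juvs, Sj)
    births     <- rpois(1, m * new_adults / 2)
    adults <- new_adults; juvs <- births
  }
  extinct[i] <- (adults + juvs) == 0
}
mean(extinct)   # estimated probability of extinction within 10 years
```

Wrapping this in an outer loop over founding sizes (say, 10 to 100) gives the trade-off curve that managers actually care about.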





Ice Age? No. Abrupt warmings and hunting together polished off Holarctic megafauna

24 07 2015


Oh shit oh shit oh shit …

Did ice ages cause the Pleistocene megafauna to go extinct? Contrary to popular opinion, no, they didn’t. But climate change did have something to do with them, only it was global warming events instead.

Just out today in Science, our long-time-coming (9 years in total if you count the time from the original idea to today) paper ‘Abrupt warmings drove Late Pleistocene Holarctic megafaunal turnover‘ demonstrates for the first time that abrupt warming periods over the last 60,000 years were at least partially responsible for the collapse of the megafauna in Eurasia and North America.

You might recall that I’ve been a bit sceptical of claims that climate changes had much to do with megafauna extinctions during the Late Pleistocene and early Holocene, mainly because of the overwhelming evidence that humans had a big part to play in their demise (surprise, surprise). What I’ve rejected though isn’t so much that climate had nothing to do with the extinctions; rather, I took issue with claims that climate change was the dominant driver. I’ve also had problems with blanket claims that it was ‘always this’ or ‘always that’, when the complexity of biogeography and community dynamics means that it was most assuredly more complicated than most people think.

I’m happy to say that our latest paper indeed demonstrates the complexity of megafauna extinctions, and that it took a heap of fairly complex datasets and analyses to demonstrate. Not only were the data varied – the combination of scientists involved was just as eclectic, with ancient DNA specialists, palaeo-climatologists and ecological modellers (including yours truly) assembled to make sense of the complicated story that the data ultimately revealed. Read the rest of this entry »





School finishers and undergraduates ill-prepared for research careers

22 05 2014

Having been for years now at the pointy end of the educational pathway, training the next generation of scientists, I'd like to share some of my observations regarding how well we're doing. At least in Australia, my realistic assessment of science education is: not well at all.

I’ve been thinking about this for some time, but only now decided to put my thoughts into words as the train wreck of our current government lurches toward a future guaranteeing an even stupider society. Charging postgraduate students to do PhDs for the first time, encouraging a US-style system of wealth-based educational privilege, slashing education budgets and de-investing in science while promoting the belief in invisible spaghetti monsters from space, are all the latest in the Fiberal future nightmare that will change our motto to “Australia – the stupid country”.

As you can appreciate, I’m not filled with a lot of hope that the worrying trends I’ve observed over the past 10 years or so are going to get any better any time soon. To be fair though, the problems go beyond the latest stupidities of the Fiberal government.

My realisation that there was a problem has crystallised only recently as I began to notice that most of my lab members were not Australian. In fact, the percentage of Australian PhD students and post-doctoral fellows in the lab usually hovers around 20%. Another sign of a problem was that even when we advertised for several well-paid postdoctoral positions, not a single Australian made the interview list (in fact, few Australians applied at all). I’ve also talked to many of my colleagues around Australia in the field of quantitative ecology, and many lament the same general trend.

Is it just poor mathematical training? Yes and no. Australian universities have generally lowered their entry-level requirements for basic maths, thereby perpetuating the already poor skill base of school leavers. Why? Bums (that pay) on seats. This means that people like me struggle to find Australian candidates that can do the quantitative research we need done. We are therefore forced to look overseas. Read the rest of this entry »





Putting the ‘science’ in citizen science

30 04 2014


How to tell if a koala has been in your garden. © Great Koala Count

When I was in Finland last year, I had the pleasure of meeting Tomas Roslin and hearing him describe his Finland-wide citizen-science project on dung beetles. What impressed me most was that it completely flipped my general opinion about citizen science and showed me that the process can be useful.

I’m not trying to sound arrogant or scientifically elitist here – I’m merely stating that it was my opinion that most citizen-science endeavours fail to provide truly novel, useful and rigorous data for scientific hypothesis testing. Well, I must admit that I still believe that ‘most’ citizen-science data meet that description (although there are exceptions – see here for an example), but Tomas’ success showed me just how good they can be.

So what's the problem with citizen science? Nothing, in principle; in fact, it's a great idea. Convince keen amateur naturalists over a wide area to observe some ecological phenomenon or function as objectively as possible, record the data, and submit them to a scientist to test some brilliant hypothesis. If it works, chances are the data are of much broader coverage and more intensively sampled than could ever be done (or afforded) by a single scientific team alone. So why don't we do this all the time?

If you're a scientist, I don't need to tell you how difficult it is to design a good experimental sampling regime, how even more difficult it is to ensure objectivity and precision when sampling, and the fastidiousness with which the data must be recorded and organised digitally for final analysis. And that's just for trained scientists! Imagine an army of well-intentioned but largely inexperienced samplers, and you can quickly visualise how errors might rapidly accumulate in a dataset until it eventually becomes too unreliable for any real scientific application.

So for these reasons, I’ve been largely reluctant to engage with large-scale citizen-science endeavours. However, I’m proud to say that I have now published my first paper based entirely on citizen science data! Call me a hypocrite (or a slow learner). Read the rest of this entry »





Cleaning up the rubbish: Australian megafauna extinctions

15 11 2013

A few weeks ago I wrote a post about how to run the perfect scientific workshop, which most of you thought was a good set of tips (bizarrely, one person was quite upset with the message; I saved him the embarrassment of looking stupid online and refrained from publishing his comment).

As I mentioned at the end of that post, the stimulus for the topic was a particularly wonderful workshop that 12 of us attended at the beautiful Linnaeus Estate on the northern coast of New South Wales (see Point 5 in the 'workshop tips' post).

But why did a group of ecological modellers (me, Barry Brook, Salvador Herrando-Pérez, Fréd Saltré, Chris Johnson, Nick Beeton), geneticists, palaeontologists (Gav Prideaux), fossil-dating specialists (Dizzy Gillespie, Bert Roberts, Zenobia Jacobs) and palaeo-climatologists (Michael Bird, Chris Turney [in absentia]) get together in the first place? Hint: it wasn't just for the beautiful beach and good wine.

I hate to say it – mainly because it deserves as little attention as possible – but the main reason is that we needed to clean up a bit of rubbish. The rubbish in question is the latest bit of excrescence growing on that accumulating heap produced by a certain team of palaeontologists promulgating their 'it's all about the climate or nothing' broken record.

Read the rest of this entry »





Biogeography comes of age

22 08 2013

This week has been all about biogeography for me. While I wouldn't call myself a 'biogeographer', I certainly do apply a lot of the discipline's techniques.

This week I'm attending the 2013 joint Congress of Ecology of the International Association for Ecology (INTECOL) and the British Ecological Society in London, and I have purposefully sought out more of the biogeographical talks than pretty much anything else because the speakers were engaging and the topics fascinating. As it happens, even my own presentation had a strong biogeographical flavour this year.

Although the species-area relationship (SAR) is only one small aspect of biogeography, I've been slightly amazed that, nearly 50 years after MacArthur & Wilson's famous 1967 book, our discipline is still obsessed with the SAR.

I've blogged about SAR issues before – what makes it so engaging and controversial is that SAR is the principal tool to estimate overall extinction rates, even though it is perhaps one of the bluntest tools in the ecological toolbox. I suppose its popularity stems from its superficial simplicity – as the area of a (classically oceanic) island increases, so too does the total number of species it can hold. The controversies surrounding such a basic relationship centre on describing the rate at which species richness increases with area – in other words, just how nonlinear the SAR itself is.

Even a cursory understanding of maths reveals the importance of estimating this curve correctly. As the area of an ‘island’ (habitat fragment) decreases due to human disturbance, estimating how many species end up going extinct as a result depends entirely on the shape of the SAR. Get the SAR wrong, and you can over- or under-estimate the extinction rate. This was the crux of the palaver over Fangliang He (not attending INTECOL) & Stephen Hubbell’s (attending INTECOL) paper in Nature in 2011.

The first real engagement with the SAR happened during John Harte's maximum-entropy talk in the process-macroecology session on Tuesday. What was notable to me was his adamant claim that the power-law form of the SAR should never be used, despite its commonness in the literature. I took this with a grain of salt because I know all about how messy area-richness data can be, and why one needs to consider alternative models (see an example here). But then yesterday I listened to one of the greats of biogeography – Robert Whittaker – who said pretty much the complete opposite of Harte's contention. Whittaker showed results from one of his papers last year indicating that the power law was in fact the most commonly supported SAR among many datasets (granted, there was substantial variability in overall model performance). My conclusion remains firm – make sure you fit multiple models to each individual dataset and try to infer the SAR from model-averaging. Read the rest of this entry »
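
To make that conclusion concrete, here's a minimal sketch of the multi-model approach in R: fit a power-law and a logarithmic SAR to the same (simulated) data, weight them by AIC, and model-average the prediction of richness after habitat loss. The data are invented, and a real analysis would of course include more candidate models.

```r
## Compare SAR models and model-average them (simulated, illustrative data)
set.seed(2)
area <- c(1, 2, 5, 10, 20, 50, 100, 200, 500, 1000)
S    <- round(20 * area^0.25 * exp(rnorm(length(area), 0, 0.15)))  # 'observed' richness

power <- nls(S ~ c * area^z, start = list(c = 10, z = 0.3))   # S = cA^z
logar <- nls(S ~ c + z * log(area), start = list(c = 10, z = 10))

aic <- c(power = AIC(power), logarithmic = AIC(logar))
w   <- exp(-0.5 * (aic - min(aic))); w <- w / sum(w)          # Akaike weights
w

# model-averaged prediction of richness if each 'island' shrinks to 10% of its area
newd  <- data.frame(area = area * 0.1)
S_avg <- w["power"] * predict(power, newd) + w["logarithmic"] * predict(logar, newd)
round(S_avg)
```

The point is not the particular weights you get from these toy numbers, but that the predicted species loss can differ markedly between the candidate SAR forms, which is exactly why relying on a single form is risky.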