An eye on the past: a view to the future

29 11 2021

originally published in Brave Minds, Flinders University’s research-news publication (text by David Sly)

Clues to understanding human interactions with global ecosystems already exist. The challenge is to read them more accurately so we can design the best path forward for a world beset by species extinctions and the repercussions of global warming.


This is the puzzle being solved by Professor Corey Bradshaw, head of the Global Ecology Lab at Flinders University. By developing complex computer models and steering a vast international cohort of collaborators, he is producing research that can influence environmental policy — from reconstructing the past to revealing insights into the future.

As an ecologist, he aims both to reconstruct and project how ecosystems adapt, how they are maintained, and how they change. Human intervention is pivotal to this understanding, so Professor Bradshaw casts his gaze back to when humans first entered a landscape – and this has helped construct an entirely fresh view of how Aboriginal people first came to Australia, up to 75,000 years ago.

Two recent papers he co-authored — ‘Stochastic models support rapid peopling of Late Pleistocene Sahul’, published in Nature Communications, and ‘Landscape rules predict optimal super-highways for the first peopling of Sahul’, published in Nature Human Behaviour — showed where, how, and when Indigenous Australians first settled Sahul, the combined mega-continent that joined Australia with New Guinea during the Pleistocene, when sea levels were lower than today.

Professor Bradshaw and colleagues identified and tested more than 125 billion possible pathways using rigorous computational analysis in the largest movement-simulation project ever attempted, with the pathways compared to the oldest known archaeological sites as a means of distinguishing the most likely routes.

The study revealed that the first Indigenous people not only survived but thrived in harsh environments, providing further evidence of the capacity and resilience of Indigenous Australians’ ancestors, and suggesting that large, well-organised groups were able to navigate tough terrain.

Read the rest of this entry »




And this little piggy went extinct

24 11 2021

Back in June of this year I wrote (whinged) about the disappointment of writing a lot of ecological models that were rarely used to assist real-world wildlife management. However, I did hint that another model I wrote had assisted one government agency with pig management on Kangaroo Island.

Well, now that report has been published online and I’m permitted to talk about it. I’m also very happy to report that, in the words of the Government of South Australia’s Department of Primary Industries and Regions (PIRSA),

Modelling by the Flinders University Global Ecology Laboratory shows the likelihood and feasibility of feral pig eradication under different funding and eradication scenarios. With enough funding, feral pigs could be eradicated from Kangaroo Island in 2 years.

This basically means that because of the model, PIRSA was successful in obtaining enough funding to pretty much ensure that the eradication of feral pigs from Kangaroo Island will be feasible!

Why is it important to get rid of feral pigs? They are a major pest on the Island, causing severe economic and environmental impacts to both farms and native ecosystems. On the agricultural side of things, they prey on newborn lambs, eat crops, and compete with livestock for pasture. Feral pigs damage natural habitats by uprooting vegetation and fouling waterholes. They can also spread weeds and damage infrastructure, as well as act as hosts of parasites and diseases (e.g., leptospirosis, tuberculosis, foot-and-mouth disease) that pose serious threats to industry, wildlife, and even humans.

Read the rest of this entry »




Free resources for learning (and getting better with) R

15 11 2021

While I’m currently in Github mode (see previous post), I thought I’d share a list of resources I started putting together for learning and upskilling in the R programming language.

If you don’t know what R is, this probably won’t be of much use to you. But if you are a novice user, want to improve your skills, or just want access to a kick-arse list of cheatsheets, then this Github repository should be useful.

I started putting this list together for members of the Australian Research Council Centre of Excellence for Australian Biodiversity and Heritage, but I see no reason why it should be limited to that particular group of people.

I don’t claim that this list is exhaustive, nor do I vouch for the quality of any of the listed resources. Some of them are deprecated and fairly old too, so be warned.

The first section includes online resources such as short courses, reference guides, analysis demos, tips for more-efficient programming, better plotting guidelines, as well as some R-related mini-universes like markdown, ggplot, Shiny, and tidyverse.

The following section lists popular online communities, list-servers, and blogs that help R users track down advice for solving niggly coding and statistical problems.

The next section is a whopping-great archive of R cheatsheets, covering everything from the basics to plotting, cartography, databasing, applications, time-series analysis, machine learning, time & date, building packages, parallel computing, resampling methods, markdown, and more.

Read the rest of this entry »




Want a permanent DOI assigned to your data and code? Follow this simple recipe

2 11 2021

These days, with most peer-reviewed journals requiring that the data and code underlying a manuscript be open-source, licenced, and fully trackable, it’s easy to get lost in the multitude of platforms and options available. In most cases we no longer have much of a choice, even if we’re reluctant (although the benefits of posting your data and code online immediately far outweigh any potential disadvantages).

But do you post your data and code on the Open Science Framework (free), Github (free), Figshare (free), Zenodo (free, but donations encouraged), Dryad ($), or Harvard Dataverse (free) (and so on, and so on, …)? Pick your favourite. Even once you’ve solved that first dilemma, another issue arises: how do you obtain a digital object identifier (DOI) for your data and/or code?

Again, there are many ways to do this, and some methods are more automated than others. That said, I do have a preference that is rather easy to implement, and I thought I’d share it with you here.

The first requirement is getting yourself a (free) Github account. What’s Github? Github is one of the world’s largest communities of developers, where code of all types and for all purposes can be developed, shared, updated, collaborated on, shipped, and maintained. It might seem a bit overwhelming for non-developers, but if you strip it down to its basics, it’s straightforward to use as a simple repository for your code and data. Of course, Github is designed for much more than just this (software-development collaboration being one of its main uses), but you don’t need to worry about that for now.

Step 1

Once you create an account, you can start creating ‘repositories’, which are essentially just sections of your account dedicated to specific code (and data). I mostly code in R, so I upload my R code text files and associated datasets to these repositories, and spend a good deal of effort on making the Readme.md file highly explanatory and easy to follow. You can check out some of mine here.
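If you prefer the command line to the web interface, Step 1 can be sketched roughly as follows. This is a minimal, hypothetical example assuming git is installed; the repository name (my-analysis) and file names are placeholders, not anything from my actual repositories.

```shell
# create a local folder that will become the repository
mkdir my-analysis
cd my-analysis
git init

# add your code, data, and (importantly) an explanatory Readme
printf '# my-analysis\n\nWhat the code does, what the data are, and how to run it.\n' > Readme.md
git add Readme.md
git -c user.name="Your Name" -c user.email="you@example.com" commit -m "initial commit"

# then create an empty repository of the same name on Github and push:
# git remote add origin https://github.com/<username>/my-analysis.git
# git push -u origin main
```

The same result can be achieved entirely through Github’s web interface by creating the repository there first and uploading files directly.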

Ok. So, you have a repository with some code and data, you’ve explained what’s going on and how the code works in the Readme file, and now you want a permanent DOI that will point to the repository (and any updates) for all time.

Github doesn’t do this by itself, but it integrates seamlessly with another platform — Zenodo — that does. Oh no! Not another platform! Yes, I’m afraid so, but it’s not as painful as you might expect.

Read the rest of this entry »