Why populations can’t be saved by a single breeding pair

3 04 2018


© Reuters/Thomas Mukoya

I published this last week on The Conversation, and am now reproducing it here for CB.com readers.

 

Two days ago, the last male northern white rhino (Ceratotherium simum cottoni) died. His passing leaves just two surviving members of his subspecies: both females, and neither able to bear calves.

Even though it might not be quite the end of the northern white rhino because of the possibility of implanting frozen embryos in their southern cousins (C. simum simum), in practical terms, it nevertheless represents the end of a long decline for the subspecies. It also raises the question: how many individuals does a species need to persist?

Fiction writers have enthusiastically embraced this question, most often in the post-apocalypse genre. It’s a notion with a long past; the Adam and Eve myth is of course based on a single breeding pair populating the entire world, as is the human pair said to repopulate the Earth after Ragnarök, the final battle of the gods in Norse mythology.

This idea dovetails neatly with the image of Noah’s animals marching “two by two” into the Ark. But the science of “minimum viable populations” tells us a different story.

No inbreeding, please

The global gold standard used to assess the extinction risk of any species is the International Union for the Conservation of Nature (IUCN) Red List of Threatened Species. Read the rest of this entry »





Protecting one of the world’s marine wonders

17 06 2017

© CJA Bradshaw

While I’m in transit (yet a-bloody-gain) to Helsinki, I wanted to take this opportunity to reflect on one of the most inspiring eco-tourism experiences I recently had in South Australia.

If you are South Australian and have even the slightest interest in wildlife, you will of course at least have heard of the awe-inspiring mass breeding aggregation of giant cuttlefish (Sepia apama) that occurs in May-July every year in upper Spencer Gulf near the small town of Whyalla. If you have been lucky enough to go there and see these amazing creatures for yourself, then you know exactly what I’m talking about. And if you haven’t yet been there, take it from me that the voyage is very much worth attempting.


Father-daughter giant-cuttlefish-snorkelling selfie. © CJA Bradshaw

Despite having lived in South Australia for nearly a decade now, I only got my chance to see these wonderful creatures when a father at my daughter’s school organised a school trip. After driving for five hours from Adelaide to Whyalla, we hired our snorkelling gear and got into the water the very next morning. Read the rest of this entry »





Sensitive numbers

22 03 2016
toondoo.com

A sensitive parameter

You couldn’t really do ecology if you didn’t know how to construct even the most basic mathematical model — even a simple regression is a model (the non-random relationship of some variable to another). The good thing about even these simple models is that it is fairly straightforward to interpret the ‘strength’ of the relationship, in other words, how much variation in one thing can be explained by variation in another. Provided the relationship is real (not random), and provided there is at least some indirect causation implied (i.e., it is not just a spurious coincidence), then there are many simple statistics that quantify this strength — in the case of our simple regression, the coefficient of determination (R²) statistic is usually a good approximation of this.
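To make the idea concrete, here is a minimal sketch (mine, not from any of the papers discussed here) of fitting a least-squares line and computing R² by hand in Python:

```python
# Illustrative only: fit y = a + b*x by least squares, then compute
# R^2 = 1 - (residual sum of squares / total sum of squares),
# i.e. the proportion of variation in y explained by variation in x.

def r_squared(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Least-squares slope (b) and intercept (a)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sxy / sxx
    a = my - b * mx
    # Unexplained vs. total variation
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 7.8, 10.1]  # made-up, strongly linear data
print(round(r_squared(x, y), 3))
```

The same quantity of course falls out of summary(lm(y ~ x)) in R; spelling it out here is only to show that R² is nothing more than 1 minus the ratio of unexplained to total variation.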

In the case of more complex multivariate correlation models, then sometimes the coefficient of determination is insufficient, in which case you might need to rely on statistics such as the proportion of deviance explained, or the marginal and/or conditional variance explained.

When you go beyond this correlative model approach and start constructing more mechanistic models that emulate ecological phenomena from the bottom-up, things get a little more complicated when it comes to quantifying the strength of relationships. Perhaps the most well-known category of such mechanistic models is the humble population viability analysis, abbreviated to PVA.

Let’s take the simple case of a four-parameter population model we could use to project population size over the next 10 years for an endangered species that we’re introducing to a new habitat. We’ll assume that we have the following information: the size of the founding (introduced) population (n), the juvenile survival rate (Sj, the proportion of juveniles surviving from birth to the first year), the adult survival rate (Sa, the annual survival rate of adults from year 1 to maximum longevity), and the fertility rate of mature females (m, the number of offspring born per female per reproductive cycle). Each one of these parameters has an associated uncertainty (ε) that combines both measurement error and environmental variation.

If we just took the mean value of each of these three demographic rates (survivals and fertility) and projected a founding population of n = 10 individuals for 10 years into the future, we would have a single, deterministic estimate of the average outcome of introducing 10 individuals. As we already know, however, the variability, or stochasticity, is more important than the average outcome, because uncertainty in the parameter values (ε) will mean that a non-negligible number of model iterations will result in the extinction of the introduced population. This is something that most conservationists will obviously want to minimise.

So each time we run an iteration of the model, and generally for each breeding interval (most often 1 year at a time), we choose (based on some random-sampling regime) a different value for each parameter. This will give us a distribution of outcomes after the 10-year projection. Let’s say we did 1000 iterations like this; taking the number of times that the population went extinct over these iterations would provide us with an estimate of the population’s extinction probability over that interval. Of course, we would probably also vary the size of the founding population (say, between 10 and 100), to see at what point the extinction probability became acceptably low for managers (i.e., as close to zero as possible), but not unacceptably high that it would be too laborious or expensive to introduce that many individuals. Read the rest of this entry »
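A bare-bones sketch of this redraw-and-iterate procedure, with entirely hypothetical parameter values and a far cruder structure than any real PVA (two stages only, and a simplistic treatment of births), might look like this in Python:

```python
# Toy stochastic projection (all rates hypothetical, for illustration).
# Each iteration redraws the demographic rates from their uncertainty
# distributions, projects the population one breeding interval at a
# time, and the fraction of iterations hitting zero estimates the
# extinction probability over the projection window.
import random

random.seed(7)  # reproducible illustration

def extinction_probability(n0=10, years=10, iters=1000):
    extinct = 0
    for _ in range(iters):
        n = n0
        for _ in range(years):
            # Redraw each rate: mean plus uncertainty (epsilon), clamped
            sa = min(max(random.gauss(0.80, 0.10), 0.0), 1.0)  # adult survival
            sj = min(max(random.gauss(0.50, 0.15), 0.0), 1.0)  # juvenile survival
            m = max(random.gauss(1.20, 0.40), 0.0)             # fertility
            adults = sum(random.random() < sa for _ in range(n))
            births = round(n / 2 * m)                          # ~half are female
            recruits = sum(random.random() < sj for _ in range(births))
            n = adults + recruits
            if n == 0:
                break
        if n == 0:
            extinct += 1
    return extinct / iters

print(extinction_probability(n0=10))
print(extinction_probability(n0=50))
```

Re-running with larger founding populations (the n0 argument) is then just a loop over candidate sizes, which is exactly the trade-off exercise described above.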





Avoiding genetic rescue not justified on genetic grounds

12 03 2015
Genetics to the rescue!


I had the pleasure today of reading a new paper by one of the greatest living conservation geneticists, Dick Frankham. As some of CB readers might remember, I’ve also published some papers with Dick over the last few years, with the most recent challenging the very basis for the IUCN Red List category thresholds (i.e., in general, they’re too small).

Dick’s latest paper in Molecular Ecology is a meta-analysis designed to test whether there are any genetic grounds for NOT attempting genetic rescue for inbreeding-depressed populations. I suppose a few definitions are in order here. Genetic rescue is the process, either natural or facilitated, where inbred populations (i.e., in a conservation sense, those comprising too many individuals bonking their close relatives because the population in question is small) receive genes from another population such that their overall genetic diversity increases. In the context of conservation genetics, ‘inbreeding depression’ simply means reduced biological fitness (fertility, survival, longevity, etc.) resulting from parents being too closely related.

Seems like an important thing to avoid, so why not attempt to facilitate gene flow among populations such that those with inbreeding depression can be ‘rescued’? In applied conservation, there are many reasons given for not attempting genetic rescue: Read the rest of this entry »





We generally ignore the big issues

11 08 2014

I’ve had a good week at Stanford University with Paul Ehrlich where we’ve been putting the final touches on our book. It’s been taking a while to put together, but we’re both pretty happy with the result, which should be published by The University of Chicago Press within the first quarter of 2015.

It has indeed been a pleasure and a privilege to work with one of the greatest thinkers of our age, and let me tell you that at 82, he’s still a force with which to be reckoned. While I won’t divulge much of our discussions here given they’ll appear soon-ish in the book, I did want to raise one subject that I think we all need to think about a little more.

The issue is what we, as ecologists (I’m including conservation scientists here), choose to study and contemplate in our professional life.

I’m just as guilty as most of the rest of you, but I argue that our discipline is caught in a rut of irrelevancy on the grander scale. We spend a lot of time refining the basics of what we essentially already know pretty well. While there will be an eternity of processes to understand, species to describe, and relationships to measure, can our discipline really afford to avoid the biggest issues while biodiversity (and our society included) are flushed down the drain?

Read the rest of this entry »





50/500 or 100/1000 debate not about time frame

26 06 2014

Not enough individuals

As you might recall, Dick Frankham, Barry Brook and I recently wrote a review in Biological Conservation challenging the status quo regarding the famous 50/500 ‘rule’ in conservation management (effective population size [Ne] = 50 to avoid inbreeding depression in the short-term, and Ne = 500 to retain the ability to evolve in perpetuity). Well, it inevitably led to some comments arising in the same journal, but we were only permitted by Biological Conservation to respond to one of them. In our opinion, the other comment was just as problematic, and only further muddied the waters, so it too required a response. In a first for me, we have therefore decided to publish our response on the arXiv pre-print server as well as here on ConservationBytes.com.

50/500 or 100/1000 debate is not about the time frame – Reply to Rosenfeld

cite as: Frankham R, Bradshaw CJA, Brook BW. 2014. 50/500 or 100/1000 debate is not about the time frame – Reply to Rosenfeld. arXiv: 1406.6424 [q-bio.PE] 25 June 2014.

The Letter from Rosenfeld (2014) in response to Jamieson and Allendorf (2012) and Frankham et al. (2014) and related papers is misleading in places and requires clarification and correction, as follows: Read the rest of this entry »





We’re sorry, but 50/500 is still too few

28 01 2014

too few

Some of you who are familiar with my colleagues’ and my work will know that we have been investigating the minimum viable population size concept for years (see references at the end of this post). Little did I know when I started this line of scientific inquiry that it would end up creating more than a few adversaries.

It might be a philosophical perspective that people adopt when refusing to believe that there is any such thing as a ‘minimum’ number of individuals in a population required to guarantee a high (i.e., almost assured) probability of persistence. I’m not sure. For whatever reason though, there have been some fierce opponents to the concept, or any application of it.

Yet a sizeable chunk of quantitative conservation ecology develops – in various forms – population viability analyses to estimate the probability that a population (or entire species) will go extinct. When the probability is unacceptably high, then various management approaches can be employed (and modelled) to improve the population’s fate. The flip side of such an analysis is, of course, seeing at what population size the probability of extinction becomes negligible.

‘Negligible’ is a subjective term in itself, just like the word ‘very’ can mean different things to different people. This is why we looked into standardising the criteria for ‘negligible’ for minimum viable population sizes, much as the near-universally accepted IUCN Red List attempts to do with its various extinction-risk categories.

But most reasonable people are likely to agree that < 1 % chance of going extinct over many generations (40, in the case of our suggestion) is an acceptable target. I’d feel pretty safe personally if my own family’s probability of surviving was > 99 % over the next 40 generations.
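To get a feel for how demanding that target is, note that (assuming independent generations, and these are my back-of-envelope numbers, not the paper’s) persisting with probability greater than 0.99 over 40 generations requires a per-generation persistence probability of at least 0.99^(1/40):

```python
# Back-of-envelope: the per-generation persistence probability
# needed so that 40 compounded generations still exceed 99%.
per_gen = 0.99 ** (1 / 40)
print(round(per_gen, 5))  # roughly 0.99975, i.e. ~99.975% per generation
```

In other words, a less-than-1% extinction risk over 40 generations leaves almost no room for per-generation risk at all.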

Some people, however, baulk at the notion of making generalisations in ecology (funny – I was always under the impression that was exactly what we were supposed to be doing as scientists – finding how things worked in most situations, such that the mechanisms become clearer and clearer – call me a dreamer).

So when we were attacked in several high-profile journals, it came as something of a surprise. The latest lashing came in the form of a Trends in Ecology and Evolution article. We wrote a (necessarily short) response to that article, identifying its inaccuracies and contradictions, but we were unable to expand completely on the inadequacies of that article. However, I’m happy to say that now we have, and we have expanded our commentary on that paper into a broader review. Read the rest of this entry »





Software tools for conservation biologists

8 04 2013

Given the popularity of certain prescriptive posts on ConservationBytes.com, I thought it prudent to compile a list of software that my lab and I have found particularly useful over the years. This list is not meant to be comprehensive, but it will give you a taste for what’s out there. I don’t list the plethora of conservation genetics software that is available (generally given my lack of experience with it), but if this is your chosen area, I’d suggest starting with Dick Frankham’s excellent book, An Introduction to Conservation Genetics.

1. R: If you haven’t yet loaded the open-source R programming language on your machine, do it now. It is the single most useful bit of statistical and programming software available to anyone anywhere in the sciences. Don’t worry if you’re not a fully fledged programmer – there are now enough people using and developing sophisticated ‘libraries’ (packages of functions) that there’s pretty much an application for everything these days. We tend to use R to the exclusion of almost any other statistical software because it makes you learn the technique rather than just blindly pressing the ‘go’ button. You could also stop right here – with R, you can do pretty much everything else that the software listed below does; however, you would have to be an exceedingly clever programmer and have a lot of spare time. R can also get bogged down when too much RAM is filled, in which case other languages such as Python and C# are useful.

2. VORTEX/OUTBREAK/META-MODEL MANAGER, etc.: This suite of individual-based projection software was designed by Bob Lacy & Phil Miller initially to determine the viability of small (usually captive) populations. The original VORTEX has grown into a multi-purpose, powerful and sophisticated population viability analysis package that now links to its cousin applications like OUTBREAK (the only off-the-shelf epidemiological software in existence) via the ‘command centre’ META-MODEL MANAGER (see examples here and here from our lab). There are other add-ons that make almost any population projection and hindcasting application possible. And it’s all free! (Warning: currently unavailable for Mac, although I’ve been pestering Bob to do a Mac version.)

3. RAMAS: RAMAS is the go-to application for spatial population modelling. Developed by the extremely clever Resit Akçakaya, this is one of the only tools that incorporates spatial meta-population aspects with formal, cohort-based demographic models. It’s also very useful in a climate-change context when you have projections of changing habitat suitability as the base layer onto which meta-population dynamics can be modelled. It’s not free, but it’s worth purchasing. Read the rest of this entry »





Want to work with us?

22 03 2013
© Beboy-Fotolia


Today we announced a HEAP of positions in our Global Ecology Lab for hot-shot, up-and-coming ecologists. If you think you’ve got what it takes, I encourage you to apply. The positions are all financed by the Australian Research Council from grants that Barry Brook, Phill Cassey, Damien Fordham and I have all been awarded in the last few years. We decided to do a bulk advertisement so that we maximise the opportunity for good science talent out there.

We’re looking for bright, mathematically adept people in palaeo-ecology, wildlife population modelling, disease modelling, climate change modelling and species distribution modelling.

The positions are self-explanatory, but if you want more information, just follow the links and contacts given below. For my own selfish interests, I provide a little more detail for two of the positions for which I’m directly responsible – but please have a look at the lot.

Good luck!

CJA Bradshaw

Job Reference Number: 17986 & 17987

The world-leading Global Ecology Group within the School of Earth and Environmental Sciences currently has multiple academic opportunities. For these two positions, we are seeking a Postdoctoral Research Associate and a Research Associate to work in palaeo-ecological modelling. Read the rest of this entry »





Science immortalised in cartoon

1 02 2013

Well, this is a first for me (us).

I’ve never had a paper of ours turned into a cartoon. The illustrious and brilliant ‘First Dog on the Moon‘ (a.k.a. Andrew Marlton), chief cartoonist for Australia’s irreverent ‘Crikey‘ online news magazine, just parodied our Journal of Animal Ecology paper No need for disease: testing extinction hypotheses for the thylacine using multispecies metamodels that I wrote about last month here on ConservationBytes.com.

Needless to say, I’m chuffed as a chuffed thing.

Enjoy!

Stripey





No need for disease

7 01 2013

dead or alive thylacine

It’s human nature to abhor admitting an error, and I’d wager that it’s even harder for the average person (psycho- and sociopaths perhaps excepted) to admit being a bastard responsible for the demise of someone, or something else. Examples abound. Think of much of society’s unwillingness to accept responsibility for global climate disruption (how could my trips to work and occasional holiday flight be killing people on the other side of the planet?). Or, how about fishers refusing to believe that they could be responsible for reductions in fish stocks? After all, killing fish couldn’t possibly …er, kill fish? Another one is that bastion of reverse racism maintaining that ancient or traditionally living peoples (‘noble savages’) could never have wiped out other species.

If you’re a rational person driven by evidence rather than hearsay, vested interest or faith, then the above examples probably sound ridiculous. But rest assured, millions of people adhere to these points of view because of the phenomenon mentioned in the first sentence above. With this background then, I introduce a paper that’s almost available online (i.e., we have the DOI, but the online version is yet to appear). Produced by our extremely clever post-doc, Tom Prowse, the paper is entitled: No need for disease: testing extinction hypotheses for the thylacine using multispecies metamodels, and will soon appear in Journal of Animal Ecology.

Of course, I am biased being a co-author, but I think this paper really demonstrates the amazing power of retrospective multi-species systems modelling to provide insight into phenomena that are impossible to test empirically – i.e., questions of prehistoric (and in some cases, even data-poor historic) ecological change. The megafauna die-off controversy is one we’ve covered before here on ConservationBytes.com, and this is a related issue with respect to a charismatic extinction in Australia’s recent history – the loss of the Tasmanian thylacine (‘tiger’, ‘wolf’ or whatever inappropriate eutherian epithet one unfortunately chooses to apply). Read the rest of this entry »





Conservation catastrophes

22 02 2012

David Reed

The title of this post serves two functions: (1) to introduce the concept of ecological catastrophes in population viability modelling, and (2) to acknowledge the passing of the bloke who came up with a clever way of dealing with that uncertainty.

I’ll start with the latter first. It came to my attention late last year that a fellow conservation biologist colleague, Dr. David Reed, died unexpectedly from congestive heart failure. I did not really mourn his passing, for I had never met him in person (I believe it is disingenuous, discourteous, and slightly egocentric to mourn someone whom you do not really know personally – but that’s just my opinion), but I did think at the time that the conservation community had lost another clever progenitor of good conservation science. As many CB readers already know, we lost a great conservation thinker and doer last year, Professor Navjot Sodhi (and that, I did take personally). Coincidentally, both Navjot and David died at about the same age (49 and 48, respectively). I hope that being in one’s late 40s isn’t a particularly bad omen for people in my line of business!

My friend, colleague and lab co-director, Professor Barry Brook, did, however, work a little with David, and together they published some pretty cool stuff (see References below). David was particularly good at looking for cross-taxa generalities in conservation phenomena, such as minimum viable population sizes, effects of inbreeding depression, applications of population viability analysis and extinction risk. But more on some of that below. Read the rest of this entry »





Better SAFE than sorry

30 11 2011

Last day of November already – I am now convinced that my suspicions are correct: time is not constant and in fact accelerates as you age (in mathematical terms, a unit of time becomes a progressively smaller proportion of the time elapsed since your birth, so this makes sense). But, I digress…

This short post will act mostly as a spruik for my upcoming talk at the International Congress for Conservation Biology next week in Auckland (10.30 in New Zealand Room 2 on Friday, 9 December) entitled: Species Ability to Forestall Extinction (SAFE) index for IUCN Red Listed species. The post also sets a bit of the backdrop to this paper and why I think people might be interested in attending.

As regular readers of CB will know, we published a paper this year in Frontiers in Ecology and the Environment describing a relatively simple metric we called SAFE (Species Ability to Forestall Extinction) that could enhance the information provided by the IUCN Red List of Threatened Species for assessing relative extinction threat. I won’t go into all the detail here (you can read more about it in this previous post), but I do want to point out that it ended up being rather controversial.

The journal delayed final publication because there were 3 groups who opposed the metric rather vehemently, including people who are very much in the conservation decision-making space and/or involved directly with the IUCN Red List. It ultimately published our original paper, the 3 critiques, and our collective response in the same issue (you can read these here if you’re subscribed, or email me for a PDF reprint). Again, I won’t go into any detail here because our arguments are clearly outlined in the response.

What I do want to highlight is that even beyond the normal in-print tête-à-tête the original paper elicited, we were emailed by several people behind the critiques who were apparently unsatisfied with our response. We found this slightly odd, because many of the objections just kept getting re-raised. Of particular note were the accusations that: Read the rest of this entry »





Sustainable kangaroo harvests

10 11 2011

When I first started this blog back in 2008, I extolled the conservation virtues of eating kangaroos over cattle and sheep. Now I want to put my academic money where my mouth is, and do some kangaroo harvest research.

Thanks to the South Australia Department of Environment and Natural Resources  (DENR) and the commercial kangaroo harvest industry, in conjunction with the University of Adelaide, I’m pleased to announce a new scholarship for a PhD candidate to work on a project entitled Optimal survey and harvest models for South Australian macropods based at the University of Adelaide’s School of Earth and Environmental Sciences.

DENR is custodian of a long-term macropod database derived from the State’s management of the commercial kangaroo harvest industry. The dataset entails aerial survey data for most of the State from 1978 to present, annual population estimates, quotas and harvests for three species: red kangaroo (Macropus rufus), western grey kangaroo (Macropus fuliginosus), and the euro (Macropus robustus erubescens). Read the rest of this entry »





Not magic, but necessary

18 10 2011

In April this year, some American colleagues of ours wrote a rather detailed, 10-page article in Trends in Ecology and Evolution that attacked our concept of generalizing minimum viable population (MVP) size estimates among species. Steve Beissinger of the University of California at Berkeley, one of the paper’s co-authors, has been a particularly vocal adversary of some of the applications of population viability analysis and its child, MVP size, for many years. While some interesting points were raised in their review, their arguments largely lacked any real punch, and they essentially ended up agreeing with us.

Let me explain. Today, our response to that critique was published online in the same journal: Minimum viable population size: not magic, but necessary. I want to take some time here to summarise the main points of contention and our rebuttal.

But first, let’s recap what we have been arguing all along in several papers over the last few years (i.e., Brook et al. 2006; Traill et al. 2007, 2010; Clements et al. 2011) – a minimum viable population size is the point at which a declining population becomes a small population (sensu Caughley 1994). In other words, it’s the point at which a population becomes susceptible to random (stochastic) events that wouldn’t otherwise matter for a large population.

Consider the great auk (Pinguinus impennis), a formerly widespread and abundant North Atlantic species that was reduced by intensive hunting throughout its range. How did it eventually go extinct? The last remaining population blew up in a volcanic explosion off the coast of Iceland (Halliday 1978). Had the population been large, the small dent in the population due to the loss of those individuals would have been irrelevant.

But what is ‘large’? The empirical evidence, as we’ve pointed out time and time again, is that large = thousands, not hundreds, of individuals.

So this is why we advocate that conservation targets should aim to keep at or recover to the thousands mark. Less than that, and you’re playing Russian roulette with a species’ existence. Read the rest of this entry »





Species’ Ability to Forestall Extinction – AudioBoo

8 04 2011

Here’s a little interview I just did on the SAFE index with ABC AM:

Not a bad job, really.

And here’s another one from Radio New Zealand:

CJA Bradshaw





S.A.F.E. = Species Ability to Forestall Extinction

8 01 2011

Note: I’ve just rehashed this post (30/03/2011) because the paper is now available online (see comment stream). Stay tuned for the media release next week. – CJAB

I’ve been more or less underground for the last 3 weeks. It has been a wonderful break (mostly) from the normally hectic pace of academic life. Thanks for all those who remain despite the recent silence.

© Ezprezzo.com

But I’m back now with a post about a paper we’ve just had accepted in Frontiers in Ecology and the Environment. In my opinion it’s a leap forward in how we measure relative threat risk among species, despite some criticism.

I’ve written in past posts about the ‘magic’ minimum number of individuals that should be in a population to reduce the chance of extinction from random events. The so-called ‘minimum viable population (MVP) size’ is basically the abundance of a (connected) population below which random events take over from factors causing sustained declines (Caughley’s distinction between the ‘declining’ and ‘small’ population paradigms).

Up until the last few years, the MVP size was considered to be a population- or species-specific value, and it required very detailed demographic, genetic and biogeographical data to estimate – not something that biologists tend to have at their fingertips for most high-risk species. However, several papers published by our group (Minimum viable population size and global extinction risk are unrelated, Minimum viable population size: a meta-analysis of 30 years of published estimates and Pragmatic population viability targets in a rapidly changing world) have shown that there is in fact little variation in this number among the best-studied species; both demographic and genetic data support a number of around 5000 to avoid crossing the deadly threshold.

Now the fourth paper in this series has just been accepted (sorry, no link yet, but I’ll let you all know as soon as it is available), and it was organised and led by Reuben Clements, and co-written by me, Barry Brook and Bill Laurance.

The idea is fairly simple and it somewhat amazes me that it hasn’t been implemented before. The SAFE (Species Ability to Forestall Extinction) index is simply the distance a population is (in terms of abundance) from its MVP. In the absence of a species-specific value, we used the 5000-individual threshold. Thus, Read the rest of this entry »
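For illustration only, the calculation is trivial to sketch. I’m assuming here a log10-difference form of the index (negative values meaning a population sits below its MVP); check the paper itself for the exact published formulation:

```python
# Sketch of a SAFE-style calculation (the log10-difference form is an
# assumption here, not a quote from the paper). In the absence of a
# species-specific MVP, the generalised 5000-individual threshold is used.
import math

MVP = 5000  # generalised minimum viable population size

def safe_index(abundance, mvp=MVP):
    """Distance (in log10 abundance) between a population and its MVP."""
    return math.log10(abundance) - math.log10(mvp)

# Negative = below the MVP threshold; positive = above it
print(safe_index(500))    # a population of 500 sits well below the MVP
print(safe_index(50000))  # a population of 50,000 sits above it
```

The appeal of a log scale is that a population of 500 and one of 50,000 are symmetrically placed around the threshold, which matches the intuition that population risk scales multiplicatively rather than additively.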





The conservation biologist’s toolbox

31 08 2010

Quite some time ago I blogged about a ‘new’ book published by Oxford University Press and edited by Navjot Sodhi and Paul Ehrlich called Conservation Biology for All in which Barry Brook and I wrote a chapter entitled The conservation biologist’s toolbox – principles for the design and analysis of conservation studies.

More recently, I attended the 2010 International Meeting of the Association for Tropical Biology and Conservation (ATBC) in Bali where I gave a 30-minute talk about the chapter, and I was overwhelmed with positive responses from the audience. The only problem was that 30 minutes wasn’t even remotely long enough to talk about all the topics we covered in the chapter, and I had to skip over a lot of material.

So…, I’ve blogged about the book, and now I thought I’d blog about the chapter.

The topics we cover are varied, but we really only deal with the ‘biological’ part of conservation biology, even though the field incorporates many other disciplines. Indeed, we write:

“Conservation biology” is an integrative branch of biological science in its own right; yet, it borrows from most disciplines in ecology and Earth systems science; it also embraces genetics, dabbles in physiology and links to veterinary science and human medicine. It is also a mathematical science because nearly all measures are quantified and must be analyzed mathematically to tease out pattern from chaos; probability theory is one of the dominant mathematical disciplines conservation biologists regularly use. As rapid human-induced global climate change becomes one of the principal concerns for all biologists charged with securing and restoring biodiversity, climatology is now playing a greater role. Conservation biology is also a social science, touching on everything from anthropology, psychology, sociology, environmental policy, geography, political science, and resource management. Because conservation biology deals primarily with conserving life in the face of anthropogenically induced changes to the biosphere, it also contains an element of economic decision making.”

And we didn’t really cover any issues in the discipline of conservation planning (that is a big topic indeed and a good starting point for this can be found by perusing The Ecology Centre‘s website). So what did we cover? The following main headings give the general flavour: Read the rest of this entry »





Linking disease, demography and climate

1 08 2010

Last week I mentioned that a group of us from Australia were travelling to Chicago to work with Bob Lacy, Phil Miller, JP Pollak and Resit Akcakaya on some pretty exciting developments in next-generation conservation ecology and management software. Also attending were Barry Brook; our postdocs Damien Fordham, Thomas Prowse and Mike Watts; our colleague (and former postdoc) Clive McMahon; and a student of Phil’s, Michelle Verant. At the close of the week-long workshop, I thought I’d share my thoughts on how it all went.

In a word, it was ‘productive’. It’s not often that you can spend a week locked in a tiny room with 10 other geeks and produce so many state-of-the-art models, but we certainly achieved more than we had anticipated.

Let me explain in brief why it’s so exciting. First, I must say that even the semi-quantitative among you should be ready for the appearance of ‘Meta-Model Manager (MMM)’ in the coming months. This clever piece of software was devised by JP, Bob and Phil to make disparate models ‘talk’ to each other during a population projection run. We had dabbled with MMM a little last year, but its value really came to light this week.

We used MMM to combine several different models that individually fail to capture the full behaviour of a population. Most of you will be familiar with the individual-based population viability analysis (PVA) software Vortex, which allows relatively easy PVA model building and is particularly useful for predicting the extinction risk of small populations. What you most likely don’t know about is what Phil, Bob and JP call Outbreak – epidemiological modelling software based on the classic susceptible-exposed-infectious-recovered (SEIR) framework. Outbreak is also an individual-based model that can talk directly to Vortex, but only through MMM. Read the rest of this entry »
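For readers unfamiliar with the SEIR framework that Outbreak is built on, here is a minimal discrete-time sketch in Python. This illustrates only the general compartment structure (susceptible → exposed → infectious → recovered); Outbreak itself is individual-based, so this is not its actual code, and the rate parameters are arbitrary:

```python
# Minimal discrete-time SEIR sketch (illustrative only; not Outbreak's code).
# S: susceptible, E: exposed, I: infectious, R: recovered.

def seir_step(S, E, I, R, beta=0.3, sigma=0.2, gamma=0.1):
    """Advance the epidemic one time step.

    beta:  transmission rate
    sigma: rate of becoming infectious (1 / latent period)
    gamma: recovery rate (1 / infectious period)
    """
    N = S + E + I + R
    new_exposed = beta * S * I / N   # susceptibles contacting infectious hosts
    new_infectious = sigma * E       # exposed individuals becoming infectious
    new_recovered = gamma * I        # infectious individuals recovering
    return (S - new_exposed,
            E + new_exposed - new_infectious,
            I + new_infectious - new_recovered,
            R + new_recovered)

# Seed one infectious animal in a population of 1000 and run 100 steps.
state = (999.0, 0.0, 1.0, 0.0)
for _ in range(100):
    state = seir_step(*state)
print(state)
```

Because each step only moves individuals between compartments, the population total is conserved; in a meta-model, a demographic simulator like Vortex would add the births and deaths on top of this.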





Mega-meta-model manager

24 07 2010

As Barry Brook just mentioned over at BraveNewClimate.com, I’ll be travelling with him and several members of our lab to Chicago tomorrow to work on some new aspects of linked climate, disease, meta-population, demographic and vegetation modelling. Barry has this to say, so I won’t bother re-inventing the wheel:

… working for a week with Dr Robert Lacy, Prof Resit Akcakaya and collaborators, on integrating spatial-demographic ecological models with climate change forecasts, and implementing multi-species projections (with the aim of improving estimates of extinction risk and providing better rankings of management and adaptation options). This work builds on a major research theme at the global ecology lab, and consequently, a whole bunch of my team are going with me — Prof Corey Bradshaw (lab co-director), my postdocs Dr Damien Fordham, Dr Mike Watts and Dr Thomas Prowse, and Corey’s and my ex-postdoc, Dr Clive McMahon. This builds on earlier work that Corey and I had been pursuing, which he described on ConservationBytes last year.

The ‘mega-meta-model manager’ part is a clever piece of control-centre software that integrates these disparate ecological, climate and disease dynamic inputs. Should be some good papers coming out of the work soon.
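To give a flavour of what getting disparate models to ‘talk’ to each other during a projection run means in practice, here is a toy coordination loop in Python. All class and function names here are hypothetical stand-ins I’ve invented for illustration; the real software’s interface is far richer than this:

```python
# Toy meta-model loop (hypothetical interfaces, not the real software's API).
# Two sub-models share one population state and take turns updating it each year.

class DemographyModel:
    """Exponential growth standing in for a full demographic/PVA model."""
    def step(self, state):
        state["N"] *= 1.0 + state["growth_rate"]
        return state

class DiseaseModel:
    """Flat epidemic mortality standing in for an epidemiological model."""
    def step(self, state):
        state["N"] *= 1.0 - state["disease_mortality"]
        return state

def run_metamodel(models, state, years):
    # The 'manager' simply passes the shared state through each sub-model
    # in turn, once per time step, so each model sees the others' effects.
    for _ in range(years):
        for m in models:
            state = m.step(state)
    return state

state = {"N": 1000.0, "growth_rate": 0.05, "disease_mortality": 0.02}
final = run_metamodel([DemographyModel(), DiseaseModel()], state, years=50)
print(round(final["N"]))
```

The point of the design is that neither sub-model needs to know the other exists: each reads and writes the shared state, and the manager handles the sequencing, which is what lets mature stand-alone packages be linked without rewriting them.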

Of course, I’ll continue to blog over the coming week. I’m not looking forward to the 30 hours of travel to Chicago tomorrow, but it should be fun and productive once I get there.

CJA Bradshaw
