Behind the paper: evaluating the sensitivity and specificity of DNA tests for wildlife

If you open a few recent journals with a focus on conservation or ecology, the chances are that you won’t be too far away from a paper that uses environmental DNA (eDNA). In fact, Biological Conservation currently has a special issue on environmental DNA as a tool for conservation, which I’ve added to my reading list for #365papers.

Genetics has been a handy tool for conservationists for a few decades now, and since the 1990s non-invasive genetic analyses have become increasingly common. More recently, the revolution in high-throughput and massively parallel DNA sequencing technologies has allowed us to tackle questions that were thought impossible even just a few years ago, including complex investigations of mixed environmental samples. You can measure biodiversity from the DNA in a soil or water sample. You can study the diets of cryptic predators through genetic analysis of their faeces. It’s all very exciting! As I’ve mentioned before, this methodological revolution has also made genetic methods available to a much broader cross-section of the scientific community, including many ecologists who don’t have much theoretical background or technical experience in the field. Now don’t get me wrong, I think this is a great thing, but as with any scientific endeavour, if you’re going to use a new method it is important to understand the limitations of that method, and how your approach might influence your outcomes.

WildlifeSNPits has covered eDNA before, and in that post Schyler Nunziata raised some concerns about false positive and false negative results in eDNA studies. Contamination is one possible source of false positive identifications. I discussed some potential sources of DNA contamination in a post last year and pointed out that contamination can occur anywhere along the chain of analysis, from sample collection and storage to DNA extraction, PCR and sequencing. I also outlined some of the measures we take in the lab to minimise the risk of contamination. Unfortunately, it is likely that no matter how much effort we put into developing new methods, we will never completely eliminate false negative or false positive results from every DNA test. So what can we do about that? Well, if the results of a test will be used to inform wildlife management decisions, it is important to understand the specificity and sensitivity of the test, as it is implemented in field conditions. We need to understand the likelihood of a false negative (low sensitivity) or a false positive (low specificity) and the management and/or financial costs associated with each.
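If you haven’t met these terms before: sensitivity and specificity are just simple ratios over the counts of true and false results. A minimal Python sketch, using made-up counts purely for illustration (not data from our study):

```python
def sensitivity(true_pos, false_neg):
    """Proportion of genuinely positive samples that the test detects."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Proportion of genuinely negative samples that the test clears."""
    return true_neg / (true_neg + false_pos)

# Hypothetical counts for illustration only:
# 100 known fox scats, 16 of which the test misses (false negatives);
# 400 known non-fox scats, 2 of which it wrongly flags (false positives).
print(f"sensitivity = {sensitivity(84, 16):.2f}")   # fraction of fox scats caught
print(f"specificity = {specificity(398, 2):.3f}")   # fraction of non-fox scats cleared
```

A test with low sensitivity misses real foxes; a test with low specificity cries wolf (or fox) when there isn’t one.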

Rocky Cape in Tasmania, foxes are not wanted here!

For some years I’ve been part of a project using scat DNA analysis to detect invasive foxes in Australia, especially in Tasmania, where there has been a recent incursion (which hopefully will not become an established population as this could be a disaster for conservation and agriculture). In a multi-year, landscape-scale, strategic survey, we have analysed over 10,000 mammalian predator scats (which could come from Tasmanian devils, eastern quolls, spotted-tailed quolls, cats, dogs or foxes), using a PCR and DNA sequencing test to identify those that contain fox DNA. To date we’ve detected fox DNA from 61 scats and, in a related project, we are now working to identify predator and prey DNA from a large proportion of the remaining scats that tested negative for fox. We’ve also put considerable effort into validating this fox DNA test as it has been implemented, not just in the lab, but from field collection of samples through to laboratory analysis. One of the outcomes of this validation work is our new paper (Ramsey et al. 2015) just published online in the Journal of Applied Ecology. Using a case-control approach, we tested over 500 scats of known origin, from captive cats, dogs, devils, quolls and foxes, to determine the risks of false positive and false negative results. This was done in a blind trial: the laboratory staff did not even know they were being tested until after the fact. Scats from captive animals were collected using normal field methods, packaged for storage and assigned unique identifiers in line with standard practice, then “smuggled” into the normal analysis pipeline where they were dried, transported to the lab and analysed.

When we use DNA to study an old scat like this, how confident can we be in our species ID?

So what did we find? Well, first, I’m very happy to report that we did not detect a single false positive result from the known non-fox scats! Of course, it is very difficult to prove that something will never happen, but this work shows that there is a very low probability of false positives using this method. Our results also demonstrate the importance of the DNA sequencing step: we don’t rely on PCR amplification alone, but sequence every PCR product to confirm sequence identity before we describe a sample as “positive for fox DNA”. Even the best species-specific PCR primers sometimes amplify DNA from non-target species, and the fox-specific primers we use are no exception. We know that they sometimes amplify DNA from rabbits and hares despite multiple sequence mismatches. After some nifty modelling by our colleague David Ramsey, we can now put numbers on the specificity of the fox test: using the PCR and DNA sequencing steps together, the specificity is 99.6%. However, if we just look at the PCR results alone, without DNA sequencing, the specificity is only 96%. This is because rabbit and hare DNA were amplified from a small number of the cat and devil scats analysed. If we just relied upon the PCR results we would have a much higher risk of false positives for this test, because we would have erroneously scored those cat and devil samples as fox-positive.

This high specificity is a good result, but unfortunately it comes with a trade-off in terms of sensitivity. The measures we take to reduce the number of false positives have the effect of increasing the number of false negative results. Using the PCR step alone, the sensitivity of the test is 94%. It is not surprising that a few fox positive scats were missed, because sometimes scat DNA is too degraded, or contains too many inhibitors, for PCR to be successful. However, when we include the DNA sequencing step as well, the sensitivity of the test decreases to 84%. Sadly we just can’t get a good, diagnostic DNA sequence from all of the scats from which we can amplify DNA.
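One way to see what this trade-off means in practice is to convert sensitivity and specificity into post-test probabilities using Bayes’ rule. The sketch below uses the combined PCR-plus-sequencing figures from our paper, but the 1% prevalence of fox scats is an assumed, purely illustrative number, not an estimate from the study:

```python
def predictive_values(sens, spec, prevalence):
    """Bayes' rule: turn test accuracy into post-test probabilities."""
    p = prevalence
    # P(truly fox | test positive)
    ppv = sens * p / (sens * p + (1 - spec) * (1 - p))
    # P(truly not fox | test negative)
    npv = spec * (1 - p) / (spec * (1 - p) + (1 - sens) * p)
    return ppv, npv

# Combined PCR + sequencing test, with an assumed 1% prevalence:
ppv, npv = predictive_values(sens=0.84, spec=0.996, prevalence=0.01)
print(f"positive predictive value = {ppv:.2f}")
print(f"negative predictive value = {npv:.3f}")
```

When the thing you are looking for is rare, even a highly specific test yields a surprising proportion of false alarms among its positives, while a negative result remains very reassuring; that asymmetry is exactly why the downstream management costs of each error type matter.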

So what does this all mean in real life? Well, if we mistakenly identify a sample as fox-positive, this might lead to a lot of money being spent by wildlife management agencies to eradicate foxes from an area they were never present in, wasting both resources and public goodwill. Conversely, if we fail to detect a genuine fox scat, we potentially allow foxes to become better established and increase the risk to threatened species. Based on our recent results, the latter scenario seems more likely. But we are not the people who make the decisions about how and where to allocate resources. Importantly, by validating the DNA tests we use, we are able to provide the decision-makers with information about the likelihood of mistaken identifications, which they can then use to assess the risks associated with different management strategies.

