Hidden in plain sight: the problems with current DNA evidence

“In short, the technology and science is so disputed, that there is insufficient consensus in the scientific community regarding the admissibility of these LRs.” [1]

“The concluding lesson from the extensive testimony and complex documentary evidence presented in this case is that the specific care required for low-template, low level DNA testing has largely faded into the background as the shortcomings of the technology and need for stringent controls on its use have been glossed over in the rush to embrace the technological advancements.”[2]

These two quotations are from a legal paper and a recent court judgement in the USA relating to DNA evidence that is now the standard in UK courts: evidence obtained using statistics and computer programmes. If you have received a DNA report then you have probably seen this type of evidence, characterised by words like:

“I have considered two propositions:

  1. [some proposition including the defendant]
  2. [some other proposition]

I have calculated that the evidence is [some number] times more likely …”

Some lawyers in the UK recognise the problems but may find it hard to unearth a scientist prepared to challenge the current dogma. Few ‘experts’ appear to be prepared to go to court to challenge this meaningless statistic or the controversial means used to calculate it. One software programme used in UK courts is provided by a single lab, which developed the programme itself, and is not used anywhere else in the world. The other main programme has been the subject of continuing criticism and challenge.

The reputation of DNA as a ‘crime solver’ was made decades ago with samples containing relatively large amounts of DNA from visible substances such as blood, semen or saliva. In the quest to solve more crimes, the limits of the techniques have been extended, some would say exceeded, in the new world of invisible traces and complex low-level mixtures of DNA from several people: Low Template DNA, frequently, and wrongly, described as ‘touch DNA’ (it is ‘trace DNA’).

It is obvious why forensic service providers funded by large police contracts may want to ‘deconvolute more DNA profiles’ (as the tag-line of one commercial supplier of such software goes). But why should it be necessary to keep trying to get more and more information from less and less DNA, with the accompanying reduction in evidential value? Few scientists are in a position to challenge the doctrinaire tide and the tsunami of papers emanating from the commercial developers.

Interpretation of these mixtures is already controversial even before we delve into complex computer programmes that produce a statistic which few understand and which has been shown to be misunderstood by many. Somewhat ironically, the Court and the author of the two quotations at the head of this article, despite clearly having made a real effort to read about and understand the Likelihood Ratio, both make a well-known error in explaining what it means. They state:

“The DNA analysis produced a report based on STRmix™ probabilistic genotyping software that Gissantaner was a 7% minor contributor of the DNA on the gun, and that it was at least 49 million times more likely that the DNA was that of Gissantaner and two unrelated, unknown individuals, than that the DNA was that of three unrelated, unknown contributors.”

“A likelihood ratio (LR) compares the probabilities of two different hypotheses that seek to explain a given piece of evidence.”

Neither of these statements may appear wrong to those uninitiated into the Bayesian priesthood. The experts should have said that the evidence was x times more probable if the prosecution’s version of events was true than if the defendant’s version of events was true. The statistic says nothing about what everyone outside of a forensic DNA lab expects it to say: the probability that the DNA came from the defendant. But then again, even senior DNA scientists have made the same error in court … at least twice:

“For mixtures such as this, the statistic that we do is called the likelihood ratio and this is basically looking at two different scenarios and seeing which scenario is more probable. …

And, again, the statistic is looking to see which one of those scenarios is more probable.”[i]

“[W]e do a statistic called “likelihood ratio and run” [sic] using the Forensic Statistical Tool, which is an in-house calculator of the likelihood ratio. What it is doing is basically looking at two scenarios to see which one is more probable or which one is more likely.”[ii]

The list of those ‘transposing the conditional’ is long and (otherwise) distinguished. It is nevertheless a harmful error, as it substitutes a frequently damning inference for what the number really means. It is therefore highly prejudicial to the defendant.

This is a subtle but well-known error. It is akin to confusing the probability that a dog is a four-legged animal with the probability that a four-legged animal is a dog. That is the central problem in understanding this statistic. Courts, lawyers and scientists supposedly trained to avoid this error (called the ‘transposed conditional’) nevertheless routinely, and understandably, make it. What the LR does is take the probability of the evidence (i.e. NOT of the hypothesis or scenario) if some explanation (usually called a hypothesis or proposition) is true, and compare it to the probability of that evidence if a different explanation is true. It does not measure the probability of the actual proposition, hypothesis or scenario; it ASSUMES the story is true. As summarised by a forensic statistician –

Last and most important, always talk about the probability of the evidence given the proposition. Never talk about the probability of the proposition given the evidence. Forget this and you commit the prosecutor’s fallacy.[iii]
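
A toy calculation makes the transposition concrete. The sketch below uses invented numbers purely for illustration; nothing in it comes from any real case:

```python
# Toy illustration of the transposed conditional (all numbers invented).
# Imagine a population containing 100 dogs, of which 98 have four legs,
# plus 900 other four-legged animals.

dogs = 100
four_legged_dogs = 98
other_four_legged_animals = 900

# P(four legs | dog): near certainty.
p_four_legs_given_dog = four_legged_dogs / dogs

# P(dog | four legs): quite unlikely -- a different question entirely.
p_dog_given_four_legs = four_legged_dogs / (four_legged_dogs + other_four_legged_animals)

print(f"P(four legs | dog) = {p_four_legs_given_dog:.2f}")  # 0.98
print(f"P(dog | four legs) = {p_dog_given_four_legs:.2f}")  # 0.10
```

Swapping the two conditionals changes the answer by an order of magnitude even in this toy example; in a courtroom the gap can be far larger.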

Although the pair of hypotheses reported by the prosecution scientists may be different, there are other hypotheses that could also explain the evidence, and that is a major problem in attaching any meaning to the bald statistic provided to the court: it does not consider all of the possible explanations for the evidence. That doesn’t appear to inhibit experts attaching arbitrary verbal labels such as ‘strong’ or ‘moderately strong’ to the statistic; labels which have also been shown to be misunderstood by lay people, as well as being a subtle form of the transposed conditional.

The LR is most often confused with the Random Match Probability (e.g. ‘the chance that a person selected at random from the population would match this DNA profile is less than 1 in a billion’), which the LR has now replaced in scientific reports.
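
The confusion is understandable: in the simplest single-source case the two figures are, under common simplifying assumptions, numerical reciprocals of one another. The sketch below uses an invented match probability and is not a description of any reported case:

```python
# Sketch (invented figures): for a simple single-source profile, assuming
# the suspect's profile matches the crime-scene profile exactly, the LR
# collapses to the reciprocal of the random match probability (RMP).

rmp = 1e-9  # P(a person selected at random would match the profile)

p_evidence_if_suspect_is_source = 1.0  # a true source always matches
p_evidence_if_unknown_is_source = rmp  # a random person rarely matches

lr = p_evidence_if_suspect_is_source / p_evidence_if_unknown_is_source
print(f"LR = {lr:.0e}")  # 1e+09

# For low-template, multi-contributor mixtures no such tidy relationship
# exists: the LR depends on modelling choices buried inside the software.
```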

The advent of testing invisible material swabbed from items has increased the problem of DNA transfer: the DNA may have nothing to do with the crime. Ironically, ‘prosecution’ scientists have no difficulty in creating a ‘defence hypothesis’ for the source of the DNA (i.e. that it did not come from the suspect), but not when DNA transfer is considered and the defendant has provided no explanation, despite the obvious one being that he didn’t touch the object (“In the absence of an explanation from the suspect I cannot evaluate this evidence further”).

Achieving significant progress against this tide is frustrating and time-consuming, with the prosecution relying on a self-appointed ‘forensic community’ of very few scientists (the ‘Statsi’, as they have been termed by one wit). This select group has already been criticised, but its members dominate the literature and the forensic bodies, pumping out ‘peer-reviewed’ papers at an unprecedented and unchallengeable rate. This gives the appearance of an overwhelming body of opinion in favour of the latest gizmo which, surprise, delivers more prosecutions.

However, the Court in Gissantaner states,

“based on the testimony and other voluminous evidence, presented over a year-and-a-half of hearings, briefing and examination by counsel, experts and the Court …

Given the testimony and evidence, it is the Court’s conclusion that STRmix™ does have some general acceptance in the scientific community, particularly with respect to simple mixture or “mainstream” higher quality and quantity DNA. However, the application of probabilistic genotyping software, including STRmix™, to the interpretation of complex mixture low-template, low level DNA in the manner used in this case to present a likelihood ratio in a criminal prosecution, remains controversial …

The concluding lesson from the extensive testimony and complex documentary evidence presented in this case is that the specific care required for low-template, low level DNA testing has largely faded into the background as the shortcomings of the technology and need for stringent controls on its use have been glossed over in the rush to embrace the technological advancements.”

In other words, in the headlong rush for more from less, the establishment of the science’s reliability through proper testing and review has been sidestepped. The case of Gissantaner was no one-day Q&A session: it was 18 months of consideration, with court-appointed experts subjected to strong judicial interest and scrutiny. Has a similar investigation occurred in the UK?

The paper by Stiffelman raises a rather interesting point: the LR may undermine the entire foundation of a fundamental legal principle, the presumption of innocence. Stiffelman argues, and we and others agree, that the LR is essentially meaningless without what is called a ‘prior’. The prior is the degree of belief in, or probability of, the proposition that the DNA came from the defendant before we know the evidential DNA profile data in the case.

“In order to convert an LR into a probability, a Bayesian analysis has to be applied. … In order for an LR to answer anything meaningful to a jury, the juror must first postulate a prior probability of guilt. Again, as described by the National Research Council, “[t]he likelihood ratio is still one step removed from what a judge or jury truly seeks — an estimate of the probability that a suspect was the source of a crime sample, given the observed profile of the DNA extracted from samples.” …They [LRs] do not express the probable likelihood of the truth of either hypothesis or the probability that the defendant is the source of the DNA.

But there is an even larger and quite distinct problem with this evidence. The opacity of the LRs obscures something important that is happening when they are introduced in a criminal trial. The complicated math and science distracts judges, lawyers, and surely jurors from the essential nature of this evidence — that it expresses the relative probability of two hypotheticals. Bayesian reasoning has to be employed to convert the LRs into a meaningful probability, and, as I discuss supra, this undermines the presumption of innocence and the prosecutor’s burden of proving their case beyond a reasonable doubt. This concern is exacerbated by the sheer power of the DNA moniker, which dwarfs any and all other less purportedly “scientific” evidence.”
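
Stiffelman’s point can be made concrete with a short sketch of Bayes’ theorem in odds form (posterior odds = prior odds × LR). The LR of 49 million echoes the figure reported in Gissantaner; the priors are invented for illustration only:

```python
# Bayes' theorem in odds form: posterior odds = prior odds x LR.
# The same LR yields wildly different answers to "what is the probability
# the DNA came from the defendant?" depending on the prior a juror adopts
# -- which is why the bare LR, on its own, answers nothing.

def posterior_probability(prior: float, lr: float) -> float:
    """Convert a prior probability and a likelihood ratio into a posterior."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * lr
    return posterior_odds / (1.0 + posterior_odds)

lr = 49e6  # the LR reported in Gissantaner

for prior in (0.5, 1e-6, 1e-9):
    print(f"prior {prior:.0e} -> posterior {posterior_probability(prior, lr):.3f}")

# prior 5e-01 -> posterior 1.000  (juror starts at 'evens': near certainty)
# prior 1e-06 -> posterior 0.980  (one-in-a-million prior: 98%)
# prior 1e-09 -> posterior 0.047  (one-in-a-billion prior: under 5%)
```

A juror who starts from anything like a presumption of innocence (a very small prior) is left far from certainty even by an LR in the tens of millions; a juror who silently assumes ‘evens’ is all but certain. The number decides nothing until a prior is supplied, and that is precisely the step the courtroom presentation hides.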

We have appeared in courts around the world challenging the Likelihood Ratio and the currently fashionable probabilistic genotyping software, the ‘must-have’ for any ‘on message’ forensic science laboratory wanting more from less. It has taken some time, but others are beginning to realise that the emperor has no clothes. How long until courts and the entire legal profession take the same level of interest in understanding, questioning and challenging the new DNA dogma being presented in courts every day?

[1] Stiffelman, B. (2019) No Longer the Gold Standard: Probabilistic Genotyping is Changing the Nature of DNA Evidence in Criminal Trials. Berkeley Journal of Criminal Law, Vol. 24:1.

[2] US v Daniel Gissantaner, Western District of Michigan, Southern Division, Case 1:17-cr-130 (2019).

[i] Evidence of Craig O’Connor, Criminalist Level III of OCME, in Supreme Court of the State of New York, County of New York, Criminal Term: Part 32; The People of the State of New York, Indictment No. 5073/10, jury trial of Raymond Rizzo.

[ii] Evidence of Craig O’Connor, Criminalist Level III of OCME, in People v Colon, January 20th, 2015.

[iii] Ian Evett, Evaluation and Professionalism, 49 Science and Justice 159 (2009).

Professor Allan Jamieson

www.theforensicinstitute.com
