"The mathematical mistakes that could be undermining justice"
They failed, though, to convince the jury of the value of the Bayesian approach, and Adams was convicted. He appealed twice unsuccessfully, with an appeal judge eventually ruling that the jury's job was "to evaluate evidence not by means of a formula... but by the joint application of their individual common sense."
But what if common sense runs counter to justice? For David Lucy, a mathematician at Lancaster University in the UK, the Adams judgment indicates a cultural tradition that needs changing. "In some cases, statistical analysis is the only way to evaluate evidence, because intuition can lead to outcomes based upon fallacies," he says.
Norman Fenton, a computer scientist at Queen Mary, University of London, who has worked for defence teams in criminal trials, has just come up with a possible solution. With his colleague Martin Neil, he has developed a system of step-by-step pictures and decision trees to help jurors grasp Bayesian reasoning (bit.ly/1c3tgj). Once a jury has been convinced that the method works, the duo argue, experts should be allowed to apply Bayes's theorem to the facts of the case as a kind of "black box" that calculates how the probability of innocence or guilt changes as each piece of evidence is presented. "You wouldn't question the steps of an electronic calculator, so why here?" Fenton asks.
It is a controversial suggestion. Taken to its logical conclusion, it might see the outcome of a trial balance on a single calculation. Working out Bayesian probabilities with DNA and blood matches is all very well, but quantifying incriminating factors such as appearance and behaviour is more difficult. "Different jurors will interpret different bits of evidence differently. It's not the job of a mathematician to do it for them," says Donnelly.
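For readers who want to see what Fenton's "black box" would actually be computing, the usual starting point is the odds form of Bayes's theorem. This is a generic textbook statement, not Fenton and Neil's specific Bayesian network:

$$
\underbrace{\frac{P(G \mid E)}{P(\neg G \mid E)}}_{\text{posterior odds}}
\;=\;
\underbrace{\frac{P(E \mid G)}{P(E \mid \neg G)}}_{\text{likelihood ratio}}
\times
\underbrace{\frac{P(G)}{P(\neg G)}}_{\text{prior odds}}
$$

Here $G$ is the hypothesis of guilt and $E$ is one piece of evidence; each new item of evidence multiplies the current odds of guilt by its likelihood ratio (assuming, for simplicity, that the items are independent given guilt or innocence).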
The linked paper is "Avoiding Probabilistic Reasoning Fallacies in Legal Practice using Bayesian Networks" by Norman Fenton and Martin Neil. The interesting parts, IMO, begin on page 9, where they argue for using the likelihood ratio as the key piece of information about a piece of evidence, rather than raw probabilities; page 17, where a DNA example is worked out; and pages 21-25, on the key piece of evidence in the Bellfield trial: the fact that no one claimed a lost possession (nearly worthless evidence).
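To make the likelihood-ratio framing concrete, here is a minimal Python sketch of that kind of sequential updating. Every number in it (the prior, the evidence items, and the likelihood ratios) is invented for illustration and is not taken from the paper or from the Bellfield case; the point is only to show how an item with a likelihood ratio near 1 barely moves the probability of guilt, which is what makes it "nearly worthless".

```python
# Illustrative only: sequential Bayesian updating with likelihood ratios.
# All numbers below are made up for demonstration; they are NOT taken from
# the Fenton & Neil paper or from any real case.

def update_odds(prior_odds, likelihood_ratio):
    """Multiply the current odds of guilt by one item's likelihood ratio."""
    return prior_odds * likelihood_ratio

def odds_to_probability(odds):
    """Convert odds (p / (1 - p)) back to a probability."""
    return odds / (1.0 + odds)

# Hypothetical prior: suspect drawn from a pool of 10,000 plausible candidates.
odds = 1.0 / 9999.0

# Each item of evidence is summarised by a likelihood ratio:
# P(evidence | guilty) / P(evidence | innocent).
evidence = [
    ("DNA match (random-match probability ~1 in a million)", 1_000_000.0),
    ("Seen near the scene around the relevant time", 4.0),
    ("No one claimed the lost possession", 1.1),  # LR near 1: nearly worthless
]

for description, lr in evidence:
    odds = update_odds(odds, lr)
    print(f"{description}: P(guilt) = {odds_to_probability(odds):.4f}")
```

Running this, the DNA match does almost all of the work, while the final item changes the posterior only in the third decimal place, despite being presented in court as significant.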
Related reading: Inherited Improbabilities: Transferring the Burden of Proof, on Amanda Knox.
You misunderstand. There was no normative implication intended about explicit formulation. My claim is much weaker than you think (but also abstract enough that it may be difficult to understand how weak it is). I simply assert that Bayesian updating is a mathematical definition of what "inference" means, in the abstract. This does not say anything about the details of how humans process information, and nor does it say anything about how mathematically explicit we "should" be about our reasoning in order for it to be valid. You concede everything you need to in order to agree with me when you write:
In fact, this actually concedes more than necessary -- because it could turn out that these algorithms are only approximately Bayesian, and my claim about Bayesianism as the ideal abstract standard would still hold (as indeed implied by the phrase "approximately Bayesian").
Of course, this does in my view have the implication that it is appropriate for people who understand Bayesian language to use it when discussing their beliefs, especially in the context of a disagreement or other situation where one person doesn't understand the other's thought process. I suspect this is the real point of controversy here (cf. our previous arguments about using numerical probabilities).
Yes, the reason why I often bring up this point is the danger of spurious exactitude in situations like these. Clearly, if you are able to discuss the...