komponisto comments on The Cameron Todd Willingham test - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
In both instances, the prosecution case amounts to roughly zero bits of evidence. However, demographics give Willingham a higher prior of guilt than Knox, perhaps by something like an order of magnitude (1 to 4 bits). I am therefore about an order of magnitude more confident in Knox's innocence than Willingham's.
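The arithmetic here is just Bayes' rule in odds form: each bit of evidence doubles the odds, and roughly zero bits of trial evidence leaves the priors where they started. A minimal sketch, with entirely hypothetical prior odds (the comment gives only the relative gap, not absolute numbers):

```python
def posterior_odds(prior_odds, evidence_bits):
    """Bayes' rule in log-odds form: each bit of evidence doubles the odds."""
    return prior_odds * 2 ** evidence_bits

# Hypothetical numbers for illustration only: suppose demographics put
# Willingham's prior odds of guilt ~3 bits (8x) above Knox's, and the
# prosecution's case contributes ~0 bits in both trials.
knox_prior = 1 / 1000
willingham_prior = posterior_odds(knox_prior, 3)  # 8x higher prior

# Zero bits of evidence leaves the odds unchanged, so the order-of-magnitude
# gap in the priors carries straight through to the posteriors.
print(posterior_odds(knox_prior, 0))        # 0.001
print(posterior_odds(willingham_prior, 0))  # 0.008
```
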
Bayesian jurors (preferably along with Bayesian prosecutors and judges); that's really all it comes down to.
In particular, discussions about the structure of the judicial system are pretty much beside the point, in my view. (The Knox case is not about the Italian justice system, pace just about everyone.) Such systematic rules exist mostly as an attempt at correcting for predictable Bayesian failures on the part of the people involved. In fact, most legal rules of evidence are nothing but crude analogues of a corresponding Bayesian principle. For example, the "presumption of innocence" is a direct counterpart of the Bayesian prohibition against privileging the hypothesis.
There is this notion that Bayesian and legal reasoning are in some kind of constant conflict or tension, and oh-whatever-are-we-to-do as rationalists when judging a criminal case. (See here for a classic example of this kind of hand-wringing.) I would like to dispel this notion. It's really quite simple: "beyond a reasonable doubt" just means P(guilty|evidence) has to be above some threshold, like 99%, or something. In which case, if it's 85%, you don't convict. That's all there is to it. (In particular, away with this nonsense about how P(guilty|evidence) is not the quantity jurors should be interested in; of course it is!)
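The decision rule being described is a one-line threshold test on the posterior. A sketch, using the 99% figure the comment offers as an illustrative threshold rather than settled legal doctrine:

```python
def verdict(p_guilty_given_evidence, threshold=0.99):
    """'Beyond a reasonable doubt' as a simple posterior threshold.
    The 0.99 default is the comment's illustrative number, not law."""
    return "convict" if p_guilty_given_evidence >= threshold else "acquit"

print(verdict(0.85))   # acquit: probably guilty, but not beyond the threshold
print(verdict(0.995))  # convict
```
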
From our perspective as rationality-advocates, the best means of improving justice is not some systematic reform of legal systems, but rather is simply to raise the sanity waterline of the population in general.
Now that you mention it directly, it's flabbergasting that no one's ever said what probability level "beyond a reasonable doubt" corresponds to (legal eagles: correct me if I'm wrong). That's a pretty gaping deviation from a properly Bayesian legal system right there.
Well, the number could hardly be made explicit, for political reasons ("you mean it's acceptable to have x wrongful convictions per year?? We shouldn't tolerate any at all!").
In any case, let me not be interpreted as arguing that the legal system was designed by people with a deep understanding of Bayesianism. I say only that we, as Bayesians, are not prevented from working rationally within it.
This is the third time on LW that I've seen the percentage of certainty for convictions conflated with the percentage of wrongful convictions (I suspect it's just quick writing or perhaps my overwillingness to see that implication on this particular post). They're not identical.
Suppose we had a quantified standard of 99% certainty, and juries were entirely rational actors who understood just how thin a slice 1% is, and were given unskewed evidence. The percentage of wrongful convictions at trial would then be well under 1%: juries would convict on cases ranging from 99% certainty up to nearly 100% certainty, and the actual wrongful-conviction rate would depend on how cases are distributed within that range.
Yes, the certainty level provides an upper bound on the rate of wrongful convictions: a 99% certainty requirement means at least 99% certainty in each conviction, so an expected error rate of at most 1%.