In 2004, the state of Texas executed Cameron Todd Willingham by lethal injection for the crime of murdering his young children by setting fire to his house.
In 2009, David Grann wrote an extended examination of the evidence in the Willingham case for The New Yorker, which called Willingham's guilt into question. One of the prosecutors in the Willingham case, John Jackson, wrote a response summarizing the evidence from his current perspective. I am not summarizing the evidence here, so as not to give the impression of selectively choosing it.
A prior probability estimate for Willingham's guilt (certainly not close to an optimal prior) is the probability that a fire resulting in the deaths of children was intentionally set. The US Fire Administration puts this probability at 13%. The prior could be made more accurate by breaking that 13% down by demographic factors, or by looking at correlations with other data, such as life insurance records.
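As a minimal sketch of how such a prior gets updated, the Python snippet below applies Bayes' theorem in odds form. The 13% prior is the USFA figure above; the likelihood ratio is a purely hypothetical placeholder standing in for the net strength of the case-specific evidence.

```python
def posterior_guilt(prior, likelihood_ratio):
    """Bayes' theorem in odds form.

    likelihood_ratio = P(evidence | guilty) / P(evidence | innocent)
    """
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# 13% prior from the USFA figure; a likelihood ratio of 0.1 (hypothetical)
# would mean the evidence is 10 times likelier under innocence than guilt.
print(posterior_guilt(0.13, 0.1))  # ~0.015
```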
My question for Less Wrong: Just how innocent is Cameron Todd Willingham? Intuitively, it seems to me that the evidence for Willingham's innocence is stronger than the evidence for Amanda Knox's innocence. But the prior probability that Willingham is guilty, given that his children died in a fire in his home, is higher than the prior probability that Amanda Knox committed murder, given that a murder occurred in her house.
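The tension between the two cases can be made concrete in odds form: a higher prior can leave a higher posterior probability of guilt even when the evidence of innocence is stronger. All four numbers below are invented purely for illustration and are not estimates for either actual case.

```python
def posterior_guilt(prior, likelihood_ratio):
    odds = (prior / (1 - prior)) * likelihood_ratio
    return odds / (1 + odds)

# Hypothetical numbers only.
# Willingham: higher prior (his children died in a fire in his home),
# but stronger evidence of innocence (smaller likelihood ratio).
print(posterior_guilt(0.13, 0.05))  # ~0.0074
# Knox: lower prior (a murder occurred in her house),
# but weaker evidence of innocence.
print(posterior_guilt(0.01, 0.5))   # ~0.0050
```

In this toy comparison, even evidence of innocence ten times stronger leaves the Willingham posterior higher than the Knox posterior, which is exactly the intuition the question above is probing.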
Challenge question: What would an idealized form of Bayesian Justice look like? I suspect, as a start, that it would result in a smaller percentage of defendants being found guilty at trial. This article has some examples of failures to apply Bayesian statistics in existing justice systems.
I think something like an adversarial system with a jury of peers would still be a good choice with perfect Bayesian agents. This structure exists largely to avoid conflicts of interest, not merely to compensate for human irrationality. Unless you assume that perfect Bayesian agents would not have conflicts of interest (and I don't see any reason to suppose that), you would still want to maintain those aspects of the legal system that are designed to avoid such conflicts.
There are two common classes of reasons why evidence may be inadmissible or excludable at trial. One of these classes should probably be admissible with a perfect Bayesian jury; the other should not.
Evidence that is inadmissible because it would prejudice or mislead the jury (such as information about prior convictions) would probably be fine with a jury of perfect Bayesians. Evidence that is excluded because it was obtained in a way society deems unacceptable, however, might still be rejected out of broader concern about creating inappropriate incentives for law enforcement.
This raises the question of just how perfect your Bayesians are, however. If they are very good at correctly weighing relevant evidence but still have computational limits, these concerns would probably apply. If they are idealized agents with infinite computational capacity, you might draw different conclusions, but since that case is impossible, it is not very interesting in my opinion.
Obviously, if everyone were a perfect Bayesian agent, a jury would be fine. But I find imagining how things would work with perfect Bayesians boring. I was thinking more realistically: how would I design a new justice system tomorrow (or in five years, when we're really good at this) to minimize irrationality as much as possible? No perfect Bayesians, just smart people you can teach things to.
The advantage of judges over juries is that we could teach judges to be rationalists as part of their job. Also, the adversarial system strikes me as biased ...