In 2004, the State of Texas executed Cameron Todd Willingham via lethal injection for the crime of murdering his young children by setting fire to his house.
In 2009, David Grann wrote an extended examination of the evidence in the Willingham case for The New Yorker, which called Willingham's guilt into question. One of the prosecutors in the case, John Jackson, wrote a response summarizing the evidence from his current perspective. I am not summarizing the evidence here, so as not to give the impression of selectively choosing it.
A prior probability estimate for Willingham's guilt (certainly not an optimal prior) is the probability that a fire resulting in the deaths of children was intentionally set. The US Fire Administration puts this probability at 13%. The prior could be made more accurate by breaking that 13% of intentionally set fires down by demographic group, or by looking at correlations with other variables, such as life insurance data.
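To make the role of that 13% base rate concrete, here is a minimal sketch of an odds-form Bayesian update. The 13% prior is from the post; the likelihood ratio in the example is purely hypothetical, invented only to show the mechanics of the update.

```python
def posterior_probability(prior_prob, likelihood_ratio):
    """Update a prior probability by a likelihood ratio (odds form of Bayes)."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

prior = 0.13  # P(fire was intentionally set), per the US Fire Administration

# Hypothetical: suppose the forensic evidence were judged three times more
# likely under innocence than guilt, i.e. a likelihood ratio for guilt of 1/3.
print(posterior_probability(prior, 1 / 3))  # ≈ 0.047
```

The point of the sketch is only that the base rate anchors the calculation; the actual evidential likelihood ratios are what the case turns on.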
My question for Less Wrong: Just how innocent is Cameron Todd Willingham? Intuitively, the evidence for Willingham's innocence seems stronger to me than the evidence for Amanda Knox's innocence. But the prior probability that Willingham was guilty, given that his children died in a fire in his home, is higher than the prior probability that Amanda Knox committed murder, given that a murder occurred in her house.
Challenge question: What does an idealized form of Bayesian Justice look like? I suspect as a start that it would result in a smaller percentage of defendants being found guilty at trial. This article has some examples of the failures to apply Bayesian statistics in existing justice systems.
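One toy way to picture "Bayesian Justice" is a procedure in which each piece of evidence contributes an explicit likelihood ratio and the standard of proof is an explicit posterior threshold. This is only a sketch of that idea; every number below is invented for illustration, and real evidence is rarely independent in the way the multiplication assumes.

```python
import math

def verdict_posterior(prior_prob, likelihood_ratios):
    """Combine a prior with (assumed independent) likelihood ratios in log-odds."""
    log_odds = math.log(prior_prob / (1 - prior_prob))
    for lr in likelihood_ratios:
        log_odds += math.log(lr)
    return 1 / (1 + math.exp(-log_odds))

prior = 0.13                  # base rate from the post
evidence = [2.0, 0.5, 4.0]    # hypothetical likelihood ratios for guilt
threshold = 0.99              # "beyond a reasonable doubt", made explicit

posterior = verdict_posterior(prior, evidence)
print(posterior, posterior > threshold)  # ≈ 0.374, below the threshold
```

Making the threshold explicit is what suggests a smaller fraction of guilty verdicts: a juror's intuitive certainty often exceeds what the multiplied-out evidence actually supports.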
Note: The quantitative elements of this post have now been revised significantly.
Followup to: You Be the Jury: Survey on a Current Event
All three of them clearly killed her. The jury clearly believed so as well which strengthens my argument. They spent months examining the case, so the idea that a few minutes of internet research makes [other commenters] certain they're wrong seems laughable
- lordweiner27, commenting on my previous post
Wielding the Sword of Bayes -- or for that matter the Razor of Occam -- requires courage and a certain kind of ruthlessness. You have to be willing to cut your way through vast quantities of noise and focus in like a laser on the signal.
But the tools of rationality are extremely powerful if you know how to use them.
Rationality is not easy for humans. Our brains were optimized to arrive at correct conclusions about the world only insofar as that was a necessary byproduct of being optimized to pass the genetic material that made them on to the next generation. If you've been reading Less Wrong for any significant length of time, you probably know this by now. In fact, around here this is almost a banality -- a cached thought. "We get it," you may be tempted to say. "So stop signaling your tribal allegiance to this website and move on to some new, nontrivial meta-insight."
But this is one of those things that truly do bear repeating, over and over again, almost at every opportunity. You really can't hear it enough. It has consequences, you see. The most important of which is: if you only do what feels epistemically "natural" all the time, you're going to be, well, wrong. And probably not just "sooner or later", either. Chances are, you're going to be wrong quite a lot.
Over this past weekend I listened to an episode of This American Life titled Pro Se. Although the episode is nominally about people defending themselves in court, its first act is about a man who pretended to be insane in order to get out of a prison sentence for an assault charge. There doesn't appear to be a transcript, so I'll summarize here first.
A man, we'll call him John, was arrested in the late 1990s for assaulting a homeless man. Given that there was plenty of evidence to prove him guilty, he was looking for a way to avoid the likely jail sentence of five to seven years. The other prisoners he was being held with suggested that he plead insanity: he'd be put up at a hospital for several months with hot food and TV and released once they considered him "rehabilitated". So he took bits and pieces about how insane people are supposed to act from movies he had seen and used them to form a case for his own insanity. The court believed him, but rather than sending him to a cushy hospital, they sent him to a maximum security asylum for the criminally insane.
Within a day of arriving, John realized the mistake he had made and sought a way out. He tried a variety of techniques: engaging in therapy, not engaging in therapy, dressing like a sane person, acting like a sane person, acting like an incurably insane person. None of it worked. More than a decade later, he is still being held.
As the story unfolds, we learn that although John makes a convincing case that he faked his way in and is being held unjustly, the psychiatrists at the asylum know that he faked his way in and continue to hold him anyway, though John is unaware of this. The reason: through his long years of documented behavior, John has made it clear to the psychiatrists that he is a psychopath/sociopath and is not safe to return to society without therapy. John knows that this is his diagnosis, but continues to believe himself sane.
As with trying to determine whether you are anosognosic, how do you determine whether you are insane? Some kinds of insanity can be self-diagnosed, but John has ample evidence that he is insane (he can read all of his own medical records), yet continues to believe that he is not. To me this seems a level trickier than anosognosia, since there are no physical tests you can run; but perhaps the difference matters only to people, not to an AI.
Edited to add a footnote: By "sane" I simply mean normative human reasoning: the way you expect, all else being equal, a human to think about things. While the discussion in the comments about how to define sanity might be of some interest, it really gets away from the point of the post unless you want to argue that "sanity" is creating a question here that is best solved by dissolving the question (as at least one commenter does).