The following will explore a couple of areas in which I feel that the criminal justice system of many Western countries might be deficient, from the standpoint of rationality. I am very much interested to know your thoughts on these and other questions of the law, as far as they relate to rational considerations.
Moral luck refers to the phenomenon in which behaviour by an agent is adjudged differently based on factors outside the agent's control.
Suppose that Alice and Yelena, on opposite ends of town, drive home drunk from the bar, and both dazedly speed through a red light, unaware of their surroundings. Yelena gets through nonetheless, but Alice hits a young pedestrian, killing him instantly. Alice is liable to be tried for manslaughter or some similar charge; Yelena, if she is caught, will only receive the drunk driving charge and lose her license.
Raymond, a day after finding out that his ex is now in a relationship with Pardip, accosts Pardip at his home and attempts to stab him in the chest; Pardip smashes a piece of crockery over Raymond's head, knocking him unconscious. Raymond is convicted of attempted murder, receiving typically 3-5 years chez nous (in Canada). If he had succeeded, he would have received a life sentence, with parole in 10-25 years.
Why should Alice be punished by the law and demonized by the public so much more than Yelena, when their actions were identical, differing only by the sheerest accident? Why should Raymond receive a lighter sentence for being an unsuccessful murderer?
Some prima facie plausible justifications:
- Identical behaviour is hard to judge - perhaps Yelena was really keeping a better eye on the road than Alice; perhaps Raymond would have performed a non-fatal stabbing.
- The law needs to crack down harder when there are actual victims, in order to provide the victims and families a sense of justice done.
- This could result in far too many serious, high-level trials.
Trial by Jury; Trial by Judge
Those of us who like classic films may remember 12 Angry Men (1957) with Henry Fonda. This was a remarkably good film about a jury deliberating on the murder trial of a poor young man from a bad neighbourhood, accused of killing his father. It portrays the indifference (one juror wants to be out in time for the baseball game), prejudice and conformity of many of the jurors, and how this is overcome by one man of integrity who decides to insist on a thorough look through the evidence and testimony.
I do not wish to generalize from fictional examples; however, such factors are manifestly at play in real trials, in which Henry Fonda cannot necessarily be relied upon to save the day.
Komponisto has written on the Knox case, in which an Italian jury came to a very questionable (to put it mildly) conclusion based on the evidence presented to them; other examples will doubtless spring to mind (a famous one in this neck of the woods is the Stephen Truscott case, the evidence against Truscott being entirely circumstantial).
More information on trial by jury and its limitations may be found here. Recently the UK has made some moves toward trial by judge for certain cases, specifically fraud cases in which jury tampering is a problem.
The justifications cited for trial by jury typically include the egalitarian nature of the practice, in which it can be guaranteed that those making final legal decisions do not form a special class over and above the ordinary citizens whose lives they affect.
A heartening example of this was mentioned in Thomas Levenson's fascinating book Newton and the Counterfeiter. Being sent to Newgate gaol was, infamously in the 17th and 18th centuries, an effective death sentence in and of itself; moreover, a surprisingly large number of crimes at this time were capital crimes (the counterfeiter whom Newton eventually convicted was hanged). In this climate of harsh punishment, juries typically only returned guilty verdicts either when evidence was extremely convincing or when the crime was especially heinous. Effectively, they counteracted the harshness of the legal system by upping the burden of proof for relatively minor crimes.
So juries sometimes provide a safeguard against abuse of justice by elites. However, is this price for democratizing justice too high, given the ease with which citizens naive about the Dark Arts may be manipulated? (Of course, judges are by no means perfect Bayesians either; however, I would expect them to be significantly less gullible.)
Are there any other systems that might be tried, besides these canonical two? What about the question of representation? Does the adversarial system, in which two sides are represented by advocates charged with defending their interests, conduce well to truth and justice, or is there a better alternative? For any alternatives you might consider: are they naive or savvy about human nature? What is the normative role of punishment, exactly?
How would the justice system look if LessWrong had to rewrite it from scratch?
Over this past weekend I listened to an episode of This American Life titled Pro Se. Although the episode is nominally about people defending themselves in court, the first act of the episode was about a man who pretended to act insane in order to get out of a prison sentence for an assault charge. There doesn't appear to be a transcript, so I'll summarize here first.
A man, we'll call him John, was arrested in the late 1990s for assaulting a homeless man. Given that there was plenty of evidence to prove him guilty, he was looking for a way to avoid the likely jail sentence of five to seven years. The other prisoners he was being held with suggested that he plead insanity: he'd be put up at a hospital for several months with hot food and TV and released once they considered him "rehabilitated". So he took bits and pieces about how insane people are supposed to act from movies he had seen and used them to form a case for his own insanity. The court believed him, but rather than sending him to a cushy hospital, they sent him to a maximum security asylum for the criminally insane.
Within a day of arriving, John realized the mistake he had made and sought a way out. He tried a variety of techniques: engaging in therapy, not engaging in therapy, dressing like a sane person, acting like a sane person, acting like an incurably insane person; none of it worked. Over a decade later, he is still being held.
As the story unfolds, we learn that although John makes a convincing case that he faked his way in and is being held unjustly, the psychiatrists at the asylum know that he faked his way in and continue to hold him anyway, though John is not aware of this. The reason: through his long years of documented behavior, John has made it clear to the psychiatrists that he is a psychopath/sociopath and is not safe to return to society without therapy. John is aware that this is his diagnosis, but continues to believe himself sane.
Similar to trying to determine whether you are anosognosic, how do you determine whether you are insane? Some kinds of insanity can be self-diagnosed, but in John's case he has plenty of evidence that he is insane (he can read all of his own medical records), yet continues to believe himself not to be. To me this seems a level trickier than anosognosia, since there are no physical tests you can run, but perhaps it's a difference significant to people yet not to an AI.
Edited to add a footnote: By "sane" I simply mean normative human reasoning: the way you expect, all else being equal, a human to think about things. While the discussion in the comments about how to define sanity might be of some interest, it really gets away from the point of the post unless you want to argue that "sanity" is creating a question here that is best solved by dissolving the question (as at least one commenter does).
(Inspired by a recent conversation with Robin Hanson.)
Robin Hanson, in his essay on "Minimal Morality", suggests that the unreliability of our moral reasoning should lead us to seek simple moral principles:
"In the ordinary practice of fitting a curve to a set of data points, the more noise one expects in the data, the simpler a curve one fits to that data. Similarly, when fitting moral principles to the data of our moral intuitions, the more noise we expect in those intuitions, the simpler a set of principles we should use to fit those intuitions. (This paper elaborates.)"
In "the limit of expecting very large errors of our moral intuitions", says Robin, we should follow an extremely simple principle - the simplest principle we can find that seems to compress as much morality as possible. And that principle, says Robin, is that it is usually good for people to get what they want, if no one else objects.
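The curve-fitting analogy is easy to check numerically. Below is a minimal sketch (my own illustration, not from the post, with an arbitrary made-up generator and noise level): even when the true generator is complicated, a simple model usually predicts held-out data better than a flexible one once the noise is large.

```python
# Sketch: with very noisy data, a simple fit beats a complex fit on
# held-out points, even though the true generator is complicated.
# All specifics (the generator, noise level, degrees) are assumptions
# chosen for illustration.
import numpy as np

rng = np.random.default_rng(0)

def true_function(x):
    # A deliberately non-simple underlying generator.
    return np.sin(3 * x) + 0.5 * np.cos(7 * x)

trials = 100
simple_wins = 0
for _ in range(trials):
    x_train = rng.uniform(0, 2, 20)
    y_train = true_function(x_train) + rng.normal(0, 1.5, 20)  # heavy noise
    x_test = rng.uniform(0, 2, 200)
    y_test = true_function(x_test)

    # Simple (degree-1) vs. complex (degree-8) polynomial fits.
    simple_pred = np.polyval(np.polyfit(x_train, y_train, 1), x_test)
    complex_pred = np.polyval(np.polyfit(x_train, y_train, 8), x_test)

    if np.mean((simple_pred - y_test) ** 2) < np.mean((complex_pred - y_test) ** 2):
        simple_wins += 1

print(f"simple model wins {simple_wins}/{trials} trials")
```

The complex polynomial chases the noise in the training points, so its out-of-sample error is typically much worse; the noisier the data, the stronger this effect, which is the shape of Robin's argument.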
Now I myself carry on something of a crusade against trying to compress morality down to One Great Moral Principle. I have developed at some length the thesis that human values are, in actual fact, complex, but that numerous biases lead us to underestimate and overlook this complexity. From a Friendly AI perspective, the word "want" in the English sentence above is a magical category.
But Robin wasn't making an argument in Friendly AI, but in human ethics: he's proposing that, in the presence of probable errors in moral reasoning, we should look for principles that seem simple to us, to carry out at the end of the day. The more we distrust ourselves, the simpler the principles.
This argument from fitting noisy data is a kind of logic that can apply even when you have prior reason to believe the underlying generator is in fact complicated. You'll still get better predictions from the simpler model, because it's less sensitive to noise.
Even so, my belief that human values are in fact complicated leads me to two objections and an alternative proposal:
Suppose that your good friend, the police commissioner, tells you in strictest confidence that the crime kingpin of your city is Wulky Wilkinsen. As a rationalist, are you licensed to believe this statement? Put it this way: if you go ahead and mess around with Wulky's teenage daughter, I'd call you foolhardy. Since it is prudent to act as if Wulky has a substantially higher-than-default probability of being a crime boss, the police commissioner's statement must have been strong Bayesian evidence.
Our legal system will not imprison Wulky on the basis of the police commissioner's statement. It is not admissible as legal evidence. Maybe if you locked up every person accused of being a crime boss by a police commissioner, you'd initially catch a lot of crime bosses, plus some people that a police commissioner didn't like. Power tends to corrupt: over time, you'd catch fewer and fewer real crime bosses (who would go to greater lengths to ensure anonymity) and more and more innocent victims (unrestrained power attracts corruption like honey attracts flies).
This does not mean that the police commissioner's statement is not rational evidence. It still has a lopsided likelihood ratio, and you'd still be a fool to mess with Wulky's teenage daughter. But on a social level, in pursuit of a social goal, we deliberately define "legal evidence" to include only particular kinds of evidence, such as the police commissioner's own observations on the night of April 4th. All legal evidence should ideally be rational evidence, but not the other way around. We impose special, strong, additional standards before we anoint rational evidence as "legal evidence".
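The "lopsided likelihood ratio" claim can be made concrete with odds arithmetic. Here is a toy calculation; the prior odds and the likelihood ratio are numbers I have made up purely for illustration, not anything from the post:

```python
# Toy Bayesian update: posterior odds = prior odds * likelihood ratio.
# Both input numbers below are illustrative assumptions.
prior_odds = 1 / 10_000      # assumed prior odds that a given person is a crime boss
likelihood_ratio = 500       # assumed: the commissioner is 500x more likely to name
                             # an actual crime boss than an innocent person

posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)
print(f"posterior probability: {posterior_prob:.3f}")
```

On these made-up numbers the commissioner's statement moves the probability from roughly one in ten thousand to roughly one in twenty: far short of proof, but far more than enough to make messing with Wulky's family imprudent, which is exactly the distinction between rational evidence and legal evidence.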
As I write this sentence at 8:33pm, Pacific time, on August 18th 2007, I am wearing white socks. As a rationalist, are you licensed to believe the previous statement? Yes. Could I testify to it in court? Yes. Is it a scientific statement? No, because there is no experiment you can perform yourself to verify it. Science is made up of generalizations which apply to many particular instances, so that you can run new real-world experiments which test the generalization, and thereby verify for yourself that the generalization is true, without having to trust anyone's authority. Science is the publicly reproducible knowledge of humankind.
Like a court system, science as a social process is made up of fallible humans. We want a protected pool of beliefs that are especially reliable. And we want social rules that encourage the generation of such knowledge. So we impose special, strong, additional standards before we canonize rational knowledge as "scientific knowledge", adding it to the protected belief pool.