# FAWS comments on The Amanda Knox Test: How an Hour on the Internet Beats a Year in the Courtroom - Less Wrong

42 13 December 2009 04:16AM



Comment author: 22 February 2010 12:28:48AM *  2 points [-]

Oh, let me play!

(When you made your first post on this issue I found trying to look for unbiased information a terribly frustrating experience, so I didn't look for more than 20 minutes, and I haven't done any reading on it since, except for a cursory look at the Wikipedia page just now. A list of all points that are agreed on by both sides (with sub-points arguing the relevance of each point from both perspectives, perhaps) would have been very welcome.)

Current posterior: not really sure, let's see what I end up with below, but as a starting point:

0.01 < P(S) < P(K) < 0.2 < 0.8 < P(G) < 0.99

Priors for committing a homicide in a specific month:

P(K) = 4.7 * 10^-6 (US homicide rate, assuming being female and being a young adult roughly cancel out)

P(S) = 4 * 10^-6 (Italian homicide rate, assuming young adult males are 4 times as likely to commit murder as average)

P(G) = 1 * 10^-5 (had been implicated in a break-in)

An inhabitant of the top floor of that apartment being murdered in her room in this same specific month (R):

P(R|K)=0.1, P(R|~K)=4.5*10^-6

P(R|S)=0.04, P(R|~S)=4.95*10^-6

P(R|G)=0.0005, P(R|~G)=5*10^-6

Guede's DNA being found all over and inside the victim of a homicide in this same specific month (D_G):

P(DG|K)=1*10^-4, P(DG|~K)=6*10^-7

P(DG|S)=5*10^-5, P(DG|~S)=6*10^-7

P(DG|G)=0.6, P(DG|~G)=1*10^-7

I can't think of any other pieces of evidence that I can treat as effectively independent. Given R, and treating the probability of any murder other than R as effectively 0: Knox' DNA not being found on the victim (D_K):

P(DK|K)=0.5 P(DK|~K)=0.8

P(DK|S)=0.85 P(DK|~S)=0.81

P(DK|G)=0.82 P(DK|~G)=0.82

Sollecito's DNA being found on the bra clasp of the victim, but nowhere else (D_S):

P(DS|K)=0.0002 P(DS|~K)=0.00006

P(DS|S)=0.001 P(DS|~S)=0.00005

P(DS|G)=0.00007 P(DS|~G)=0.00007

Minimal traces of R's DNA found on the blade of one of the knives in Sollecito's kitchen, possibly matching one of three wounds, along with Knox' DNA on the handle (D_R):

P(DR|K)=5*10^-6 P(DR|~K)=1*10^-6

P(DR|S)=1*10^-5 P(DR|~S)=1*10^-6

P(DR|G)=1*10^-6 P(DR|~G)=1*10^-6
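If each piece of evidence is treated as independent, the update amounts to multiplying the prior odds by each likelihood ratio P(E|H)/P(E|~H). A hypothetical reconstruction in Python, using only the made-up estimates above (the comment doesn't show its actual arithmetic, so exact figures may differ):

```python
# Naive odds-form Bayesian update, treating every piece of evidence as
# independent. All numbers are the estimates from the comment above.

def posterior(prior, likelihood_ratios):
    """Convert the prior to odds, multiply in each P(E|H)/P(E|~H), convert back."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Likelihood ratios for each hypothesis, in the order R, D_G, D_K, D_S, D_R.
evidence = {
    "Knox":      (4.7e-6, [0.1 / 4.5e-6, 1e-4 / 6e-7, 0.5 / 0.8,
                           0.0002 / 0.00006, 5e-6 / 1e-6]),
    "Sollecito": (4e-6,   [0.04 / 4.95e-6, 5e-5 / 6e-7, 0.85 / 0.81,
                           0.001 / 0.00005, 1e-5 / 1e-6]),
    "Guede":     (1e-5,   [0.0005 / 5e-6, 0.6 / 1e-7, 0.82 / 0.82,
                           0.00007 / 0.00007, 1e-6 / 1e-6]),
}

for name, (prior, lrs) in evidence.items():
    print(f"P({name} guilty | all evidence) ~ {posterior(prior, lrs):.5f}")
```

Run this way, all three posteriors come out above 0.99, which is exactly the implausible outcome the independence assumption produces.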

Trying to calculate the probabilities based on those estimates, I find that I shouldn't have treated DG and R as independent either; I get a stupidly high result of 0.9999983 for P(G). The indirect association with Guede shouldn't make the other two much more likely, given that they are already more directly associated with the victim; if anything, DG should make them less likely. The background probability for R should also be higher, because I forgot to properly account for the fact that there was more than one possible victim. Since Guede's DNA and being a roommate alone would already be enough to make Knox almost certainly guilty based on the numbers above, and this makes no sense whatsoever, I think we can safely say I thoroughly failed this test.

I guess the lesson is that making up plausible numbers for various conditional probabilities well outside your intuitive range and then applying them in a Bayesian update doesn't improve your calibration if you have no experience at it at all.

Comment author: 22 February 2010 02:15:28AM 3 points [-]

> When you made your first post on this issue I found trying to look for unbiased information a terribly frustrating experience

One of the lessons of this exercise, that may be worth stating explicitly, is that there's no "outside referee" you can look to to make sure your beliefs are correct. In real life, you have to make judgments under uncertainty, using whatever evidence you have.

It's not as hard as you (and others) think. Yes, of course, the sources are "biased" in the sense that they have an incentive to mislead if they can get away with it. But what they say is not literally all the information you have. You also have background knowledge about how the world works. Priors matter. If A says X and B says ~X, and there's no a priori reason to trust one over the other, that doesn't mean you're stuck! It depends on how plausible X is in the first place.
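This point can be made concrete with a toy calculation (illustrative numbers, not from the post): if A and B are equally credible, and A asserts X while B denies it, their likelihood ratios cancel and you are left with exactly your prior on X.

```python
# Two equally credible sources contradict each other: their likelihood
# ratios cancel, and the posterior collapses back to the prior on X.

def update(prior, *likelihood_ratios):
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

reliability = 0.9                        # P(source asserts X | X is true)
lr_a = reliability / (1 - reliability)   # A says X:  evidence for X
lr_b = (1 - reliability) / reliability   # B says ~X: evidence against X

for prior in (0.01, 0.5, 0.99):
    print(prior, update(prior, lr_a, lr_b))   # posterior equals the prior
```

So conflicting testimony from sources you trust equally doesn't leave you stuck; it leaves you with your prior, which is why the a priori plausibility of X decides the question.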

> I guess the lesson is that making up plausible numbers for various conditional probabilities well outside your intuitive range and then applying them in a Bayesian update doesn't improve your calibration if you have no experience at it at all.

Here's the real lesson: Bayesian calculations are not some mysterious black-magic technique that you "apply" to a problem. They are supposed to represent the calculations your brain has already made. Probability theory is the mathematics of inference. If you have an opinion on this case, then, ipso facto, your brain has already performed a Bayesian update.

The mistake you made was not making up numbers; it was making up numbers that, as you point out in the end, didn't reflect your actual beliefs.

Comment author: 22 February 2010 03:11:44AM *  2 points [-]

> One of the lessons of this exercise, that may be worth stating explicitly, is that there's no "outside referee" you can look to to make sure your beliefs are correct. In real life, you have to make judgments under uncertainty, using whatever evidence you have.

I meant that questions that should have an easily determinable answer, like "Did someone clean up the blood outside her room before the police were called?", were unreasonably difficult to settle. Every site was mixing arguments and conclusions with facts. Sure, it's possible to find the answers if you look long enough, but it's much more work than it should be, and more work than I was willing to invest for a question that didn't interest me all that much in the first place.

> Here's the real lesson: Bayesian calculations are not some mysterious black-magic technique that you "apply" to a problem. They are supposed to represent the calculations your brain has already made. Probability theory is the mathematics of inference. If you have an opinion on this case, then, ipso facto, your brain has already performed a Bayesian update.

The brain doesn't operate with very small or very big numbers, though. And I doubt it operates with conditional probabilities of the sort used above; insofar as it operates in a Bayesian fashion at all, I would guess it's more similar to using Venn diagrams.

> The mistake you made was not making up numbers; it was making up numbers that, as you point out in the end, didn't reflect your actual beliefs.

The point is that I didn't spot that until after I did the calculation. And while I don't usually do much in the way of statistics, I intuitively got simple Bayesian problems like the usual example with false positives in a medical test right before hearing about Bayes' theorem for the first time, so I don't think it's because I'm particularly bad at this. If you need to tweak the numbers afterwards anyway, doing the Bayesian update explicitly isn't very useful as a self-check.
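For reference, the medical-test example mentioned here can be sketched quickly (the numbers are illustrative, since the comment doesn't specify any): a rare condition, a sensitive test, and a modest false-positive rate still leave a positive result much more likely to be a false alarm than a true detection.

```python
# Classic false-positive base-rate example: even a good test gives a
# low posterior when the condition is rare. Illustrative numbers only.

base_rate = 0.001        # 1 in 1000 has the disease
sensitivity = 0.99       # P(positive | disease)
false_positive = 0.05    # P(positive | no disease)

p_positive = sensitivity * base_rate + false_positive * (1 - base_rate)
p_disease_given_positive = sensitivity * base_rate / p_positive
print(p_disease_given_positive)   # roughly 0.019: still probably healthy
```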