DanielLC comments on Log-odds (or logits) - Less Wrong

20 Post author: brilee 28 November 2011 01:11AM


Comment author: DanielLC 28 November 2011 05:05:26AM 0 points [-]

It seems to me that this doesn't have any real advantage over odds ratios. If I want to do a Bayesian update, I multiply the odds by the relative likelihood. In the example in the article (a 1/10,000 chance of having the disease, a 3% false-positive rate, and a 1% false-negative rate), you just take 1:9999 and multiply it by 0.99/0.03 = 33:1 for each positive test. Then you have 33:9999 = 1:303 after the first test, 33:303 = 11:101 after the second, and finally 363:101 after the third. To change back, you just take 363/(363+101) = 78.23%. The calculations are slower (a multiplication vs. an addition per update), but it's much easier and more intuitive to convert between odds and traditional probabilities.
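The update DanielLC describes can be sketched as follows. This is a minimal illustration (variable names are ours), using exact fractions so the intermediate odds match the 1:303, 11:101, and 363:101 steps above:

```python
from fractions import Fraction

# Prior: 1 in 10,000 chance of disease -> odds of 1:9999.
prior_odds = Fraction(1, 9999)

# Each positive test: 1% false-negative rate (sensitivity 0.99) and a
# 3% false-positive rate, so the likelihood ratio is 0.99/0.03 = 33.
likelihood_ratio = Fraction(99, 3)

# Three positive tests: multiply the odds by the likelihood ratio each time.
odds = prior_odds
for _ in range(3):
    odds *= likelihood_ratio

# Convert odds back to a traditional probability: o / (1 + o).
probability = odds / (1 + odds)

print(odds)                 # 363/101, as in the comment
print(float(probability))   # ~0.7823, i.e. 78.23%
```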

Comment author: brilee 28 November 2011 05:55:10AM 8 points [-]

What you've described is, in fact, exactly the same thing as log-odds - they're simply separated by a logarithm/exponentiation. Thus, all the multiplications you describe are the counterpart of the additions I describe. I agree, we could work with odds ratios without taking the logarithm - but using logarithms has the benefit of linearizing the probability space. The distance between 1 L% and 5 L% is the same as the distance between 10 L% and 14 L%, but you wouldn't know it by looking at 2.72:1 and 150:1 versus 22,000:1 and 1,200,000:1.