JQuinton comments on Open thread, 25-31 August 2014 - Less Wrong Discussion
Can someone link to a discussion, or answer a small misconception for me?
We know P(A & B) < P(A). So if you add details to a story, it becomes less plausible, even though people are more likely to believe it.
However, if I do an experiment and measure something which is implied by A&B, then I would think "A&B becomes more plausible than A," because A is more vague than A&B.
But this seems to be a contradiction.
I suppose, to me, adding more details to a story makes the story more plausible if those details imply the evidence. Sin(x) is an analytic function. If I know a complex differentiable function has roots at all multiples of pi, saying the function is sin is more plausible than saying it's some analytic function.
I think...I'm screwing up the semantics, since sin is an analytic function. But this seems to me to be missing the point.
I read A Technical Explanation of Technical Explanation, so I know specific theories are better than vague theories (provided the evidence is specific). I guess I'm asking for clarification on how this is formally consistent with P(A) > P(A&B).
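One way to see how both facts hold at once is a toy Bayesian update (all numbers below are made up for illustration): evidence E that is strongly predicted by A&B raises P(A&B | E) a lot, yet P(A&B | E) can never exceed P(A | E), because A includes A&B as a sub-case.

```python
# Three exhaustive, mutually exclusive hypotheses with illustrative priors.
p_ab = 0.25   # P(A & B)
p_anb = 0.25  # P(A & not-B)
p_na = 0.50   # P(not-A)

# Likelihoods: evidence E is strongly predicted by A&B, weakly by the rest.
l_ab, l_anb, l_na = 0.9, 0.1, 0.1

# Bayes' theorem: posterior = prior * likelihood / P(E).
p_e = p_ab * l_ab + p_anb * l_anb + p_na * l_na
post_ab = p_ab * l_ab / p_e
post_anb = p_anb * l_anb / p_e
post_a = post_ab + post_anb  # P(A | E) = P(A&B | E) + P(A&not-B | E)

print(post_ab)  # 0.75 -- A&B jumped from a prior of 0.25
print(post_a)   # ~0.833 -- still at least P(A&B | E), as the theorem requires
```

So the specific theory gains plausibility relative to its rivals (including the vaguer A&not-B), while the inequality P(A&B) <= P(A) is never violated before or after updating.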
I'm guessing that the rule P(A & B) < P(A) is stated for independent variables (though it's actually more accurate to say P(A & B) <= P(A)). If you have dependent variables, then you use Bayes' Theorem to update. P(A & B) is different from P(A | B): P(A & B) <= P(A) is always true, but the same does not hold for P(A | B) versus P(A).
This is probably an incomplete or inadequate explanation, though. I think there was a thread about this a long time ago, but I can't find it. My Google-fu is not that strong.
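The joint-versus-conditional distinction can be checked on a small made-up joint distribution (the numbers below are only illustrative): the joint probability P(A & B) can never exceed P(A), but conditioning on B can raise A's probability when the two are positively correlated.

```python
# Illustrative joint distribution over (A, B), chosen so A and B correlate.
joint = {
    (True, True): 0.4,
    (True, False): 0.1,
    (False, True): 0.1,
    (False, False): 0.4,
}

p_a = sum(p for (a, b), p in joint.items() if a)   # marginal P(A) = 0.5
p_b = sum(p for (a, b), p in joint.items() if b)   # marginal P(B) = 0.5
p_ab = joint[(True, True)]                          # joint P(A & B) = 0.4
p_a_given_b = p_ab / p_b                            # conditional P(A | B) = 0.8

assert p_ab <= p_a        # holds for every joint distribution
assert p_a_given_b > p_a  # possible: B is evidence for A here
print(p_a, p_ab, p_a_given_b)
```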