Manfred comments on What should a Bayesian do given probability of proving X vs. of disproving X? - Less Wrong Discussion

0 Post author: PhilGoetz 07 June 2014 06:40PM

Comment author: Manfred 11 June 2014 02:27:01AM, 0 points

If "proven" and "disproven" are mutually exclusive but not exhaustive, and you have no other information besides P(X|proven)=1 and P(X|disproven)=0 (usually not true, but useful for toy cases), then P(X) = P(proven) + 1/2*(1 - P(proven) - P(disproven)).

This is equivalent to P(X) = P(proven) / (P(proven) + P(disproven)) only when P(proven) + P(disproven) = 1 (or, trivially, when P(proven) = P(disproven), in which case both expressions equal 1/2).
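A quick numerical sanity check of that claim (the probability values below are made up purely for illustration): when P(proven) + P(disproven) = 1 the two formulas agree, and otherwise they generally differ.

```python
def p_x_additive(p_proven, p_disproven):
    # P(X) = P(proven) + 1/2 * (1 - P(proven) - P(disproven))
    return p_proven + 0.5 * (1 - p_proven - p_disproven)

def p_x_ratio(p_proven, p_disproven):
    # P(X) = P(proven) / (P(proven) + P(disproven))
    return p_proven / (p_proven + p_disproven)

# Exhaustive case: P(proven) + P(disproven) = 1, so the formulas agree.
print(p_x_additive(0.7, 0.3), p_x_ratio(0.7, 0.3))  # both 0.7

# Non-exhaustive case: they come apart.
print(p_x_additive(0.2, 0.1))  # 0.2 + 0.5 * 0.7 = 0.55
print(p_x_ratio(0.2, 0.1))    # 0.2 / 0.3 ≈ 0.667
```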

Basically what's going on is that if X can be neither proven nor disproven, and you have no information about what to think in that case, your estimate conditional on "neither" defaults to 1/2.

Appendix: The equations we want in this case come from P(X) = P(X|A)*P(A) + P(X|B)*P(B) + P(X|C)*P(C), for A, B, C mutually exclusive and exhaustive. So if "proven" and "disproven" are mutually exclusive but not exhaustive, we can add an extra category "neither" to make the cases exhaustive. We take as starting information that P(X|proven)=1 and P(X|disproven)=0, and in the absence of other information P(X|neither)=1/2. So P(X) = P(X|proven)*P(proven) + P(X|disproven)*P(disproven) + P(X|neither)*P(neither).
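The appendix formula can be written out as a literal sum over the three cases. A minimal sketch, with hypothetical probabilities chosen only to illustrate the law of total probability:

```python
# Hypothetical numbers: P(proven) = 0.2, P(disproven) = 0.1.
p_proven, p_disproven = 0.2, 0.1
p_neither = 1 - p_proven - p_disproven  # the cases are exhaustive

# Conditional probabilities of X given each case, as in the comment.
p_x_given = {"proven": 1.0, "disproven": 0.0, "neither": 0.5}
p_case = {"proven": p_proven, "disproven": p_disproven, "neither": p_neither}

# Law of total probability: P(X) = sum over cases of P(X|case) * P(case).
p_x = sum(p_x_given[c] * p_case[c] for c in p_case)
print(p_x)  # 0.2*1 + 0.1*0 + 0.7*0.5 = 0.55, up to float rounding
```

Note that this reduces to the short formula above: the "proven" term contributes P(proven), the "disproven" term contributes nothing, and the "neither" term contributes half of the leftover mass.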