NoSuchPlace comments on What should a Bayesian do given probability of proving X vs. of disproving X? - Less Wrong

0 Post author: PhilGoetz 07 June 2014 06:40PM


Comment author: NoSuchPlace 07 June 2014 10:18:03PM 4 points [-]

Is it reasonable to assign P(X) = P(willbeproven(X)) / (P(willbeproven(X)) + P(willbedisproven(X))) ?

No, I don't think so. Consider the following example:

I flip a coin. If it comes up heads I take two green marbles, else I take one red and one green marble. Then I offer to let you see a random marble and I destroy the other one without showing you.

Now suppose you wish to test whether my coin came up tails. If the marble you see is red, you have proven that the coin came up tails; and since a green marble is consistent with both outcomes, tails can never be disproven. So the chance of tails being disproven is zero, your expression evaluates to 1, but it should be 0.5.
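A quick Monte Carlo sketch of this scenario (my own illustration, not part of the original comment) shows the mismatch: before looking, the only way to "prove tails" is to see the red marble, which happens about a quarter of the time, while "disprove tails" never happens, so the proposed ratio comes out to 1 even though the coin is fair.

```python
import random

def trial(rng):
    """One round: flip the coin, pick the marbles, reveal one at random."""
    tails = rng.random() < 0.5                      # fair coin
    marbles = ["red", "green"] if tails else ["green", "green"]
    shown = rng.choice(marbles)                     # you see one marble
    return tails, shown

rng = random.Random(0)
n = 100_000
proven = disproven = tails_count = 0
for _ in range(n):
    tails, shown = trial(rng)
    tails_count += tails
    if shown == "red":
        proven += 1                                 # red marble proves tails
    # no possible observation disproves tails, so disproven stays 0

p_proven = proven / n                               # about 0.25
p_disproven = disproven / n                         # exactly 0
print(p_proven / (p_proven + p_disproven))          # the expression gives 1.0
print(tails_count / n)                              # yet P(tails) is about 0.5
```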

Comment author: PhilGoetz 11 June 2014 01:19:46AM 0 points [-]

Yes, good answer, with a minor correction: in that case P(coin came up tails) is in fact 1, not 0.5, since you have already seen the red marble. The real problem is that before I look at a marble, looking can possibly prove that the coin came up tails, but can never prove that it came up heads. And yet the probability I should assign to the proposition that it came up heads is 0.5.
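Worked out explicitly before any marble is observed (my own arithmetic, not from the original reply), the proposed expression applied to X = "coin came up heads" gives 0 rather than the correct 0.5:

```python
# Before looking, with X = "coin came up heads":
p_proven_heads = 0.0      # no observation can ever prove heads
p_disproven_heads = 0.25  # P(tails) * P(reveal red | tails) = 0.5 * 0.5
formula = p_proven_heads / (p_proven_heads + p_disproven_heads)
print(formula)            # 0.0, yet the correct credence in heads is 0.5
```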