
Lumifer comments on Open thread, 24-30 March 2014 - Less Wrong Discussion

6 Post author: Metus 25 March 2014 07:42AM




Comment author: Lumifer 31 March 2014 04:44:49PM 1 point

But it seems more useful to me to calculate P(hypothesis | data). And that's not quite the same thing.

It is not the same thing, and knowing P(hypothesis | data) would be very useful. Unfortunately, it is also very hard to estimate, because usually the best you can do is calculate the probability, given the data, of each hypothesis in a fixed set of hypotheses which you know about and for which you can estimate prior probabilities. If your understanding of the true data-generating process is not good (which is very common in real life), your P(hypothesis | data) is going to be pretty bad and, what's worse, you will have no idea how bad it is.
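The "fixed set of hypotheses" point can be made concrete with a minimal sketch. The hypothesis set, priors, and data below are illustrative assumptions, not anything from the thread: three candidate biases for a coin, a binomial likelihood, and Bayes' rule to get P(hypothesis | data) — which is only meaningful conditional on the true process being one of the three candidates.

```python
import math

# Illustrative FIXED hypothesis set: candidate values of P(heads) -> prior.
hypotheses = {0.3: 1/3, 0.5: 1/3, 0.7: 1/3}

def posterior(heads, flips, hypotheses):
    """Return P(hypothesis | data) for each hypothesis in the fixed set."""
    # Binomial likelihood of the data under each candidate bias.
    likelihoods = {
        p: math.comb(flips, heads) * p**heads * (1 - p)**(flips - heads)
        for p in hypotheses
    }
    evidence = sum(likelihoods[p] * hypotheses[p] for p in hypotheses)
    return {p: likelihoods[p] * hypotheses[p] / evidence for p in hypotheses}

post = posterior(heads=7, flips=10, hypotheses=hypotheses)
# The posteriors sum to 1 by construction: if the true data-generating
# process is outside this set, the answer is confidently wrong and the
# calculation gives no hint of it.
```

Note that the normalization step is exactly where the trouble hides: the evidence is summed only over the hypotheses you thought to include.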

Comment author: Douglas_Knight 31 March 2014 07:43:18PM 0 points

Not having a good grasp on the set of all hypotheses does not distinguish bayesians from frequentists and does not seem to me to motivate any difference in their methodologies.

Added: I don't think it has much to do with the original comment, but testing a model without a specific competitor is called "model checking." It is a common frequentist complaint that bayesians don't do it. I don't think that complaint is accurate, but it is true that model checking fits more easily into a frequentist framework than a bayesian one.
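"Model checking" without a rival hypothesis can be sketched as a predictive check: fit a model, simulate replicated datasets from it, and ask whether the real data look extreme under some test statistic. Everything below — the data, the Bernoulli model, the longest-run statistic — is an illustrative assumption, not a method anyone in the thread endorsed.

```python
import random

random.seed(0)

observed = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]  # hypothetical data
p_hat = sum(observed) / len(observed)       # fitted model: i.i.d. Bernoulli(p_hat)

def longest_run(xs):
    """Test statistic: length of the longest run of identical outcomes."""
    best = cur = 1
    for a, b in zip(xs, xs[1:]):
        cur = cur + 1 if a == b else 1
        best = max(best, cur)
    return best

# Simulate replicated datasets from the fitted model and count how often
# they produce a statistic at least as extreme as the observed one.
t_obs = longest_run(observed)
sims = [
    longest_run([1 if random.random() < p_hat else 0 for _ in observed])
    for _ in range(10_000)
]
p_value = sum(t >= t_obs for t in sims) / len(sims)
# A tiny p_value flags misfit of this one model -- no alternative needed.
```

The same recipe with a posterior draw of the parameter instead of a point estimate is the bayesian version (a posterior predictive check), which is one way of fitting model checking into a bayesian framework.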

Comment author: Lumifer 31 March 2014 07:48:31PM 0 points

I have said nothing about the differences between bayesians and frequentists. I just pointed out some issues with trying to estimate P(hypothesis | data).