Lumifer comments on Open thread, 24-30 March 2014 - Less Wrong Discussion

Post author: Metus | 25 March 2014 07:42AM | 6 points

Comment author: pianoforte611 | 30 March 2014 06:57:19PM | 2 points

Am I confused about frequentism?

I'm currently learning about hypothesis testing in my statistics class. The idea is that you perform some test and you use the results of that test to calculate:

P(data at least as extreme as your data | Null hypothesis)

This is the p-value. If the p-value is below a certain threshold then you can reject the null hypothesis (which is the complement of the hypothesis that you are trying to test).

Put another way:

P(data | hypothesis) = 1 - p-value

and if 1 - p-value is high enough then you accept the hypothesis. (My use of "data" is handwaving and not quite correct but it doesn't matter.)
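The p-value definition above can be sketched concretely. Here is a minimal illustration (my own, not from the thread) for a coin-flip experiment, using only the standard library: the one-sided p-value is the probability, under a fair-coin null hypothesis, of seeing at least as many heads as were actually observed.

```python
# Illustrative sketch (not from the thread): a one-sided p-value for a
# coin-flip experiment. The numbers (8 heads in 10 flips) are made up.
from math import comb

def p_value(heads, flips, p_null=0.5):
    """P(data at least as extreme as observed | null hypothesis).

    Here "at least as extreme" means >= the observed number of heads,
    under the null hypothesis that the coin lands heads with
    probability p_null.
    """
    return sum(comb(flips, k) * p_null**k * (1 - p_null)**(flips - k)
               for k in range(heads, flips + 1))

print(round(p_value(8, 10), 4))  # 0.0547: not below the usual 0.05 threshold
```

With 8 heads in 10 flips the p-value is about 0.055, so at the conventional 0.05 threshold you would fail to reject the null hypothesis.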

But it seems more useful to me to calculate P(hypothesis | data). And that's not quite the same thing.

So what I'm wondering is whether under frequentism P(hypothesis | data) is actually meaningless. The hypothesis is either true or false, and depending on whether it's true or not, the data has a certain propensity of turning out one way or the other. It's meaningless to ask what the probability of the hypothesis is; you can only ask what the probability of obtaining your data is under certain assumptions.
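To see how P(hypothesis | data) differs from a p-value, here is a minimal Bayes'-rule sketch (my own illustration, not from the thread). It assumes the coin is one of exactly two kinds, fair or biased toward heads, with a prior over the two; every number here is an assumption chosen for the example.

```python
# Illustrative sketch (not from the thread): P(hypothesis | data) via
# Bayes' rule for two candidate coins. All parameters are made up.
from math import comb

def likelihood(heads, flips, p):
    # P(data | hypothesis): binomial probability of exactly `heads` heads
    return comb(flips, heads) * p**heads * (1 - p)**(flips - heads)

def posterior_fair(heads, flips, p_fair=0.5, p_biased=0.8, prior_fair=0.5):
    """P(fair coin | data), assuming the coin is one of exactly two kinds."""
    w_fair = likelihood(heads, flips, p_fair) * prior_fair
    w_biased = likelihood(heads, flips, p_biased) * (1 - prior_fair)
    return w_fair / (w_fair + w_biased)

print(round(posterior_fair(8, 10), 3))  # 0.127
```

Note that the answer depends on the prior and on which hypotheses are allowed, which is exactly what makes it a different kind of quantity from the p-value.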

Comment author: Lumifer | 31 March 2014 04:44:49PM | 1 point

But it seems more useful to me to calculate P(hypothesis | data). And that's not quite the same thing.

It is not the same thing, and knowing P(hypothesis | data) would be very useful. Unfortunately, it is also very hard to estimate, because usually the best you can do is calculate the probability, given the data, of each hypothesis in a fixed set of hypotheses which you know about and for which you can estimate probabilities. If your understanding of the true data-generating process is not so good (which is very common in real life), your P(hypothesis | data) is going to be pretty bad, and what's worse, you have no idea how bad it is.
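The fixed-hypothesis-set problem can be illustrated with a small sketch (mine, not from the thread; all numbers are made up). If the true data-generating process lies outside the hypothesis set, the posterior still confidently concentrates on the least-bad member of the set, and nothing in the resulting number warns you that the truth was never a candidate.

```python
# Illustrative sketch (not from the thread): a posterior over a fixed
# hypothesis set when the truth is not in the set. Parameters are made up.
from math import log, exp

def log_likelihood(heads, flips, p):
    # Binomial log-likelihood up to a constant that cancels in the posterior
    return heads * log(p) + (flips - heads) * log(1 - p)

def posterior_over(hypotheses, heads, flips):
    """Posterior over a *fixed* hypothesis set, with a uniform prior."""
    lls = [log_likelihood(heads, flips, p) for p in hypotheses]
    m = max(lls)  # subtract the max for numerical stability
    weights = [exp(ll - m) for ll in lls]
    total = sum(weights)
    return [w / total for w in weights]

# The true coin has p = 0.65 (say we observed 650 heads in 1000 flips),
# but our hypothesis set only contains p = 0.5 and p = 0.8:
print(posterior_over([0.5, 0.8], 650, 1000))
```

Here the posterior assigns nearly all its mass to p = 0.5 even though that hypothesis is also false; the calculation itself gives no hint of how badly the set was misspecified.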

Comment author: Douglas_Knight | 31 March 2014 07:43:18PM | 0 points

Not having a good grasp on the set of all hypotheses does not distinguish bayesians from frequentists and does not seem to me to motivate any difference in their methodologies.

Added: I don't think it has much to do with the original comment, but testing a model without a specific competitor is called "model checking." It is a common frequentist complaint that bayesians don't do it. I don't think that this is an accurate complaint, but it is true that model checking fits more easily into a frequentist framework than a bayesian one.

Comment author: Lumifer | 31 March 2014 07:48:31PM | 0 points

I have said nothing about the differences between bayesians and frequentists. I just pointed out some issues with trying to estimate P(hypothesis | data).