
Oscar_Cunningham comments on Open thread, 24-30 March 2014 - Less Wrong Discussion

6 Post author: Metus 25 March 2014 07:42AM




Comment author: pianoforte611 30 March 2014 06:57:19PM 2 points

Am I confused about frequentism?

I'm currently learning about hypothesis testing in my statistics class. The idea is that you perform some test and you use the results of that test to calculate:

P(data at least as extreme as your data | Null hypothesis)

This is the p-value. If the p-value is below a certain threshold then you can reject the null hypothesis (which is the complement of the hypothesis that you are trying to test).
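As a concrete illustration of the definition above, here is a minimal sketch of a one-sided p-value computation for a z-test. All the numbers (sample size, mean, standard deviation) are made up for the example; the standard normal CDF is built from math.erf rather than an external statistics library.

```python
import math

def normal_cdf(x):
    # Standard normal CDF, expressed via the error function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def p_value(sample_mean, mu0, sigma, n):
    # One-sided p-value: P(data at least as extreme as observed | H0: mu = mu0)
    z = (sample_mean - mu0) / (sigma / math.sqrt(n))
    return 1 - normal_cdf(z)

# Hypothetical numbers: 25 draws with known sigma = 10,
# observed sample mean 104 against H0: mu = 100.
p = p_value(104, 100, 10, 25)  # z = 2.0, so p is about 0.0228
```

At a 0.05 threshold this p-value would lead a frequentist to reject H0, exactly in the "reject if below a certain threshold" sense described above.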

Put another way:

P(data | hypothesis) = 1 - p-value

and if 1 - p-value is high enough then you accept the hypothesis. (My use of "data" is handwaving and not quite correct but it doesn't matter.)

But it seems more useful to me to calculate P(hypothesis | data). And that's not quite the same thing.

So what I'm wondering is whether under frequentism P(hypothesis | data) is actually meaningless. The hypothesis is either true or false, and depending on whether it's true or not the data has a certain propensity of turning out one way or the other. It's meaningless to ask what the probability of the hypothesis is; you can only ask what the probability of obtaining your data is under certain assumptions.
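For contrast, the Bayesian quantity P(hypothesis | data) asked about here is computable once you supply a prior. A minimal sketch of Bayes' theorem, with made-up numbers (prior and likelihoods are purely illustrative):

```python
def posterior(prior_h, likelihood_h, likelihood_not_h):
    # Bayes' theorem:
    # P(H|D) = P(D|H)P(H) / (P(D|H)P(H) + P(D|~H)P(~H))
    num = likelihood_h * prior_h
    return num / (num + likelihood_not_h * (1 - prior_h))

# Hypothetical numbers: even prior, data twice as likely under H as under ~H
p_h_given_data = posterior(0.5, 0.2, 0.1)  # 2/3
```

The point of contention is the prior: frequentism refuses to assign P(H) at all, which is why the quantity is meaningless within that framework.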

Comment author: Oscar_Cunningham 31 March 2014 08:37:36AM 1 point

Your conclusion

So what I'm wondering is whether under frequentism P(hypothesis | data) is actually meaningless. The hypothesis is either true or false, and depending on whether it's true or not the data has a certain propensity of turning out one way or the other. It's meaningless to ask what the probability of the hypothesis is; you can only ask what the probability of obtaining your data is under certain assumptions.

is correct. Frequentists do indeed claim that P(hypothesis | data) is meaningless, for exactly the reasons you gave. However, there are some small details in the rest of your post that are incorrect.

null hypothesis (which is the complement of the hypothesis that you are trying to test).

The hypothesis you are trying to test is typically not the complement of the null hypothesis. For example, we could have:

H0: theta = 0

H1: theta > 0

where theta is some variable that we care about. Note that the region theta < 0 isn't in either hypothesis. If we were instead testing

H1': theta ≠ 0

then frequentists would suggest a different test: a one-tailed test for H1 and a two-tailed test for H1'.
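The one-tailed/two-tailed distinction can be sketched numerically. Assuming a z-test with an illustrative, made-up observed statistic of z = 1.8, the one-tailed test counts only the upper tail while the two-tailed test counts both:

```python
import math

def normal_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

z = 1.8  # hypothetical observed z-statistic

# One-tailed p-value for H1: theta > 0 (only large positive z counts as extreme)
p_one = 1 - normal_cdf(z)

# Two-tailed p-value for H1': theta != 0 (extreme in either direction counts)
p_two = 2 * (1 - normal_cdf(abs(z)))
```

Here p_one is about 0.036 and p_two about 0.072, so at the conventional 0.05 threshold the same data rejects H0 against H1 but not against H1'. That's why the choice of alternative hypothesis matters.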

P(data | hypothesis) = 1 - p-value

No. This is just mathematically wrong. P(A|B) is not necessarily equal to 1 − P(A|¬B). Just think about it for a bit and you'll see why. If that doesn't work, take A = "the sky is blue" and B = "my car is red", and note that P(A|B) = P(A|¬B) ≈ 1.
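The counterexample can be checked with a toy joint distribution. The probabilities below are made up so that A is very likely regardless of B, which makes both conditional probabilities close to 1 and breaks the supposed identity:

```python
# Toy joint distribution over (A, B). Numbers are illustrative only,
# chosen so that P(A|B) and P(A|not B) are both near 1.
joint = {
    (True, True): 0.09,    # A and B
    (True, False): 0.89,   # A and not B
    (False, True): 0.01,   # not A and B
    (False, False): 0.01,  # not A and not B
}

def cond_prob_a(a_val, b_val):
    # P(A = a_val | B = b_val) = P(A, B) / P(B)
    p_b = sum(p for (a, b), p in joint.items() if b == b_val)
    return joint[(a_val, b_val)] / p_b

p_a_given_b = cond_prob_a(True, True)       # 0.09 / 0.10 = 0.9
p_a_given_not_b = cond_prob_a(True, False)  # 0.89 / 0.90, about 0.989
```

Both conditionals are near 1, so P(A|B) is nowhere near 1 − P(A|¬B); the identity the original formula relied on simply doesn't hold.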