Douglas_Knight comments on Too good to be true - Less Wrong

24 Post author: PhilGoetz 11 July 2014 08:16PM


Comment author: V_V 14 July 2014 10:46:51AM 2 points

According to Wikipedia:

In statistical significance testing, the p-value is the probability of obtaining a test statistic result at least as extreme as the one that was actually observed, assuming that the null hypothesis is true.[1][2] A researcher will often "reject the null hypothesis" when the p-value turns out to be less than a predetermined significance level, often 0.05[3][4] or 0.01.
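The "at least as extreme" clause in that definition is the crux of the later disagreement in this thread, and a toy number makes it concrete. The following is a minimal sketch (not from the thread itself) using a made-up binomial example: 10 coin flips under the null hypothesis of a fair coin, observing 8 heads. The p-value is the tail probability, not the probability of the exact outcome.

```python
from math import comb

# Toy example: 10 flips of a supposedly fair coin, 8 heads observed.
n, k = 10, 8

def binom_pmf(n, k, p=0.5):
    """Probability of exactly k successes in n trials under the null (p = 0.5)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Probability of obtaining EXACTLY the observed result -- this is NOT the p-value.
p_exact = binom_pmf(n, k)

# One-sided p-value: probability of a result AT LEAST as extreme
# as the one observed (8, 9, or 10 heads), assuming the null is true.
p_value = sum(binom_pmf(n, i) for i in range(k, n + 1))

print(p_exact)  # 45/1024  ≈ 0.0439
print(p_value)  # 56/1024  ≈ 0.0547
```

Note that the two numbers fall on opposite sides of the conventional 0.05 threshold: the "exactly" probability would be called significant, while the correctly computed "at least as extreme" p-value would not.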

Comment author: Douglas_Knight 14 July 2014 02:32:00PM 3 points

Quoting authorities without further commentary is a dick thing to do. I am going to spend more words speculating about the intention of the quote than are in the quote, let alone than you bothered to type.

I have no idea what you think is relevant about that passage. It says exactly what I said, except transformed from the effect size scale to the p-value scale. But somehow I doubt that's why you posted it. The most common problem in the comments on this thread is that people confuse the false positive rate with the false negative rate, so my best guess is that you are making that mistake and thinking the passage supports that error (though I have no idea why you're telling me). Another possibility, slightly more relevant to this subthread, is that you're pointing out that some people use other significance thresholds. But in medicine, they don't. They almost always use 0.05 (95% confidence), though sometimes 0.10 (90%).
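The false-positive/false-negative confusion mentioned above can be made concrete with a short simulation. This is an illustrative sketch, not from the thread: the test, sample size, and the hypothetical alternative (a coin biased to 0.8) are all made up for the example. The false positive rate is the chance of rejecting the null when it is true; the false negative rate is the chance of failing to reject when the alternative is true. They are different quantities with different values.

```python
import random

random.seed(0)
TRIALS = 20_000
N, CUTOFF = 10, 8  # decision rule: reject "fair coin" if >= 8 heads in 10 flips

def heads(p):
    """Number of heads in N flips of a coin with heads-probability p."""
    return sum(random.random() < p for _ in range(N))

# False positive rate: the null is true (fair coin), yet we reject.
fp_rate = sum(heads(0.5) >= CUTOFF for _ in range(TRIALS)) / TRIALS

# False negative rate: the alternative is true (coin biased to 0.8),
# yet we fail to reject the null.
fn_rate = sum(heads(0.8) < CUTOFF for _ in range(TRIALS)) / TRIALS

print(fp_rate)  # ≈ 0.055 (matches the one-sided tail probability under the null)
print(fn_rate)  # ≈ 0.32  (depends on the alternative, not on the null at all)
```

The false positive rate is fixed by the decision rule and the null hypothesis alone; the false negative rate additionally depends on the unknown true effect, which is exactly why the two cannot be swapped for one another.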

Comment author: V_V 20 July 2014 03:37:02PM 1 point

My confusion is about "at least" vs. "exactly". See my answer to Cyan.