
TitaniumDragon comments on The Universal Medical Journal Article Error - Less Wrong Discussion

Post author: PhilGoetz 29 April 2014 05:57PM


Comments (189)


Comment author: TitaniumDragon 16 April 2013 06:34:33PM 1 point

This is why you never eyeball data. Humans are terrible at understanding randomness. This is why statistical analysis is so important.

Something at 84% confidence is not at 95%, and 95% is a low level of confidence to begin with - it is a nice rule of thumb, but if you're doing studies like this you really want to crank it up even further to deal with publication bias. The fix is to publish regardless of whether you find an effect or not, and to encourage others to do the same.
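To make the gap concrete, here is a minimal sketch using only Python's standard library (`statistics.NormalDist`); the function name is my own, and the thresholds are the standard two-sided critical z values, not anything from the study under discussion:

```python
from statistics import NormalDist

std_normal = NormalDist()  # standard normal: mean 0, sd 1

def z_for_confidence(conf):
    """Two-sided critical z value for a given confidence level."""
    return std_normal.inv_cdf(1 - (1 - conf) / 2)

z84 = z_for_confidence(0.84)  # roughly 1.41
z95 = z_for_confidence(0.95)  # roughly 1.96
z99 = z_for_confidence(0.99)  # roughly 2.58
```

An "84% confidence" result clears a noticeably lower bar than 95%, and cranking the level up further (99% and beyond) raises the bar faster still - which is the point about compensating for publication bias.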

Publication bias (positive results are much more likely to be reported than negative results) further hurts your ability to draw conclusions.

The reason that the FDA said what they did is that there isn't evidence to suggest that it does anything. If you don't have statistical significance, then you don't really have anything, even if your eyes tell you otherwise.

Comment author: buybuydandavis 17 April 2013 03:03:44AM 0 points

Humans are terrible at understanding randomness.

Some are more terrible than others. A little bit of learning is a dangerous thing. Grown-ups eyeball their data and know the limits of standard hypothesis testing.

The reason that the FDA said what they did is that there isn't evidence to suggest that it does anything.

Yeah, evidence that the FDA doesn't accept doesn't exist.

Comment author: TitaniumDragon 17 April 2013 10:03:11AM 3 points

The people who believe they are grown-ups who can eyeball their data, and who claim results that fly in the face of statistical rigor, are almost invariably the people least able to do so. I have seen this time and again, and Dunning-Kruger suggests the same: the least able are the most likely to trust their eyeballing, on the idea that they are better at it than most, whereas the most able will look at a surprising result, try to figure out why they're wrong, and consider redoing the study if they suspect a hidden effect that their present data pool is too small to detect. But repeating your experiment is always dangerous if you are looking for a particular outcome - repeating until you get the result you want is bad practice, especially if you don't raise the bar for statistical rigor enough to compensate for the repeated attempts - so you have to keep that carefully in mind, control your experiment, and set your expectations accordingly.
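The repetition danger can be simulated directly. Here is a rough stdlib-Python sketch (the function names, sample sizes, and repeat counts are illustrative inventions, not from any actual study): under a true null hypothesis, one honest test fires at roughly the nominal 5% rate, but an experimenter allowed to rerun the study several times and stop at the first "hit" sees a much higher false-positive rate.

```python
import random
from statistics import NormalDist

ALPHA = 0.05
Z_CRIT = NormalDist().inv_cdf(1 - ALPHA / 2)  # about 1.96, two-sided

def significant(heads, n):
    """Two-sided z test of a fair coin (null: p = 0.5)."""
    z = (heads - n * 0.5) / (n * 0.25) ** 0.5
    return abs(z) > Z_CRIT

def false_positive_rate(repeats, n=100, trials=2000, seed=0):
    """Fraction of null experiments declared 'significant' when the
    experimenter may rerun the study up to `repeats` times and stop
    at the first significant result."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # any() short-circuits: the experimenter stops at the first "hit"
        if any(significant(sum(rng.random() < 0.5 for _ in range(n)), n)
               for _ in range(repeats)):
            hits += 1
    return hits / trials

one_run = false_positive_rate(repeats=1)    # near the nominal 5%
five_runs = false_positive_rate(repeats=5)  # well above 5%
```

With five allowed reruns the chance of at least one spurious "significant" result is roughly 1 - (1 - 0.05)^5, around 23%, which is why the significance threshold must be tightened when an experiment is repeated.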

Comment author: buybuydandavis 17 April 2013 10:32:02AM 0 points

statistical rigor

The problem we started with was that "statistical rigor" is generally not rigorous. Most of those employing it don't know what the result would mean even under the assumptions of the test, and fewer still realize that the assumptions themselves make little sense.