ChristianKl comments on Too good to be true - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (119)
Only if it's statistically significant. It could be a small enough effect that they don't notice unless they're looking for it (if you're going to publish a finding from either extreme, you're supposed to use a two-tailed test, so they'd presumably want something stronger than p = 0.05), but large enough to keep them from accidentally noticing the opposite effect.
Not all statistical analysis has to be preregistered. If the data had a trend suggesting that vaccination might reduce autism, I'm sure the researchers would run a test for it.
If the study is underpowered to find an effect in that direction, it's also likely to be underpowered to find an effect in the other direction.
Can someone with more statistical expertise run a power analysis to see whether these studies are underpowered to detect effects in either direction?
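Answering that question properly would require the actual sample sizes from the studies, but the symmetry point above can at least be sketched: a two-sided z-test has the same power against an effect of +d as against −d. Here is a minimal stdlib-only sketch of an approximate power calculation for a two-sample z-test; the effect size (Cohen's d = 0.1) and group size (500) are illustrative assumptions, not figures from any real vaccine study:

```python
from statistics import NormalDist

def two_sided_power(effect_size: float, n_per_group: int,
                    alpha: float = 0.05) -> float:
    """Approximate power of a two-sided two-sample z-test for a
    standardized mean difference (Cohen's d) of `effect_size`."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)  # critical value, ~1.96 for alpha = 0.05
    # Noncentrality of the test statistic under the alternative:
    delta = effect_size * (n_per_group / 2) ** 0.5
    # Probability the statistic lands in either rejection region:
    return (1 - nd.cdf(z_crit - delta)) + nd.cdf(-z_crit - delta)

# Illustrative numbers only: a small effect (d = 0.1), 500 per group.
print(round(two_sided_power(0.1, 500), 2))   # roughly 0.35
# Power is symmetric: detecting -0.1 is exactly as hard as detecting +0.1.
print(round(two_sided_power(-0.1, 500), 2))  # same value
```

With these assumed numbers the study would miss a small effect in either direction about two times out of three, which is the sense in which being underpowered cuts both ways.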