All of Elver's Comments + Replies

Elver 160

This needs to be turned into a short film. Now!

Elver 30

Something popped into my mind while I was reading the example at the very beginning. What about research that sets out to prove one thing, but discovers something else?

A group of scientists wants to see if there's a link between the consumption of Coca-Cola and stomach cancer. They put together a huge questionnaire full of dozens of questions and have 1000 people fill it out. Looking at the data, they discover that there is no correlation between Coca-Cola drinking and stomach cancer, but there is a correlation between excessive sneezing and having large...
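A back-of-the-envelope sketch of why such a dredged-up correlation is almost guaranteed to appear somewhere in the questionnaire (the threshold and the number of comparisons below are made-up numbers, not anything from the scenario):

```python
# Illustrative only: with dozens of unrelated comparisons, at least one
# spurious "significant" correlation is nearly guaranteed even when no
# real effect exists anywhere in the data.
alpha = 0.05     # conventional significance threshold (assumed)
n_tests = 60     # "dozens of questions" implies dozens of pairwise comparisons (assumed)

p_any_false_positive = 1 - (1 - alpha) ** n_tests
print(p_any_false_positive)  # ~0.95: a chance "hit" is almost certain
```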

1 alex_zag_al
I have no idea what's done in actual statistical practice, but it seems to make sense to do this: publish the likelihood ratio for each correlation. The likelihood ratio for the correlation being real and replicable will be very high.

Since they bothered to do the test, you can figure that people in the know have decently sized prior odds for the association being real and replicable. There must have been animal studies or a biochemical argument or something. Consequently, a high likelihood ratio for this hypothesis may have been enough to convince them - that is, when it's multiplied with the prior, the resulting posterior may have been high enough to represent the "I'm convinced" state of knowledge.

But the prior odds for the correlation being real and replicable are the same tiny prior odds you would have for any equally unsupported correlation. When they combine the likelihood ratio with their prior odds, they do end up with much higher posterior odds for it than they do for other arbitrary-seeming correlations - but still insignificant.

The critical thing that distinguishes the two hypotheses is whatever previous evidence led them to attempt the test; that's why the prior for the association is higher. It's subjective only in the sense that it depends on what you've already seen - it doesn't depend on your thoughts. Whereas, in what Kindly says is the standard solution, you apply a different test depending upon what the researcher's intentions were.

(I have no idea how you would calculate the prior odds. I mean, Solomonoff induction with your previous observations is the Carnot engine for doing it, but I have no idea how you would actually do it in practice.)
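A minimal numerical sketch of the odds-form update described above; the likelihood ratio and both sets of prior odds are made-up numbers purely for illustration:

```python
# Odds-form Bayes' rule, as described in the comment above:
# posterior odds = prior odds * likelihood ratio. All numbers are illustrative.

def posterior_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Multiply prior odds by the likelihood ratio to get posterior odds."""
    return prior_odds * likelihood_ratio

likelihood_ratio = 100.0  # assumed strength of the published evidence

# Hypothesis the researchers set out to test: prior support from animal
# studies or a biochemical argument gives it decent prior odds (assumed 1:10).
tested_prior = 1 / 10

# Arbitrary correlation dredged from the same questionnaire: no prior
# support, so tiny prior odds (assumed 1:100,000).
arbitrary_prior = 1 / 100_000

print(posterior_odds(tested_prior, likelihood_ratio))     # 10.0   -> convincing
print(posterior_odds(arbitrary_prior, likelihood_ratio))  # 0.001  -> still negligible
```

The same likelihood ratio lands in very different places depending on the prior, which is the asymmetry the comment is pointing at.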
2 Baruta07
Before they publish anything (other than an article on Coca-Cola not being related to stomach cancer), they should first use a different test group to determine that the first result wasn't a sampling fluke or otherwise biased. (Perhaps sneezing wasn't causing large ears after all, or large ears were correlated with something that also caused sneezing.) What brought the correlation to your attention in the first place shouldn't be the same data that proves it. "If A then B" is a separate experiment from "If C then D" and should require separate additional proof.
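A quick simulation sketch of that point, with assumed numbers: when there is no real effect, a test's p-value is uniform, so a chance "hit" from the first sample rarely survives an independent second sample.

```python
# Illustrative simulation: correlations "discovered" by chance in one sample
# usually fail to show up again in a fresh, independent sample.
import random

random.seed(1)
alpha = 0.05         # significance threshold (assumed)
n_correlations = 60  # hypothetical number of question pairs examined

# With no real effects, each test's p-value is uniform on [0, 1].
first_study = [random.random() for _ in range(n_correlations)]
hits = [i for i, p in enumerate(first_study) if p < alpha]

# Re-test only the "hits" on an independent second group of respondents.
replicated = [i for i in hits if random.random() < alpha]

print(f"hits in first study: {len(hits)}, replicated: {len(replicated)}")
```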
Elver 150

This post is unusually white. The two arguments -- all shades of gray being seen as the same shade and science being a demonstrably better "religion" -- have seriously expanded my mind. Thank you!

Elver 60

Maybe they're asking so nervously because they were planning to set up a cult around the very same idea?

The Church of Frozen Heads. Come worship the meat popsicle.