Well, perhaps a bit too simple. Consider this. You set your confidence level at 95% and start tossing a coin. You observe 100 tails out of 100 throws. You publish a report saying "the coin has tails on both sides, at a 95% confidence level", because that is the level you chose during the design. Then 99 other researchers repeat your experiment with the same coin, arriving at the same 95%-confidence conclusion. But at 95% confidence you would expect about 5 of the 100 reports to claim otherwise! The paradox is resolved when somebody comes up with a trick using a mirror to observe both sides of the coin at once, finally concluding that the coin is two-tailed with 100% confidence.
What was the mistake?
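Before reading on, it is worth checking the arithmetic behind the "expect about 5 dissenting reports" intuition. A minimal sketch in Python (the 95%/5% split is the significance threshold from the setup above; the coin probabilities are ordinary binomial arithmetic):

```python
# Chance that a *fair* coin produces 100 tails in 100 throws --
# the event that triggers the "two-tailed" conclusion.
p_all_tails_fair = 0.5 ** 100
print(p_all_tails_fair)  # ~7.9e-31, vastly below the 5% threshold

# Chance that a genuinely two-tailed coin produces anything *other*
# than 100 tails, i.e. the chance a replication disagrees:
p_dissent_two_tailed = 0.0
print(p_dissent_two_tailed)

# So neither probability is anywhere near 5%. The 5% figure bounds the
# error rate of the *procedure* under its design assumptions; it is not
# a prediction that 5% of replications will reach the opposite verdict.
```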