Title: [SEQ RERUN] Hindsight Devalues Science Tags: sequence_reruns Today's post, Hindsight Devalues Science was originally published on 17 August 2007. A summary (taken from the LW wiki):
Hindsight bias leads us to systematically undervalue scientific findings, because we find it too easy to retrofit them into our models of the world. This unfairly devalues the contributions of researchers. Worse, it prevents us from noticing when we are seeing evidence that doesn't fit what we really would have expected. We need to make a conscious effort to be shocked enough.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Hindsight bias, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
That vagueness is deliberate. Eliezer didn't reveal within the post which option was correct, because hindsight bias would then take over and make the results seem predictable. Instead, the reader is left with the uncertainty, and with the problem of working out which answer was actually true. I went through each proposition individually and tried to determine whether it was true; I was correct four out of six times, which seems like a reasonable score.
I think readers should attempt to make their own advance predictions (preferably written down, although I admit to skipping that step) before looking up the results of the actual study. The correct results are available, but I would recommend that they not be discussed or linked to on Less Wrong.
The obvious thing to do would be to choose the reversal parity randomly (by flipping a coin), independently for each claim.
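A minimal sketch of that coin-flip procedure, in Python. The claim pairs below are hypothetical placeholders (the actual study's propositions are deliberately not reproduced here); the point is only the mechanism: each claim is independently shown either in its original or its reversed form, with equal probability.

```python
import random

# Hypothetical (original, reversed) claim pairs -- illustrative only,
# not the propositions from the actual study.
claims = [
    ("Better-educated soldiers adjusted more easily.",
     "Less-educated soldiers adjusted more easily."),
    ("Soldiers from rural backgrounds were in better spirits.",
     "Soldiers from urban backgrounds were in better spirits."),
]

def randomize_parity(pairs, rng=random):
    """Flip a fair coin for each claim, independently, to decide
    whether to present the original statement or its reversal.
    Returns a list of (presented_statement, was_reversed) tuples."""
    presented = []
    for original, reversed_form in pairs:
        flipped = rng.random() < 0.5  # independent fair coin per claim
        presented.append((reversed_form if flipped else original, flipped))
    return presented

for statement, flipped in randomize_parity(claims):
    print(statement, "(reversed)" if flipped else "(original)")
```

Because each flip is independent, a reader cannot infer the parity of one claim from another, which is what keeps hindsight from quietly doing the work.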