Clippy comments on Error detection bias in research - Less Wrong
Feynman once talked about this specific issue during a larger speech:
Yes, that's a good point, and I've struggled with these issues a lot. It's related to the concept of an information cascade, and it's why CLIP (in revs after 2007) has mechanisms that force you to trace the source of a belief so that it doesn't "echo" and then amplify without bound.
In the scenario Feynman refers to, CLIP would have made you state the reason for any adjustment toward Millikan's result in later experiments. Subsequent updates would then necessarily discount for this echo, preventing unwarranted re-corroboration of Millikan's value.
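A minimal sketch of what such echo-discounting might look like, assuming a simple log-odds belief store where every piece of evidence carries the set of root sources it derives from (all names here are invented for illustration, not actual CLIP internals):

```python
import math

class BeliefLedger:
    """Hypothetical provenance-tracked belief updater: evidence whose
    root sources were already counted contributes nothing new, so a
    belief cannot "echo" back through derived reports and amplify."""

    def __init__(self, prior_log_odds=0.0):
        self.log_odds = prior_log_odds
        self.seen_sources = set()

    def update(self, log_likelihood_ratio, roots):
        # Weight the evidence by the fraction of its root sources that
        # are genuinely novel; fully derivative evidence gets weight 0.
        roots = set(roots)
        novel = roots - self.seen_sources
        weight = len(novel) / len(roots) if roots else 0.0
        self.log_odds += weight * log_likelihood_ratio
        self.seen_sources |= roots
        return self.log_odds

    def probability(self):
        return 1.0 / (1.0 + math.exp(-self.log_odds))
```

In the Millikan scenario, a replication whose stated reason for adjustment traces back to Millikan's own value would carry `roots={"millikan"}` and be discounted to zero, while a genuinely independent measurement would update the belief at full weight.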
CLIP has a harder time with the problem that User:neq1 is referring to, of course, because of the issues that arise when computing probabilities of logical outputs.
CLIP = Clippy Language Interface Protocol, see link.