John_Maxwell_IV comments on Knowledge value = knowledge quality × domain importance - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (40)
The quality of a belief is not linear in the number of participants in the study supporting it.
You're ignoring the heavily diminishing returns from additional data points. In other words, to persuade me that studies with many participants really are much better, you'd have to do the math and show that if I randomly sampled just a few study participants and drew inferences from their results alone, those inferences would frequently be wrong.
This seems pretty clearly not the case (see analysis in my reply to this comment).
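A minimal sketch of the subsampling test the parent comment proposes: draw a small "subsample" from a population with a known true effect, and count how often the subsample's mean difference points the wrong way. The effect size (0.5 SD), per-arm subsample size (10), and trial count are all illustrative assumptions, not numbers from the discussion.

```python
import random

random.seed(1)

TRUE_EFFECT = 0.5   # assumed true effect size in SD units (illustrative)
SUBSAMPLE = 10      # "just a few" participants per arm (illustrative)
TRIALS = 10000

wrong = 0
for _ in range(TRIALS):
    treated = [random.gauss(TRUE_EFFECT, 1) for _ in range(SUBSAMPLE)]
    control = [random.gauss(0, 1) for _ in range(SUBSAMPLE)]
    # Count the inference as "wrong" if the subsample's mean
    # difference has the wrong sign (treated looks no better).
    if sum(treated) / SUBSAMPLE - sum(control) / SUBSAMPLE <= 0:
        wrong += 1

print(wrong / TRIALS)  # fraction of sign errors
```

Under these assumptions the sign error rate comes out on the order of 10-15%, which is one way to quantify whether "just a few" participants would frequently mislead you.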
Additionally, in domains like negotiation, I'd guess that decent-quality knowledge of many facts is more valuable than high-quality knowledge of just a few. Studies are a good way to get high-quality knowledge regarding a few facts, but not decent-quality knowledge regarding many. (Per unit effort.)
Testing something a bunch of times doesn't make it the thing you most need tested. (And some things may be hard to test cleanly.)
Although the win (expressed as precision of an effect size estimate) from upping the sample size n probably only goes as about √n, I think that's enough for gwern's quantitative point to go through. An RCT with a sample size of e.g. 400 would still be 10 times better than 4 self-experiments by this metric. (And this is leaving aside gwern's point about methodological quality. RCTs punch above their weight because random assignment allows direct causal inference.)
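The arithmetic behind the "10 times better" claim can be made explicit. Taking precision as the reciprocal of the standard error σ/√n (so precision ∝ √n, with σ assumed equal across studies), a quick sketch:

```python
import math

def precision(n, sigma=1.0):
    """Precision of a mean estimate: the reciprocal of the standard error sigma/sqrt(n)."""
    return math.sqrt(n) / sigma

# Ratio of precisions: an RCT with n = 400 vs. 4 self-experiments.
ratio = precision(400) / precision(4)
print(ratio)  # 10.0
```

That is, √(400/4) = 10, matching the comment's figure (before accounting for any methodological advantage of randomization).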
Where is the math for this?
I agree that methodology is important, but humans can often be good at inferring causality even without randomized controlled trials.
Edit: more thoughts on why I don't think the Bienaymé formula is too relevant here; see also.
http://en.wikipedia.org/wiki/Variance#Sum_of_uncorrelated_variables_.28Bienaym.C3.A9_formula.29
(Of course, any systematic bias stays the same no matter how big you make the sample.)
What steven0461 said. Taking the square root of both sides of the Bienaymé formula (Var(X̄) = σ²/n for uncorrelated samples) gives the standard deviation of the mean going as 1/√n. Taking precision as the reciprocal of that "standard error" then gives a √n dependence.
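The 1/√n scaling is easy to check empirically. A hedged sketch (distribution, trial counts, and sample sizes are all illustrative): simulate many sample means from a unit-variance distribution and confirm that quadrupling n roughly halves the standard deviation of the mean.

```python
import random
import statistics

random.seed(0)

def sd_of_mean(n, trials=20000):
    """Empirical standard deviation of the mean of n draws
    from a standard normal (unit-variance) distribution."""
    means = [statistics.fmean(random.gauss(0, 1) for _ in range(n))
             for _ in range(trials)]
    return statistics.pstdev(means)

# 1/sqrt(n) scaling: quadrupling n should roughly halve the SD of the mean.
print(sd_of_mean(4))   # close to 1/sqrt(4)  = 0.5
print(sd_of_mean(16))  # close to 1/sqrt(16) = 0.25
```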
This is true, but we're also often wrong, and for small-to-medium effects it's often tough to say when we're right and when we're wrong without a technique that severs all possible links between confounders and treatment assignment.