
satt comments on Knowledge value = knowledge quality × domain importance - Less Wrong Discussion

Post score: 8 · Post author: John_Maxwell_IV, 16 April 2012 08:40AM




Comment author: satt, 16 April 2012 10:17:40PM, 4 points

You're ignoring heavily diminishing returns from additional data points.

Although the win (expressed as precision of an effect size estimate) from upping the sample size n probably only goes as about √n, I think that's enough for gwern's quantitative point to go through. An RCT with a sample size of e.g. 400 would still be 10 times better than 4 self-experiments by this metric. (And this is leaving aside gwern's point about methodological quality. RCTs punch above their weight because random assignment allows direct causal inference.)
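The √n comparison above can be checked with a quick simulation. This is an illustrative sketch only: the effect size, noise level, and trial counts are made up, and "precision" is taken as the reciprocal of the standard error of the mean, as in the comment.

```python
import numpy as np

rng = np.random.default_rng(0)
true_effect = 0.5  # hypothetical effect size, in arbitrary units

def standard_error_of_mean(n, trials=20000):
    """Empirically estimate the standard error of the sample mean at sample size n."""
    samples = rng.normal(true_effect, 1.0, size=(trials, n))
    return samples.mean(axis=1).std()

se_4 = standard_error_of_mean(4)     # 4 self-experiments
se_400 = standard_error_of_mean(400) # an RCT with n = 400
ratio = se_4 / se_400
print(ratio)  # precision scales as sqrt(n), so sqrt(400/4) = 10
```

The printed ratio comes out very close to 10, matching the claim that a 100-fold increase in sample size buys a 10-fold gain in precision.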

Comment author: John_Maxwell_IV, 16 April 2012 11:27:17PM, 1 point

Although the win (expressed as precision of an effect size estimate) from upping the sample size n probably only goes as about √n

Where is the math for this?

I agree that methodology is important, but humans can often be good at inferring causality even without randomized controlled trials.

Edit: more thoughts on why I don't think the Bienaymé formula is too relevant here; see also.

Comment author: steven0461, 17 April 2012 12:00:10AM, 3 points

http://en.wikipedia.org/wiki/Variance#Sum_of_uncorrelated_variables_.28Bienaym.C3.A9_formula.29

(Of course, any systematic bias stays the same no matter how big you make the sample.)
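The point about systematic bias can also be seen in a toy simulation (hypothetical numbers throughout): if every measurement carries a constant offset, the estimate converges to the biased value no matter how large the sample gets.

```python
import numpy as np

rng = np.random.default_rng(2)
true_effect, bias = 0.5, 0.3  # made-up values for illustration

# Each measurement is drawn around true_effect + bias, not true_effect.
estimates = {n: rng.normal(true_effect + bias, 1.0, size=n).mean()
             for n in (10, 1000, 100000)}
for n, est in estimates.items():
    print(n, est)  # converges toward 0.8, never toward the true 0.5
```

More data shrinks the random scatter around 0.8 but does nothing to close the 0.3 gap from the truth.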

Comment author: satt, 17 April 2012 09:32:26PM, 1 point

Where is the math for this?

What steven0461 said. The Bienaymé formula gives the variance of the mean of n uncorrelated measurements as σ²/n; taking the square root of both sides gives the standard deviation of the mean going as σ/√n. Taking precision as the reciprocal of that "standard error" then gives a √n dependence.
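A numerical check of that derivation (arbitrary σ and n, chosen only for illustration): the empirically measured standard deviation of the sample mean should match σ/√n.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, n = 2.0, 100  # hypothetical noise level and sample size

# Simulate many repeated experiments of size n and look at how their means scatter.
means = rng.normal(0.0, sigma, size=(50000, n)).mean(axis=1)
empirical_se = means.std()
predicted_se = sigma / np.sqrt(n)
print(empirical_se, predicted_se)  # both should be about 0.2
```

The two numbers agree closely, confirming the σ/√n scaling (and hence the √n gain in precision).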

I agree that methodology is important, but humans can often be good at inferring causality even without randomized controlled trials.

This is true, but we're also often wrong, and for small-to-medium effects it's often tough to say when we're right and when we're wrong without a technique that severs all possible links between confounders and outcome.