
John_Maxwell_IV comments on Knowledge value = knowledge quality × domain importance - Less Wrong Discussion

Post author: John_Maxwell_IV, 16 April 2012 08:40AM (8 points)


Comments (40)


Comment author: John_Maxwell_IV, 16 April 2012 11:27:17PM (1 point)

> Although the win (expressed as precision of an effect size estimate) from upping the sample size n probably only goes as about √n

Where is the math for this?

I agree that methodology is important, but humans can often be good at inferring causality even without randomized controlled trials.

Edit: more thoughts on why I don't think the Bienaymé formula is too relevant here; see also.

Comment author: steven0461, 17 April 2012 12:00:10AM (3 points)

http://en.wikipedia.org/wiki/Variance#Sum_of_uncorrelated_variables_.28Bienaym.C3.A9_formula.29
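For reference, a restatement of the linked formula and the consequence the thread relies on (my summary, using the usual notation for i.i.d. variables with common variance σ²):

```latex
\operatorname{Var}\!\left(\sum_{i=1}^{n} X_i\right)
  = \sum_{i=1}^{n} \operatorname{Var}(X_i)
\quad\Longrightarrow\quad
\operatorname{Var}(\bar{X}) = \frac{\sigma^2}{n},
\qquad
\operatorname{SD}(\bar{X}) = \frac{\sigma}{\sqrt{n}}.
```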

(Of course, any systematic bias stays the same no matter how big you make the sample.)

Comment author: satt, 17 April 2012 09:32:26PM (1 point)

> Where is the math for this?

What steven0461 said. Square rooting both sides of the Bienaymé formula gives the standard deviation of the mean going as 1/√n. Taking precision as the reciprocal of that "standard error" then gives a √n dependence.
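This scaling is easy to check numerically. A minimal Monte Carlo sketch (mine, not from the thread): estimate the standard error of the sample mean at two sample sizes and confirm that quadrupling n roughly halves it.

```python
import random
import statistics

random.seed(0)

def standard_error_of_mean(n, trials=2000):
    """Empirical standard deviation of the sample mean over many repeated samples."""
    means = []
    for _ in range(trials):
        sample = [random.gauss(0, 1) for _ in range(n)]
        means.append(sum(sample) / n)
    return statistics.pstdev(means)

se_100 = standard_error_of_mean(100)   # theory: 1/sqrt(100) = 0.100
se_400 = standard_error_of_mean(400)   # theory: 1/sqrt(400) = 0.050
print(se_100, se_400, se_100 / se_400)  # ratio should be close to 2
```

So a fourfold increase in data buys only a twofold gain in precision, which is the "win only goes as about √n" point in the quoted passage.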

> I agree that methodology is important, but humans can often be good at inferring causality even without randomized controlled trials.

This is true, but we're also often wrong, and for small-to-medium effects it's often tough to say when we're right and when we're wrong without a technique that severs all possible links between confounders and outcome.
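To illustrate this point, and steven0461's parenthetical about systematic bias, here is a sketch of my own (not from the thread): a hidden variable Z drives both the "treatment" X and the outcome Y, while X has no causal effect on Y at all. The naive slope of Y on X converges to a nonzero value as n grows; more data sharpens the wrong answer rather than fixing it.

```python
import random

random.seed(0)

def naive_slope(n):
    """OLS slope of Y on X, ignoring the unobserved confounder Z."""
    xs, ys = [], []
    for _ in range(n):
        z = random.gauss(0, 1)      # unobserved confounder
        x = z + random.gauss(0, 1)  # "treatment", driven partly by Z
        y = z + random.gauss(0, 1)  # outcome, driven by Z but NOT by X
        xs.append(x)
        ys.append(y)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    var = sum((x - mx) ** 2 for x in xs) / n
    return cov / var

for n in (100, 10_000, 100_000):
    print(n, round(naive_slope(n), 3))  # converges to ~0.5, not to the true 0
```

Randomizing X (as an RCT does) would cut the X–Z link and send the estimate to 0, which is the sense in which randomization "severs all possible links between confounders and outcome".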