Brian Tomasik's latest article, 'Quantify with Care', seems likely to interest readers of this forum, so I'm posting a link to it here. Abstract:

Quantification and metric optimization are powerful tools for reducing suffering, but they have to be used carefully. Many studies can be noisy, and results that seem counterintuitive may indeed be wrong because of sensitivity to experiment conditions, human error, measurement problems, or many other reasons. Sometimes you're looking at the wrong metric, and optimizing a metric blindly can be dangerous. Designing a robust set of metrics is actually a nontrivial undertaking that requires understanding the problem space, and sometimes it's more work than necessary. There can be a tendency to overemphasize statistics at the expense of insight and to use big samples when small ones would do. Finally, think twice about complex approaches that sound cool or impressive when you could instead use a dumb, simple solution.
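The points about noisy results and blind metric optimization can be made concrete with a toy simulation (my own sketch, not from the article; all numbers and the setup are invented for illustration): if you pick whichever of several options scores best on a noisy measurement, you frequently end up with an option that is mediocre on the true quantity you care about.

```python
import random

random.seed(0)

def simulate(n_options=20, noise=2.0, trials=10_000):
    """How often does 'pick the best-measured option' pick the truly best one?"""
    hits, regret = 0, 0.0
    for _ in range(trials):
        true_values = [random.gauss(0, 1) for _ in range(n_options)]
        # Each measurement is the true value plus independent noise.
        measured = [v + random.gauss(0, noise) for v in true_values]
        chosen = max(range(n_options), key=lambda i: measured[i])
        best = max(range(n_options), key=lambda i: true_values[i])
        hits += (chosen == best)
        regret += true_values[best] - true_values[chosen]
    return hits / trials, regret / trials

for noise in (0.1, 1.0, 3.0):
    hit_rate, avg_regret = simulate(noise=noise)
    print(f"noise={noise}: picked the truly best option {hit_rate:.0%} of the time, "
          f"average shortfall {avg_regret:.2f} SD")
```

As the measurement noise grows relative to the real differences between options, "optimize the metric" quietly degrades into "optimize the noise".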

3 comments:

I don't think there's anything really new here for long-time LWers (we all know Goodhart's law/Lucas critique, Ioannidis-style results, 'extraordinary claims require extraordinary evidence', etc.), but some of the points about the cost-benefit of statistical precision might be novel to some of us.
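One way to see the cost-benefit point (my own numbers, not the commenter's or Tomasik's): the standard error of a sample mean shrinks only as 1/sqrt(n), so each additional digit of precision costs roughly 100 times the data.

```python
import math

# Standard error of a sample mean scales as sigma / sqrt(n):
# quadrupling the sample only halves the error bar.
sigma = 1.0  # assumed population standard deviation
for n in (10, 100, 1_000, 10_000, 100_000):
    se = sigma / math.sqrt(n)
    print(f"n = {n:>7,}  standard error ~ {se:.4f}")
```

Past the point where the error bar is already small relative to the differences you would act on, extra samples buy precision you can't use.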

I agree with this completely. A lot of the time we simply don't have information beyond broad generalizations, or estimates accurate to better than an order of magnitude, and yet people still insist on formulating and acting on precise quantifications, even though the information needed to support that precision doesn't exist. A sketch of what that uncertainty does to an estimate follows below.
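A minimal sketch of that point (the three factors and their ranges below are hypothetical, chosen only for illustration): when each input to a back-of-the-envelope estimate is known only to within an order of magnitude, the product of a few such inputs spans several orders of magnitude, and quoting a single precise-looking point value hides that.

```python
import math
import random

random.seed(0)

def log_uniform(low, high):
    """Draw a value whose order of magnitude is uniform between low and high."""
    return 10 ** random.uniform(math.log10(low), math.log10(high))

# Hypothetical three-factor estimate, each factor known only to ~1 order of magnitude.
samples = [
    log_uniform(1e3, 1e4) * log_uniform(0.01, 0.1) * log_uniform(0.1, 1.0)
    for _ in range(100_000)
]
samples.sort()
lo, med, hi = (samples[int(len(samples) * q)] for q in (0.05, 0.5, 0.95))
print(f"5th pct ~ {lo:.1f}, median ~ {med:.1f}, 95th pct ~ {hi:.1f}")
print(f"90% interval spans a factor of ~ {hi / lo:.0f}")
```

Reporting the median alone looks rigorous, but the honest summary is the whole interval.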

This should go without saying, but you should be highly skeptical of any decision based on inestimable ineffables.