ChrisHibbert comments on Time to See If We Can Apply Anything We Have Learned - Less Wrong

Post author: MichaelVassar 18 June 2009 10:06AM

Comment author: Yvain 18 June 2009 04:45:46PM 7 points

I don't think this is a case of second-order good epistemology trying, and succeeding, to counter bad epistemology.

Let's say we run a study with 30 people, and we conclude ZM's method is the best, with p = .55 (sorry, I don't think in Bayesian when I have my psychology experimentation cap on), which is realistic for that kind of sample and the variability we can expect. Now what?
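(As a rough check on how little a study like this can show, here is a quick simulation; the even 10-per-arm split and the success rates below are my own assumptions, not anything from the proposed study.)

```python
# Simulate a 30-person, three-condition study many times and look at the
# p-values it produces. The true success rates are assumed for illustration,
# chosen to differ only modestly between techniques.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
true_rates = [0.50, 0.55, 0.60]  # assumed success rate for each technique
n_per_arm = 10                   # 30 subjects split across 3 conditions

p_values = []
for _ in range(1000):
    successes = np.array([rng.binomial(n_per_arm, p) for p in true_rates])
    table = np.vstack([successes, n_per_arm - successes])
    if (table.sum(axis=1) == 0).any():  # degenerate table (all succeed/fail)
        continue
    _, p, _, _ = chi2_contingency(table)
    p_values.append(p)

print(f"median p-value across simulated studies: {np.median(p_values):.2f}")
```

With arms this small and effects this modest, runs like this typically produce large p-values, which is the point.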

We could come up with some kind of hokey prior, say a 33% chance that each of our techniques is best, then apply it and end up with maybe a 38% chance that ZM's is best and a 31% chance each that mine and Pjeby's are best (no, I didn't actually do the math there). But first of all, that prior is hokey: Pjeby's a professional anti-procrastination expert, and we're giving him the same prior as me and Z.M. Davis? Second, we still don't really know what "best" means, and it's entirely possible that different methods are best for different people in complex ways. Third, I don't trust anyone, including myself, to know what to do with a 7% shift. I like my method better; should I give that up just because a very small study shifted the probabilities 7% toward ZM's? Fourth, we still wouldn't know how to apply this to picoeconomics as a theory: using any technique will increase success through the placebo effect alone, we have several techniques that all use picoeconomics to different degrees, and we would have to handwave new numbers into existence to calculate anything, probably ending up with something like a .1% or .2% shift in probabilities.
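For concreteness, here is a minimal sketch of that "hokey prior" arithmetic; the likelihoods are invented purely for illustration, since the comment gives no actual numbers:

```python
# Start with a uniform prior over which technique is best, apply a weak
# likelihood favoring ZM's (as a marginal study result would), and see how
# little the posterior moves. All numbers here are assumptions.
prior = {"ZM": 1/3, "Yvain": 1/3, "Pjeby": 1/3}
# Hypothetical likelihoods of the observed outcome under each hypothesis;
# a weak result means these sit close together.
likelihood = {"ZM": 0.40, "Yvain": 0.33, "Pjeby": 0.33}

unnormalized = {k: prior[k] * likelihood[k] for k in prior}
total = sum(unnormalized.values())
posterior = {k: v / total for k, v in unnormalized.items()}

for k, v in posterior.items():
    print(f"P({k}'s method is best | data) = {v:.2f}")
# With these assumed numbers the posterior is roughly 38% / 31% / 31% --
# a single-digit shift from the 33% starting point.
```

With likelihoods that close together, which is what a weak study produces, no choice of prior turns the result into a large update.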

And this is all assuming perfect study design, no confounders, and so on. It would take a lot of work. The best-case scenario is that all that work buys a single-digit probability shift; the realistic case is that there's a flaw somewhere in the process, or that we simply misinterpret the result (my guess is that people can't deal with a 2% shift correctly and just think "now there's evidence" and count the theory as a little more confirmed), in which case we'll actually be giving ourselves negative knowledge.

I'm not saying Bayes isn't useful, but it's useful when we have a lot of numbers, when we're willing to put in a very large amount of work, and when there's something clear and mathematical we can do with the output.

Comment author: ChrisHibbert 21 June 2009 04:36:30AM 0 points

I recently read The Cult of Statistical Significance. I realize that it's de rigueur to quote significance, but Ziliak and McCloskey insist that I ask: what is the hypothesized size of the effect?

If we run three conditions and end up with 4, 5, and 6 people showing some improvement, then calculating statistical significance obfuscates the fact that the differences are in the noise. If the same tests end up with 2, 4, and 8 people improving on some metric, we have stronger reason to suspect something is going on. Size matters, and it's usually more interesting than statistical significance.
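A quick illustration of the difference (the per-condition group size of 10 is my own assumption; the comment gives only the counts of improvers), comparing the chi-square p-value with Cramér's V, a simple effect-size measure:

```python
# Compare the two scenarios above: same total number of improvers,
# very different spreads. Group size per condition is assumed.
import numpy as np
from scipy.stats import chi2_contingency

def summarize(improved, n_per_condition=10):
    # Rows: improved vs. not improved; columns: the three conditions.
    table = np.vstack([improved, [n_per_condition - i for i in improved]])
    chi2, p, _, _ = chi2_contingency(table)
    n = table.sum()
    cramers_v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))  # effect size
    print(f"improved={improved}  p={p:.2f}  Cramer's V={cramers_v:.2f}")

summarize([4, 5, 6])  # differences in the noise
summarize([2, 4, 8])  # same total improvers, much larger spread
```

The second scenario shows both a smaller p-value and a much larger effect size; it's the size of the spread, not the significance threshold, that carries the interesting information.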