It seems to me that this blog has just reached its first real crisis.
Three people are announcing three apparently opposed beliefs with substantial real expected consequences, and yet no one has yet spoken, or even, it seems to me, implied, the key slogan: "LET'S USE SCIENCE!" Or, as hubristic Bayesian wannabes, rather than invoking Bayes as an idol to swear by, said "LET'S USE HUMANE REFLECTIVE DECISION THEORY, THE QUANTITATIVELY UNKNOWN BUT QUALITATIVELY INTUITED POWER DEEPER THAN SCIENCE, FROM WHICH IT STEMS AND TO WHICH OUR COMMUNITY IS DEVOTED".
If RDT were applied to our current situation, people would be analyzing Yvain's, Davis's, and Eby's proposals, working out exactly what their implications are, and trying to propose, in the name of SCIENCE, hypotheses which would distinguish between them, and, in the name of BAYES, confidence estimates of their analyses and of how well the denotations of their words have cleaved reality at the joints, enabling an odds ratio of updating to be extracted from a single data point. People would be working out what features of the models used by Yvain, Davis, and Eby constitute evidence against what other features. They would be trying to evaluate non-verbally, through subjectively opaque but known-to-be-informative processes vulnerable to verbal overshadowing, what relative odds to place on those different features of the models. Finally, they would be examining the expected costs entailed by the proposed experiments and selecting for performance those experiments which promise to provide the most information for the least cost. The cost estimate would include both the effort required to perform the experiments, probably best assessed with an outside view in most cases like these, and the dangers to the minds of the participants from possible adverse outcomes, taking into account, as well as possible, the structural uncertainty of the models.
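The "most information for the least cost" step can be sketched very crudely. Here the experiment names, information gains, and costs are all invented placeholders; in practice those estimates would have to come out of the model comparison described above:

```python
# Toy experiment-selection sketch: rank proposed experiments by expected
# information gained per unit cost. All numbers below are invented
# placeholders, not estimates from any actual proposal.

experiments = [
    # (name, expected information gain in bits, cost in arbitrary units
    #  combining effort and risk to participants)
    ("small self-report trial", 0.3, 1.0),
    ("month-long tracked trial", 1.2, 5.0),
    ("paired crossover trial", 0.9, 2.0),
]

# Pick the experiment with the best information-per-cost ratio.
best = max(experiments, key=lambda e: e[1] / e[2])
print("run first:", best[0])
```

The ranking is only as good as the two estimates feeding it, which is exactly where the outside view and the structural uncertainty of the models come in.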
I sincerely hope to see some of that in the comments section soon, either under this post or the "Applied Picoeconomics" post.
I don't think this is a case of second-order good epistemology trying, and succeeding, to counter bad epistemology.
Let's say we run a study with 30 people and conclude that ZM's method is the best, with p = .55 (sorry, I don't think in Bayesian terms when I have my psychology-experimentation cap on), which is realistic for that kind of sample and the variability we can expect. Now what?
We could come up with some kind of hokey prior, say a 33% chance that each of our techniques is best, then apply the study's result and end up with maybe a 38% chance that ZM's is best and a 31% chance each that mine and pjeby's are best (no, I didn't actually do the math there). But first, that prior is hokey: pjeby is a professional anti-procrastination expert, and we're giving him the same prior as me and Z.M. Davis? Second, we still don't really know what "best" means, and it's entirely possible that different methods are best for different people in complex ways. Third, I don't trust anyone, including myself, to know what to do with a 7% chance: I like my method better; should I give that up just because a very small study shifted the probabilities 7% toward ZM? Fourth, we still wouldn't know how to apply any of this to picoeconomics as a theory: using any technique will increase success through the placebo effect alone, we have several techniques that all draw on picoeconomics to different degrees, and we would have to handwave new numbers into existence to calculate anything, probably ending up with something like a .1% or .2% shift in probabilities.
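A toy version of that update, with likelihoods chosen purely for illustration (picking them is exactly the handwaving complained about above). Note that with a uniform prior, the posterior is just the normalized likelihoods:

```python
# Toy Bayesian update over "which technique is best".
# The prior is the hokey uniform one from the text; the likelihoods are
# hypothetical numbers invented to reproduce the 38%/31%/31% example.

prior = {"Yvain": 1/3, "ZM": 1/3, "pjeby": 1/3}

# P(observed study result | this technique is actually best) -- made up.
likelihood = {"Yvain": 0.31, "ZM": 0.38, "pjeby": 0.31}

unnormalized = {k: prior[k] * likelihood[k] for k in prior}
total = sum(unnormalized.values())
posterior = {k: v / total for k, v in unnormalized.items()}

for name, p in posterior.items():
    print(f"{name}: {p:.2f}")  # ZM comes out at 0.38, the others at 0.31
```

The mechanics are trivial; every contested judgment lives in the two dictionaries at the top.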
And this all assumes perfect study design, no confounders, and so forth. It would take a lot of work. The best-case scenario is that all that work buys a single-digit probability shift; the realistic case is that there's a flaw somewhere in the process, or that we simply misinterpret the result (my guess is that people can't deal with a 2% shift correctly and just think "now there's evidence" and count the theory as a little more confirmed), in which case we'll actually be giving ourselves negative knowledge.
I'm not saying Bayes isn't useful, but it's useful when we have a lot of numbers, when we're willing to put in a very large amount of work, and when there's something clear and mathematical we can do with the output.
I recently read The Cult of Statistical Significance. I realize that it's de rigueur to quote significance, but Ziliak and McCloskey insist that I ask: what's the hypothesized size of the effect?
If we run three conditions, end up with 4, 5, and 6 people getting some improvement, and calculate statistical significance, we obfuscate the fact that the difference is in the noise. If the same tests end up with 2, 4, and 8 people improving by some metric, then we have a stronger reason to suspect something is going on. Size matters, and it's usually more interesting than statistical significance.
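The point can be made concrete. Assuming 10 participants per condition (the comment above doesn't say, so this split is an assumption), a hand-rolled Pearson chi-square on the two hypothetical outcomes shows how much flatter the 4/5/6 pattern is than the 2/4/8 one:

```python
# Compare two hypothetical outcomes of the 30-person study, assuming
# 10 participants per condition (an assumption, not stated in the text).
# The spread of the improvement rates matters more than the statistic.

def chi_square(improved, n_per_group):
    """Pearson chi-square for a (conditions x improved/not-improved) table."""
    groups = len(improved)
    total_improved = sum(improved)
    total = groups * n_per_group
    exp_imp = total_improved / groups              # expected improved per group
    exp_not = (total - total_improved) / groups    # expected not-improved per group
    stat = 0.0
    for obs in improved:
        stat += (obs - exp_imp) ** 2 / exp_imp
        stat += ((n_per_group - obs) - exp_not) ** 2 / exp_not
    return stat

for label, counts in [("4/5/6", [4, 5, 6]), ("2/4/8", [2, 4, 8])]:
    rates = [c / 10 for c in counts]
    print(f"{label}: chi2 = {chi_square(counts, 10):.1f}, rates = {rates}")
```

With these assumed group sizes, the 4/5/6 table gives chi-square 0.8 (well inside the noise on 2 degrees of freedom) while 2/4/8 gives 7.5; but the more direct thing to look at is the improvement rates themselves, 40/50/60% versus 20/40/80%.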