Zian comments on Beautiful Probability - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Emil, thanks, fixed.
Doug, your analogy is not valid because a biased reporting method assigns a different likelihood function over the possible prior states than an unbiased one does. In this case, the single, fixed dataset that we see has a different likelihood under the possible prior states depending on the reporting method.
If a researcher who happens to be thinking biased thoughts carries out a fixed sequence of experimental actions, the resulting dataset we see does not have a different likelihood function to the possible prior states. All that a Bayesian needs to know is the experimental actions that were actually carried out and the data that was actually observed - not what the researcher was thinking at the time, or what other actions the researcher might have performed if things had gone differently, or what other dataset might then have been observed. We need only consider the actual experimental results.
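The classic stopping-rule version of this point can be sketched numerically. Suppose one researcher planned to run exactly n trials, and another planned to sample until the r-th success, and both happened to end with the same data: r successes in n trials. The two designs assign likelihoods that differ only by a constant factor independent of the unknown success probability p, so any Bayesian posterior comes out identical. (The numbers n=100, r=70 below are purely illustrative.)

```python
from math import comb

def binomial_lik(p, n, r):
    # Fixed-n design: run exactly n trials, count r successes.
    return comb(n, r) * p**r * (1 - p)**(n - r)

def neg_binomial_lik(p, n, r):
    # Optional-stopping design: sample until the r-th success,
    # which happens to arrive on trial n.
    return comb(n - 1, r - 1) * p**r * (1 - p)**(n - r)

n, r = 100, 70
ratios = [binomial_lik(p, n, r) / neg_binomial_lik(p, n, r)
          for p in (0.3, 0.5, 0.7, 0.9)]

# The ratio comb(n, r) / comb(n-1, r-1) = n / r does not depend on p,
# so the two likelihood functions are proportional and yield the same
# posterior under any prior: what the researcher *planned* is irrelevant
# once we condition on the data actually observed.
print(ratios)
```

The constant ratio is exactly n/r (here 100/70 ≈ 1.43), which cancels when the posterior is normalized.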
Londenio, see Ron's comment - it's not a strawperson.
Great point, but I worry that people will point to this post and say, "See? Publication bias, questionable study design, corporate funding, and varying peer review processes don't matter!"
In other words, it's good to strive for a fixed experimental process but reality is rarely that tidy.