
Luke_A_Somers comments on XKCD - Frequentist vs. Bayesians - Less Wrong Discussion

18 points · Post author: brilee · 09 November 2012 05:25AM




Comment author: Luke_A_Somers 09 November 2012 02:07:53PM 5 points

Good frequentists do that. The method itself doesn't promote this good practice.

Comment author: FiftyTwo 09 November 2012 04:18:52PM 7 points

And bad Bayesians use crazy priors.

Comment author: Luke_A_Somers 09 November 2012 06:37:10PM 11 points

1) There is no framework so secure that no one is dumb enough to foul it up.

2) Having to state a crazy prior explicitly brings the failure point forward in one's attention.

Comment author: FiftyTwo 09 November 2012 07:05:48PM 1 point

I agree, but noticing 2 requires looking into how they've done the calculations, so simply knowing it's Bayesian isn't enough.

Comment author: khafra 09 November 2012 08:48:19PM 0 points

It might be enough. If it's published in a venue where the authors would get called on bullshit priors, the fact that it's been published is evidence that they used reasonably good priors.

Comment author: JonathanLivengood 09 November 2012 09:43:34PM 1 point

The point applies well to evidentialists but not so well to personalists. If I am a personalist Bayesian -- the kind of Bayesian for which all of the nice coherence results apply -- then my priors just are my actual degrees of belief prior to conducting whatever experiment is at stake. If I do my elicitation correctly, then there is just no sense to saying that my prior is bullshit, regardless of whether it is calibrated well against whatever data someone else happens to think is relevant. Personalists simply don't accept any such calibration constraint.

Excluding a research report that has a correctly elicited prior smacks of prejudice, especially in research areas that are scientifically or politically controversial. Imagine a global warming skeptic rejecting a paper because its author reports having a high prior for AGW! Although I can see reasons to allow this sort of thing, e.g. "You say you have a prior of 1 that creationism is true? BWAHAHAHAHA!"

One might try to avoid the problems by reporting Bayes factors as opposed to full posteriors, or by using reference priors accepted by the relevant community, or something like that. But how to both make use of background information and avoid idiosyncratic craziness in a Bayesian framework is less straightforward than it might at first appear. Certainly the mathematical machinery is vulnerable to misuse.
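A minimal sketch of the Bayes-factor suggestion above, with made-up numbers (8 heads in 10 flips, and two point hypotheses about the coin's bias): the Bayes factor depends only on the data and the hypotheses, so readers with very different priors can agree on it while reaching different posterior odds.

```python
# Sketch: Bayes factors separate the evidence from the prior.
# Hypothetical data and hypotheses chosen for illustration only.
from math import comb

def binom_lik(k, n, p):
    """Binomial likelihood of k successes in n trials with success probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

k, n = 8, 10                                        # observed: 8 heads in 10 flips
bf = binom_lik(k, n, 0.8) / binom_lik(k, n, 0.5)    # Bayes factor, H2 (p=0.8) vs. H1 (p=0.5)

# Two readers with different prior odds multiply in the same Bayes factor,
# so the evidential part of the report is not idiosyncratic to either of them.
for prior_odds in (1.0, 1 / 99):   # even odds vs. strong skepticism toward H2
    posterior_odds = prior_odds * bf
    print(f"prior odds {prior_odds:.4f} -> posterior odds {posterior_odds:.4f}")
```

Reporting `bf` rather than `posterior_odds` is one way to publish the evidence without publishing one's personal prior.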

Comment author: JonathanLivengood 09 November 2012 06:42:30PM 2 points

That depends heavily on what "the method" picks out. If you mean the machinery of a null hypothesis significance test against a fixed-for-all-time significance level of 0.05, then I agree: the method doesn't promote good practice. But if we're talking about frequentism, then identifying the method with null hypothesis significance testing looks like attacking a straw man.

Comment author: Luke_A_Somers 12 November 2012 03:52:34AM 2 points

I know a bunch of scientists who learned a ton of canned tricks and take the (frequentist) statisticians' word on how likely associations are... and the statisticians never bothered to ask how a priori likely these associations were.

If this is a straw man, it is one that has regrettably been instantiated over and over again in real life.
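The failure mode described above can be sketched with the detector from the XKCD comic this thread discusses: the detector lies only when both dice come up six (a 1/36 false-positive rate, well under 0.05), yet if the hypothesis was a priori very unlikely, the posterior stays tiny. The prior of 1e-6 below is a made-up illustrative number, not anything from the comic.

```python
# Sketch: a "significant" result can leave an a priori unlikely hypothesis improbable.
# Detector setup from the XKCD comic: it lies iff two dice both roll six.
p_yes_given_true = 35 / 36    # detector reports truthfully unless both dice are sixes
p_yes_given_false = 1 / 36    # false-positive rate (the "p < 0.05" analogue)
prior = 1e-6                  # hypothetical prior that the hypothesis is true

# Bayes' theorem: P(true | yes) = P(yes | true) P(true) / P(yes)
posterior = (p_yes_given_true * prior) / (
    p_yes_given_true * prior + p_yes_given_false * (1 - prior)
)
print(f"posterior = {posterior:.2e}")  # still tiny, despite the low false-positive rate
```

This is exactly the question the statisticians in the anecdote never asked: how a priori likely was the association before the test was run?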