XFrequentist comments on Comments on "When Bayesian Inference Shatters"? - Less Wrong

Post author: Crystalist 07 January 2015 10:56PM


Comments (31)


Comment author: XFrequentist 08 January 2015 05:18:25AM 4 points

I call forth the mighty Cyan!

Comment author: Cyan 08 January 2015 06:05:09PM 8 points

I like it when I can just point folks to something I've already written.

The upshot is that two things interact to produce the shattering phenomenon. First, the notion of closeness permits some very pathological models to be considered close to sensible models. Second, the optimization to find the worst-case model close to the assumed model is done in a post-data way, not in prior expectation. So what you get is this: for any possible observed data set and any assumed model, there is a model "close" to the assumed one that predicts absolute disaster (or any result you like) just for that specific data set, and is otherwise well-behaved.
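The mechanism above can be sketched numerically. The following is my own toy construction, not the paper's (the paper works with Prokhorov/total-variation-type neighborhoods of priors under finite moment information); here I just use total variation between discrete joint models. The perturbed model agrees with the base model everywhere except on the slice of data space containing the one data set we actually observed, so its distance from the base model is bounded by the (small) probability of that data set, yet its posterior at that data set is maximally different:

```python
# Toy sketch (my own construction, not from the paper): a perturbation of a
# joint model that is "close" in total variation but shatters the posterior
# at one specific observed data set.

# Base joint model over (theta, x): theta in {0, 1}, x in {0, ..., 9}.
# theta = 1 is the "disaster" hypothesis; under the base model theta and x
# are independent, and disaster has prior probability 0.01.
thetas = [0, 1]
xs = list(range(10))
base = {(t, x): (0.99 if t == 0 else 0.01) * 0.1 for t in thetas for x in xs}

x_obs = 3  # the data set we happen to observe

# Perturbed model: identical everywhere except on the slice x == x_obs,
# where all conditional mass is moved onto theta = 1 ("disaster").
pert = dict(base)
p_slice = sum(base[(t, x_obs)] for t in thetas)  # P(x = x_obs) = 0.1
pert[(0, x_obs)] = 0.0
pert[(1, x_obs)] = p_slice

def total_variation(p, q):
    return 0.5 * sum(abs(p[k] - q[k]) for k in p)

def posterior_disaster(joint, x):
    # P(theta = 1 | x) by Bayes' rule on the discrete joint.
    px = sum(joint[(t, x)] for t in thetas)
    return joint[(1, x)] / px

print(total_variation(base, pert))      # ~0.099: bounded by P(x_obs)
print(posterior_disaster(base, x_obs))  # ~0.01 under the base model
print(posterior_disaster(pert, x_obs))  # 1.0 under the "close" model
```

Enlarging the data space (say, 10,000 possible data sets instead of 10) shrinks P(x_obs) and hence the distance between the two models toward zero, while the posterior gap stays maximal, which mirrors the quoted point that the probability of observing the data under the worst-case feasible prior may be arbitrarily small.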

As the authors themselves put it:

The mechanism causing this “brittleness” has its origin in the fact that, in classical Bayesian Sensitivity Analysis, optimal bounds on posterior values are computed after the observation of the specific value of the data, and that the probability of observing the data under some feasible prior may be arbitrarily small... This data dependence of worst priors is inherent to this classical framework and the resulting brittleness under finite-information can be seen as an extreme occurrence of the dilation phenomenon (the fact that optimal bounds on prior values may become less precise after conditioning) observed in classical robust Bayesian inference.

Comment author: IlyaShpitser 08 January 2015 06:55:30PM 2 points

Thanks for the link!