RichardKennaway comments on Comments on "When Bayesian Inference Shatters"? - Less Wrong

Post author: Crystalist 07 January 2015 10:56PM

Comment author: RichardKennaway 08 January 2015 09:28:38PM 1 point

I think energy distance doesn't work: the "notched" distributions their work uses lie close to the original distribution in that metric, just as they do in total variation and Prokhorov distance. I suspect Kullback-Leibler divergence doesn't work either, provided the notches don't go all the way to zero. You just make the notch deep enough to force a high probability on the desired posterior, then narrow enough to drive the KL divergence as low as you want.
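A minimal numerical sketch of that intuition (my own toy construction, assuming a standard normal base distribution and a multiplicative notch of fixed depth, not the paper's exact perturbation):

```python
import numpy as np
from scipy.stats import norm

def kl_and_likelihood_ratio(depth, width, x0=0.0):
    """p is N(0, 1); q multiplies p's density by `depth` on the notch
    [x0 - width/2, x0 + width/2] and renormalizes.  Returns KL(p || q)
    and the density ratio q(x0)/p(x0) at the notch centre."""
    a, b = x0 - width / 2, x0 + width / 2
    m = norm.cdf(b) - norm.cdf(a)          # p-mass inside the notch
    Z = 1.0 - (1.0 - depth) * m            # normalizer of the notched density
    # Inside the notch log(p/q) = log(Z/depth); outside it is log(Z).
    kl = m * np.log(Z / depth) + (1.0 - m) * np.log(Z)
    return kl, depth / Z

for width in [1.0, 0.1, 0.01, 0.001]:
    kl, ratio = kl_and_likelihood_ratio(depth=1e-6, width=width)
    print(f"width={width:7.3f}  KL(p||q)={kl:.2e}  q(x0)/p(x0)={ratio:.2e}")
```

For a fixed depth of 10^-6 the KL divergence falls roughly linearly with the notch width, while the likelihood ratio at the notch centre stays pinned near the depth, so the notch's effect on the posterior is untouched by how narrow you make it.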

If the observations are assumed to be made only to finite precision (e.g. each observation takes the form of a probability distribution whose entropy is bounded from below), it's not clear to me what happens to their results. Their examples depend on being able to narrow the notch arbitrarily while still containing the observed data with certainty; that can't be done if the data are only known to bounded precision.
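A rough way to see this in the same toy setup (again my own construction, assuming the observation is blurred by a Gaussian kernel of scale sigma rather than known exactly):

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def smeared_ratio(depth, width, sigma, x0=0.0):
    """Effective likelihood ratio q/p when the observation at x0 is only
    known up to a Gaussian blur of scale sigma.  With exact data the
    ratio would be depth/Z regardless of the notch width."""
    a, b = x0 - width / 2, x0 + width / 2
    m = norm.cdf(b) - norm.cdf(a)
    Z = 1.0 - (1.0 - depth) * m
    kp = lambda x: norm.pdf(x, loc=x0, scale=sigma) * norm.pdf(x)
    total, _ = quad(kp, -np.inf, np.inf)   # kernel-smeared p-likelihood
    in_notch, _ = quad(kp, a, b)           # part of it falling in the notch
    # kernel-smeared q-likelihood = (total - (1 - depth) * in_notch) / Z
    return (total - (1.0 - depth) * in_notch) / (Z * total)

for width in [1.0, 0.1, 0.01, 0.001]:
    print(f"width={width:7.3f}  smeared q/p = {smeared_ratio(1e-6, width, sigma=0.05):.3e}")
```

Once the notch is much narrower than sigma, the smeared likelihood ratio returns to about 1: shrinking the notch to reduce the KL divergence also destroys its effect on the posterior, which fits the suspicion that finite-precision observations block the construction.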