
alienist comments on Comments on "When Bayesian Inference Shatters"? - Less Wrong Discussion

8 points · Post author: Crystalist 07 January 2015 10:56PM


Comment author: alienist 09 January 2015 02:38:10AM * 10 points

Not sure why you're being downvoted; the metric used to define "similar" or "closeness" is absolutely what's at issue here.

Any metric under which a 51% coin isn't close to a fair coin is useless in practice.
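For concreteness (my own arithmetic, not from the thread), here's how far apart Bernoulli(0.5) and Bernoulli(0.51) sit under two standard metrics on probability distributions:

```python
import math

def total_variation(p, q):
    # Total variation distance between Bernoulli(p) and Bernoulli(q):
    # half the L1 distance between the two probability vectors.
    return 0.5 * (abs(p - q) + abs((1 - p) - (1 - q)))

def kl_divergence(p, q):
    # KL divergence D(Bernoulli(p) || Bernoulli(q)), in nats.
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

print(total_variation(0.5, 0.51))  # 0.01 -- "close" under TV
print(kl_divergence(0.5, 0.51))    # ~2e-4 -- "close" under KL
```

Under either metric the two coins are indeed close, which is presumably the sense intended here.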

Comment author: roystgnr 12 January 2015 05:15:01PM 0 points

I don't understand you. Neither "a 51% coin" nor "a fair coin" is a probability distribution, and the choice of metric in question is a metric on spaces of probability distributions. Could you clarify?

Then again, I could take your statement at face value. Want to make a few million $1 bets with me? We'll use either "rand < .5" or "rand < .51" to decide when I win; since trying to distinguish between the two is useless, you don't need to bother finding out which.
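To make the bet concrete (my numbers, stated as a sketch): at $1 per bet the 51% coin's expected edge grows linearly with the number of bets, while the noise only grows with its square root, so over a million bets the two coins are trivially distinguishable.

```python
import math

def expected_profit(p_win, n_bets):
    # Expected profit for the side winning each $1 bet with
    # probability p_win: n * (2p - 1).
    return n_bets * (2 * p_win - 1)

def edge_in_sigmas(p_win, n_bets):
    # Expected profit divided by the standard deviation of the profit
    # (the variance of a +/-$1 outcome is 4p(1-p)).
    sd = math.sqrt(n_bets * 4 * p_win * (1 - p_win))
    return expected_profit(p_win, n_bets) / sd

print(expected_profit(0.51, 1_000_000))  # ~$20,000 expected edge
print(edge_in_sigmas(0.51, 1_000_000))   # ~20 standard deviations
```

A 20-sigma edge is about as far from "indistinguishable in practice" as it gets.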

Comment author: Lumifer 12 January 2015 05:21:16PM 1 point

Neither "a 51% coin" nor "a fair coin" is a probability distribution

Of course they are: they represent Bernoulli distributions.

Comment author: roystgnr 13 January 2015 02:04:06PM -1 points

You could call them Bernoulli distributions representing aleatory uncertainty on a single coin flip, I suppose. Bayesian updates of purely aleatory uncertainty aren't very interesting, though, are they? Your evidence is "I looked at it, it's heads", and your posterior is "It was heads that time".

I suppose you could add some uncertainty to the evidence; maybe we're looking at the coin flip through a blurry telescope? But in any case, Bernoulli distributions live in a finite-dimensional space of probability distributions, so Bayesian updates on them are still well-posed. The concern here is that infinite-dimensional spaces of probability distributions don't always yield well-posed Bayesian updates, depending on the metric you use to define well-posedness. If there's also a concern that this can happen with Bernoulli distributions, then I'd like to see an example; if not, then that's a red herring.
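To illustrate the finite-dimensional case (a standard conjugate analysis, not anything specific to the post): with a Beta prior on a coin's unknown bias, the Bayesian update is a closed-form, manifestly well-posed formula.

```python
def update_beta(alpha, beta, heads, tails):
    # Beta(alpha, beta) prior + Bernoulli observations
    # -> Beta(alpha + heads, beta + tails) posterior.
    return alpha + heads, beta + tails

# Start with a uniform prior Beta(1, 1); observe 51 heads in 100 flips.
a, b = update_beta(1, 1, heads=51, tails=49)
print(a, b)         # 52 50
print(a / (a + b))  # posterior mean ~0.51
```

Small perturbations of the prior or the data perturb the posterior only slightly here; the pathologies in question need an infinite-dimensional parameter space.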

Comment author: Lumifer 13 January 2015 03:52:24PM 1 point

Also, once you're not limited to a single flip and can flip the coins multiple times, you graduate to binomial distributions, which are highly useful and for which Bayesian updates are sufficiently interesting :-)
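As a sketch of what repeated flips buy you (my own two-hypothesis example, not from the thread): comparing "fair coin" against "51% coin" under a binomial likelihood, the posterior log-odds after k heads in n flips reduce to a simple sum, since the n-choose-k factor cancels in the likelihood ratio.

```python
import math

def log_posterior_odds(k, n, p1=0.51, p0=0.50):
    # Log posterior odds for Bernoulli(p1) vs Bernoulli(p0) after
    # k heads in n flips, starting from even prior odds; the
    # binomial coefficient cancels in the likelihood ratio.
    return (k * math.log(p1 / p0)
            + (n - k) * math.log((1 - p1) / (1 - p0)))

# After a million flips of a true 51% coin (k around 510,000):
print(log_posterior_odds(510_000, 1_000_000))  # ~200 nats for the 51% coin
```

So with enough flips the update cleanly separates the two hypotheses, which is roughly the sense in which binomial updates are "sufficiently interesting".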

Comment author: roystgnr 15 January 2015 04:06:05AM 0 points

I also don't understand the downvote. Is there a single sentence in the post above that's mistaken? If so, a correction would be appreciated.