jimrandomh comments on Confidence levels inside and outside an argument - Less Wrong

129 Post author: Yvain 16 December 2010 03:06AM




Comment author: jimrandomh 18 December 2010 08:26:27PM 5 points

I think this is a form of double-counting the same evidence. You can only perform Bayesian updating on information that is new; if you try to update on information you've already incorporated, your probability estimate shouldn't move. But if you take information you've already incorporated, shuffle the terms around, and pretend it's new, then you're introducing fake evidence and getting an incorrect result. You can add a term for "Bayesian updating might not work" to any model except one that already accounts for that possibility, as models of the probability that Bayesian updating works surely do. That's what's happening here: you're adding "there is an epsilon probability that Bayesian updating doesn't work" as evidence to a model that already uses and contains that information, counting it twice (and then counting it n times).
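The double-counting can be made concrete with a toy calculation (a hedged sketch; the `bayes_update` helper and the 4:1 likelihood ratio are illustrative choices, not from the comment): applying the same likelihood ratio a second time, as if it were new evidence, moves the posterior further than the evidence warrants.

```python
from fractions import Fraction

def bayes_update(prior, likelihood_ratio):
    """Posterior odds = prior odds * likelihood ratio, then back to a probability."""
    odds = Fraction(prior) / (1 - Fraction(prior))
    post_odds = odds * likelihood_ratio
    return post_odds / (1 + post_odds)

prior = Fraction(1, 2)
lr = Fraction(4)  # evidence favoring the hypothesis at 4:1

once = bayes_update(prior, lr)   # correct posterior: 4/5
twice = bayes_update(once, lr)   # same evidence counted twice: 16/17

print(once, twice)  # → 4/5 16/17
```

The second call is exactly the "shuffle the terms around and pretend it's new" move: no new information arrived, yet the estimate moved again.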

Comment author: shokwave 19 December 2010 05:42:20AM *  0 points

You can also fashion a similar problem regarding priors.

  • Determine what method you should use to assign a prior in a certain situation.

  • Then determine what method you should use to assign a prior to "I picked the wrong method to assign a prior in that situation".

  • Then determine what method you should use to assign a prior to "I picked the wrong method to assign a prior to "I picked the wrong method to assign a prior in that situation" ".

This doesn't seem like double-counting of anything to me; at no point can you assume, with probability 1, that you have picked the right method at any level of prior-assignment.
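The regress above can be sketched numerically (a hypothetical model, assuming each level of the chain independently carries some fixed doubt q): the probability that every prior-assigning method up the chain was chosen correctly is a product of factors below 1, so it can never equal 1, and with constant per-level doubt it shrinks toward 0 as the chain grows.

```python
def chain_confidence(q, levels):
    """Probability that every method up the chain was right,
    assuming each level independently has doubt q (illustrative model)."""
    p = 1.0
    for _ in range(levels):
        p *= (1 - q)
    return p

print(chain_confidence(0.01, 1))    # 0.99
print(chain_confidence(0.01, 100))  # ≈ 0.366 — constant doubt compounds
```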

Comment author: jimrandomh 19 December 2010 01:03:43PM 0 points

This one is different, in that the evidence you're introducing is new. However, the magnitude of the effect of each new piece of evidence on your original probability falls off exponentially, such that the original probability converges.
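A minimal sketch of why the exponential falloff rescues convergence (the specific numbers are illustrative assumptions, not from the comment): if each successive meta-level's correction shrinks by a constant factor r, the total adjustment is bounded by the geometric series a / (1 - r), so the original probability converges instead of being dragged arbitrarily far.

```python
def adjusted_probability(p0, a, r, levels):
    """Start from p0 and subtract a correction at each meta-level,
    with the correction shrinking geometrically by factor r."""
    p = p0
    correction = a
    for _ in range(levels):
        p -= correction
        correction *= r
    return p

# Closed form for infinitely many levels: p0 - a / (1 - r)
limit = 0.99 - 0.005 / (1 - 0.5)  # = 0.98
print(adjusted_probability(0.99, 0.005, 0.5, 50))  # approaches 0.98
```

Infinitely many doubts, each genuinely new evidence, still sum to a finite, bounded adjustment.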