JGWeissman comments on Confidence levels inside and outside an argument - Less Wrong

129 Post author: Yvain 16 December 2010 03:06AM


Comment author: JGWeissman 16 December 2010 06:47:33PM 2 points [-]

Indeed. It looks like the effect I described occurs when the meta uncertainty is spread over a small range of log-odds values relative to the posterior log-odds, and there is another effect that can produce an arbitrary expected probability given the right distribution over an arbitrarily large range of values. For any probability p, let L(B|E) = average + (1-p)*x with probability p and L(B|E) = average - p*x with probability (1-p), so that the expected log-odds stays at average. As x approaches infinity, the first branch's probability goes to 1 and the second's goes to 0, so the limit of the expected probability is p*1 + (1-p)*0 = p.
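This limit is easy to check numerically. Below is a minimal Python sketch (the function names and the sample values p = 0.3, average = 0 are illustrative, not from the comment):

```python
import math

def prob(log_odds):
    # Convert natural log-odds L to a probability: p = e^L / (1 + e^L).
    return 1.0 / (1.0 + math.exp(-log_odds))

def expected_prob(p, average, x):
    # Two-point meta-distribution over log-odds, as in the comment:
    #   L = average + (1-p)*x  with probability p,
    #   L = average - p*x      with probability 1-p.
    # The expected log-odds is always `average`, but the expected
    # probability drifts toward p as x grows.
    return (p * prob(average + (1 - p) * x)
            + (1 - p) * prob(average - p * x))

# As x grows, the expected probability approaches p = 0.3.
for x in (1, 10, 100):
    print(x, expected_prob(0.3, 0.0, x))
```

For large x the high branch saturates near probability 1 and the low branch near 0, which is exactly the p*1 + (1-p)*0 = p limit.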

It has a global minimum at x=2.

I notice that this is where |1 + x| = |1 - 2x| (solving 1 + x = 2x - 1 gives x = 2). That might be interesting to look into.

(Possibly more rigorous and explicit math to follow when I can focus on it more.)

Comment author: GuySrinivasan 17 December 2010 04:50:20AM 1 point [-]

I let L(B|E) be uniform from x-s/2 to x+s/2 and got that P(B|E) = (1/s) * ln((1 + A*e^(s/2)) / (1 + A*e^(-s/2))), where A = e^x is the odds if L(B|E) = x. In the limit as s goes to infinity, the interesting pieces are a term like ln(A)/s, the log of the central odds dropping off linearly as s grows, plus a term that eventually looks like (1/s)*ln(e^(s/2)) = 1/2, which means we approach 1/2.
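This limit can also be checked numerically. The Python sketch below uses my reconstruction of the closed form (1/s)∫ e^L/(1+e^L) dL over [x-s/2, x+s/2]; the choice x = -3 is illustrative:

```python
import math

def expected_prob_uniform(x, s):
    # L(B|E) uniform on [x - s/2, x + s/2]; A = e^x is the odds at the center.
    # Averaging the probability e^L / (1 + e^L) over the interval gives
    #   (1/s) * [ln(1 + A*e^(s/2)) - ln(1 + A*e^(-s/2))].
    A = math.exp(x)
    return (math.log(1.0 + A * math.exp(s / 2))
            - math.log(1.0 + A * math.exp(-s / 2))) / s

# As s grows, the expected probability approaches 1/2 regardless of x:
for s in (1.0, 10.0, 100.0, 1000.0):
    print(s, expected_prob_uniform(-3.0, s))
```

For small s the result stays near the central probability e^x/(1+e^x); for large s the ln(A)/s term washes out and the value converges to 1/2, matching the comment's argument.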