
wedrifid comments on Bayesian Flame - Less Wrong

37 Post author: cousin_it 26 July 2009 04:49PM




Comment author: wedrifid 27 July 2009 01:04:01PM 1 point

How about this: a Bayesian will always predict that she is perfectly calibrated, even though she knows the theorems proving she isn't.

Wanna bet? Literally. Have a Bayesian make a whole bunch of predictions, then offer her bets with payoffs based on the apparent calibration the results reflect. See which bets she accepts and which she refuses.
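One way the proposed wager could be scored, as a minimal sketch: bin the predictions by stated probability, compare stated probability with observed frequency in each bin, and pay out based on the gap. The binned scoring rule and all names here are my own assumptions; the thread itself specifies no protocol.

```python
# Hypothetical sketch of the calibration bet described above.
# The scoring rule (mean binned calibration error) is an assumption,
# not something specified in the thread.

def calibration_error(predictions, n_bins=10):
    """Mean |stated probability - observed frequency| over occupied bins.

    predictions: list of (p, outcome) pairs, p in [0, 1], outcome in {0, 1}.
    """
    bins = [[] for _ in range(n_bins)]
    for p, outcome in predictions:
        i = min(int(p * n_bins), n_bins - 1)
        bins[i].append((p, outcome))
    errors = []
    for b in bins:
        if b:
            mean_p = sum(p for p, _ in b) / len(b)
            freq = sum(o for _, o in b) / len(b)
            errors.append(abs(mean_p - freq))
    return sum(errors) / len(errors)

# Example: three 90% predictions (two came true) and two 10% predictions
# (neither came true). A payoff could then decrease with this error,
# e.g. payoff = stake * (1 - 2 * calibration_error(results)).
preds = [(0.9, 1), (0.9, 1), (0.9, 0), (0.1, 0), (0.1, 0)]
error = calibration_error(preds)
```

A perfectly calibrated bettor drives the error toward zero as the number of predictions grows, so she would accept any bet whose payoff is positive at low error.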

Comment author: Cyan 27 July 2009 01:22:43PM 1 point

Are you volunteering?

Comment author: wedrifid 27 July 2009 01:43:55PM 0 points

Sure. :)

But let me warn you... I actually predict my calibration to be pretty darn awful.

Comment author: Cyan 27 July 2009 03:00:29PM 0 points

We need a trusted third party.

Comment author: wedrifid 27 July 2009 03:23:27PM 0 points

Find a candidate.

I was about to suggest we could just bet raw ego points by publicly posting here... but then I realised I'd prove my point just by playing.

It should be obvious, by the way, that if the predictions you have me make pertain to black boxes that you construct then I would only bet if the odds gave a money pump. There are few cases in which I would expect my calibration to be superior to what you could predict with complete knowledge of the distribution.

Comment author: Cyan 27 July 2009 03:33:34PM * 1 point

It should be obvious, by the way, that if the predictions you have me make pertain to black boxes that you construct then I would only bet if the odds gave a money pump.

Phooey. There goes plan A.

Comment author: wedrifid 27 July 2009 03:56:39PM 0 points

;)

Comment author: Cyan 27 July 2009 04:11:02PM 0 points

Plan B involves trying to use some nasty posterior inconsistency results, so don't think you're out of the woods yet.

Comment author: wedrifid 27 July 2009 04:40:58PM * 0 points

I am convinced in full generality that being offered the option of a bet can only provide utility >= 0. So if the punch line is 'insufficiently constrained rationality' then yes, the joke is on me!

And yes, I suspect trying to get my head around that paper would (will) be rather costly! I'm a goddam programmer. :P