
ChristianKl comments on Open thread, Mar. 14 - Mar. 20, 2016 - Less Wrong Discussion

3 Post author: MrMind 14 March 2016 08:02AM




Comment author: Dagon 14 March 2016 03:55:10PM 0 points

The vast majority of yes/no questions you're likely to face won't support 5% intervals. You're just not going to get enough data to have any idea whether the "true" calibration is what actually happens for that small selection of questions.
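The sampling-noise point above can be made concrete. A minimal sketch (my own illustration, not from the thread): the standard error of an observed hit rate is sqrt(p(1-p)/n), and for realistic question counts it is wider than a 5% calibration bin.

```python
import math

def calibration_std_error(p, n):
    """Standard error of the observed hit rate when you answer n
    yes/no questions with stated confidence p."""
    return math.sqrt(p * (1 - p) / n)

# With 20 questions answered at 70% confidence, the observed hit rate
# wanders by about +/-10 percentage points -- twice the width of a 5%
# bin, so neighboring bins are statistically indistinguishable.
se = calibration_std_error(0.70, 20)
```

Under these assumptions you would need on the order of a hundred questions per confidence level before a 5% bin becomes meaningful.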

That said, I agree there's an analytic flaw if you can change "true" to "false" on no additional data and only reduce your confidence a tiny amount. (Kind of: you noticed the salience of something you'd previously ignored, which may itself count as evidence, depending on how you arrived at your prior.)

One suggestion that may help: don't separate your answer from your confidence; just state a probability. Not "true, 60% confidence" (which implies 40% unknown, I think, not 40% false), but "80% likely to be true". That makes updates much easier to calculate and understand.
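The "updates are easier" point can be sketched as code (my own illustration, assuming a simple likelihood-ratio update): a single probability updates with one line of odds arithmetic, whereas "true, 60% confidence" has no obvious update rule.

```python
def update(prob, likelihood_ratio):
    """Bayesian update in odds form:
    posterior odds = prior odds * likelihood ratio."""
    odds = prob / (1 - prob)
    posterior_odds = odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# "80% likely to be true", then evidence twice as likely if false
# (likelihood ratio 0.5): prior odds 4:1 become 2:1, i.e. 2/3.
p = update(0.80, 0.5)
```

The odds form keeps the result in [0, 1] automatically and composes: applying two pieces of evidence is just multiplying two likelihood ratios.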

Comment author: ChristianKl 14 March 2016 07:22:38PM 2 points

> The vast majority of yes/no questions you're likely to face won't support 5% intervals. You're just not going to get enough data to have any idea whether the "true" calibration is what actually happens for that small selection of questions.

Tetlock found in the Good Judgment Project, as described in his book Superforecasting, that people who are excellent at forecasting make very fine-grained predictions.