
skeptical_lurker comments on Open thread, Mar. 14 - Mar. 20, 2016 - Less Wrong Discussion

3 Post author: MrMind 14 March 2016 08:02AM




Comment author: Dagon 14 March 2016 03:55:10PM 0 points

The vast majority of yes/no questions you're likely to face won't support 5% intervals. You're just not going to get enough data to have any idea whether the "true" calibration is what actually happens for that small selection of questions.
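A quick way to see why a small set of questions can't support 5% calibration intervals is to look at the binomial standard error of the observed frequency. This is a minimal sketch (the `stderr` helper and the sample sizes are illustrative, not from the comment):

```python
import math

def stderr(p, n):
    """Standard error of the observed hit rate over n independent
    yes/no questions whose true probability of being right is p."""
    return math.sqrt(p * (1 - p) / n)

# To distinguish a "60%" bin from a "65%" bin, the standard error
# needs to be well under the 0.05 bin width:
for n in (10, 50, 100, 400):
    print(n, round(stderr(0.6, n), 3))
```

With 100 questions the standard error is still about 0.049, on the order of the bin width itself, so only a large question bank can tell adjacent 5% bins apart.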

That said, I agree there's an analytic flaw if you can change true to false on no additional data (kind of: you noticed salience of something you'd previously ignored, which may count as evidence depending on how you arrived at your prior) and only reduce confidence a tiny amount.

One suggestion that may help: don't separate your answer from your confidence; just calculate a probability. Not "true, 60% confidence" (implying 40% unknown, I think, not 40% false), but "80% likely to be true". It really makes updates easier to calculate and understand.
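The update the comment has in mind is especially simple in odds form: posterior odds = prior odds × likelihood ratio. A minimal sketch, assuming a single likelihood ratio summarizes the new evidence (the `update` helper and the example numbers are illustrative):

```python
def update(prior, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * LR.
    prior is the probability the claim is true; likelihood_ratio is
    P(evidence | true) / P(evidence | false)."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

# Start at "80% likely to be true"; observe evidence twice as likely
# if the claim is true than if it is false:
p = update(0.80, 2.0)  # -> 8/9, roughly 0.889
```

Stated as a single probability, the belief updates with one multiplication; a separate "answer plus confidence" pair gives no such rule.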

Comment author: skeptical_lurker 14 March 2016 05:39:05PM * 0 points

The vast majority of yes/no questions you're likely to face won't support 5% intervals.

I agree [edit: actually, it depends on where these yes/no questions are coming from], but think the questions I was looking at were in the small minority that do support 5% intervals.

Not "true, 60% confidence" (implying 40% unknown, I think, not 40% false)

Perhaps I should have provided more details to explain exactly what I did, because I actually did mean 60% true, 40% false.

So, I already was thinking in the manner you advocate, but thanks for the advice anyway!