
Larks comments on A question about Eliezer - Less Wrong Discussion

Post author: perpetualpeace1, 19 April 2012 05:27PM


Comment author: HonoreDB, 19 April 2012 09:15:53PM, 25 points

it didn't treat mild belief and certainty differently;

It did. Per the paper, the confidence of each prediction was rated on a scale from 1 to 5, where 1 means "No chance of occurring" and 5 means "Definitely will occur". They didn't use this scale in their top-level rankings because they judged those "accurate enough" without it, but they did use it in their regressions.

Worse, people get marked down for making conditional predictions whose antecedent was not satisfied!

They did not. Per the paper, those were simply thrown out (as people do on PredictionBook).

They also penalise people for hedging, yet surely a hedged prediction is better than no prediction at all?

I agree here, mostly. Looking through the predictions they marked as hedging, some seem like sophistry, but some seem like reasonable expressions of uncertainty; if they couldn't figure out how to score those properly, they should have simply left them out.
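One standard way to score hedged predictions rather than discarding them is a proper scoring rule such as the Brier score. The sketch below is not from the paper; the probabilities are purely illustrative, and the mapping of a verbal hedge to a probability is my own assumption.

```python
def brier(p, outcome):
    """Brier score for a forecast p of a binary outcome (0 or 1).

    Lower is better: 0 is a perfect forecast, 1 is the worst possible.
    """
    return (p - outcome) ** 2

# A hedged "probably" prediction (here assumed to mean p = 0.7) that
# turns out true earns a better (lower) score than a pure coin-flip
# forecast would:
assert brier(0.7, 1) < brier(0.5, 1)

# And unlike no prediction at all, it can be scored either way:
print(round(brier(0.7, 1), 2))  # score if it comes true
print(round(brier(0.7, 0), 2))  # score if it does not
```

Under a rule like this, a hedged prediction carries information and can be ranked, which supports the point that it beats no prediction at all.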

If you think you can improve on their methodology, the full dataset is here: .xls.

Comment author: Larks, 20 April 2012 06:11:46AM, 3 points

it didn't treat mild belief and certainty differently;

... they did use it in their regressions.

Sure, so we learn how confidence correlates with binary accuracy. But they don't take into account that being very confident and wrong should be penalised more heavily than being slightly confident and wrong.
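A proper scoring rule makes this asymmetry concrete. The sketch below maps the paper's 1-5 confidence scale onto probabilities (the mapping is my own illustrative assumption, not the paper's) and shows that under the Brier score a confident miss costs much more than a mildly confident one.

```python
def brier(p, outcome):
    """Brier score: squared error of forecast p against a 0/1 outcome."""
    return (p - outcome) ** 2

# Hypothetical mapping from the paper's 1-5 confidence scale to
# probabilities (1 = "No chance", 5 = "Definitely will occur"):
scale_to_prob = {1: 0.05, 2: 0.25, 3: 0.50, 4: 0.75, 5: 0.95}

# Both predictions fail (outcome = 0), but the level-5 prediction
# scores far worse than the level-4 one:
confident_miss = brier(scale_to_prob[5], 0)  # roughly 0.90
mild_miss = brier(scale_to_prob[4], 0)       # roughly 0.56
assert confident_miss > mild_miss
```

Scoring the dataset this way, instead of with binary right/wrong marks, would directly reward calibration in the sense Larks describes.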

Per the paper, those were simply thrown out

I misread; you are right.