
HonoreDB comments on A question about Eliezer - Less Wrong Discussion

33 Post author: perpetualpeace1 19 April 2012 05:27PM




Comment author: HonoreDB 19 April 2012 09:15:53PM 25 points

it didn't treat mild belief and certainty differently;

It did. Per the paper, the confidence of each prediction was rated on a scale from 1 to 5, where 1 is "No chance of occurring" and 5 is "Definitely will occur". They didn't use this in their top-level rankings because they felt those were "accurate enough" without it, but they did use it in their regressions.

Worse, people get marked down for making conditional predictions whose antecedent was not satisfied!

They did not. Per the paper, those were simply thrown out (as people do on PredictionBook).

They also penalise people for hedging, yet surely a hedged prediction is better than no prediction at all?

I agree here, mostly. Looking through the predictions they've marked as hedging, some seem like sophistry but some seem like reasonable expressions of uncertainty; if they couldn't figure out how to properly score them they should have just left them out.

If you think you can improve on their methodology, the full dataset is here: .xls.

Comment author: HonoreDB 19 April 2012 09:33:23PM 7 points

Incidentally, the best way to make conditional predictions is to convert them to explicit disjunctions. For example, in November I wanted to predict that "If Mitt Romney loses the primary election, Barack Obama will win the general election." Barring some very unlikely events, this is logically equivalent to "Either Mitt Romney or Barack Obama will win the 2012 Presidential Election," so I posted that instead; now I won't have to withdraw the prediction when Romney wins the primary.
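The equivalence can be checked mechanically. A minimal sketch in Python, assuming (as the comment does) that the general election is won by either the Republican nominee or Obama, i.e. ruling out the "very unlikely events"; the list of nominees is illustrative:

```python
from itertools import product

# Possible Republican nominees considered at the time (illustrative list).
nominees = ["Romney", "Gingrich", "Santorum", "Paul"]

for nominee, obama_wins in product(nominees, [True, False]):
    winner = "Obama" if obama_wins else nominee
    # "If Romney loses the primary, Obama wins the general" (material conditional).
    conditional = (nominee == "Romney") or (winner == "Obama")
    # "Either Romney or Obama wins the election."
    disjunction = winner in ("Romney", "Obama")
    assert conditional == disjunction

print("equivalent in every such world")
```

The two statements come apart only in worlds excluded above, e.g. a third-party candidate winning while Romney is the nominee.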

Comment author: Douglas_Knight 20 April 2012 01:20:17AM 2 points

While that may be the best approach on PredictionBook as it currently stands, I think conditional predictions are useful.

If you are only interested in truth values and not the strength of the prediction, then the two are logically equivalent, but the number of points you get is not the same. The purpose of a conditional prediction is to take a conditional risk: if Romney is nominated, you get a gratuitous point for this prediction. Of course, simply counting predictions is easy to game, which is why we like to indicate the strength of a prediction, as you did with this one on PB. But turning a conditional prediction into an absolute prediction changes its probability and thus its effect on your calibration score. To a certain extent, it amounts to double-counting the prediction about the antecedent.
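The probability shift can be made concrete with made-up credences (my illustration, not numbers from the paper or PredictionBook), assuming one of the two named candidates wins whenever Romney is nominated:

```python
# Hypothetical credences: P(Romney wins the primary) = 0.9,
# P(Obama wins | Romney loses the primary) = 0.8.
p_romney_nominated = 0.9
p_obama_given_not = 0.8

# The conditional prediction, when scored at all, is scored at 0.8.
# The disjunction "Romney or Obama wins" is always scored, and (assuming
# Romney or Obama wins whenever Romney is the nominee) its probability is:
p_disjunction = p_romney_nominated + (1 - p_romney_nominated) * p_obama_given_not
print(round(p_disjunction, 2))  # → 0.98
```

An 80% conditional claim becomes a 98% absolute claim, so it lands in a different calibration bin, and most of its probability mass simply restates the separate prediction that Romney will be nominated.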

Comment author: drethelin 19 April 2012 09:44:44PM -1 points

This is less specific than the first prediction: the second version loses the part where you predict that Obama will beat Romney.

Comment author: Randaly 19 April 2012 09:52:59PM 4 points

The first version doesn't have that part either; he's predicting that if Romney gets eliminated in the primaries, i.e. if Gingrich, Santorum, or Paul is the Republican nominee, then Obama will win.

Comment author: drethelin 19 April 2012 09:56:45PM 4 points

You're right; I misread.

Comment author: Larks 20 April 2012 06:11:46AM 3 points

it didn't treat mild belief and certainty differently;

... they did use it in their regressions.

Sure, so we learn how confidence is correlated with binary accuracy. But they don't take into account that being very confident and wrong should be penalised more than being slightly confident and wrong.
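One standard fix is a proper scoring rule. A Brier-score sketch (my illustration, not the paper's method) shows confident errors costing far more:

```python
def brier(p, outcome):
    """Squared distance between the stated probability and the 0/1 outcome; lower is better."""
    return (p - (1 if outcome else 0)) ** 2

# Both predictions turn out false; the confident one is penalised much harder.
print(round(brier(0.95, False), 4))  # → 0.9025
print(round(brier(0.55, False), 4))  # → 0.3025
```

Averaging this over all of a forecaster's predictions rewards calibration as well as accuracy, which is exactly the distinction the paper's binary right/wrong tally loses.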

Per the paper, those were simply thrown out

I misread; you are right.