
b_sen comments on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 104 - Less Wrong Discussion

Post author: b_sen, 16 February 2015 01:24AM


Comments (189)


Comment author: b_sen, 19 February 2015 04:32:33AM, 0 points

How well does PredictionBook support sharing evidence for one's predictions, back-and-forth discussion, logging 'categories' of predictions, detailed statistics (such as calibration changes over time, granularity finer than 10% increments, etc.), and so on? Are there any specific features of PredictionBook you would recommend to me?

I ask because:

  • I want to participate in readable discussion as well as log my predictions; browsing while logged out, PredictionBook appears to present discussions much less readably.
  • I often use domain-specific prediction techniques and would want to check my calibration for each technique (and each domain) as well as overall. (To take MOR predictions as an example, I might make some predictions based on a feeling of "this looks like foreshadowing", others based on looking for themes, still others based on knowledge outside MOR itself, and so on.) Come to think of it, domains and categories can also overlap, but I still want that kind of feature available, whether I make the graphs myself or not.

I do intend to check my calibration on my MOR predictions once MOR ends, regardless of where I put the predictions up.
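Whatever the platform, the per-technique calibration check described above can be scripted from a simple prediction log. Here is a minimal sketch in Python, assuming each prediction is recorded as a (stated probability, outcome, tags) triple; the data format and function names are illustrative, not PredictionBook's actual export format. Overlapping tags are handled naturally, since each prediction is simply counted once under every tag it carries:

```python
# Hypothetical per-tag calibration check. Assumes you log predictions as
# (stated_probability, outcome, tags); this is NOT a real PredictionBook format.
from collections import defaultdict

def calibration_by_tag(predictions, bins=10):
    """For each tag, bucket stated probabilities and compare each bucket's
    mean stated probability to the observed frequency of correct outcomes."""
    by_tag = defaultdict(list)
    for prob, outcome, tags in predictions:
        for tag in tags:  # overlapping tags: the prediction counts under each
            by_tag[tag].append((prob, outcome))
    report = {}
    for tag, entries in by_tag.items():
        buckets = defaultdict(list)
        for prob, outcome in entries:
            # bucket index 0..bins-1; prob == 1.0 falls into the top bucket
            idx = min(int(prob * bins), bins - 1)
            buckets[idx].append((prob, outcome))
        report[tag] = {
            idx: (sum(p for p, _ in b) / len(b),   # mean stated probability
                  sum(o for _, o in b) / len(b))   # observed success frequency
            for idx, b in sorted(buckets.items())
        }
    return report

# Example: predictions tagged by the technique used to make them (made-up data).
preds = [
    (0.95, 1, ["foreshadowing"]),
    (0.85, 1, ["foreshadowing", "themes"]),
    (0.65, 0, ["themes"]),
    (0.55, 1, ["outside-knowledge"]),
]
report = calibration_by_tag(preds)
```

A well-calibrated technique shows each bucket's observed frequency close to its mean stated probability; plotting the two against each other per tag gives the usual calibration curve.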

Comment author: ChristianKl, 19 February 2015 11:58:23AM, 0 points

Sharing your predictions in this thread doesn't prompt others to share their own numbers. PredictionBook, on the other hand, usually does; it leads to communal prediction-making.

It also gives you calibration statistics.

> I often use domain-specific prediction techniques and would want to check my calibration for each technique (and each domain) as well as overall.

You could create an account for each prediction technique.

Comment author: b_sen, 20 February 2015 07:53:26PM, 0 points

> You could create an account for each prediction technique.

That seems like an excessive amount of work, especially once overlapping categories and domains come into play.