50% predictions can be useful if you are systematic about which option you count as "yes". For example, "I estimate a 50% chance that I will finish writing my book this year" is a meaningful prediction. If I am subject to standard biases like the planning fallacy, then we would expect this to have less than a 50% chance of happening, so the outcomes of predictions like this provide a meaningful test of my prediction ability.
Two conventions you could use for 50% predictions: (1) pose the question such that "yes" means an event happened and "no" is the default, or (2) pose the question such that "yes" is your preferred outcome and "no" is the less desirable outcome.
Actually, it is probably better to pick one of these conventions and use it for all predictions (so you'd use the whole range from 0-100, rather than just the top half of 50-100). "70% chance I will finish my book" is meaningfully different from "70% chance I will not finish my book"; we throw away information about possible miscalibration by treating them both merely as 70% predictions.
Even better, you could pose the question however you like and also note, when you make your prediction, (1) which outcome (if either) is an event rather than the default and (2) which outcome (if either) you prefer. Then at the end of the year you could look at three graphs: one showing whether the outcome you considered more likely occurred, one showing whether the (non-default) event occurred, and one showing whether your preferred outcome occurred.
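A minimal sketch of what this bookkeeping could look like, assuming a hypothetical record format with `is_event` and `preferred` flags (the names and structure here are illustrative, not part of any existing tool). Each of the three views re-orients every prediction so that "yes" means, respectively, the outcome you judged more likely, the non-default event, or your preferred outcome, and then compares hit rate against mean stated confidence:

```python
# Hypothetical sketch: track predictions with extra metadata so that,
# at year's end, calibration can be checked three different ways.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Prediction:
    statement: str
    confidence: float                 # probability assigned to "yes", 0.0-1.0
    came_true: bool                   # resolved outcome of "yes"
    is_event: Optional[bool] = None   # True if "yes" is an event (not the default)
    preferred: Optional[bool] = None  # True if "yes" is the preferred outcome


def hit_rate(pairs):
    """Return (fraction that came true, mean confidence) for (conf, outcome) pairs."""
    if not pairs:
        return None
    hits = sum(1 for _, true in pairs if true)
    mean_conf = sum(conf for conf, _ in pairs) / len(pairs)
    return hits / len(pairs), mean_conf


def three_views(preds: List[Prediction]):
    # View 1: orient each prediction so "yes" = the outcome judged more likely.
    likely = [(max(p.confidence, 1 - p.confidence),
               p.came_true if p.confidence >= 0.5 else not p.came_true)
              for p in preds]
    # View 2: orient so "yes" = the non-default event (skip if neither side is one).
    event = [(p.confidence if p.is_event else 1 - p.confidence,
              p.came_true if p.is_event else not p.came_true)
             for p in preds if p.is_event is not None]
    # View 3: orient so "yes" = the preferred outcome (skip if indifferent).
    pref = [(p.confidence if p.preferred else 1 - p.confidence,
             p.came_true if p.preferred else not p.came_true)
            for p in preds if p.preferred is not None]
    return {"more_likely": hit_rate(likely),
            "event": hit_rate(event),
            "preferred": hit_rate(pref)}
```

In each view, a hit rate well below the mean confidence suggests a different failure: overconfidence in the first, a bias toward predicting events over the status quo in the second, and wishful thinking in the third.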
Sorry, I misread your comment originally. You were careful to say that you were talking about 3 different biases, while most people say that there is a right way to orient each question.
But you weren't careful to say that calibration — the measure of over- and under-confidence — is different from bias. There are four questions here. Introducing new questions that make sense at 50% is irrelevant to the fact that calibration doesn't make sense at 50%. If we are just doing calibration, some of our tests are wasted. If we add a test of a bias, that part of the...
TL;DR: Prediction & calibration parties are an exciting way for your EA/rationality/LessWrong group to practice rationality skills and celebrate the new year.
On December 30th, Seattle Rationality had a prediction party. Around 15 people showed up, brought snacks, brewed coffee, and spent several hours making predictions for 2017, and generating confidence levels for those predictions.
This was heavily inspired by Scott Alexander’s yearly predictions. (2014 results, 2015 results, 2016 predictions.) Our move was to turn this into a communal activity, with a few alterations to meet our needs and make it work better in a group.
Procedure:
To make this work in a group, we recommend the following:
This makes a good activity for rationality/EA groups for the following reasons:
Some examples of the predictions people used:
Also relevant: