Our hosts at Tricycle Developments have created PredictionBook.com, which lets you make predictions and then track your calibration - see whether the things to which you assigned a 70% probability actually happen 7 times out of 10.
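For concreteness, here is a minimal sketch of the calibration arithmetic this implies - bucket your predictions by the probability you stated, then compare each bucket's stated probability to the frequency with which those predictions came true. (The function and the data are invented for illustration; this is not PredictionBook's actual code.)

```python
from collections import defaultdict

def calibration_report(predictions):
    """Group (stated_probability, came_true) pairs into buckets and
    compare each bucket's stated probability to its observed frequency."""
    buckets = defaultdict(list)
    for prob, outcome in predictions:
        buckets[prob].append(outcome)
    for prob in sorted(buckets):
        outcomes = buckets[prob]
        observed = sum(outcomes) / len(outcomes)
        print(f"stated {prob:.0%}: {observed:.0%} came true "
              f"({len(outcomes)} predictions)")

# Hypothetical track record: (probability you assigned, whether it happened)
history = [(0.7, True), (0.7, True), (0.7, False), (0.9, True), (0.9, True)]
calibration_report(history)
```

If you are well calibrated, each bucket's observed frequency should converge to its stated probability as the number of resolved predictions grows.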
The major challenges with a tool like this are (a) coming up with good short-term predictions to track, and (b) maintaining your will to keep tracking yourself even if the results are discouraging, as they probably will be.
I think the main motivation to actually use it would be rationalists challenging each other to put a prediction on the record and track the results - I'm going to try to remember to do this the next time Michael Vassar says "X%" and I assign a different probability. (Vassar would have won quite a few points for his superior predictions of Singularity Summit 2009 attendance - I was pessimistic, Vassar was accurate.)
Note that there are some large classes of predictions which by nature will cluster strongly and won't resolve until a fair bit into the future. For example, there are various AI-related predictions going out about 100 years. You've placed bets on 12 of them by my count. They strongly correlate with each other (for example, general AI by 2018 and general AI by 2030). For that sort of issue it is very hard to notice domain-related correlation when almost nothing in the domain has reached its judgement date yet. There are other issues with this sort of thing as well, such as a variety of the long-term computational complexity predictions (I'm ignoring here the short-term Dick Lipton statements, which everyone seems to think are just extremely optimistic). Have there been enough different domains, each with a lot of resolved questions, that one could notice domain-specific patterns? (A toy simulation of the correlation point follows.)
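A toy simulation may make the correlation point concrete. Everything here is invented - the prior over arrival years and the thresholds are not the actual PredictionBook questions - but it shows why nested "by year Y" predictions are not independent data points: one latent event decides all of them.

```python
import random

def simulate_nested_questions(thresholds, num_worlds=10000):
    """Each simulated world draws ONE latent arrival year; every
    'general AI by year Y' question resolves from that single draw,
    so the questions are logically nested, not independent."""
    hits_2018 = hits_2030_given_2018 = 0
    for _ in range(num_worlds):
        arrival = random.randint(2010, 2110)  # made-up prior over arrival year
        by_year = {y: arrival <= y for y in thresholds}
        if by_year[2018]:
            hits_2018 += 1
            hits_2030_given_2018 += by_year[2030]  # 'by 2018' entails 'by 2030'
    print(f"P(AI by 2030 | AI by 2018) = {hits_2030_given_2018 / hits_2018:.2f}")

simulate_nested_questions([2018, 2030, 2050, 2070, 2100])
```

Because the earlier threshold logically entails the later one, the conditional probability comes out 1.00, and a calibration score computed over such a cluster carries far less evidential weight than its question count suggests.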
All that is true - and why it was the last and least of my points, and in parentheses even.