I didn't remember that test from earlier, either. Worth checking out? I don't mind accidentally unblinding a little if it is an experimental/control difference - curious folks will be curious.
I just went through the whole thing again; there was no test of that kind at the end. (What there was instead was the earlier multiple-choice quiz about some example forecasts and how they went wrong.) It looks like this is an experimental/control difference. I'd rather not discuss that bit further - this isn't about possibly life-or-death drugs, after all, and I already know where I can find calibration tests like that.
The Intelligence Advanced Research Projects Activity (IARPA) is launching a tournament aimed at improving forecasting methods for global events of national (US) interest. One of the teams (the Good Judgment Team) is recruiting volunteers to have their forecasts tracked. Volunteers will receive an annual honorarium ($150), and it appears there will be ongoing training to improve one's forecast accuracy (I'm not sure exactly what form this will take).
I'm registered, and I'm wondering whether any other LessWrongers are participating or considering it. It could be interesting to compare methods and results.
Extensive quotes and links below the fold.
A general description of the expected benefits for volunteers:
Could that be any more LessWrong-esque?
More info: http://goodjudgmentproject.blogspot.com/
Pre-Register: http://surveys.crowdcast.com/s3/ACERegistration