I think that has been fixed. At one point I was ranked #8 on my team, and I finished #2, with an aggregate Brier score of .34, quite close to the leader's .33. Unfortunately that isn't much to brag about, as my team fell off the team leaderboard altogether: the top team had an aggregate score of .28.
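For readers unfamiliar with the metric: a Brier score is the mean squared error of probability forecasts against what actually happened, so lower is better (which is why .28 beats .33). A minimal sketch for the binary-outcome case (the tournament's own scoring handles multi-option questions, so treat this as illustrative only):

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between probability forecasts (0.0-1.0)
    and binary outcomes (0 or 1). Lower is better; 0.0 is perfect."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Always answering 0.5 ("no idea") on yes/no questions scores 0.25:
print(brier_score([0.5, 0.5], [1, 0]))  # 0.25
# Confident, correct forecasts score near zero:
print(brier_score([0.9, 0.1], [1, 0]))
```

Note that the score rewards calibration, not boldness: being 90% confident and wrong costs far more (0.81) than hedging at 50% ever can.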
As XFrequentist mentioned last August, "[A forecasting tournament is being run by the] Intelligence Advanced Research Projects Activity (IARPA) with the goal of improving forecasting methods for global events of national (US) interest. One of the teams (The Good Judgement Team) is recruiting volunteers to have their forecasts tracked. Volunteers will receive an annual honorarium ($150), and it appears there will be ongoing training to improve one's forecast accuracy (not sure exactly what form this will take)."
You can pre-register here.
Last year, approximately 2,400 forecasters were assigned to one of eight experimental conditions. I was the #1 forecaster in my condition. It was fun, I learned a lot, and eventually they are going to give me a public link so that I can brag about this until the end of time. I'm participating again this year, though I expect to regress toward the mean.
I'll share the same info XFrequentist did last year below the fold because I think it's all still relevant.
A general description of the expected benefits for volunteers:
Could that be any more LessWrong-esque?
More info: http://goodjudgmentproject.blogspot.com/