Our hosts at Tricycle Developments have created PredictionBook.com, which lets you make predictions and then track your calibration - see whether things you assigned a 70% probability happen 7 times out of 10.
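That calibration check is simple enough to sketch in a few lines of Python. This is my own illustration, not PredictionBook's actual code: group your recorded predictions by the probability you stated, then compare the observed hit rate against that probability.

```python
# Hypothetical sketch of a calibration check: bucket predictions by
# stated probability and compare observed frequency against it.
from collections import defaultdict

def calibration(predictions):
    """predictions: list of (stated_probability, came_true) pairs."""
    buckets = defaultdict(list)
    for prob, outcome in predictions:
        buckets[prob].append(outcome)
    # For each stated probability, the fraction that actually came true.
    return {prob: sum(outcomes) / len(outcomes)
            for prob, outcomes in buckets.items()}

# Ten predictions made at 70%: well calibrated if ~7 come true.
history = [(0.7, True)] * 7 + [(0.7, False)] * 3
print(calibration(history))  # {0.7: 0.7}
```

If the observed frequencies track the stated probabilities across buckets, you are well calibrated; systematic gaps show over- or underconfidence.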
The major challenge with a tool like this is (a) coming up with good short-term predictions to track, and (b) maintaining your will to keep tracking yourself even when the results are discouraging, as they probably will be.
I think the main motivation to actually use it would be rationalists challenging each other to put a prediction on the record and track the results - I'm going to try to remember to do this the next time Michael Vassar says "X%" and I assign a different probability. (Vassar would have won quite a few points for his superior predictions of Singularity Summit 2009 attendance - I was pessimistic, Vassar was accurate.)
It seems to me like you're asking about two different issues. The first is not wanting to be penalized for making low-probability bets; but that should already be handled by low confidences - if you figure something at 1 in 5, then after only a few failed bets things ought to start looking bad for you, whereas if you figure it at 1 in thousands, each failed prediction ought to affect your score very little.
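The point about low confidences can be made concrete with a logarithmic scoring rule (my choice of rule for illustration; the source doesn't say what PredictionBook uses): a failed 1-in-5 bet costs a noticeable chunk of score, while a failed 1-in-thousands bet costs almost nothing.

```python
import math

def log_score(p_event, happened):
    """Logarithmic score: log of the probability you assigned to the
    actual outcome. Closer to 0 is better. Illustrative rule only."""
    return math.log(p_event if happened else 1 - p_event)

# A failed 1-in-5 prediction costs noticeably...
print(round(log_score(0.2, False), 3))    # -0.223
# ...while a failed 1-in-thousands prediction costs almost nothing.
print(round(log_score(0.001, False), 4))  # -0.001
```

Under this rule a missed 20% prediction costs over two hundred times as much as a missed 0.1% prediction, which is exactly the behavior described above.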
Presumably PredictionBook offers richer rewards for low-probability successes, just as a 5% share on a prediction market pays out (proportionately) much more than a 95% share would; in expectation you come out the same either way.
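The payout asymmetry is just odds arithmetic. A sketch with illustrative numbers (not any particular market's contract terms): a binary share bought at price p pays 1/p per unit staked if it settles true, so at a fair price the expected profit is zero whether you buy the 5% side or the 95% side.

```python
def payout_multiple(price):
    """A binary share settling at 1 pays 1/price per unit staked."""
    return 1 / price

def expected_profit(price, true_prob, stake=1.0):
    """Expected net profit of buying at `price` given the true probability."""
    return stake * (true_prob * payout_multiple(price) - 1)

print(payout_multiple(0.05))            # 20.0 - a 5% share pays 20x
print(round(payout_multiple(0.95), 3))  # 1.053 - a 95% share pays ~1.05x
# At a fair price (price == true probability) expected profit is ~zero either way:
print(abs(expected_profit(0.05, 0.05)) < 1e-12,
      abs(expected_profit(0.95, 0.95)) < 1e-12)  # True True
```

The 20x payout on the 5% share exactly offsets its 1-in-20 chance of paying out, which is the sense in which the rewards come out "the same on net."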
The second issue is that you seem to think certain events are simply harder to predict better than chance, and that you should be rewarded for going out on a limb? (20% odds on a big market move tomorrow is much more informative than the default 1-in-thousands-chance-per-day prediction.)
I don't know what the fair reward here is. If few people are making that prediction at all, then it should be easy to do better than them. In prediction markets, one expects that unpopular markets will be easier to arbitrage and beat - the thicker the market, the more efficient; standard economics. So in a sense, unpopular predictions are their own reward.
But this doesn't prevent making obscure predictions ('will I remember to change my underwear tomorrow?'). Nor would it seem to adequately cover 'big questions' like open scientific puzzles or predictions about technological development (think the union of Longbets & Intrade). Maybe there could be a bonus for having predictions pay out at confidence levels higher than the average? That would attract well-calibrated people to predictions where others are uninformed or too pessimistic.