I explain what I've learned from creating and judging thousands of predictions on personal and real-world matters: the challenges of maintenance, the limitations of prediction markets, the interesting applications to my other essays, skepticism about pundits and unreflective persons' opinions, my own biases like optimism & planning fallacy, 3 very useful heuristics/approaches, and the costs of these activities in general.
Plus an extremely geeky parody of Fate/Stay Night.
This essay exists as a large section of my page on prediction markets on gwern.net: http://www.gwern.net/Prediction%20markets#1001-predictionbook-nights
Sorry, I should have said "worse than random". To do worse than random, one would have to take a source of good predictions and twist it into a source of bad ones. The only plausible explanation I could think of for this is that you know a group of people who are good at predicting and habitually disagree with them. It seems like there should be far fewer such people than there are legitimate good predictors.
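The point can be sketched numerically. Below is a minimal simulation (my own hypothetical setup, not from the original discussion): a "good" predictor guesses a 50/50 event correctly 70% of the time, a contrarian always takes the opposite guess, and a coin-flipper guesses at random. The contrarian lands at roughly 30% accuracy, genuinely worse than random, precisely because he anti-correlates with a good source.

```python
import random

random.seed(1)

trials = 100_000
good = contrarian = coin = 0
for _ in range(trials):
    outcome = random.random() < 0.5  # a 50/50 event
    # The good predictor is right 70% of the time (assumed skill level).
    good_guess = outcome if random.random() < 0.7 else not outcome
    good += good_guess == outcome
    # The contrarian habitually disagrees with the good predictor,
    # so he is right exactly when the good predictor is wrong: ~30%.
    contrarian += (not good_guess) == outcome
    # A coin-flipper is right ~50% of the time.
    coin += (random.random() < 0.5) == outcome

print(good / trials, contrarian / trials, coin / trials)
```

Beating the contrarian is as easy as flipping a coin; producing him requires access to a predictor with real skill.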
It's easy to lose to an efficient market if you're not playing the efficient market's games. If the market's implied probability is the correct one and you bet at a price somewhere between it and your own stated probability, you are likely to lose money over time.
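A quick Monte Carlo makes the arithmetic concrete. The numbers here are my own illustrative assumptions: the market's implied probability (0.60) is taken to be correct, your estimate is 0.80, and you buy a $1-if-yes contract at the midpoint price of 0.70. Expected profit per contract is then 0.60 − 0.70 = −$0.10, a steady loss.

```python
import random

random.seed(0)

true_p = 0.60      # market's implied probability, assumed correct
your_p = 0.80      # your (miscalibrated) estimate
bet_price = (true_p + your_p) / 2  # you bet somewhere in between: 0.70

trials = 100_000
profit = 0.0
for _ in range(trials):
    outcome = random.random() < true_p  # event resolves at the true odds
    # Buying a contract that pays $1 if the event happens, at bet_price:
    profit += (1.0 if outcome else 0.0) - bet_price

avg = profit / trials
print(f"average profit per contract: {avg:+.3f}")
```

The simulated average lands near −0.10, matching the expected value; the loss only grows the further your price sits above the market's correct one.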
It seems to me that this is exactly the sort of thing that can really happen in politics. Suppose you have two political parties, the Greens and the Blues, and that for historical reasons it happens that the Greens have adopted some ways of thinking that actually work well, and the Blues make it their practice to disagree with everything distinctive that the Greens say.
(And it could easily happen that there are more Blues than Greens, in which case you'd get lots of systematically bad predictors.)