Political pundits and experts usually do little better than chance when forecasting political events, and usually do worse than crude statistical models.
An important caveat to this, though I can't recall whether Silver mentions it: Tetlock (2010) reminds people that he never said experts are "as good as dart-throwing chimps." Experts do far better than chance (or a linear prediction rule, LPR) when the comparison is over the space of all physically possible hypotheses, rather than over a pre-selected list of already-plausible answers. For example, a layperson doesn't even know who is up for election in Myanmar, while the expert knows who the plausible candidates are. But once the list is narrowed to just those plausible candidates, it is hard for experts to outperform a coin flip or an LPR by very much.
One interesting fact from Chapter 4 (on weather predictions) that seems worth mentioning: weather forecasters are also very good at manually and intuitively (i.e. without any rigorous mathematical method) correcting the predictions of their models. For example, they might know that model A always predicts rain about a hundred miles too far west of the Rocky Mountains. To fix this, they take the computer output and manually redraw the lines (demarcating level sets of precipitation) about a hundred miles east, and this significantly improves their forecasts (a toy sketch of this kind of correction follows below).
Also: the National Weather Service gives the most accurate weather predictions. Everyone else exaggerates to a greater or lesser degree in order to avoid getting flak from consumers about, e.g., rain on their wedding day (a forecast of rain followed by a dry wedding day is far less of a problem).
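To make the bias-correction point concrete, here is a toy sketch of the kind of systematic adjustment described above. The grid spacing, bias size, and data are made-up assumptions, not from the book; the forecasters Silver describes do this by eye on contour maps, not in code.

```python
import numpy as np

# Toy precipitation forecast on a regular grid: rows = latitude,
# columns = longitude ordered west -> east, ~25 miles between columns.
# (All numbers here are illustrative assumptions, not from the book.)
grid_spacing_miles = 25
precip = np.random.rand(50, 80)  # model output: predicted rainfall per cell

# Suppose the model is known to place rain ~100 miles too far west.
known_westward_bias_miles = 100
shift = known_westward_bias_miles // grid_spacing_miles  # 4 columns eastward

# Apply the correction: move the whole field east; the western edge,
# for which we have no upstream information, is left at zero.
corrected = np.zeros_like(precip)
corrected[:, shift:] = precip[:, :-shift]
```

A human forecaster's adjustment is looser than a fixed shift (they redraw contours wherever experience says the model is off), but the underlying idea is the same: exploit a known, systematic model error.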
When gauging the strength of a prediction, it's important to view the inside view in the context of the outside view. For example, most medical studies that claim 95% confidence aren't replicable, so one shouldn't take the 95% confidence figures at face value.
This implies that the average prior for a medical study is below 5%. Does he make that point in the book? Obviously you shouldn't use a 95% test when your prior is that low, but I don't think most experimenters actually know why a 95% confidence level is used.
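A minimal sketch of the Bayesian arithmetic behind that implication, assuming a 5% false-positive rate and 80% statistical power (illustrative textbook values, not figures from the book):

```python
# P(hypothesis true | statistically significant result), via Bayes' theorem.
def posterior(prior, power=0.8, alpha=0.05):
    true_positive = power * prior          # real effect, and the test detects it
    false_positive = alpha * (1 - prior)   # no effect, but the test fires anyway
    return true_positive / (true_positive + false_positive)

for prior in (0.05, 0.10, 0.50):
    print(f"prior {prior:.2f} -> P(true | significant) = {posterior(prior):.2f}")

# prior 0.05 -> 0.46   (most "significant" findings would be false)
# prior 0.10 -> 0.64
# prior 0.50 -> 0.94
```

With priors in the single digits, a "significant" result is roughly as likely to be a false positive as a real effect, which is consistent with the poor replication rate.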
The invention of the printing press may have given rise to religious wars by facilitating the development of ideological agendas.
Uh, what? Is this meant to suggest that there were no religious wars before the printing press?
Nope, just ambiguity in English. You read "may have given rise to [all/most] religious wars" when he meant "may have given rise to [some] religious wars."
In general, though, it's exhausting to constantly attempt to write things in a way that minimizes uncharitable interpretations, so readers have an obligation not to jump to conclusions.
Ah, I see the ambiguity now. My mistake; thank you for the explanation!
EDIT: You know, I keep getting tripped up by this sort of thing. I don't know if it's because English isn't my first language (although I've known it for over two decades), or if it's just a general failing. Anyway, correction accepted.
As part of my work for MIRI on the "Can we know what to do about AI?" project, I read Nate Silver's book The Signal and the Noise: Why So Many Predictions Fail — but Some Don't and compiled a list of the takeaway points I found most relevant to the project. I think they might be of independent interest to the Less Wrong community, so I'm posting them here.
Because I've paraphrased Silver rather than quoting him, and because the summary is long, there may be places where I've inadvertently misrepresented him. Readers especially interested in a particular point should check the original text.
Main Points
Chapter Summaries
Introduction
Increased access to information can do more harm than good. This is because the more information is available, the easier it is for people to cherry-pick information that supports their pre-existing positions, or to perceive patterns where there are none.
The invention of the printing press may have given rise to religious wars by facilitating the development of ideological agendas.
Chapter 1: The failure to predict the 2008 housing bubble and recession
Chapter 2: Political predictions
Chapter 3: Baseball predictions
Chapter 4: Weather predictions
Chapter 5: Earthquake predictions
Chapter 6: Economic forecasts
Chapter 7: Disease outbreaks
Chapter 8: Bayes' theorem
Chapter 9: Chess computers
Chapter 10: Poker
Chapter 11: The stock market
Chapter 12: Climate change
Chapter 13: Terrorism