If the probabilities that Wang’s model computes for each state were right, you could have used the resulting probability distribution of the outcomes in the electoral college to straightforwardly derive the probability that Clinton was going to win, which is just the probability that she gets at least 270 votes in the electoral college.
No. Even if Wang had reasonable probabilities of Clinton individually winning in each state, the aggregation procedure described in the post (I haven't checked if this is what Wang actually did) for using these probabilities to get a probability that Clinton will win the election assumes that winning each state is independent, which is a completely ridiculous assumption. Most sources of uncertainty about elections are correlated between states; for example, widely publicized news stories that make Clinton or Trump look bad a certain number of days before the election. The independence assumption horrendously exaggerates the probability of Clinton winning given that she has a slight edge.
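To make the effect concrete, here's a minimal Monte Carlo sketch (the state margins, electoral-vote counts, and error sizes are all made up; this is not Wang's actual model). Both scenarios give each state the same marginal error; the only difference is whether part of that error is shared across states, the way a late national news story would be. The independence scenario reports a clearly higher win probability from the same slight edge.

```python
import numpy as np

rng = np.random.default_rng(0)

# Entirely hypothetical inputs: Clinton's expected margin (in points) and the
# electoral votes at stake in a handful of made-up swing states, plus a block
# of electoral votes assumed to be safely hers.
margins = np.array([2.5, 1.0, 2.0, 3.0, 1.5, 2.0])  # expected Clinton lead per state
ev = np.array([29, 29, 20, 18, 16, 15])             # electoral votes per state
safe_ev = 200                                        # electoral votes assumed safe for Clinton
needed = 270

state_sd = 1.5     # state-specific error (standard deviation, in points)
national_sd = 3.0  # error shared by every state (standard deviation, in points)
total_sd = np.hypot(state_sd, national_sd)  # same marginal error in both scenarios
n_sims = 200_000

def win_probability(correlated: bool) -> float:
    """Fraction of simulated elections in which Clinton reaches 270 electoral votes."""
    if correlated:
        shared = rng.normal(0.0, national_sd, size=(n_sims, 1))  # one national swing per simulation
        local = rng.normal(0.0, state_sd, size=(n_sims, len(margins)))
    else:
        shared = 0.0
        local = rng.normal(0.0, total_sd, size=(n_sims, len(margins)))
    realized = margins + shared + local  # simulated margin in each state
    clinton_ev = safe_ev + (realized > 0).astype(int) @ ev
    return float(np.mean(clinton_ev >= needed))

print("P(win) if state errors were independent:", win_probability(correlated=False))
print("P(win) with a shared national error:    ", win_probability(correlated=True))
```

The shared term stands in for correlated non-sampling error: when it comes up negative, it drags every state down at once, which is exactly the failure mode the independence assumption rules out.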
Also, to be clear, in order to compute his prediction, Wang did assume that non-sampling errors were somewhat correlated, just not nearly enough. As I say in the post, he is a very smart guy, so it's not as if he didn't know the things I explain.
I agree with you that the probabilities of Clinton winning individual states are correlated, but I'm not sure this makes what I wrote false, although you're probably right that it's a bit misleading. The fact that the probabilities of Clinton winning individual states are correlated is only relevant to calculating the probabilities for each possible outcome in the electoral college. It means that, as I explain later in my post, you have to take into account the fact that non-sampling polling errors in different states are correlated in order to calculate the probabilities for each possible outcome in the electoral college.

One of the sources of non-sampling error that I describe in my post is measurement error, which, if you read my post carefully, I define in such a way that if someone doesn't vote for the candidate they claimed they would vote for when they participated in a survey, for whatever reason (e.g. because they heard a news story that made Clinton or Trump look bad), it counts as measurement error. I agree that it's probably an unusual definition of this concept, which is typically construed more narrowly. But I defined measurement error in that unusually broad way precisely because I didn't want to introduce the complication that someone who tells a pollster n days before the election that he's going to vote for X, and who really would vote for X if the election took place on the day he participated in that survey, might still not vote for X on election day. (Wang takes that, among other things, into account in order to calculate his prediction, but I was only describing the way in which he calculates a snapshot of where the race stands at any given time, since I think that's where the most interesting mistakes were made. I may be wrong about that, but judging by what he said after the election, I think Wang would agree with me on that.)

Now, if the probabilities you calculated for each possible outcome in the electoral college are correct, then you can just use the aggregation method I describe above the passage you quoted in my post. What is misleading in my post is that I say the assumption for that method to be reliable is that the probabilities of Clinton winning individual states are correct (instead of the probabilities for each possible outcome in the electoral college), because it suggests that we can assume they are probabilistically independent (although I never said that, and the rest of my post makes clear that I wasn't making that assumption), which of course they are not. Do you agree with that, or do you think that there is a more serious problem here?
I was just reading my post again, and I guess this passage is also misleading, for exactly the same reason: "if you had calculated a probability that Clinton was going to win in each state using the method I explained above (which you then use to compute a probability that Clinton is going to win the electoral college)".
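For what it's worth, the aggregation step that both passages refer to is trivial once you have a correct joint distribution over electoral-college outcomes; all the difficulty (and all the correlation between states) goes into producing that distribution. A minimal sketch, with a made-up distribution over Clinton's electoral-vote total standing in for the real thing:

```python
# Hypothetical joint distribution over electoral-college outcomes, summarized
# by Clinton's electoral-vote total and its probability (made-up numbers that
# simply sum to 1).
outcome_probs = {
    255: 0.05,
    268: 0.10,
    275: 0.25,
    290: 0.30,
    310: 0.20,
    330: 0.10,
}

# The aggregation itself: Clinton wins exactly when she gets at least 270.
p_clinton_wins = sum(p for total, p in outcome_probs.items() if total >= 270)
print(p_clinton_wins)  # 0.85 with these made-up numbers
```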
According to the emails leaked by Wikileaks, the pre-election polls presented in the media used a technique called oversampling to misrepresent the results.
BTW shameless plug for my fake news aggregator: https://quibbler.press/#/about
Agree with the post proper. I think the headline is technically accurate but potentially misleading, because poll-dominated models aren't the only kind of election models. Political scientists build models that rely more on fundamentals like economic statistics and military activity, and when Vox averaged 6 of those models together, they predicted that Trump would win the popular vote. The headline remains technically correct because predicting that Trump would win the popular vote isn't the same as predicting Trump would win the election, but it'd be a shame if people walked away with the idea that election models in toto said Clinton would win.
I think models that rely on fundamentals are worthless. I don't have time to explain why in detail, though perhaps I will post something on that at some point. The gist of my argument is that models of that kind are massively underdetermined by the evidence.
OK. That's interesting. I disagree but I can see why you'd think that, and in a way I'm kind of sympathetic: I think overfitting definitely happens with some of the poli. sci. models. My go-to model is my go-to exactly because its author really seems to appreciate the overfitting issue, and is very insistent on aiming for proper explanation, not just prediction.
well, there were "mainstream" polls (used as propaganda in the pro-Clinton media), with samples of a bit over 1,000, sometimes less, often massively oversampling registered Dem. voters... what do you expect?
and there was the biggest poll, of 50,000 (1,000 per state), showing a completely different picture (and of course used as propaganda in the anti-Clinton, usually non-mainstream media)
google "election poll 50000"
A cursory glance through Fivethirtyeight's collected poll data shows a survey with over 84,000 voters (CCES/YouGov) giving Clinton a +4 percentage point lead, with 538 adjusting that to +2. Google and SurveyMonkey routinely had surveys of 20,000+ individuals, with one SurveyMonkey survey of 70,000 showing Clinton +5 (+4 adjusted). There was no clear reason to prefer your poll (whichever that one was) over these. https://projects.fivethirtyeight.com/2016-election-forecast/national-polls/
And it should go without saying that Clinton did end up at +2 nationally.
I'm not sure you have read my post. Nowhere in it do I say that we should have focused on one poll rather than another. So I'm not sure what relevance your comment has.
Its relevance is that it rebuts tukabel's suggestion that "the biggest poll" was of "50000" people and showed a "completely different picture" to the mainstream polls indicating a Clinton lead.
I'm sure pollsters sometimes "cheat" by constructing biased samples, but this can happen even if you're honest because, as I explain in my post, polling is really difficult to do. To my mind, the problem had more to do with commentators who were making mistaken inferences based on the polls than with the polls themselves, although evidently some of the polls got things badly wrong.