shokwave comments on Statistical Prediction Rules Out-Perform Expert Human Judgments - Less Wrong
I'm skeptical, and will now proceed to question some of the assertions made/references cited. Note that I'm not trained in statistics.
Unfortunately, most of the articles cited are not easily available. I would have liked to check the methodology of a few more of them.
The paper doesn't actually establish what you say it does. There is no statistical analysis of expert wine tasters, only one or two anecdotal statements of their fury at the whole idea. Instead, the SPR is compared to actual market prices - not to experts' predictions. I think it's fair to say that the claim I quoted is an overreach.
Now, about the model and its fit to data. Note that the SPR is older than 1995, when the paper was published. The NYTimes article about it which you reference is from 1990 (the paper bizarrely dates it to 1995; I'm not sure what's going on there).
The fact that there's a linear model - not specified precisely anywhere in the article - which is a good fit to wine prices for the 1961-1972 vintages (Table 3 in the paper) is not, I think, very significant on its own. To judge the model, we should look at what it predicts for upcoming years. Both the paper and the NYTimes article make two specific predictions. First, the 1986 vintage, claimed to be extolled by experts early on, will prove mediocre because of the weather conditions that year (see Figure 3 in the paper - 1986 is clearly the worst of the 1980s). NYTimes says "When the dust settles, he predicts, it will be judged the worst vintage of the 1980's, and no better than the unmemorable 1974's or 1969's". The 1995 paper says, more modestly, "We should expect that, in due course, the prices of these wines will decline relative to the prices of most of the other vintages of the 1980s". Second, the 1989 and 1990 vintages are predicted to be "outstanding" (paper), "stunningly good" (NYTimes), and, "adjusted for age, will outsell at a significant premium the great 1961 vintage" (NYTimes).
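For concreteness: the model isn't spelled out in the article, but Ashenfelter-style analyses are usually described as an ordinary least-squares regression of log price on vintage age and weather variables. A minimal sketch of that kind of fit - all the numbers below are invented placeholders, not data from the paper:

```python
import numpy as np

# Hypothetical inputs: one row per vintage. Columns are the predictors an
# Ashenfelter-style model is described as using: vintage age, winter
# rainfall, harvest rainfall, and average growing-season temperature.
# Every number here is invented for illustration.
X = np.array([
    [25, 600, 160, 17.1],
    [20, 690,  80, 16.7],
    [15, 502, 130, 17.8],
    [12, 420, 110, 16.1],
    [10, 582, 187, 16.9],
    [ 8, 485, 127, 17.2],
])
log_price = np.array([4.1, 3.9, 4.4, 3.5, 3.8, 4.0])  # invented

# Fit log(price) = b0 + b1*age + b2*winter_rain + b3*harvest_rain + b4*temp
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, log_price, rcond=None)

# Predict for a new vintage (again, invented weather figures).
new = np.array([1, 5, 550, 90, 17.5])
predicted_log_price = new @ coef
```

The point of the critique stands either way: a good in-sample fit to 1961-1972 says little until the out-of-sample predictions (1986, 1989/1990) are checked.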
It's now 16 years later. How do we test these predictions?
First, I've stumbled on a different paper from the primary author, Prof. Ashenfelter, from 2007. Published 12 years later than the one you reference, this paper has substantially the same contents, with whole pages copied verbatim from the earlier one. That, by itself, worries me. Even more worrying is the fact that the 1986 prediction, prominent in the 1990 article and the 1995 paper, is completely missing from the 2007 paper (the data below might indicate why). And most worrying of all is the change of language regarding the 1989/1990 prediction. The 1995 paper says about its prediction that the 1989/1990 will turn out to be outstanding, "Many wine writers have made the same predictions in the trade magazines". The 2007 paper says "Ironically, many professional wine writers did not concur with this prediction at the time. In the years that have followed minds have been changed; and there is now virtually unanimous agreement that 1989 and 1990 are two of the outstanding vintages of the last 50 years."
Uhm. Right. Because neither claim is stated very strongly, the two do not exactly contradict each other, but the change of language leaves a bad taste. I don't think I should give much trust to these papers' claims.
The data I could find quickly to test the predictions is here. The prices are broken down by the chateaux, by the vintage year, the packaging (I've always chosen BT - bottle), and the auction year (I've always chosen the last year available, typically 2004). Unfortunately, Ashenfelter underspecifies how he came up with the aggregate prices for a given year - he says he chose a package of the best 15 wineries, but doesn't say which ones or how the prices are combined. I used 5 wineries that are specified as the best in the 2007 paper, and looked up the prices for years 1981-1990. The data is in this spreadsheet.

I haven't tried to statistically analyze it, but even from a quick glance, I think the following is clear. 1986 did not stabilize as the worst year of the 1980s. It is frequently second- or third-best of the decade. It is always much better than either 1984 or 1987, which are supposed to be vastly better according to the 1995 paper's weather data (see Figure 3). 1989/1990 did turn out well, especially 1990. Still, they're both nearly always less expensive than 1982, which is again vastly inferior in the weather data (it isn't even in the best quarter). Overall, I fail to see much correlation between the weather data in the paper for the 1980s, the specific claims about 1986 and 1989/1990, and the market prices as of 2004. I wouldn't recommend using this SPR to predict market prices.
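One way to turn my "fail to see much correlation" impression into a number would be a rank correlation between the weather-implied quality ordering of the 1981-1990 vintages and the 2004 auction prices. A sketch, with placeholder values standing in for the real figures in the spreadsheet:

```python
# Spearman rank correlation between a weather-based quality score and
# observed auction prices for ten vintages. All values below are
# placeholders, not the actual figures from the paper or spreadsheet.

def ranks(values):
    """Rank each value, 1 = smallest (no tie handling, for brevity)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman correlation via the classic 1 - 6*sum(d^2)/(n(n^2-1))
    formula; assumes no tied values."""
    n = len(x)
    rx, ry = ranks(x), ranks(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

weather_score = [55, 88, 40, 30, 70, 20, 35, 60, 92, 95]  # placeholder
price_2004    = [90, 99, 50, 45, 80, 75, 40, 70, 85, 95]  # placeholder

rho = spearman(weather_score, price_2004)
```

A rho near zero over the decade would quantify the mismatch I'm describing; I'd expect anyone redoing this properly to use the actual spreadsheet values.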
Now, this was the first example in your post, and I found what I believe to be substantial problems with its methodology and the quality of its SPR. If I were to proceed and examine every example you cite in the same detail, would I encounter many such problems? It's difficult to tell, but my prediction is "yes". I anticipate overfitting and shoddy methodology. I anticipate huge influence of the selection bias - the authors that publish these kinds of papers will not publish a paper that says "The experts were better than our SPR". And finally, I anticipate overreaching claims of wide-reaching applicability of the models, based on papers that actually indicate modest effect in a very specific situation with a small sample size.
I've looked at your second example:
I couldn't find the original paper, but the results are summarised in Dawes (1979). Looking at it, it turns out that when you say "predict marital happiness", it really means "predicts one of the partners' subjective opinion of their marital happiness" - as opposed to e.g. stability of the marriage over time. There's no indication as to how the partner to question was chosen from each pair (e.g. whether the experimenter knew the rate when they chose). There was very good correlation with the binary outcome (happy/unhappy), but when a finer scale of 7 degrees of happiness was used, the correlation was weak - about 0.4. In a follow-up experiment, the correlation went up to 0.8, but there the subject looked at the lovemaking/fighting statistics before opining on the degree of happiness, thus contaminating their decision. And even in the earlier experiment, the subject had been recording those lovemaking/fighting statistics in the first place, so it would make sense for them to recall those events when they're asked to assess whether their marriage is a happy one. Overall, the model is witty and naively appears to be useful, but the suspect methodology and the relatively weak correlation encourage me to discount the analysis.
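For readers unfamiliar with it, the rule in question is strikingly simple: rate of lovemaking minus rate of fighting, with a positive difference predicting a self-reported "happy" marriage. A sketch of that rule - the threshold-at-zero reading, and all the sample rates, are my own illustrative assumptions:

```python
def predict_happy(lovemaking_rate, fighting_rate):
    """Dawes-style SPR: classify a marriage as 'happy' when lovemaking
    outpaces fighting. The zero threshold is an assumed reading of the
    rule, not a parameter taken from the paper."""
    return lovemaking_rate - fighting_rate > 0

# Illustrative (invented) monthly rates for three couples.
couples = [(12, 3), (4, 9), (6, 5)]
predictions = [predict_happy(l, f) for l, f in couples]
```

This makes clear why the binary version can look good while the 7-point version correlates at only ~0.4: a sign test throws away most of the scale.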
Finally, the following claim is the single most objectionable one in your post, to my taste:
My own experience strongly suggests to me that this claim is inane - and is highly dangerous advice. I'm not able to view the papers you base it on, but if they're anything like the first and second example, they're far, far away from convincing me of the truth of this claim, which I in any case strongly suspect overreaches gigantically beyond what the papers actually prove. It may be true, for example, that a very large body of hiring decision-makers in a huge organisation or a state on average make poorer decisions based on their professional judgement during interviews than they would have made based purely on the resume. I can see how this claim might be true, because any such very large body must be largely incompetent. But it doesn't follow that it's good advice for you to abstain from interviewing - it would only follow if you believe yourself to be no more competent than the average hiring manager in such a body, or in the papers you reference. My personal experience from interviewing many, many candidates for a large company suggests that interviewing is crucial (though I will freely grant that different kinds of interviews vary wildly in their effectiveness).
What evidence do you have that you are better than average?
"It is difficult to get a man to understand something, when his salary depends upon his not understanding it!"
I have heard of one job interview that I felt constituted a useful tool that could not effectively be replaced by resume examination and statistical analysis. A friend of mine got a job working for a company that provides mathematical modeling services for other companies, and his "interview" was a several hour test to create a number of mathematical models, and then explaining to the examiner in layman's terms how and why the models worked.
Most job interviews are really not a demonstration of job skills and aptitude, and it's possible to simply bullshit your way through them. On the other hand, if you have a simple and direct way to test the competence of your applicants, then by all means use it.
I'm most familiar with interviews for programming jobs, where an interview that doesn't ask the candidate to demonstrate job-specific skills, knowledge and aptitude is nearly worthless. These jobs are also startlingly prone to resume distortion that can make vastly different candidates look similar, especially recent graduates.
Asking for coding samples and calling previous employers, especially if coupled with a request for code solving a new (requested) problem, could potentially replace interviews. However, judging the quality of code still requires a person, so that doesn't seem to really change things to me.
That's what I think of, too, when I hear the phrase "job interview". Is this not typical outside fields like programming?
I can confirm that such a "job interview" is not common in medicine. The potential employer generally relies on the credentialing process of the medical establishment. Most physicians, upon completing their training, pass a test demonstrating their ability to regurgitate the teachers' passwords, and are recommended to the appropriate certification board as "qualified" by their program director; to do otherwise would reflect badly on the program. Also, program directors are loath to remove a resident/fellow during advanced training because some warm body must show up to do the work, or the professor himself/herself might have to fill in. It is difficult to find replacements for upper level residents; the only common reason such would be available is dismissal/transfer from another program. Consequently, the USA turns out physicians of widely varied skill levels, even though their credentials are similar. In surgical specialities, it is not unusual for a particularly bright individual with all the passwords but very poor technical skills to become a surgical professor.
My mother has told me an anecdote about a family friend, a surgeon, who got a call from a former student in the middle of an operation because the student couldn't remember what to do.
The (rumored) student has my respect. I would expect most surgeons to have too much of an ego to admit to that doubt rather than stumble ahead full of hubris. It would be comforting to know that your surgeon acted as if (as opposed to merely believing that) he cared more about the patient than the immediate perception of status loss. (I wouldn't care whether that just meant his thought out anticipation of future status loss for a failed operation overrode his immediate social instincts.)
That isn't an interview, it's a test. Tests are extremely useful. IQ tests are an excellent predictor of job performance, maybe the best one available. Regardless, IQ tests are usually de facto illegal in the US due to disparate impact.
I put interview in quotes because they called it an interview. Speaking broadly enough, all interviews are tests, but most are unstructured and not very good at examining the relevant predictor variables. All tests are of course not necessarily interviews, but the part where they had applicants explain their processes in layman's terms might qualify it, at least if you're generous with your definitions.
Of course, it's certainly unclear if not outright incorrect to call it an interview, but that was their choice; possibly they felt that subjecting applicants to a "test" rather than an "interview" projected a less positive image.
I don't think that's fair - his job is not being an interviewer, but perhaps it is hiring smart people we can benefit from.