Why don't you write a post on how it is naive? Do you actually know anything about the practical application of these methods?
Yes, if experts say they use quantifiable data X, Y, and Z to predict outcomes, then the fact that simple algorithms beat them using only that data might not matter much if the experts really rely on other data as well. But there is a lot of evidence that experts are terrible with non-quantifiable data, such as believing that interviews are useful in hiring. And Tetlock finds that ecologically valid uses of these trivial models beat experts in politics.
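For concreteness, the "trivial models" in this literature are often just unit-weighted sums of standardized predictors, in the style of Dawes's improper linear models. Here is a minimal sketch; the hiring features and data are hypothetical, purely for illustration:

```python
# A minimal sketch of a Dawes-style "improper linear model": z-score each
# quantifiable predictor and add them with equal (+/-1) weights, no fitting.
# Feature names and data below are hypothetical.

import numpy as np

def unit_weight_score(X, signs):
    """Score each case by summing z-scored predictors with +/-1 weights.

    X     : (n_cases, n_features) array of quantifiable predictors
    signs : +1 if higher values predict a better outcome, -1 otherwise
    """
    z = (X - X.mean(axis=0)) / X.std(axis=0)
    return z @ np.asarray(signs)

# Hypothetical hiring data: test score, years of experience, interview rating.
X = np.array([
    [85, 4, 9],
    [70, 2, 10],
    [92, 6, 5],
    [60, 1, 8],
], dtype=float)

# Drop the interview column entirely and unit-weight the rest:
scores = unit_weight_score(X[:, :2], signs=[+1, +1])
print(scores.argsort()[::-1])  # candidate indices, ranked best-first
```

The point of the unit weights is that there is nothing to overfit and nothing for the expert's intuition to adjust; the model's entire advantage comes from applying the same quantifiable inputs consistently.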
In Artificial Intelligence as a Negative and Positive Factor in Global Risk, Yudkowsky uses the following parable to illustrate the danger of using case-based learning to produce the goal systems of advanced AIs:
I once stumbled across the source of this parable online, but now I can't find it.
Anyway, I'm curious: are there any well-known examples of this kind of problem actually causing serious damage, say, when a narrow AI trained via machine learning was placed into a somewhat novel environment?
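To make the failure mode I have in mind concrete, here is a toy sketch of the mechanism in the parable: a learner latches onto a spurious feature that happens to track the label during training, then breaks when that correlation disappears in deployment. The data and variable names are synthetic, invented for this example:

```python
# Toy demonstration of spurious-feature learning plus distribution shift.
# "brightness" stands in for the sunny-vs-cloudy photos of the tank story;
# it tracks the label perfectly in training and not at all in deployment.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Training distribution: brightness happens to correlate with "tank present".
tank = rng.integers(0, 2, n)
shape_cue = tank + rng.normal(0, 1.0, n)    # weak but genuine signal
brightness = tank + rng.normal(0, 0.1, n)   # strong spurious signal
X_train = np.column_stack([shape_cue, brightness])

clf = LogisticRegression().fit(X_train, tank)
print("train accuracy:", clf.score(X_train, tank))

# Novel environment: brightness is now independent of the label.
tank2 = rng.integers(0, 2, n)
shape_cue2 = tank2 + rng.normal(0, 1.0, n)
brightness2 = rng.normal(0.5, 0.5, n)       # the correlation is gone
X_test = np.column_stack([shape_cue2, brightness2])
print("test accuracy:", clf.score(X_test, tank2))
```

The classifier weights brightness heavily because it is the cleaner predictor in training, so accuracy collapses toward chance once the environment changes. What I'm asking for are documented real-world cases of this pattern doing serious harm, not toy versions like the above.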