In "Artificial Intelligence as a Positive and Negative Factor in Global Risk", Yudkowsky uses the following parable to illustrate the danger of using case-based learning to produce the goal systems of advanced AIs:
Once upon a time, the US Army wanted to use neural networks to automatically detect camouflaged enemy tanks. The researchers trained a neural net on 50 photos of camouflaged tanks in trees, and 50 photos of trees without tanks. Using standard techniques for supervised learning, the researchers trained the neural network to a weighting that correctly loaded the training set - output "yes" for the 50 photos of camouflaged tanks, and output "no" for the 50 photos of forest. This did not ensure, or even imply, that new examples would be classified correctly. The neural network might have "learned" 100 special cases that would not generalize to any new problem. Wisely, the researchers had originally taken 200 photos, 100 photos of tanks and 100 photos of trees. They had used only 50 of each for the training set. The researchers ran the neural network on the remaining 100 photos, and without further training the neural network classified all remaining photos correctly. Success confirmed! The researchers handed the finished work to the Pentagon, which soon handed it back, complaining that in their own tests the neural network did no better than chance at discriminating photos.
It turned out that in the researchers' data set, photos of camouflaged tanks had been taken on cloudy days, while photos of plain forest had been taken on sunny days. The neural network had learned to distinguish cloudy days from sunny days, instead of distinguishing camouflaged tanks from empty forest.
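(For concreteness, here is a toy sketch of the failure mode the parable describes, using made-up synthetic "photos" and an off-the-shelf logistic regression from scikit-learn. None of the numbers or names come from the original story; the point is just that a classifier can latch onto a confound like overall brightness and still score perfectly on a held-out set drawn from the same biased data.)

```python
# Toy reproduction of the "tanks vs. cloudy skies" failure mode.
# Everything here is illustrative; assumes NumPy and scikit-learn are installed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_photos(n, tank, cloudy):
    """Fake 8x8 'photos' as flat 64-dim vectors.

    Cloudy photos are darker overall; a tank adds a faint blob that is much
    weaker than the brightness difference between cloudy and sunny skies.
    """
    base = np.where(cloudy, 0.3, 0.6)               # sky brightness per photo
    imgs = rng.normal(0.0, 0.05, size=(n, 64)) + base[:, None]
    if tank:
        imgs[:, 27:31] += 0.05                       # faint "tank" signature
    return imgs

cloudy = np.ones(100, dtype=bool)
sunny = np.zeros(100, dtype=bool)

# Confounded data set: every tank photo is cloudy, every forest photo is sunny.
X = np.vstack([make_photos(100, True, cloudy), make_photos(100, False, sunny)])
y = np.array([1] * 100 + [0] * 100)

# 50/50 train vs. held-out split, as in the parable.
idx = rng.permutation(200)
train, test = idx[:100], idx[100:]
clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
print("held-out accuracy (same confound):", clf.score(X[test], y[test]))

# "Pentagon" photos: weather is no longer correlated with the presence of a tank,
# so the brightness shortcut the model learned stops working.
mixed = rng.random(100) < 0.5
X_new = np.vstack([make_photos(100, True, mixed), make_photos(100, False, ~mixed)])
y_new = np.array([1] * 100 + [0] * 100)
print("accuracy once the confound is broken:", clf.score(X_new, y_new))
```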
I once stumbled across the source of this parable online, but now I can't find it.
Anyway, I'm curious: Are there any well-known examples of this kind of problem actually causing serious damage — say, when a narrow AI trained via machine learning was placed into a somewhat novel environment?
It's almost certainly not the actual source of the "parable", or if it is, the story was greatly exaggerated in its retelling (admittedly not unlikely). But this may well be the original study (it is probably the most commonly reused data set in the field), and this is a useful overview of the topic.
Does that help?
Except "November Fort Carson RSTA Data Collection Final Report" was released in 1994 covering data collection from 1993, but the parable was described in 1992 in the "What Artificial Experts Can and Cannot Do" paper.