In "Artificial Intelligence as a Positive and Negative Factor in Global Risk," Yudkowsky uses the following parable to illustrate the danger of using case-based learning to produce the goal systems of advanced AIs:
Once upon a time, the US Army wanted to use neural networks to automatically detect camouflaged enemy tanks. The researchers trained a neural net on 50 photos of camouflaged tanks in trees, and 50 photos of trees without tanks. Using standard techniques for supervised learning, the researchers trained the neural network to a weighting that correctly loaded the training set - output "yes" for the 50 photos of camouflaged tanks, and output "no" for the 50 photos of forest. This did not ensure, or even imply, that new examples would be classified correctly. The neural network might have "learned" 100 special cases that would not generalize to any new problem. Wisely, the researchers had originally taken 200 photos, 100 photos of tanks and 100 photos of trees. They had used only 50 of each for the training set. The researchers ran the neural network on the remaining 100 photos, and without further training the neural network classified all remaining photos correctly. Success confirmed! The researchers handed the finished work to the Pentagon, which soon handed it back, complaining that in their own tests the neural network did no better than chance at discriminating photos.
It turned out that in the researchers' data set, photos of camouflaged tanks had been taken on cloudy days, while photos of plain forest had been taken on sunny days. The neural network had learned to distinguish cloudy days from sunny days, instead of distinguishing camouflaged tanks from empty forest.
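To make the failure mode concrete, here's a minimal sketch in plain numpy. Everything in it is invented for illustration: the make_photos generator, the image size, and the brightness numbers are hypothetical stand-ins for "cloudy vs. sunny," not anyone's actual tank data. The point is just that a classifier validated on a held-out split of the same biased collection can look perfect and still be useless once the confound is broken.

```python
# Hypothetical reconstruction of the parable: the label ("tank") is
# perfectly confounded with a nuisance feature (brightness), so a model
# that aces a held-out split of the SAME biased data fails in deployment.
import numpy as np

rng = np.random.default_rng(0)
N, D = 100, 16 * 16  # 100 "photos", 16x16 pixels each (made-up sizes)

def make_photos(n, tank, cloudy):
    """Invented photo generator: overall brightness encodes the weather,
    a faint patch in the corner encodes the (weak) tank signal."""
    base = 0.2 if cloudy else 0.8            # cloudy photos are darker
    x = base + 0.05 * rng.standard_normal((n, D))
    if tank:
        x[:, :4] += 0.1                      # faint "tank" pattern
    return x

# Biased collection: every tank photo is cloudy, every forest photo sunny.
X = np.vstack([make_photos(50, tank=True, cloudy=True),
               make_photos(50, tank=False, cloudy=False)])
y = np.array([1] * 50 + [0] * 50)

perm = rng.permutation(N)
train, test = perm[:50], perm[50:]           # 50/50 split, as in the story

# Plain logistic regression trained by gradient descent on the raw pixels.
w, b = np.zeros(D), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X[train] @ w + b)))   # predicted P(tank)
    g = p - y[train]                             # cross-entropy gradient
    w -= 0.1 * (X[train].T @ g) / len(train)
    b -= 0.1 * g.mean()

def accuracy(Xs, ys):
    return (((Xs @ w + b) > 0).astype(int) == ys).mean()

print("held-out accuracy (same biased data):", accuracy(X[test], y[test]))

# "Pentagon" test: same tank/forest distinction, but weather randomized,
# so brightness no longer predicts the label.
is_cloudy = rng.random(100) < 0.5
X2 = np.vstack([make_photos(1, tank=(i < 50), cloudy=c)
                for i, c in enumerate(is_cloudy)])
y2 = np.array([1] * 50 + [0] * 50)
print("deployment accuracy (confound broken):", accuracy(X2, y2))
```

On this synthetic setup the first accuracy should print near 1.0 and the second near 0.5: the held-out split inherits the same tank/weather confound, so it can't catch the problem; only data where lighting varies independently of the label can.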
I once stumbled across the source of this parable online, but now I can't find it.
Anyway, I'm curious: are there any well-known examples of this kind of problem actually causing serious damage, say, when a narrow AI trained via machine learning was deployed in a somewhat novel environment?
The earliest reference to the parable that I can find is in this paper from 1992. (Paywalled, so here's the relevant page.) I also found another paper that attributes the story to this book, but the limited Google preview doesn't show a specific discussion of it in the book.
Expanded my comments into a post: http://www.jefftk.com/p/detecting-tanks