I sat on the floor with my dog and taught him to roll over in a few short sessions. Each session lasted about five minutes, and he had the trick down by the end of each one.
The third session was a bit different from the first two because I sat in a chair. I began by making a whirling motion with my hand and saying "roll over". He quickly shuffled a few feet to the right, crashed hard into the wall, and fell over. He looked confusedly at the wall, and then at me - at the wall for attacking him, and at me for withholding his food reward.
He had, after all, performed the trick I had taught him - pointing his head at my crotch and shuffling to the right, not stopping until he had flipped a full 360 degrees.
That I had intended for him to learn "roll over" didn't matter - this is reinforcement learning.
Ed Fredkin has since sent me a personal email:
By the way, the story about the two pictures of a field, with and without army tanks in the picture, comes from me. I attended a meeting in Los Angeles, about half a century ago where someone gave a paper showing how a random net could be trained to detect the tanks in the picture. I was in the audience. At the end of the talk I stood up and made the comment that it was obvious that the picture with the tanks was made on a sunny day while the other picture (of the same field without the tanks) was made on a cloudy day. I suggested that the "neural net" had merely trained itself to recognize the difference between a bright picture and a dim picture.
That's pretty funny, though it appears the cause may not have been that the engineers, say, trained the face-tracking software mostly on white faces.
Amusing, and perhaps inconvenient for a few users before it came to the attention of the manufacturer, but I don't think that's really "serious damage".
It's almost certainly not the actual source of the "parable" - or, if it is, the story was greatly exaggerated in its retelling (admittedly not unlikely) - but this may well be the original study (and is probably the most commonly reused data set in the field), and this is a useful overview of the topic.
Does that help?
Except "November Fort Carson RSTA Data Collection Final Report" was released in 1994 covering data collection from 1993, but the parable was described in 1992 in the "What Artificial Experts Can and Cannot Do" paper.
The earliest reference to the parable that I can find is in this paper from 1992. (Paywalled, so here's the relevant page.) I also found another paper which attributes the story to this book, but the limited Google preview does not show me a specific discussion of it in the book.
Here's the full version of "What Artificial Experts Can and Cannot Do" (1992): http://www.jefftk.com/dreyfus92.pdf It has:
... consider the legend of one of connectionism's first applications. In the early days of the perceptron ...
There's also https://neil.fraser.name/writing/tank/ from 1998, which says the "story might be apocryphal", so by that point it sounds like it had already been passed around a lot.
In the "Building Neural Networks" book, the bottom of page 199 seems to be about "classifying military tanks in SAR imagery". It goes on to say it is only interested in "tank" / "non-tank" categories.
But it doesn't look like a version of this story either. That section of the book is just a straightforward "how to distinguish tanks" bit.
Every time you've missed an important email because of a spam filter false positive.
This was also discussed in Magical Categories. I don't know the source of the parable, though.
Dataset bias is a huge topic in computer vision at the moment. An excellent (and humorous) overview is given by Efros and Torralba: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5995347&tag=1.
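To make the failure mode concrete, here's a toy sketch (hypothetical, not taken from any of the papers mentioned here) of how a classifier can ace a biased dataset by keying on overall image brightness rather than on the tanks themselves - essentially the confound Fredkin describes above:

```python
# Toy illustration of dataset bias (hypothetical; not from the cited papers).
# "Tank" photos are synthesized on sunny (bright) backgrounds and "no-tank"
# photos on cloudy (dim) backgrounds, so mean brightness alone separates the
# classes -- until the lighting confound is removed at test time.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_images(n, tank, bright):
    """Return n flattened 16x16 'photos'; a 'tank' is a faint bright blob."""
    base = 0.7 if bright else 0.3            # sunny vs. cloudy background
    imgs = rng.normal(base, 0.05, size=(n, 16, 16))
    if tank:
        imgs[:, 6:10, 6:10] += 0.1           # the "tank"
    return imgs.reshape(n, -1)

# Biased training set: every tank photo is sunny, every non-tank photo cloudy.
X_train = np.vstack([make_images(200, tank=True,  bright=True),
                     make_images(200, tank=False, bright=False)])
y_train = np.array([1] * 200 + [0] * 200)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Test set in which lighting no longer correlates with tank presence.
X_test = np.vstack([make_images(200, tank=True,  bright=False),
                    make_images(200, tank=False, bright=True)])
y_test = np.array([1] * 200 + [0] * 200)

print("accuracy on the biased training data:", clf.score(X_train, y_train))
print("accuracy once lighting is decorrelated:", clf.score(X_test, y_test))
# Expect roughly 1.0 on the first line and far below chance on the second:
# the model has learned "bright means tank", not "tank means tank".
```

The same thing can happen, less visibly, with a real network and real imagery whenever the collection procedure bakes a correlate of the label into the photos.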
Pakistan claims that U.S. Predator drones routinely bomb outdoor weddings, mistaking them for Al-Qaeda rallies. I couldn't immediately find an authoritative source, but if there's any truth to it, this sort of phenomenon is probably to blame.
The drones aren't given much in the way of self-direction. I'd guess this is more due to human error than anything else.
I'm reminded of one of your early, naively breathless articles here on the value of mid-80s and earlier expert systems.
Why don't you write a post on how it is naive? Do you actually know something about the practical application of these methods?
Yes, if experts say that they use quantifiable data X, Y, and Z to predict outcomes, then the fact that simple algorithms beat them using only that data might not matter if the experts really rely on other data as well. But there is lots of evidence that experts are terrible with non-quantifiable data - for example, believing that interviews are useful in hiring. Tetlock finds that ecologically valid use of these trivial models beats experts in politics.
this one:
http://lesswrong.com/lw/3gv/statistical_prediction_rules_outperform_expert/
When based on the same evidence, the predictions of SPRs are at least as reliable as, and are typically more reliable than, the predictions of human experts for problems of social prediction.
Hmm yes, 'same evidence'.
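For concreteness, the SPRs in that literature are often nothing fancier than Dawes-style "improper linear models": standardize each quantified cue and add them up with equal weights. A minimal sketch (the cue names and values below are invented for illustration, not taken from the linked post):

```python
# Minimal sketch of an "improper linear model" style SPR.
# Hypothetical data; cue names are invented for illustration only.
import numpy as np

# Each row is a case; each column is a quantified cue the expert also sees,
# oriented so that a larger value should mean a better predicted outcome.
cues = np.array([
    [3.2, 0.8, 12.0],   # e.g. test score, past performance, years experience
    [2.1, 0.5,  4.0],
    [3.9, 0.9,  9.0],
    [1.5, 0.2,  2.0],
])

def unit_weight_spr(X):
    """Dawes-style rule: z-score each cue, then sum with equal (unit) weights."""
    z = (X - X.mean(axis=0)) / X.std(axis=0)
    return z.sum(axis=1)

scores = unit_weight_spr(cues)
ranking = np.argsort(scores)[::-1]       # best-scoring cases first
print("SPR scores:", np.round(scores, 2))
print("rank order:", ranking)
```

The point of the "same evidence" caveat is exactly that such a rule only sees the columns you give it; whatever the expert gleans beyond those cues is outside its reach.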
In Artificial Intelligence as a Negative and Positive Factor in Global Risk, Yudkowsky uses the following parable to illustrate the danger of using case-based learning to produce the goal systems of advanced AIs: a neural network trained to detect camouflaged tanks ends up learning only to distinguish cloudy days from sunny days, because that was how the training photos happened to differ.
I once stumbled across the source of this parable online, but now I can't find it.
Anyway, I'm curious: Are there any well-known examples of this kind of problem actually causing serious damage — say, when a narrow AI trained via machine learning was placed into a somewhat novel environment?