New essay summarizing some of my latest thoughts on AI safety, ~3500 words. I explain why I think some of the thought experiments previously used to illustrate the dangers of AI are flawed and should be used very cautiously, why I'm less worried about the dangers of AI than I used to be, and some of the remaining reasons why I continue to be somewhat worried.
Back-cover celebrity endorsement: "Thanks, Kaj, for a very nice write-up. It feels good to be discussing actually meaningful issues regarding AI safety. This is a big contrast to discussions I've had in the past with MIRI folks on AI safety, wherein they have generally tried to direct the conversation toward bizarre, pointless irrelevancies like 'the values that would be held by a randomly selected mind', or 'AIs with superhuman intelligence making retarded judgments' (like tiling the universe with paperclips to make humans happy), and so forth.... Now OTOH, we are actually discussing things of some potential practical meaning ;p ..." -- Ben Goertzel
The classifier isn't breaking down: it was trained to do well across the entire training set, using a small amount of computation for each inference and a reasonable (larger) amount of computation for training.
Humans' fastest recognition still takes 100 ms or so, and when operating in that mode (rapid serial visual presentation), human inference accuracy is considerably worse than that of modern ANNs, which classify in less time while using around 1000x fewer neurons/synapses.
I would bet that humans often make similar mistakes in fast recognition. And even if humans don't make this specific mistake, it doesn't matter, because they make more total mistakes in other categories.
The fact that humans can do better given considerably more time and enormously more neural resources is hardly surprising; the slower mode involves more complex, multi-step inference.
Also, the ImageNet training criterion is not really a good match for human visual intuitions. It assigns the same penalty for mistaking a dog for a cat as it does for mistaking two closely related species of dogs for each other. Humans have a more sensible hierarchical error allocation. This may be low-hanging fruit that is relatively easy to improve in ANNs (I'm not sure), but someone is probably working on it if it hasn't already been done.
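To make the hierarchical-error point concrete, here is a toy sketch (not ImageNet, and not any existing library's loss): a flat 0/1 penalty charges the same for every misclassification, while a hierarchy-aware penalty charges in proportion to the tree distance between the predicted and true classes. The label tree, class names, and scale factor below are all made up for illustration.

```python
# Hypothetical label hierarchy: each leaf maps to its parent, up to the root.
PARENT = {
    "beagle": "dog",
    "terrier": "dog",
    "tabby": "cat",
    "dog": "animal",
    "cat": "animal",
    "animal": None,
}

def ancestors(label):
    """Return the chain [label, parent, ..., root]."""
    chain = []
    while label is not None:
        chain.append(label)
        label = PARENT[label]
    return chain

def tree_distance(a, b):
    """Edges from a and b up to their lowest common ancestor."""
    anc_a, anc_b = ancestors(a), ancestors(b)
    common = set(anc_a) & set(anc_b)
    da = next(i for i, x in enumerate(anc_a) if x in common)
    db = next(i for i, x in enumerate(anc_b) if x in common)
    return da + db

def flat_loss(pred, true):
    """ImageNet-style: any wrong answer costs the same."""
    return 0.0 if pred == true else 1.0

def hierarchical_loss(pred, true, scale=0.25):
    """Closely related confusions cost less than distant ones."""
    return scale * tree_distance(pred, true)

# A beagle/terrier mix-up costs the hierarchical loss far less
# than a beagle/tabby mix-up, while the flat loss can't tell them apart:
print(flat_loss("terrier", "beagle"))          # 1.0
print(flat_loss("tabby", "beagle"))            # 1.0
print(hierarchical_loss("terrier", "beagle"))  # 0.5
print(hierarchical_loss("tabby", "beagle"))    # 1.0
```

In practice one would fold a distance like this into the training objective (e.g. as per-pair weights on the confusion cost) rather than use it as a standalone metric, but the sketch shows why "dog vs. cat" and "beagle vs. terrier" deserve different penalties.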
This doesn't seem right, assuming that "considerably worse" means "considerably worse accuracy at classifying objects not drawn from ImageNet". Do you have a study in mind that shows this? In any case, I don't think this is strong enough to support the claim that the classifier isn't breaking down: it's pretty clearly making mistakes where humans would find the answer obvious. I don't think that saying the ANN answers more quickly is a very strong defense.