The failures of phlogiston and vitalism are historical hindsight. Dare I step out on a limb, and name some current theory which I deem analogously flawed?
I name artificial intelligence or thinking machines - usually defined as the study of systems whose high-level behaviors arise from "thinking" or the interaction of many low-level elements. (R. J. Sternberg quoted in a paper by Shane Legg: “Viewed narrowly, there seem to be almost as many definitions of intelligence as there were experts asked to define it.”) Taken literally, that allows for infinitely many degrees of intelligence to fit every phenomenon in our universe above the level of individual quarks, which is part of the problem. Imagine pointing to a chess computer and saying "It's not a stone!" Does that feel like an explanation? No? Then neither should saying "It's a thinking machine!"
It's the noun "intelligence" that I protest, not the verb phrase "evoke a dynamic state sequence from a machine by computing an algorithm". There's nothing wrong with saying "X computes algorithm Y", where Y is some specific, detailed flowchart or process. "Thinking about" is another legitimate phrase that means exactly the same thing: the machine is thinking about a problem, according to a specific algorithm. The machine is thinking about how to put the elements of a list in a certain order, according to a specific algorithm called quicksort.
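To make "specific, detailed flowchart" concrete, here is a minimal sketch of quicksort, the algorithm named above. The point is that "the machine computes this" carries real content in a way that a bare label does not; any correct sorting procedure would illustrate the same thing.

```python
def quicksort(xs):
    """Sort a list by recursively partitioning around a pivot.

    This is the 'specific algorithm' the machine is 'thinking about':
    pick a pivot, split the rest into smaller and not-smaller elements,
    sort each side, and concatenate.
    """
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    smaller = [x for x in rest if x < pivot]
    larger = [x for x in rest if x >= pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)


print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # → [1, 1, 2, 3, 4, 5, 6, 9]
```

Naming the flowchart, step by step, is what licenses the phrase "thinking about"; the flowchart is the explanation, not the word.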
Now suppose instead that I say a problem is explained by "thinking", or that the order of elements in a list is the result of a "thinking machine", and claim that as my explanation.
The phrase "evoke a dynamic state sequence from a machine by computing an algorithm" is acceptable, just like "thinking about" or "is caused by" are acceptable, if the phrase precedes some specification to be judged on its own merits.
However, this is not the way "intelligence" is commonly used. "Intelligence" is commonly used as an explanation in its own right.
I have lost track of how many times I have heard people say, "an artificial general intelligence would have a genuine intelligence advantage" as if that explained its advantage. This usage fits all the checklist items for a mysterious answer to a mysterious question. What do you know, after you have said that its "advantage" is "intelligence"? You can make no new predictions. You do not know anything about the behavior of real-world artificial general intelligence that you did not know before. It feels like you believe a new fact, but you don't anticipate any different outcomes. Your curiosity feels sated, but it has not been fed. The hypothesis has no moving parts - there's no detailed internal model to manipulate. Those who proffer the hypothesis of "intelligence" confess their ignorance of the internals, and take pride in it; they contrast the science of "artificial general intelligence" to other sciences merely mundane.
And even after the answer of "How? Intelligence!" is given, the practical realization is still a mystery and possesses the same sacred impenetrability it had at the start.
A fun exercise is to eliminate the explanation "intelligence" from any sentence in which it appears, and see if the sentence says anything different:
- Before: The AI is going to take over the world by using its superhuman intelligence to invent nanotechnology.
- After: The AI is going to take over the world by inventing nanotechnology.
- Before: A friendly AI is going to use its superhuman intelligence to extrapolate the coherent volition of humanity.
- After: A friendly AI is going to extrapolate the coherent volition of humanity.
- Even better: A friendly AI is a powerful algorithm. We can successfully extrapolate some aspects of the volition of individual humans using [FILL IN DETAILS] procedure, without any global societal variables, showing that we understand how to extrapolate the volition of humanity in theory and that it converges rather than diverges, that our wishes cohere rather than interfere.
Another fun exercise is to replace "intelligence" with "magic", the explanation that people had to use before the idea of an intelligence explosion was invented:
- Before: The AI is going to use its superior intelligence to quickly evolve vastly superhuman capabilities and reach singleton status within a matter of weeks.
- After: The AI is going to use magic to quickly evolve vastly superhuman capabilities and reach singleton status within a matter of weeks.
- Before: Superhuman intelligence is able to use the internet to gain physical manipulators and expand its computational capabilities.
- After: Superhuman magic is able to use the internet to gain physical manipulators and expand its computational capabilities.
Does not each statement convey exactly the same amount of knowledge about the phenomenon's behavior? Does not each hypothesis fit exactly the same set of outcomes?
"Intelligence" has become very popular, just as saying "magic" used to be very popular. "Intelligence" has the same deep appeal to human psychology, for the same reason. "Intelligence" is such a wonderfully easy explanation, and it feels good to say it; it gives you a sacred mystery to worship. Intelligence is popular because it is the junk food of curiosity. You can explain anything using "intelligence", and so people do just that; for it feels so wonderful to explain things. Humans are still humans, even if they've taken a few science classes in college. Once they find a way to escape the shackles of settled science, they get up to the same shenanigans as their ancestors, dressed up in the literary genre of "science" but still the same species psychology.
Compare "intelligent" to "fast".
I say "The cheetah will win the race, because it is very fast."
This has some explanatory power. It can distinguish between various reasons a cheetah might win a race: maybe it had a head start, or its competitors weren't trying very hard, or it cheeted. Once we say "The cheetah won the race because it was fast" we know more than we did before.
The same is true of "General Lee won his battles because he was intelligent". It distinguishes the case in which he was a tactical genius from the case where he just had overwhelming numbers, or was very lucky, or had superior technology. So "intelligent" is totally meaningful here.
None of these is a lowest-level explanation. We can further explain a cheetah's fast-ness by talking about its limb musculature and its metabolism and so on, and we can further explain General Lee's intelligence by talking about the synaptic connections in his brain (probably). But we don't always need the lowest possible level of explanation; no one gets angry because we don't explain World War I by starting with "So, there were these electrons in Gavrilo Princip's brain that got acted upon by the electromagnetic force..."
A Mysterious Explanation isn't just any use of a non-lowest-level explanatory word. It's when you explain something on one level by pointing to something on that same level.
Just as it is acceptable to say "General Lee won the battle because he was intelligent", so it is acceptable to say "The AI would conquer Rome because it was intelligent".
(just as it is acceptable to say "cavalry has an advantage over artillery because it is fast")
In fact, in the context of the quote, we were talking about the difference between a random modern human trying to take over Rome, and an AI trying to take over modern civilization. The modern human's advantage would be in technology and foreknowledge (as if General Lee won his battles by having plasma rifles and knowing all the North's moves in advance even though he wasn't that good a tactician); the AI might have those advantages, but also be more intelligent.
+Karma. The "cheeted" pun would have earned it an upvote even if this wasn't as useful as it is.