The failures of phlogiston and vitalism are easy to see with historical hindsight. Dare I step out on a limb, and name some current theory which I deem analogously flawed?
I name artificial intelligence, or thinking machines - usually defined as the study of systems whose high-level behaviors arise from "thinking" or from the interaction of many low-level elements. (R. J. Sternberg, quoted in a paper by Shane Legg: “Viewed narrowly, there seem to be almost as many definitions of intelligence as there were experts asked to define it.”) Taken literally, that allows for infinitely many degrees of intelligence to fit every phenomenon in our universe above the level of individual quarks, which is part of the problem. Imagine pointing to a chess computer and saying "It's not a stone!" Does that feel like an explanation? No? Then neither should saying "It's a thinking machine!"
It's the noun "intelligence" that I protest, rather than the phrase "evoke a dynamic state sequence from a machine by computing an algorithm". There's nothing wrong with saying "X computes algorithm Y", where Y is some specific, detailed flowchart that represents an algorithm or process. "Thinking about" is another legitimate phrase that means exactly the same thing: the machine is thinking about a problem, according to a specific algorithm. The machine is thinking about how to put the elements of a list in a certain order, according to a specific algorithm called quicksort.
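To make the contrast concrete, here is a minimal Python sketch of quicksort (a toy illustration, not an optimized implementation): every step is a specific, inspectable operation you could point to, which is exactly what the bare label "intelligence" does not give you.

```python
# Minimal quicksort sketch: a specific, detailed procedure, not an appeal to "intelligence".
def quicksort(items):
    if len(items) <= 1:
        return items                       # a list of 0 or 1 elements is already in order
    pivot = items[0]                       # choose a pivot element
    smaller = [x for x in items[1:] if x < pivot]
    larger = [x for x in items[1:] if x >= pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # -> [1, 1, 2, 3, 4, 5, 6, 9]
```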
Now suppose I were to say that a problem is explained by "thinking", or that the order of the elements in a list is the result of a "thinking machine", and claim that as my explanation.
The phrase "evoke a dynamic state sequence from a machine by computing an algorithm" is acceptable, just like "thinking about" or "is caused by" are acceptable, if the phrase precedes some specification to be judged on its own merits.
However, this is not the way "intelligence" is commonly used. "Intelligence" is commonly used as an explanation in its own right.
I have lost track of how many times I have heard people say, "an artificial general intelligence would have a genuine intelligence advantage" as if that explained its advantage. This usage fits all the checklist items for a mysterious answer to a mysterious question. What do you know, after you have said that its "advantage" is "intelligence"? You can make no new predictions. You do not know anything about the behavior of real-world artificial general intelligence that you did not know before. It feels like you believe a new fact, but you don't anticipate any different outcomes. Your curiosity feels sated, but it has not been fed. The hypothesis has no moving parts - there's no detailed internal model to manipulate. Those who proffer the hypothesis of "intelligence" confess their ignorance of the internals, and take pride in it; they contrast the science of "artificial general intelligence" to other sciences merely mundane.
And even after the answer of "How? Intelligence!" is given, the practical realization is still a mystery and possesses the same sacred impenetrability it had at the start.
A fun exercise is to eliminate the explanation "intelligence" from any sentence in which it appears, and see if the sentence says anything different:
- Before: The AI is going to take over the world by using its superhuman intelligence to invent nanotechnology.
- After: The AI is going to take over the world by inventing nanotechnology.
- Before: A friendly AI is going to use its superhuman intelligence to extrapolate the coherent volition of humanity.
- After: A friendly AI is going to extrapolate the coherent volition of humanity.
- Even better: A friendly AI is a powerful algorithm. We can successfully extrapolate some aspects of the volition of individual humans using [FILL IN DETAILS] procedure, without any global societal variables, showing that we understand how to extrapolate the volition of humanity in theory and that it converges rather than diverges, that our wishes cohere rather than interfere.
Another fun exercise is to replace "intelligence" with "magic", the explanation that people had to use before the idea of an intelligence explosion was invented:
- Before: The AI is going to use its superior intelligence to quickly evolve vastly superhuman capabilities and reach singleton status within a matter of weeks.
- After: The AI is going to use magic to quickly evolve vastly superhuman capabilities and reach singleton status within a matter of weeks.
- Before: Superhuman intelligence is able to use the internet to gain physical manipulators and expand its computational capabilities.
- After: Superhuman magic is able to use the internet to gain physical manipulators and expand its computational capabilities.
Does not each statement convey exactly the same amount of knowledge about the phenomenon's behavior? Does not each hypothesis fit exactly the same set of outcomes?
"Intelligence" has become very popular, just as saying "magic" used to be very popular. "Intelligence" has the same deep appeal to human psychology, for the same reason. "Intelligence" is such a wonderfully easy explanation, and it feels good to say it; it gives you a sacred mystery to worship. Intelligence is popular because it is the junk food of curiosity. You can explain anything using intelligence , and so people do just that; for it feels so wonderful to explain things. Humans are still humans, even if they've taken a few science classes in college. Once they find a way to escape the shackles of settled science, they get up to the same shenanigans as their ancestors, dressed up in the literary genre of "science" but still the same species psychology.
Upvoted and hopefully answered :-)
Specifically, I think you might be missing the halo effect, the fundamental attribution error, survivorship bias, and strategic signalling to gain access to power, influence, and money.
What is the nature of the property that the general would have a 93% chance of having? Is it a property you'd hypothesize was shared by about 7% of all humans in history? Is it shared by 7% of extant generals? What if the internal details of the property you hypothesize is being revealed are such that no general actually has it, even though some general always wins each battle? How would you distinguish between these outcomes? How many real full-scale battles would be necessary, and how expensive would they be to run, to push P(at least one general has the trait) and P(a specific general has the trait | at least one general has the trait) close to 1 or 0?
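To make that evidential question concrete, here is a toy Python simulation (my own illustrative assumptions: 16 equally matched generals in a single-elimination tournament, each battle decided by a fair coin flip; none of these numbers come from the thread). It shows that some general always ends up undefeated even when no general has any special trait, which is the luck-plus-survivorship baseline any trait hypothesis has to beat.

```python
import random

# Toy model (assumed parameters): 16 equally matched generals fight a
# single-elimination tournament, and every battle is a fair coin flip.
# Someone always ends up undefeated, even though no general "has the trait"
# of superior generalship.
def tournament(n_generals=16):
    generals = list(range(n_generals))
    while len(generals) > 1:
        # Pair up the remaining generals and let chance pick each battle's winner.
        generals = [random.choice(pair) for pair in zip(generals[::2], generals[1::2])]
    return generals[0]  # the undefeated "winner", produced by luck alone

random.seed(0)
trials = 10_000
wins = sum(tournament() == 0 for _ in range(trials))
# Any particular general wins the whole tournament about 1/16 of the time (~6%),
# yet an observer who only ever looks at winners always sees an "unbeatable" general.
print(wins / trials)
```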
XiXiDu titled his article "The Futility Of Intelligence". What I'm proposing is something more like "The Use And Abuse Of Appearances Of General Intelligence, And What Remains Of The Theory Of General Intelligence After Subtracting Out This Noise". I think that there is something left, but I suspect it isn't as magically powerful or generic as is sometimes assumed, especially around these parts. You have discussed similar themes in the past in less mechanistic and more personal, friendly, humanized, and generally better written forms :-)
This point is consonant with ryjm's sibling comment, but if my suspicions stand, then the implications are not simply "subtle and not incredibly useful" but concretely personal: it suggests that studying important domains is more important than studying abstractions about how to study, unless abstraction+domain is faster to acquire than the domain itself (and abstraction+abstraction+domain faces similar constraints, which is again not a particularly original insight). The same suspicion applies to political discourse and dynamics, where it suggests that claims of generic capacity are frequently false, except when precise mechanisms are spelled out, as with market pricing as a reasonably robust method for coordinating complex behaviors to achieve outcomes no individual could achieve on their own.
A roughly analogous issue comes up in the selection of "actively managed" investment funds. All of them charge something for their cognitive labor and some of them actually add value thereby, but a lot of it is just survivorship bias and investor gullibility. "Past performance is no guarantee of future results." Companies in that industry will regularly create new investment funds, run them for a while, and put the "funds that have survived with the best results so far" on their investment brochures while keeping their other investment funds in the background where stinkers can be quietly culled. It's a good trick for extracting rent from marks, but it's not the sort of thing that would be done if there were solid and simple evidence of a "real" alpha that investors could pay attention to as a useful and generic predictor of future success without knowing much about the context.
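As a toy illustration of that culling-and-showcasing dynamic (a hedged sketch with invented parameters: 200 funds, 5-year windows, identical zero-skill return distributions; not a model of any real fund family), the Python snippet below "advertises" the best past performers and then checks how they do out of sample.

```python
import random

random.seed(1)

def annual_returns(n_years, mean=0.07, stdev=0.15):
    # Every fund draws from the same distribution: zero skill ("alpha") by construction.
    return [random.gauss(mean, stdev) for _ in range(n_years)]

# Launch 200 funds, run them for 5 years, and put the 10 best on the brochure.
funds = {f"fund_{i}": annual_returns(5) for i in range(200)}
brochure = sorted(funds, key=lambda f: sum(funds[f]), reverse=True)[:10]

# Out-of-sample check: the advertised "winners" revert toward the ordinary mean,
# because their past edge was luck plus selective culling, not skill.
future = {f: annual_returns(5) for f in brochure}
past_avg = sum(sum(funds[f]) for f in brochure) / (10 * 5)
future_avg = sum(sum(future[f]) for f in brochure) / (10 * 5)
print(f"advertised funds, past mean annual return:   {past_avg:.1%}")
print(f"advertised funds, future mean annual return: {future_avg:.1%}")
```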
I have a strong suspicion, and I'd love this hunch to be proved wrong, that there are mostly no free lunches when it comes to epistemology. Being smart about one investment regime is not the same as being smart about another investment regime. Being a general and playing chess share relatively little cross-applicable knowledge. Being good at chess has relatively little in common with being good at the abstractions of game theory.
With this claim (which I'm not entirely sure of, because it's very abstract and hard to ground in observables) I'm not saying that an AGI that implements something like "general learning ability in silicon and steel" wouldn't be amazing or socially transformative, and I'm not saying that extreme rationality is worthless; it's more that I'm claiming it's not magic, with a sub-claim that sometimes some people seem to speak (and act?) as though they think it might be magic. Like they can hand-wave the details because they've posited "being smarter" as an ontologically basic property rather than as a summary for having nailed down many details in a functional whole. If you adopt an implementation perspective, then the summary evaporates, because the details are what remain before you to manipulate.
So I'm interpreting your point as being "What if what we think of when we say 'general intelligence' isn't really all that useful in different domains, but we keep treating it as if it were the kind of thing that could constantly win battles or conquer Rome or whatever?" Perhaps then it was a mistake to talk about generals in battle, since your theory is that there may be an especially victorious general, but his fortune may be due more to some specific skill at tactics than to his general intelligence?
I guess my belief in the utility of general intell...