So I'm interpreting your point as being "What if what we think of when we say 'general intelligence' isn't really all that useful across different domains, but we keep treating it as if it were the kind of thing that could constantly win battles or conquer Rome or whatever?" Perhaps then it was a mistake to talk about generals in battle, as your theory is that there may be an especially victorious general, but his fortune may be due more to some specific skill at tactics than to his general intelligence?
I guess my belief in the utility of general intelligence (you cited an article of mine arguing against huge gains from technical rationality, which I consider a very different thing; here I'm talking about pure IQ) comes from a comparison with subnormal intelligence. A dog would make a terrible general. To decreasing degrees, so too would a chimp, a five-year-old child, a person with Down syndrome, and most likely a healthy person with an IQ of 75. These animals and people would also, more likely than not, be terrible chess players, mathematicians, writers, politicians, and chefs.
This is true regardless of domain-specific training: you can read von Clausewitz's On War to a dog and it will just sit there, wagging its tail. You can read it to a person with IQ 75, and most of the more complicated concepts will be lost. Maybe reading On War would let a person overcome a handicap of a few dozen IQ points, but it's not going to make a difference across a gulf the size of the one between dogs and humans.
Humans certainly didn't evolve a separate chess playing module, or a separate submarine tactics module, so we attribute our being able to wipe the floor with dogs and apes in chess or submarine warfare to some kind of "high general intelligence" we have and they don't.
So to me, belief in a general intelligence that could give AIs an advantage is just the antiprediction that the things that kept being true up until about IQ 100 still continue to be true after that bar. Just as we expect a human to be able to beat a dog at chess (even if we could get the dog to move pieces with its nose or something), and we would use the word "intelligence" to explain why, so I would expect Omega to be able to beat a human for the same reason.
Is that a little closer to the point of your objection?
First, I'd like to make sure you understand that I'm trying to explicate a hypothesis that seems to me like it could be true or false, but that seems to be considered "almost certainly false" in this community. I'm arguing for wider error bars on this subject, not a reversal of position, and also suggesting that a different set of conceptual tools (more focused on the world and less focused on "generic cognitive efficacy") is relevant.
Second: yes that is somewhat closer to the point of my objection and it also includes a wonderfully spec...
The failures of phlogiston and vitalism are historical hindsight. Dare I step out on a limb, and name some current theory which I deem analogously flawed?
I name artificial intelligence or thinking machines - usually defined as the study of systems whose high-level behaviors arise from "thinking" or the interaction of many low-level elements. (R. J. Sternberg quoted in a paper by Shane Legg: “Viewed narrowly, there seem to be almost as many definitions of intelligence as there were experts asked to define it.”) Taken literally, that allows for infinitely many degrees of intelligence to fit every phenomenon in our universe above the level of individual quarks, which is part of the problem. Imagine pointing to a chess computer and saying "It's not a stone!" Does that feel like an explanation? No? Then neither should saying "It's a thinking machine!"
It's the noun "intelligence" that I protest, rather than the phrase "evoke a dynamic state sequence from a machine by computing an algorithm". There's nothing wrong with saying "X computes algorithm Y", where Y is some specific, detailed flowchart that represents an algorithm or process. "Thinking about" is another legitimate phrase that means exactly the same thing: the machine is thinking about a problem, according to a specific algorithm. The machine is thinking about how to put the elements of a list in a certain order, according to a specific algorithm called quicksort.
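To make the contrast concrete, here is a minimal sketch of what naming a specific algorithm buys you. Saying "the machine computes quicksort" pins down an inspectable procedure with moving parts you can check and predict, whereas "the machine is thinking" does not. (The particular implementation below is my own illustrative choice, not anything specified in the discussion.)

```python
def quicksort(xs):
    """Sort a list by recursively partitioning around a pivot.

    Unlike the bare label "thinking", this specification makes
    predictions: what happens on an empty list, how ties are
    handled, what order the result comes out in.
    """
    if len(xs) <= 1:
        return xs
    pivot = xs[0]
    smaller = [x for x in xs[1:] if x < pivot]
    larger = [x for x in xs[1:] if x >= pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```

The point is not the sorting itself but that "according to quicksort" constrains anticipation in a way that "by thinking" never could.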
Now suppose I were to say that a problem is explained by "thinking", or that the order of the elements in a list is the result of a "thinking machine", and claim that as my explanation.
The phrase "evoke a dynamic state sequence from a machine by computing an algorithm" is acceptable, just like "thinking about" or "is caused by" are acceptable, if the phrase precedes some specification to be judged on its own merits.
However, this is not the way "intelligence" is commonly used. "Intelligence" is commonly used as an explanation in its own right.
I have lost track of how many times I have heard people say, "an artificial general intelligence would have a genuine intelligence advantage" as if that explained its advantage. This usage fits all the checklist items for a mysterious answer to a mysterious question. What do you know, after you have said that its "advantage" is "intelligence"? You can make no new predictions. You do not know anything about the behavior of real-world artificial general intelligence that you did not know before. It feels like you believe a new fact, but you don't anticipate any different outcomes. Your curiosity feels sated, but it has not been fed. The hypothesis has no moving parts - there's no detailed internal model to manipulate. Those who proffer the hypothesis of "intelligence" confess their ignorance of the internals, and take pride in it; they contrast the science of "artificial general intelligence" to other sciences merely mundane.
And even after the answer of "How? Intelligence!" is given, the practical realization is still a mystery and possesses the same sacred impenetrability it had at the start.
A fun exercise is to eliminate the explanation "intelligence" from any sentence in which it appears, and see if the sentence says anything different:

Before: An artificial general intelligence would have a genuine intelligence advantage.
After: An artificial general intelligence would have a genuine advantage.
Another fun exercise is to replace "intelligence" with "magic", the explanation that people had to use before the idea of an intelligence explosion was invented:

Before: An artificial general intelligence would have a genuine intelligence advantage.
After: An artificial general intelligence would have a genuine magic advantage.
Does not each statement convey exactly the same amount of knowledge about the phenomenon's behavior? Does not each hypothesis fit exactly the same set of outcomes?
"Intelligence" has become very popular, just as saying "magic" used to be very popular. "Intelligence" has the same deep appeal to human psychology, for the same reason. "Intelligence" is such a wonderfully easy explanation, and it feels good to say it; it gives you a sacred mystery to worship. Intelligence is popular because it is the junk food of curiosity. You can explain anything using intelligence , and so people do just that; for it feels so wonderful to explain things. Humans are still humans, even if they've taken a few science classes in college. Once they find a way to escape the shackles of settled science, they get up to the same shenanigans as their ancestors, dressed up in the literary genre of "science" but still the same species psychology.