If humans are Turing complete then AI might be faster and less biased but not anything that would resemble the "human vs. dog" comparison.
Humans definitely are Turing complete: we can simulate Turing machines precisely in our heads, with pen-and-paper, and with computers. (Hence people can dispute whether the human Alan Turing specified TMs to be usable by humans or whether TMs have some universally meaningful status due to laws of physics or mathematics.)
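To make "simulating a Turing machine precisely" concrete, here is a minimal sketch of what that rote rule-following looks like. The transition table is my own illustrative assumption (a machine that inverts a binary string and halts), not anything from the discussion above:

```python
def run_tm(tape, transitions, state="start", blank="_", max_steps=1000):
    """Run a Turing machine. `transitions` maps (state, symbol) ->
    (new_state, write_symbol, move), with move in {-1, 0, +1}."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Example table: walk right, flipping each bit, halt at the first blank.
invert = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}

print(run_tm("0110", invert))  # -> 1001
```

Every step is a mechanical table lookup; a human with pen and paper (or rocks) can execute the same table, just many orders of magnitude slower.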
So in a sense the AI can "only" be faster. This is still very powerful if it's, say, 10^9 times as fast as a human. Game-changingly powerful. A single AI could think, serially, all the thoughts and new ideas it would take all of humanity to think in parallel.
But an AI can also run much better algorithms. It doesn't matter that we're Turing-complete or how fast we are, if the UTM algorithm we humans are actually executing is hard-wired to revolve around social competition and relationships with other humans! In a contest of e.g. scientific thought, it's pretty clear that there exist algorithms that are much better qualitatively than the output of human research communities.
That's without getting into recursive self-improvement territory. An AI would be much better than humans simply by virtue of being immune to boredom, sleep, akrasia, and known biases; able to instantaneously self-modify to eliminate point bugs (and to self-debug in the first place); equipped with working memory and storage unlimited by human standards; given direct, neural-level (in human terms) access to the Internet and all existing relevant databases of knowledge; probably able to write dedicated (conventional) software as fast and efficient as our sensory modalities (humans are pretty bad at general-purpose programming because we use general-purpose consciousness to do it); able to fully update its behavior on new knowledge; able to directly integrate new knowledge and other AIs' output into itself; etc. etc.
You say an AI might be "less biased" than humans off-handedly, but that too is a Big Difference. Imagine all humans at some point in history are magically rid of all biases known to us today, and gain an understanding and acceptance of everything we know today about rationality and thought. How long would it take those humans to overtake us technologically? I'd guess no more than a few centuries, no matter where you started (after the shift to agriculture).
To sum up, the difference between humans and a sufficiently good AI wouldn't be the same as that between humans and a dog, or even of the same type. It's a misleading comparison and maybe that's one reason why you reject it. It would, however, lead to definite outright AI victory in many contests, due to the AI's behavior (rather than its external resources etc). And that generalization is what we name "greater intelligence".
So in a sense the AI can "only" be faster.
And more reliable. Humans can't simulate a Turing machine beyond a certain level of complexity without making mistakes. We will eventually misplace a rock.
The failures of phlogiston and vitalism are historical hindsight. Dare I step out on a limb, and name some current theory which I deem analogously flawed?
I name artificial intelligence or thinking machines - usually defined as the study of systems whose high-level behaviors arise from "thinking" or the interaction of many low-level elements. (R. J. Sternberg quoted in a paper by Shane Legg: “Viewed narrowly, there seem to be almost as many definitions of intelligence as there were experts asked to define it.”) Taken literally, that definition admits infinitely many degrees of intelligence and fits every phenomenon in our universe above the level of individual quarks, which is part of the problem. Imagine pointing to a chess computer and saying "It's not a stone!" Does that feel like an explanation? No? Then neither should saying "It's a thinking machine!"
It's the noun "intelligence" that I protest, rather than the verb phrase "evoke a dynamic state sequence from a machine by computing an algorithm". There's nothing wrong with saying "X computes algorithm Y", where Y is some specific, detailed flowchart that represents an algorithm or process. "Thinking about" is another legitimate phrase that means exactly the same thing: The machine is thinking about a problem, according to a specific algorithm. The machine is thinking about how to put the elements of a list in a certain order, according to a specific algorithm called quicksort.
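That "specific, detailed flowchart" can be written out in full. Here is a textbook sketch of quicksort (not an optimized implementation) to show what the legitimate, non-mysterious usage cashes out to:

```python
def quicksort(xs):
    """Sort a list: pick a pivot, partition, recurse on each side."""
    if len(xs) <= 1:
        return xs  # a list of 0 or 1 elements is already sorted
    pivot, rest = xs[0], xs[1:]
    left = [x for x in rest if x < pivot]    # elements below the pivot
    right = [x for x in rest if x >= pivot]  # elements at or above it
    return quicksort(left) + [pivot] + quicksort(right)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # -> [1, 1, 2, 3, 4, 5, 6, 9]
```

Saying "the machine is thinking about how to order the list, according to quicksort" points at exactly this flowchart; there is no residual mystery to the word "thinking" once the flowchart is supplied.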
Now suppose instead I were to say that a problem is explained by "thinking", or that the order of elements in a list is the result of a "thinking machine", and claimed that as my explanation.
The phrase "evoke a dynamic state sequence from a machine by computing an algorithm" is acceptable, just like "thinking about" or "is caused by" are acceptable, if the phrase precedes some specification to be judged on its own merits.
However, this is not the way "intelligence" is commonly used. "Intelligence" is commonly used as an explanation in its own right.
I have lost track of how many times I have heard people say, "an artificial general intelligence would have a genuine intelligence advantage" as if that explained its advantage. This usage fits all the checklist items for a mysterious answer to a mysterious question. What do you know, after you have said that its "advantage" is "intelligence"? You can make no new predictions. You do not know anything about the behavior of real-world artificial general intelligence that you did not know before. It feels like you believe a new fact, but you don't anticipate any different outcomes. Your curiosity feels sated, but it has not been fed. The hypothesis has no moving parts - there's no detailed internal model to manipulate. Those who proffer the hypothesis of "intelligence" confess their ignorance of the internals, and take pride in it; they contrast the science of "artificial general intelligence" to other sciences merely mundane.
And even after the answer of "How? Intelligence!" is given, the practical realization is still a mystery and possesses the same sacred impenetrability it had at the start.
A fun exercise is to eliminate the explanation "intelligence" from any sentence in which it appears, and see if the sentence says anything different:

Before: An artificial general intelligence would have a genuine intelligence advantage.
After: An artificial general intelligence would have a genuine advantage.
Another fun exercise is to replace "intelligence" with "magic", the explanation that people had to use before the idea of an intelligence explosion was invented:

Before: An artificial general intelligence would have a genuine intelligence advantage.
After: An artificial general intelligence would have a genuine magic advantage.
Does not each statement convey exactly the same amount of knowledge about the phenomenon's behavior? Does not each hypothesis fit exactly the same set of outcomes?
"Intelligence" has become very popular, just as saying "magic" used to be very popular. "Intelligence" has the same deep appeal to human psychology, for the same reason. "Intelligence" is such a wonderfully easy explanation, and it feels good to say it; it gives you a sacred mystery to worship. Intelligence is popular because it is the junk food of curiosity. You can explain anything using intelligence, and so people do just that; for it feels so wonderful to explain things. Humans are still humans, even if they've taken a few science classes in college. Once they find a way to escape the shackles of settled science, they get up to the same shenanigans as their ancestors, dressed up in the literary genre of "science" but still the same species psychology.