Quantitative limitations amount to qualitative limitations in this case.
The only truly universal TM has infinite memory and is infinitely programmable. Neither is true of humans.
We can't completely wipe and reload our brains, so we might be forever constrained by some fundamental hardcoding, something like Chomskyan innate linguistic structures or Kantian perceptual categories.
And having quantitative limitations puts a ceiling on which concepts and theories we can entertain, which is effectively a qualitative limit.
AIs are also finite, although they might have less restrictive limits.
There's no jump to universality because there is no jump to infinity.
Turing completeness misses some important qualitative properties of what it means for people to understand something. When I understand something I don't merely compute it, I form opinions about it, I fit it into a schema for thinking about the world, I have a representation of it in some latent space that allows it to be transformed in appropriate ways, etc.
I could, given a notebook of infinite size, infinite time, and lots of drugs, probably compute the Ackermann function A(5,5). But this has little to do with my ability to understand the result in the sense of being able to tell a story about the result to myself. In fact, there are things I can understand without actually computing, so long as I can form opinions about it, fit it into a picture of the world, represent it in a way that allows for transformations, etc.
The quotes aren't about Turing completeness. What you wrote is irrelevant to the quoted material.
For if the ‘capacity’ in question is mere computational speed and amount of memory, then we can understand the aspects in question with the help of computers – just as we have understood the world for centuries with the help of pencil and paper. As Einstein remarked, ‘My pencil and I are more clever than I.’
Sure, but then the understanding must lie in the combined human-pencil system, not the human brain alone, just as a human slowly following instructions in (forgive my use of this thought experiment, but it is an extension of the same idea) Searle's Chinese room doesn't understand Mandarin, even if the instructions they're executing do. An AI's CPU is not itself conscious, even if the AI is. The key in Einstein's case is that after he wrote everything down as a memory aid and an error-correcting mechanism, the important points the pencil made were stored and processed in his brain, and he could reason further with them. You could show me a Matrioshka brain simulating a human with Planck-scale precision, and prove to me it did so, but even if I built the thing myself I still wouldn't understand it in the way I usually use the word "understand." As in thermodynamics, at some point more is qualitatively different.
Now, if you very slowly augmented my brain with better hardware (and/or wetware), such that my thoughts really interfaced seamlessly across my evolved biological brain and any added components, then I would come to consider those components part of my mind rather than external tools. So in that sense, yes, future-me could come to understand anything.
That just doesn't mean future-me could come back in time and explain it in a way current-me could grasp, any more than I could meaningfully explain the implications of group theory for semiconductor physics to kindergarten-me (early-high-school-me could probably follow it with some extra effort, though). Kindergarten-me knew enough basic arithmetic and could have learned the symbol manipulations needed for Boolean logic (I think Scratch and Scratch Jr are proof enough that young kids are capable of this when it is presented correctly), so there's no computational operation he couldn't perform. He'd just have no idea why he'd be doing any of it, or how it related to anything else, and if he forgot it he couldn't re-derive it and might not even notice the loss. It would not be truly part of him.
In The Beginning of Infinity, David Deutsch claims that the world is explicable and that human beings can explain anything that can be explained (Chapter 3):
Deutsch claims that an AI would be a universal explainer (Chapter 7):
So according to Deutsch there is no qualitative distinction between an AI and a human being. Comments?