You know how there are people who, even though you could train them to carry out the steps of a Universal Turing Machine, you can't manage to teach them linear algebra, so there are problems they can't even represent compared to people who know linear algebra? I can't exhibit to you specifically what it is for obvious reasons, but there's going to be lots of stuff like that where a human brain just can't grok it, even though - like a sufficiently well-trained dog - we could be trained to carry out the operations of a UTM that did grok it, given infinite time and paper. You could train a chimp to simulate a human brain given infinite time and paper, I've little doubt. So what? There was still a huge jump in qualitative ability.
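Before going further, it may help to make concrete what "carrying out the operations" of a Turing machine in the quote above involves. Here is a minimal sketch in Python; the machine, tape encoding, and example program are my own illustrative choices, and it simulates an arbitrary fixed machine rather than a full universal one. The point is that each step is a purely mechanical table lookup, requiring no understanding of what is being computed.

```python
# Minimal Turing machine simulator: every step is a mechanical table lookup,
# which is all the "trained human" in the quote would need to perform.
from collections import defaultdict

def run_tm(transitions, tape, state="start", pos=0, max_steps=10_000):
    """Run a Turing machine until it halts or exceeds max_steps.

    transitions: dict mapping (state, symbol) -> (new_state, new_symbol, move)
    tape: initial tape contents as a string; blank cells read as '_'
    """
    tape = defaultdict(lambda: "_", enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            break
        # Look up the rule, overwrite the current cell, and move the head.
        state, tape[pos], move = transitions[(state, tape[pos])]
        pos += {"L": -1, "R": 1}[move]
    return "".join(tape[i] for i in sorted(tape))

# Example program: flip every bit on the tape, halting at the first blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_tm(flip, "10110"))  # -> 01001_
```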
In this post I examine a disagreement between Eliezer Yudkowsky and the science fiction author Greg Egan.
In his post Complex Novelty, Eliezer Yudkowsky wrote in 2008:
A 2009 interview with Greg Egan confirmed this:
The theoretical computer scientist Scott Aaronson wrote in a post titled 'The Singularity Is Far':
One argument that is often mentioned is the relatively small difference between chimpanzees and humans, where a small change produced a huge increase in intelligence. But that effect looks like an outlier, not the rule. Take, for example, the evolution of echolocation: it seems to have been a gradual process with no obvious quantum leaps. The same can be said of eyes and other features that biological agents acquired through natural evolution.
Is it reasonable to assume that such quantum leaps are the rule, based on a single case study? Are there other animals that are vastly more intelligent than their immediate predecessors?
What reason do we have to believe that a level above that of a standard human exists at all, one as incomprehensible to us as higher mathematics is to chimps? And even if such a level is possible, what reason do we have to believe that an artificial general intelligence could repeatedly uplift itself to a level incomprehensible from the one below it?
To be clear, I do not doubt the possibility of superhuman AI or ems (whole brain emulations). Nor do I doubt the importance of "friendliness" research, or that the problem will have to be solved before we invent (discover?) superhuman AI. But I lack the expertise to conclude that there are levels of comprehension that are unfathomable to us even in principle.
In Complexity and Intelligence, Eliezer wrote:
If we were able to specify the laws of physics, and one of the effects of computing them turned out to be a superhuman intelligence incomprehensible to us, what would 'incomprehensible' mean in this context?
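One way to sharpen the question is the Kolmogorov-complexity framing that Complexity and Intelligence uses. The inequality below is a standard fact about description length, stated here in my own notation rather than quoted from the post: if a short program $P$ computes the laws of physics, then the full state $x$ of any mind arising at time step $t$ and location $s$ within that computation satisfies

$$K(x) \;\le\; |P| \;+\; \log_2 t \;+\; \log_2 s \;+\; O(1).$$

However alien such a mind might seem, its algorithmic description is no longer than the physics plus the few bits needed to point at it. 'Incomprehensible' therefore cannot mean 'algorithmically complex'; it would have to mean something about the limits of the observer.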
I can imagine quite a few ways in which a normal human being could fail to comprehend the workings of another being. One example can be found in the previously mentioned post by Scott Aaronson:
Aaronson provides another fascinating example in an unrelated post ('The T vs. HT (Truth vs. Higher Truth) problem'):
These two examples suggest that even beings who are fundamentally on the same level can fail to comprehend each other.
An agent might simply be more knowledgeable, or lack certain key insights. Conceptual revolutions can be so intellectually and technologically enabling that they appear to produce quantum leaps in the ability to comprehend certain problems.
Faster access to more information, differences in upbringing, education, culture, and environment, and plain luck might also separate agents of similar potential to the point where they appear to reside on different levels. But even the smartest humans are dwarfs standing on the shoulders of giants; sometimes the time is simply ripe, thanks to earlier discoveries of unknown unknowns.
As Scott Aaronson notes, the ability to think faster, and to think deeper by holding more data in memory, might create the appearance of superhuman intelligence and incomprehensible insight.
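A toy calculation (the speedup factors below are my own illustrative assumptions, not Aaronson's figures) shows how much subjective thinking time raw speed alone could buy:

```python
# Illustrative only: subjective thinking time gained per wall-clock hour
# for a mind running at various speedups over a baseline human.
HOURS_PER_YEAR = 24 * 365

for speedup in (10, 1_000, 1_000_000):
    subjective_years = speedup / HOURS_PER_YEAR  # per one wall-clock hour
    print(f"{speedup:>9,}x speedup: {subjective_years:10.4f} subjective years per hour")
```

At a millionfold speedup, a single hour buys over a century of subjective thought; yet each individual thought would still be one a human could, in principle, have had. Speed alone produces the appearance of a higher level without settling whether one exists.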
Yet all of the above hints only at the possibility that human intelligence can be amplified and that we can become more knowledgeable. Given enough time, standard humans could accomplish the same, so none of it demonstrates a genuinely higher level.
What would it mean for an intelligence to be genuinely incomprehensible? And where exactly do Eliezer Yudkowsky and Greg Egan disagree?