In this post I examine a disagreement between Eliezer Yudkowsky and science fiction author Greg Egan.
In his 2008 post Complex Novelty, Eliezer Yudkowsky wrote:
Note that Greg Egan seems to explicitly believe the reverse - that humans can understand anything understandable - which explains a lot.
A 2009 interview with Greg Egan confirms that this is indeed his view:
… I think there’s a limit to this process of Copernican dethronement: I believe that humans have already crossed a threshold that, in a certain sense, puts us on an equal footing with any other being who has mastered abstract reasoning. There’s a notion in computing science of “Turing completeness”, which says that once a computer can perform a set of quite basic operations, it can be programmed to do absolutely any calculation that any other computer can do. Other computers might be faster, or have more memory, or have multiple processors running at the same time, but my 1988 Amiga 500 really could be programmed to do anything my 2008 iMac can do — apart from responding to external events in real time — if only I had the patience to sit and swap floppy disks all day long. I suspect that something broadly similar applies to minds and the class of things they can understand: other beings might think faster than us, or have easy access to a greater store of facts, but underlying both mental processes will be the same basic set of general-purpose tools. So if we ever did encounter those billion-year-old aliens, I’m sure they’d have plenty to tell us that we didn’t yet know — but given enough patience, and a very large notebook, I believe we’d still be able to come to grips with whatever they had to say.
The theoretical computer scientist Scott Aaronson wrote in a post titled 'The Singularity Is Far':
The one notion I have real trouble with is that the AI-beings of the future would be no more comprehensible to us than we are to dogs (or mice, or fish, or snails). After all, we might similarly expect that there should be models of computation as far beyond Turing machines as Turing machines are beyond finite automata. But in the latter case, we know the intuition is mistaken. There is a ceiling to computational expressive power. Get up to a certain threshold, and every machine can simulate every other one, albeit some slower and others faster.
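To make the universality intuition that both Egan and Aaronson appeal to a little more concrete, here is a minimal sketch (my own illustration, not taken from either of them). A handful of primitive operations (read a cell, write a cell, move the head, switch state) suffice to run any machine you care to write down as a transition table:

```python
# A minimal Turing machine simulator: read a cell, write a cell, move the
# head, switch state. Nothing more is needed to run any machine that can be
# described as a transition table -- the universality intuition behind the
# Egan and Aaronson quotes above.
from collections import defaultdict

def run_tm(transitions, tape, state="start", blank="_", max_steps=10_000):
    """Run the machine described by `transitions` on the string `tape`.

    transitions maps (state, symbol) -> (new_state, new_symbol, move),
    where move is -1 (left) or +1 (right). Returns the final tape contents.
    """
    cells = defaultdict(lambda: blank, enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        state, cells[head], move = transitions[(state, cells[head])]
        head += move
    lo, hi = min(cells), max(cells)
    return "".join(cells[i] for i in range(lo, hi + 1)).strip(blank)

# A standard textbook example machine: increment a binary number by walking
# to its rightmost bit and then carrying leftward.
INCREMENT = {
    ("start", "0"): ("start", "0", +1),
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("carry", "_", -1),
    ("carry", "1"): ("carry", "0", -1),
    ("carry", "0"): ("halt",  "1", -1),
    ("carry", "_"): ("halt",  "1", -1),
}

print(run_tm(INCREMENT, "1011"))  # prints 1100
```

The point is not this particular machine, but that nothing beyond the few primitives inside `run_tm` is ever needed: faster hardware and bigger memories change how long the loop takes, not what it can compute.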
An argument that is often mentioned in this context is the relatively small difference between chimpanzees and humans. But that huge effect, the increase in intelligence, seems like an outlier rather than the rule. Take, for example, the evolution of echolocation: it appears to have been a gradual process with no obvious quantum leaps. The same can be said of eyes and other features that biological agents acquired through natural evolution.
Is it reasonable to assume that such quantum leaps are the rule, based on a single case study? Are there other animals that are vastly more intelligent than their immediate predecessors?
What reason do we have to believe that a level of intelligence above that of a standard human, one as incomprehensible to us as higher mathematics is to chimpanzees, exists at all? And even if such a level is possible, what reason do we have to believe that an artificial general intelligence could repeatedly uplift itself to a level that is incomprehensible from the level below it?
To be clear, I do not doubt the possibility of superhuman AI or ems. Nor do I doubt the importance of "friendliness" research, or that the problem will have to be solved before we invent (discover?) superhuman AI. But I lack the expertise to conclude that there are levels of comprehension that are not even fathomable in principle.
In Complexity and Intelligence, Eliezer wrote:
If you want to print out the entire universe from the beginning of time to the end, you only need to specify the laws of physics.
If we were able to specify the laws of physics, and one of the effects of computing them turned out to be a superhuman intelligence that is incomprehensible to us, what would 'incomprehensible' mean in this context?
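As a toy illustration of the "you only need to specify the laws" point (my own aside, not something from Yudkowsky's post): Rule 110, an elementary cellular automaton whose entire update rule fits in an eight-entry lookup table, is known to be Turing-complete, so arbitrarily rich computation can in principle unfold from a specification far shorter than anything it produces.

```python
# Rule 110: the whole "law of physics" of this toy universe is an
# eight-entry lookup table, yet the automaton is known to be
# Turing-complete (Cook, 2004).
RULE_110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(cells):
    """Apply Rule 110 once to a row of cells (cells outside the row are 0)."""
    padded = [0] + cells + [0]
    return [RULE_110[tuple(padded[i - 1:i + 2])]
            for i in range(1, len(padded) - 1)]

# Start from a single live cell and watch structure grow out of a tiny rule.
row = [0] * 40 + [1]
for _ in range(20):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

The specification stays a few lines long no matter how elaborate the behaviour that eventually unfolds from it.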
I can imagine quite a few ways in which a normal human being might fail to comprehend the workings of another being. One example can be found in the previously mentioned post by Scott Aaronson:
Now, it’s clear that a human who thought at ten thousand times our clock rate would be a pretty impressive fellow. But if that’s what we’re talking about, then we don’t mean a point beyond which history completely transcends us, but “merely” a point beyond which we could only understand history by playing it in extreme slow motion.
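To put rough numbers on that scenario (the arithmetic here is mine, not Aaronson's):

```python
# Back-of-the-envelope figures for a mind running at ten thousand times our
# clock rate.
speedup = 10_000
seconds_per_hour = 3_600
seconds_per_day = 24 * seconds_per_hour
seconds_per_year = 365.25 * seconds_per_day

# Subjective years of thought packed into one of our wall-clock hours:
print(speedup * seconds_per_hour / seconds_per_year)  # ~1.14

# Wall-clock seconds in which one of its subjective days elapses; to follow
# along, we would have to stretch those few seconds back out over a full day:
print(seconds_per_day / speedup)  # 8.64
```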
Aaronson provides another fascinating example in an unrelated post ('The T vs. HT (Truth vs. Higher Truth) problem'):
P versus NP is the example par excellence of a mathematical mystery that human beings lacked the language even to express until very recently in our history.
Those two examples provide evidence for the possibility that even beings who are fundamentally on the same level might yet fail to comprehend each other.
One agent might simply be more knowledgeable than another, or lack certain key insights. Conceptual revolutions are so intellectually and technologically enabling that they can look like quantum leaps in the ability to comprehend certain problems.
Faster access to more information, differences in upbringing, education, culture, and environment, and plain luck might also separate agents with similar potential to the point where they appear to reside on different levels. But even the smartest humans are dwarfs standing on the shoulders of giants. Sometimes the time is simply ripe, thanks to previous discoveries of unknown unknowns.
As Scott Aaronson notes, the ability to think faster, and also to think deeper by holding more data in memory, might create the appearance of superhuman intelligence and incomprehensible insight.
Yet all of the above merely hints at the possibility that human intelligence can be amplified and that we can become more knowledgeable. Given enough time, standard humans could accomplish the same.
What would it mean for an intelligence to be genuinely incomprehensible? Where do Eliezer Yudkowsky and Greg Egan disagree?
Edit: Turns out I misunderstood Greg Egan, and probably Eliezer Yudkowsky. What I thought was Egan's position is Aaronson's unless I misunderstood him too.
Paraphrase of Greg Egan's position (if I and XiXiDu understand correctly): "Given enough time, humans can understand anything. In practice we still get squashed by AIs, since they're much faster, but slow them down and we're equals."
Paraphrase of Eliezer Yudkowsky's position (same disclaimer): "There are things that humans simply cannot understand, ever, no matter how long it takes, but that other minds can understand." (I'm not sure what happens if you brute-force insightspace.)
I think that your impressions are at least implicitly inaccurate, unless your quote marks are actually indicating quotes I haven't seen. (If not, perhaps you should paraphrase in a way that doesn't look like direct quoting?) Greg Egan thinks that AIs are not a problem even considering (and dismissing as impossible?) their speed advantage, as far as I can tell. So, practically speaking, he thinks this uFAI alarmism is wrong and maybe contemptible, again as far as I can tell. Eliezer's impression might be that there are things humans can never understand, but …