In this post I examine a disagreement between Eliezer Yudkowsky and science fiction author Greg Egan.

In his post Complex Novelty, Eliezer Yudkowsky wrote in 2008:

Note that Greg Egan seems to explicitly believe the reverse - that humans can understand anything understandable - which explains a lot.

A 2009 interview with Greg Egan confirmed this:

I think there’s a limit to this process of Copernican dethronement: I believe that humans have already crossed a threshold that, in a certain sense, puts us on an equal footing with any other being who has mastered abstract reasoning. There’s a notion in computing science of “Turing completeness”, which says that once a computer can perform a set of quite basic operations, it can be programmed to do absolutely any calculation that any other computer can do. Other computers might be faster, or have more memory, or have multiple processors running at the same time, but my 1988 Amiga 500 really could be programmed to do anything my 2008 iMac can do — apart from responding to external events in real time — if only I had the patience to sit and swap floppy disks all day long. I suspect that something broadly similar applies to minds and the class of things they can understand: other beings might think faster than us, or have easy access to a greater store of facts, but underlying both mental processes will be the same basic set of general-purpose tools. So if we ever did encounter those billion-year-old aliens, I’m sure they’d have plenty to tell us that we didn’t yet know — but given enough patience, and a very large notebook, I believe we’d still be able to come to grips with whatever they had to say.
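
To make Egan's appeal to Turing completeness concrete, here is a minimal sketch (mine, not Egan's or Yudkowsky's) of a Turing machine simulator: a handful of primitive operations, plus enough time and tape, suffices to run any computation a more capable machine could run, only more slowly. The example machine and rule table are purely illustrative.

```python
# A minimal Turing machine simulator: a few primitive operations
# (read, write, move, change state) are enough to run any computation,
# which is the sense of "Turing completeness" Egan appeals to.
# The machine below is an illustrative example, not from the post.

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=10_000):
    """rules maps (state, symbol) -> (new_symbol, move, new_state)."""
    tape = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Example machine: increment the binary number written on the tape.
increment_rules = {
    ("start", "0"): ("0", "R", "start"),   # scan right over the number
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),   # past the right end: go back
    ("carry", "1"): ("0", "L", "carry"),   # 1 + carry = 0, keep carrying
    ("carry", "0"): ("1", "L", "halt"),    # 0 + carry = 1, done
    ("carry", "_"): ("1", "L", "halt"),    # overflow into a new digit
}

print(run_turing_machine(increment_rules, "1011"))  # -> 1100
```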

The theoretical computer scientist Scott Aaronson wrote in a post titled 'The Singularity Is Far':

The one notion I have real trouble with is that the AI-beings of the future would be no more comprehensible to us than we are to dogs (or mice, or fish, or snails).  After all, we might similarly expect that there should be models of computation as far beyond Turing machines as Turing machines are beyond finite automata.  But in the latter case, we know the intuition is mistaken.  There is a ceiling to computational expressive power.  Get up to a certain threshold, and every machine can simulate every other one, albeit some slower and others faster.
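
Aaronson's analogy can also be made concrete from the other side. Below the universality threshold the gap is real: no finite automaton, however many states it has, can recognize a language like a^n b^n, while any Turing-complete language handles it with a single counter. The sketch below is mine and only illustrative; the specific automaton and language are not from Aaronson's post.

```python
# Illustrating the gap below the universality threshold: a deterministic
# finite automaton has fixed states and no extra memory, so it can check
# regular properties but cannot "count" without bound, whereas any
# Turing-complete language gets unbounded counting for free.

def dfa_accepts(transitions, start, accepting, string):
    """Run a DFA given as a (state, symbol) -> state transition table."""
    state = start
    for ch in string:
        state = transitions.get((state, ch))
        if state is None:
            return False
    return state in accepting

# A DFA can recognize a regular language, e.g. "an even number of b's":
even_b = {("even", "a"): "even", ("even", "b"): "odd",
          ("odd", "a"): "odd", ("odd", "b"): "even"}
print(dfa_accepts(even_b, "even", {"even"}, "abba"))  # True

# But no DFA recognizes a^n b^n: that would require counting the a's,
# which takes unboundedly many states. A general program uses a counter:
def is_a_n_b_n(string):
    count, seen_b = 0, False
    for ch in string:
        if ch == "a":
            if seen_b:
                return False
            count += 1
        elif ch == "b":
            seen_b = True
            count -= 1
            if count < 0:
                return False
        else:
            return False
    return count == 0

print(is_a_n_b_n("aaabbb"), is_a_n_b_n("aabbb"))  # True False
```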

An argument that is often mentioned is the relatively small difference between chimpanzees and humans. But that huge effect, the jump in intelligence, seems like an outlier rather than the rule. Take the evolution of echolocation: it appears to have been a gradual process with no obvious quantum leaps. The same can be said of eyes and other features of biological agents produced by natural evolution.

Is it reasonable to assume that such quantum leaps are the rule, based on a single case study? Are there other animals that are vastly more intelligent than their immediate predecessors?

What reason do we have to believe that a level above that of a standard human, one as incomprehensible to us as higher mathematics is to chimpanzees, exists at all? And even if such a level is possible, what reason do we have to believe that an artificial general intelligence could consistently uplift itself to a level that is incomprehensible to its current one?

To be clear, I do not doubt the possibility of superhuman AI or EMs (whole-brain emulations). Nor do I doubt the importance of Friendliness research, or that the problem will have to be solved before we invent (discover?) superhuman AI. But I lack the expertise to conclude that there are levels of comprehension that are not even fathomable in principle.

In Complexity and Intelligence, Eliezer wrote:

If you want to print out the entire universe from the beginning of time to the end, you only need to specify the laws of physics.

If we could specify the laws of physics, and one of the effects of computing them turned out to be a superhuman intelligence that is incomprehensible to us, what would 'incomprehensible' mean in this context?
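
The quoted claim is essentially about algorithmic (Kolmogorov) complexity: a very short description can fix an arbitrarily rich history. As a toy analogue (my illustration, obviously not the actual laws of physics), the elementary cellular automaton Rule 30 "prints out" a surprisingly complex pattern from an eight-entry rule table and a one-cell initial condition:

```python
# A toy analogue of "specify the laws and the whole history follows":
# the elementary cellular automaton Rule 30. Its complete evolution is
# fixed by an eight-entry update rule plus an initial condition, yet the
# pattern it generates is rich enough to look statistically random.

RULE = 30  # the "laws": bit i of 30 gives the next cell for neighborhood i

def step(cells):
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

width, steps = 63, 30
row = [0] * width
row[width // 2] = 1           # the "initial condition": a single live cell

for _ in range(steps):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```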

I can imagine quite a few ways in which a normal human being could fail to comprehend the workings of another being. One example can be found in the previously mentioned post by Scott Aaronson:

Now, it’s clear that a human who thought at ten thousand times our clock rate would be a pretty impressive fellow.  But if that’s what we’re talking about, then we don’t mean a point beyond which history completely transcends us, but “merely” a point beyond which we could only understand history by playing it in extreme slow motion.

Mr. Aaronson also provides another fascinating example in an unrelated post ('The T vs. HT (Truth vs. Higher Truth) problem'):

P versus NP is the example par excellence of a mathematical mystery that human beings lacked the language even to express until very recently in our history.

Those two examples provide evidence for the possibility that even beings who are fundamentally on the same level might yet fail to comprehend each other.

One agent might simply be more knowledgeable than another, or lack certain key insights. Conceptual revolutions are intellectually and technologically enabling to the point that they seem to spawn quantum leaps in the ability to comprehend certain problems.

Faster access to more information, differences in upbringing, education, culture, and environment, and plain luck might also separate agents with similar potential to such an extent that they appear to reside on different levels. But even the smartest humans are dwarfs standing on the shoulders of giants. Sometimes the time is simply ripe, thanks to previous discoveries of unknown unknowns.

As Scott Aaronson notes, the ability to think faster, and also to think deeper by holding more data in memory, might create the appearance of superhuman intelligence and incomprehensible insight.

Yet all of the above merely hints at the possibility that human intelligence can be amplified and that we can become more knowledgeable. But with enough time, standard humans could accomplish the same.

What would it mean for an intelligence to be genuinely incomprehensible? Where do Eliezer Yudkowsky and Greg Egan disagree?

24 comments

You know how there are people who, even though you could train them to carry out the steps of a Universal Turing Machine, you can't manage to teach them linear algebra, so there are problems they can't even represent compared to people who know linear algebra? I can't exhibit to you specifically what it is for obvious reasons, but there's going to be lots of stuff like that where a human brain just can't grok it, even though - like a sufficiently well-trained dog - we could be trained to carry out the operations of a UTM that did grok it, given infinite time and paper. You could train a chimp to simulate a human brain given infinite time and paper, I've little doubt. So what? There was still a huge jump in qualitative ability.

A singularity may be "comprehensible" in the sense that if I spent 10^100 times the lifetime of the universe writing in an unlimited supply of notebooks, I would eventually produce a pile of 10^10 notebooks containing something that might be called an understanding. But I can't do that, and saying that I could do it "in principle" seems awfully disingenuous.

The real observation here seems to be that intelligence falls on a continuous spectrum with computing and storage. But that doesn't mean there aren't tools to use computation and storage more efficiently which require much greater intelligence to discover. Nor does it imply that there are no phase transitions on that spectrum. There could even be interesting things which require more computation and storage to understand than exists in the entire universe.

A singularity may be "comprehensible" in the sense that if I spent 10^100 times the lifetime of the universe writing in an unlimited supply of notebooks, I would eventually produce a pile of 10^10 notebooks containing something that might be called an understanding.

I think Hutter's work is about how "compression is intelligence". I can somewhat follow this post, but not on a deep technical level.
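
For readers who want a toy handle on that slogan, a general-purpose compressor can stand in for a predictive model: the better a sequence can be predicted, the shorter it can be encoded. This is only a crude sketch of the idea behind Hutter's framing (and the Hutter Prize), not his actual formalism:

```python
# A crude illustration of "compression is intelligence": the better you
# can predict a sequence, the shorter you can encode it. Here a
# general-purpose compressor stands in for a predictive model; the
# byte counts are only a proxy for how well the data is "understood".

import random
import zlib

structured = ("abcd" * 2500).encode()                        # highly predictable
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(10_000))  # unpredictable

print(len(zlib.compress(structured)))  # small: the pattern is captured
print(len(zlib.compress(noisy)))       # ~10,000: nothing to exploit
```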

I just finished Diaspora, and realized that everything of Egan's I've read shares the fundamental assumption discussed here: that in the space of intelligence, there's "down" and there are infinitely many dimensions of "sideways" from where we are, but there's no meaningful "up".

But IMO Occam's Razor distinguishes strongly between the following two hypotheses:

  1. There is a unique level of general intelligence, and it happens to be identical to the level required for hominids to build an agricultural civilization.

  2. There is a bigger hierarchy of general intelligence, and hominids just got smarter until they crossed the "agricultural civilization" threshold.

Apart from theoretical commitments, the first seems to entail a contingent miracle that the second doesn't.

I think there are things that individual humans will never understand because we have finite lifespans and finite memory capacity. Anything that requires more storage space to understand than a human brain possesses will never be completely understood by a single unaided human. We might be able to build a system that includes at least one human that does understand it (in the style of Searle's Chinese Room); after all, a human with enough time, enough paper, and enough pencils can indeed execute any Turing machine, but I wouldn't want to claim that "I" understand everything that the program I'm running does.

Following both jimrandomh and MixedNuts, I think perhaps the disagreement could be expressed in terms of resource-normalized learning curves.

A learning curve can be generated for a single domain and a single person, and the datasets that support them allow one to compare different people's ability to learn and the ease or difficulty of different domains. Here are good examples. I've actually seen these produced and compared in a startup, used to identify whose process should be mined for lessons to teach other analysts. Seeing the graphs was kind of surprising to me because I had expected them to be noisy and abstract, but after using a moving average to smooth things out (or looking at day-to-day performance over several weeks), they showed S-curves: people got the hang of things exponentially at first, with early insights allowing greater insights, and then hit diminishing returns as the marginal utility of insights started to decrease. Also, there were obvious and visible between-person differences in the curves themselves.
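
As a rough sketch of the smoothing described above (all numbers made up for illustration), daily performance drawn from an underlying logistic S-curve plus noise can be cleaned up with a simple moving average:

```python
# A minimal sketch of the smoothing described above: noisy day-to-day
# performance generated from an underlying S-curve (logistic), then
# smoothed with a moving average. All parameters are invented.

import math
import random

random.seed(1)

def true_skill(day, ceiling=100.0, midpoint=20.0, rate=0.3):
    """Logistic learning curve: slow start, rapid middle, diminishing returns."""
    return ceiling / (1.0 + math.exp(-rate * (day - midpoint)))

days = range(40)
observed = [true_skill(d) + random.gauss(0, 8) for d in days]  # noisy daily data

def moving_average(xs, window=7):
    out = []
    for i in range(len(xs)):
        chunk = xs[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

smoothed = moving_average(observed)
for d in days:
    print(f"day {d:2d}  raw {observed[d]:6.1f}  smoothed {smoothed[d]:6.1f}")
```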

Implicitly, all of those curves were assumed to have the same X-axis which was "human-brain-hours" or something like that. Human brains were assumed to be equivalent (at least they had the same startup burn rate per hour...) and so only the hours and days spent learning were being considered.

So imagine we work out a way to normalize processing power in terms of some theoretically inter-convertible input to a physical computing process, like transistors or joules or some such. You might be able to normalize learning curves between different systems using something relatively universal, like total energy expended on learning so far, or computing-element-gram-seconds or some such.

With such a measure you might be able to see vividly that, for any fixed budget of resources (say a trillion joules) spent learning a very hard task, a "fundamentally smarter" process would produce vastly better results (assuming a task where the learning curve wouldn't flatten out too fast to make learning differences moot). At a certain point, especially if you're comparing processes with different starting points on the curve at "time=0", it might be pointless to allocate learning jobs to certain learning systems given time and resource constraints, and then we might say that the learning systems that weren't "worth" assigning a learning task were "categorically different"... but figuring out this categorical difference in practice would involve a lot of features of the context, like the opportunity costs implied by the number and quality of alternatives, the task you're considering, hard deadlines, and perhaps "throttling issues", where each system in the learning economy may see its performance per resource per unit time become sub-linear in resources beyond a certain point. On throttling issues, Jacob Cannell has interesting thoughts based on the physics of computation.

No - the aliens are slower (think slowly, progress slowly because they're stupider), but can understand as much as humans given enough time. This is the whole point!

I don't follow what you're trying to communicate. The story isn't about aliens and humans, it's about an AI in a box.

The point is that aliens or an AI don't need to be qualitatively different to be incomprehensible. One Einstein is incomprehensible to most people at 1x human speed. Thousands of Einsteins at 1000x speed would be...

Edit: Turns out I misunderstood Greg Egan, and probably Eliezer Yudkowsky. What I thought was Egan's position is Aaronson's unless I misunderstood him too.

Paraphrase of Greg Egan's position (if I and XiXiDu understand correctly): "Given enough time, humans can understand anything. In practice we still get squashed by AIs, since they're much faster, but slow them down and we're equals."

Paraphrase of Eliezer Yudkowsky's position (same disclaimer): "There are things that humans simply cannot understand, ever, no matter how long it takes, but that other minds can understand." (I'm not sure what happens if you brute-force insightspace.)

Arguments about the human mindspace in toto are silly at this juncture in our understanding.

I think that your impressions are at least implicitly inaccurate, unless your quote marks are actually indicating quotes I haven't seen. (If not, perhaps you should paraphrase in a way that doesn't look like direct quoting?) Greg Egan thinks that AIs are not a problem even considering (and dismissing as impossible?) their speed advantage, as far as I can tell. So, practically speaking, he thinks this uFAI alarmism is wrong and maybe contemptible, again as far as I can tell. Eliezer's impression might be that there are things humans can never understand, but if so that's probably because the word 'human' typically refers to a structure that is defined in many ways by its boundedness. That is, maybe a human could follow a superintelligent argument if the human was upgraded with a Jupiter brain, but calling such a human a human might be stretching definitions. But maybe Eliezer does in fact have deeper objections, I'm not sure.

I don't see anything in the story which I'd expect Egan to disagree with, so I'm not quite sure how it's relevant here.

OP asks what does it mean for something to be incomprehensible. My point was that we don't need to resort to mysterious, non-answerable hypotheticals about rifts in mind space to answer the question.

This is a cool article and it made me think, so this is just a minor nitpick: I know "quantum leap" is commonly used to mean a large jump, but wouldn't that be a very tiny jump, if "quantum" is used in its usual (correct) sense?

FAWS:

This is probably a bit out of place here, but I don't think humans are actually smarter than chimps to a surprising extent. Humans have language and (presumably) related capacity for various kinds of symbol manipulation, which are very powerful tools, and all sorts of powerful cultural tools built on top of that, but other than that I don't think humans are so terribly much smarter than chimps, not much more so than chimps are smarter than dogs or dogs smarter than cows at least.

[anonymous]:

What about the humans below the 20th intelligence percentile and their ability to understand various concepts? It seems that even within humanity there are (currently) unbridgeable chasms. It's also evidence for the great importance of smaller architectural tweaks in understanding.

I think Egan is right. Our ability to account for new phenomena comes from language and other extensible representational systems such as mathematics. As long as we can keep inventing representations and rules for using them we can account for anything that can be accounted for. The only limits that apply are the limits of such systems and not our own limitations.

The emptiness of the claim that there can be something that is to humans as we are to dogs can be seen by following it in the other direction. Just what exactly is to dogs as dogs are to humans? Fish? Do fish find dogs incomprehensible? The analogy implies a progression of qualitative breaks from which you can extrapolate another qualitative break, which is contradictory. Comprehension separates us from other species but not because they find us incomprehensible but rather because the concept of comprehension doesn't apply to them at all.

Comprehension separates us from other species but not because they find us incomprehensible but rather because the concept of comprehension doesn't apply to them at all.

Fair enough, but not all humans are created equal. Suppose we consider an adult with Down's Syndrome or some other mental retardation; I trust you'll agree that the concept of comprehension applies to him. Is it obvious that, given enough time, he will be able to grasp quantum field theory? Not to me; it is a level of abstraction he just doesn't operate on.

Somebody might be able to grasp quantum field theory in principle but not in practice - i.e., a person with a short attention span might never be able to concentrate long enough to master quantum field theory even though they could do so in principle. I think most human differences are like this. In the same respect, the human race might never be able to grasp theory x although they theoretically could do so because of some attribute of human psychology that interferes, such as a collective short attention span or a capacity for self-destruction. In that sense, an alien or AI that has general intelligence and is more able to concentrate or is less self-destructive would have an advantage over us. But this doesn't alter Egan's point.

The difference between "I can grasp this in principle but my attention span is not long enough" and "I am not intelligent enough to grasp this" escapes me. A limit is a limit; either you can actually grasp the theory or you can't. The specific bottleneck in your intelligence, and I very much include attention span in that, is not relevant.

[anonymous]:

A certain skill set is required in order to employ mental prostheses such as written-down equations, computer programs, and the like. Without the prosthesis of writing, it would likely have been impossible to eventually prove Fermat's Last Theorem. Without a computer program to do an exhaustive search of possible cases, it might have been humanly impossible to prove the four-color theorem. Ink on paper acts as a kind of memory, and also acts as a kind of attention span, or as an attention span prosthesis.

There may not be any upper bound on what can be remembered, on the length of one's effective attention span, once one is using these and other prostheses.

But it requires a certain minimal skill set in order to begin to make use of these external prostheses. Many humans, especially those who are born with conditions that limit their native intelligence, may be unable to acquire that skill set.

So that threshold that Egan writes about may lie somewhere inside the human race, with some humans above the threshold, and others below. I.e., the threshold he refers to here:

I believe that humans have already crossed a threshold that, in a certain sense, puts us on an equal footing with any other being who has mastered abstract reasoning.

Abstract reasoning is built on the manipulation of symbols, so it will be limited to those humans who have the ability to manipulate symbols with some degree of reliability - enough to get them from one equation to the next. But once a human is able to get from one equation to the next, he can in principle follow (line by line) a proof of arbitrary length, even with a finite memory that can hold only a few equations at one time.

Egan is specifically addressing the in principle ability to understand something given time and inclination. He says, "I suspect that something broadly similar applies to minds and the class of things they can understand: other beings might think faster than us, or have easy access to a greater store of facts, but underlying both mental processes will be the same basic set of general-purpose tools." A short attention span will stop a person from putting in the necessary time to understand something. If a person had the ability to learn quantum field theory but couldn't afford to buy the textbooks, would you still say "a limit is a limit"?

(In a perfect world The Singularity Is Far would've been a Robin Hanson post...)

When people talk about cognitive hierarchies and UTMs and how humans are the first example of an entity on the smarter side of the general intelligence Rubicon, it's not obvious that someone like Eliezer would disagree. Or to speak more easily, I would not disagree, though with caveats that the metaphysics of this abstraction called 'computation' still seem confusing unto me. Unfortunately my weak impression is that the people who talk about such things seem to think these fringe Singularity people would disagree, because they hear pseudo-Vingean claims about unpredictability and the like. I'd imagine many of those pseudo-Vingean 'predictions' pattern match to that idea everyone has when they're eight years old about some kind of superconsciousness, the equivalent of universality on top of universality, a triumph over Goedel. They probably also pattern match the claims to even sillier ideas.

Again this is a guess, but it is non-negligibly likely that folk at least similar to Greg Egan have an implicit model of Singularitarian folk (whose diverse members and positions will unfortunately get lumped together to some rather non-negligibly biased extent) as having differing and very probably wrong intuitions about theoretical computer science, instead of having differing but at least plausible beliefs about rates/bursts of technological progress or problem solving ability as measured by biological subjective time. That said, if you asked Egan if he explicitly thinks that Eliezer explicitly thinks that super-Turingness-like-stuff is probable, I don't think he'd say yes, so it seems probable that my model is at least somewhat wrong.

I think that Aaronson's comments are correct, though the human thought serial speed-up thing is perhaps a misleading example for this post since the sort of singularity Eliezer et al are interested in is not really about serial speed-ups themselves so much as using speed-ups to search for more speed-ups among other things. Aaronson's P versus NP example is an okay theoretical example of an insight that smart Singularitarians might talk about as in-hindsight practically incompressible but still powerful enough to 'change the nature of the optimization game' as I think Eliezer put it.

That humans might in principle follow any argument a superintelligence could put forth up to the Goedelian limit is maybe true if you're willing to really really stretch the definition of human. This implies next to nothing about whether or not hardware-enhanced humans or even ems can stay ahead of de novo Bayesian seed AIs or even hacked-together but still non-biologically-originated AIs, whose architectures are wayyyyy more flexible and conducive to software improvements. I have a hard time coming up with a decent counterargument Greg Egan could make here conditional on his accepting the practical possibility of engineering AGI. I'm probably missing nuances of his position. I wish he could have a structured discussion with Eliezer. (Bloggingheads is too much like a debate, no one goes in expecting to change their mind.)

I agree that the chimpanzee-to-human phase transition example is potentially misleading unless there's something I'm missing. If you've seen it used in an SIAI paper in a way that doesn't mention the possible narrow-to-general intelligence phase transition argument, please link to it.