In Artificial Addition, Eliezer used the ability to do arithmetic as a metaphor for intelligence. I really like this essay. It's witty and enlightening. And yet I have to admit it hasn't aged well. Among the several confused ways of thinking about artificial addition - and, by metaphor, about artificial intelligence as well - he mentioned these:

  • "It's a framing problem - what 'twenty-one plus' equals depends on whether it's 'plus three' or 'plus four'.  If we can just get enough arithmetical facts stored to cover the common-sense truths that everyone knows, we'll start to see real addition in the network."
  • "But you'll never be able to program in that many arithmetical facts by hiring experts to enter them manually.  What we need is an Artificial Arithmetician that can learn the vast network of relations between numbers that humans acquire during their childhood by observing sets of apples."
  • "No, what we really need is an Artificial Arithmetician that can understand natural language, so that instead of having to be explicitly told that twenty-one plus sixteen equals thirty-seven, it can get the knowledge by exploring the Web."

Now we know that this approach to artificial intelligence actually works. LLMs are trained on a huge corpus of text from the internet to learn the vast network of relations between concepts, which gives them the ability to understand natural language, and as a result they perform well on a vast array of intellectual tasks. Ironically, they are still not very good at calculation, though.

What can we say about it in hindsight? What mistake in reasoning led to this bad prediction? Why did past Eliezer fail to anticipate LLMs? What lesson can we learn from it?

First of all, let's remind ourselves why past Eliezer's reasoning made sense.

  • Understanding language is a proxy target. You can map mathematical facts to language and handle them in a roundabout way, but this is going to be less accurate than addressing them directly in a medium specifically optimized for them.
  • Knowing a collection of facts that satisfy a rule isn't the same as knowing the rule. One can get the rule from the collection of facts via induction, but that is a separate intellectual ability you would have to build into your system. It's easier to figure out the rule yourself, since you already possess the ability to do induction, and then embed the rule directly (see the toy sketch after this list).
  • Addition is plain simpler than language. If you don't know how to make a system that can do addition, you won't be able to make one that understands language.
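
A toy sketch of that facts-versus-rule point, with made-up numbers and hypothetical function names, just to make the distinction concrete:

```python
# A lookup table of "arithmetical facts" only knows what someone entered into it:
FACTS = {(21, 3): 24, (21, 4): 25, (21, 16): 37}

def add_from_facts(a, b):
    return FACTS[(a, b)]      # raises KeyError for any pair nobody bothered to store

# Knowing the rule covers every case at once:
def add_from_rule(a, b):
    return a + b

print(add_from_rule(123, 456))  # 579 -- no one had to enter this "fact" anywhere
```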

Now, in hindsight, I can see that this is where the metaphor breaks down. Can you? I'll let you think about it yourself for a while.

.

.

.

.

.

.

.

.

.

.

.

.

The abilities to do language, arithmetic, and induction are all parts of the vast holistic concept we call "intelligence". Meanwhile, language and induction are not parts of arithmetic. So, since a part is less complex than the whole, in terms of complexity we get something like this:

Arithmetic < Language < Intelligence

Arithmetic < Induction < Intelligence

And the road from language and induction to intelligence makes much more sense than the road from language and induction to arithmetic. And if all the knowledge of your civilization is encoded in language, including the rules of rationality itself, maybe this road is even one of the best.

When framed like this, and in hindsight, the mistake may look silly. It may seem as if Eliezer just used an obviously unfitting metaphor and we failed to notice it before due to the halo effect, so the only lessons here are the faults of traductive reasoning and the dangers of trusting authority. But I don't think that's the case.

I suppose Eliezer thought that intelligence is simpler than a bunch of separate abilities that people put into one bundle category. Not literally as simple as arithmetic, but probably less complicated than language. That there is some core property from which all of the abilities we associate with intelligence can be derived. Some simple principle that can be expressed through the math of Bayesian reasoning. Had that been the case, the metaphor would have been completely on point.

It's not a priori clear whether it's easier to reduce language to intelligence or intelligence to language. We see them co-occur in nature. But which way does the causality point? It does seem that some level of intelligence is required to develop language. Are we actually carving reality along its natural joints when we categorise "language" as an element of a larger set "intelligence"?

The "intelligence to language" position isn't unreasonable. Actually, it still may be true! I'm not claiming that we've received a hard proof to the contrary. But we've got evidence. And we need to update on it. We live in the world where LLMs are possible. Where the road from language and inductive reasoning to intelligence seems clear. So let's investigate its premises and implications. If LLMs are possible, what else is?

10 comments:

Arithmetic is much simpler than the learned capability of LLMs to do arithmetic, and learning it with LLMs doesn't explain it; LLMs are just another inexplicable thing capable of practicing it. Similarly, intelligence might be much simpler than the learned capability of LLMs to exhibit intelligence, and certainly they don't offer significant clarity in understanding it.

Indeed. The existence of LLMs is not itself an answer to all the relevant questions, just a hint in an interesting direction. I'm going to continue this line of inquiry in future posts. 

I am confused about why the existence of LLMs implies that past Eliezer was wrong. I don't see how this follows from what you have written. Could you summarize why you think so?

Let's look at two competing theories: I<L and L<I.

I<L means that a core property of intelligence is less complicated than the ability to understand language, and that this ability, along with the other abilities we consider intellectual, can be achieved through the core property. If I<L is true, Eliezer's metaphor between intelligence and arithmetic is correct, as both are less complicated than language. If I<L is true, we would expect it to be easier to create an AI possessing the core property of intelligence, a simple mind, than to create an AI capable of language without the core property of intelligence.

L<I means language is less complicated than intelligence, for instance because intelligence is just a bundle term for lots of different abilities we possess. If L<I is true, then Eliezer's metaphor doesn't work, which naturally leads to a faulty prediction. If L<I is true, language is definitely not AGI-complete, and we would expect it to be easier to create a good language model than a generally intelligent AI.

Now we can observe that Eliezer's metaphor produced a bad prediction: despite following the route he deemed "dancing around confusion", we created LLMs before an AI with the core intelligence property, so language does not appear to be AGI-complete. That is evidence in favour of L<I.

I think the point of Yudkowsky's post wasn't particularly about the feasibility of building things without understanding; it's instead about the unfortunate salience of lines of inquiry that don't lead to deconfusion. If building an arithmetic-capable LLM doesn't deconfuse arithmetic, then this wasn't a way of understanding arithmetic, even if the project does succeed. Similarly with intelligence.

Humans, capable of all these things, already exist: built by natural selection without understanding, and offering little deconfusion of those capabilities even from the inside, to humans themselves. So the further example of the existence of LLMs isn't much evidence of anything in this vein.

It's not a priori clear whether it's easier to reduce language to intelligence or intelligence to language. We see them co-occur in nature. But which way does the causality point? It does seem that some level of intelligence is required to develop language. 

It's also not clear to me that humans developed intelligence and then language. The evolution of the two very plausibly happened in tandem, and if so, language could be fundamental to the construct of intelligence - even when we talk about mathematical IQ, the abilities depend on the identification of categories, which is deeply linguistic. Similarly, for visual IQ, the tasks use lots of linguistic categories.

Yes. Moreover, the point I'm making with this post is that the existence of LLMs is evidence to the contrary: that Language->Intelligence is more likely than Intelligence->Language.

The paragraph you've cited just reminds us that it's not something that is obviously true.

It's also possible that there is some elegant, abstract "intelligence" concept (analogous to arithmetic) which evolution built into us but we don't understand yet and from which language developed. It just turns out that if you already have language, it's easier to work backwards from there to "intelligence" than to build it from scratch.

I mean, it kind of does fine at arithmetic? 

I just gave gpt3.5 three random x plus y questions, and it managed one that I didn't want to bother doing in my head.
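
Roughly this kind of spot-check, as a minimal sketch (assuming the OpenAI Python client; the model name, prompt wording, and number ranges here are illustrative, not the ones actually used):

```python
import random
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

for _ in range(3):
    a, b = random.randint(1000, 9999), random.randint(1000, 9999)
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": f"What is {a} plus {b}? Answer with the number only."}],
    )
    answer = reply.choices[0].message.content.strip()
    print(f"{a} + {b} = {a + b}, model said: {answer}")
```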

I can use a laptop to hammer in a nail, but it's probably not the fastest or most reliable way to do so.