Recently I've been hearing a lot about AGI, specifically that it's 5-10 years out. As someone with an interest in neuroscience, I don't understand how any system so much less complex than the human brain could achieve such a thing. My sense is that current models are incapable of actual logical reasoning (which I know is a horribly vague notion -- sorry about that), and that any apparent logical reasoning they display is just a result of their having been trained on every possible verbal test of logical capacity.

Like, for example, it makes sense that a future LLM would be able to explain a mathematical concept that has been documented and previously discussed, but I just can't see it solving existing frontier problems in mathematical theory, as that's a completely different "skillset".

Is my understanding of how LLMs work flawed? Can they perform logical reasoning?

--
P.S. Apologies for the informality, as this is my first post.

4 Answers

Carl Feynman

Well, you can point to several things current LLMs can't do. Not just logical reasoning, but also long-term action, and remembering what you said to them yesterday. But ten years ago, you could make a much longer list of things AI couldn't do, and most items on that list have fallen before the advance of technology. On what basis should we assume that the remaining list will last very long? There are lots of people working on all the things currently on the list, as well as an ever-growing mountain of computing power that can be applied to these problems. If we expect history to continue as it has, all those problems will fall in the next decade.

Of course, it's possible that AI will suddenly stop advancing; that does happen to fields of engineering. For example, aeronautical engineering stopped advancing very suddenly in 1972, and even regressed somewhat. That was a big surprise to everyone concerned. But it's not a common phenomenon.

> aeronautical engineering stopped advancing very suddenly in 1972, and even regressed somewhat

What is this referring to? What happened in 1972…?

If the increase in speed records had continued at the rate it did from 1820 to 1961, we would have been going faster than the speed of light by 1982. This extrapolation is from an article by G. Harry Stine in Analog in 1961. It was a pretty sloppy analysis by modern standards, but it gives an idea of how people were thinking at the time.

These all happened in 1972 or close to it:

—Setting the air speed record, which stands to this day.

—End of flights to the Moon.

—Cancellation of the American SST project.

—Cancellation of the NERVA nuclear rocket program.

—The Boeing 747 enters service; it remains the largest passenger plane until 2003.

—Concorde enters service and turns out to be a bad idea.

In the ‘80s, I found an old National Geographic from 1971 or 1972 about the “future of flight”.  Essentially none of their predictions had come true.  That’s why I think it was a surprise.

dr_s
TBF, was Concorde inherently "a bad idea"? Technologies have a theoretical limit and a practical one. There are deep reasons why we simply couldn't have gotten even near the speed of light by 1982 no matter how much money we poured into it, but Concorde seems more a case of "it can be done, but it's too expensive to keep safe enough, and most people won't pay such exorbitant ticket prices just to shave a few hours off their transatlantic trip". I don't think we can imagine such things happening with AGI, partly because its economic returns are obvious and far greater, and partly because many who are racing to it have more than just economic incentives - some have an almost religious fervour. Pyramids can be built even if they're not efficient.
Carl Feynman
Funny thing— your message seemed to be phrased as disagreeing, so I was all set to post a devastating reply. But after I tried to find points of actual disagreement, I couldn't. So I will write a reply of violent agreement. Your points about the dissimilarity between aerospace in 1972 and AI in 2024 are good ones. Note that my original message was about how close current technology is to AGI; the part about aerospace was just because my rationalist virtue required me to point out a case where an analogous argument would have failed. I don't think that's likely. Was Concorde "inherently a bad idea"? No, but "inherently" is doing the work here. It lost money and didn't lead anywhere, which are the criteria on which such an engineering project must be judged. It didn't matter how glorious, beautiful, or innovative it was. It's a pyramid that was built even though it wasn't efficient. The impossibility of traveling faster than the speed of light was a lot less obvious in 1961.
dr_s
I would argue that's questionable - they knew relativity very well in 1961, and all the physicists would have been able to roll out the obvious theoretical objections. But obviously the practical difficulties of approaching the speed of light (via e.g. ramscoop engines, solar sails, nuclear propulsion, etc.) are another story. I guess my point is that there are objective limits and then there are cultural ones. We do most things only for the sake of making money, but as far as human cultures go we are perhaps more the exception than the rule. And in the end individuals often do the opposite - they make money to do things, things they like that play to their personal values but don't necessarily turn a profit. A different culture could have concluded that the Concorde was a success because it was awesome, and that we should do more of that. In such a culture, in fact, the Concorde might even have been a financial success, because people would have been more willing to pay to witness it first hand. Since the argument here involves the inherent limits of technology and/or science, I'd say we should be careful to separate out cultural effects. Self-sustaining Mars colonies, for example, are probably a pipe dream with current technology. But the only reason why we don't have a Moon base yet is that we don't give enough of a shit. If we cared to build one, we probably could have by now.

Charlie Steiner

The human brain is complex, but a lot of that complexity is learned; it's not there in the genome.

The genome might still be spending tens or hundreds of kilobytes on preprogramming circuits in the cerebellum and brainstem (babies have to breathe from minute 0; that can't be learned behavior), but the majority of the complexity in the (human) brain is learned complexity. We can do calculus not because our genomes programmed calculus in directly, but because humans, as they grow up, are able to learn about the world in a very general way, and this incidentally teaches us to manipulate complex ideas. (Also see this post and its sequels, which elaborate on this idea in more detail.)

Modern AI is sort of taking this to its logical endpoint. If an AI doesn't need to breathe, or regulate its body temperature, or drink milk, or cry to wake up its parents from minute 0, then it can drop a lot of those preprogrammed behaviors and really just double down on learning about the world in a general way. It can have a short "genome" that tells it how to develop (even though the eventual "organism" - the trained AI - will be many many times the size of the "genome").

> Like, for example, it makes sense that a future LLM would be able to explain a mathematical concept that has been documented and previously discussed but I just can't see it solving existing frontier problems in mathematical theory

You might be focusing too much on current experience with LLMs because LLMs are the new hotness. Present-day LLMs are learning systems that have only ever been trained on text in small snippets with no long-term pattern. They are a hundred times smaller than the human brain (if you're very handwavy about estimating the "number of parameters" of a human brain). It's an impressive emergent property that they can produce long-term coherent text at all.
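
A rough back-of-envelope sketch of that "hundred times smaller" comparison, assuming roughly 10^14 synapses in a human brain and roughly 10^12 parameters in a large present-day LLM (both are loose order-of-magnitude assumptions, not numbers from this thread):

```python
# Crude size comparison, treating synapses as the brain's "parameters".
# Both numbers below are order-of-magnitude assumptions, not measurements.
human_brain_synapses = 1e14   # assumed ballpark for an adult human brain
llm_parameters = 1e12         # assumed ballpark for a large present-day LLM

ratio = human_brain_synapses / llm_parameters
print(f"Brain has roughly {ratio:.0f}x more 'parameters' by this crude count")
# -> Brain has roughly 100x more 'parameters' by this crude count
```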

If you took the same general learning algorithm, but trained it from the start to produce long-term coherent text in interaction with an environment, and made it 100x larger, I don't think it's unreasonable to think it might start learning to prove theorems.

dr_s

I think in practice we don't know for sure - that's part of the problem - but there are various reasons to think this might be possible with vastly less complexity than the human brain.

First, the task is vastly less complex than what the human brain does. The human brain does not handle only conscious rational thought; it does a bunch of other things, which means it still fires on all cylinders even when you're unconscious.

Second, lots of artificial versions of natural organs are vastly less complex than their inspiration. Cameras are vastly less complex than eyes. Plane wings are vastly less complex than bird wings. And yet these things outperform their natural counterparts. To me the essence of the reason is that evolution deals in compromises. It can never design just a camera. The camera must be made of organic materials, it must be self-organising and self-repairing, it must be compatible with everything else, and it must be achievable via a set of small mutations, each as viable as or more viable than the previous one. It's all stumbling around in the dark until you hit something that works under the many, many constraints of the problem.

Meanwhile, artificial intelligent design on our part is a lot more deliberate and a lot less constrained. The AI itself doesn't need to do anything more than be an AI - we'll provide the infrastructure, and we'll throw money at it to keep it viable until it doesn't need it any more, because we can foresee the future and invest in it. That's more than evolution can do, and it's a significant advantage that can compensate for a lot of complexity.

ErioirE

> Like, for example, it makes sense that a future LLM would be able to explain a mathematical concept that has been documented and previously discussed but I just can't see it solving existing frontier problems in mathematical theory, as it's a completely different "skillset".

Most non-mathematician humans, such as myself, are arguably in the same boat for this specific example. I certainly wouldn't know how to begin to work on frontier mathematical theory, but despite this, if I were an AI I would still fit many definitions of AGI, albeit a lowly human-level one.

It would be useful to have a distinction between 'routine' logical reasoning (being able to apply established knowledge dynamically) and novel logical reasoning (being able to synthesize new theories/ideas from existing ones).
LLMs can arguably do the first but not the second; however, the line dividing the two can be hazy. Many "novel advances" are not 'new' so much as existing approaches from different domains recombined to solve a different type of problem than they had previously been used for. Is this routine or novel? Could an LLM do this? Particularly if the problem were phrased in a tabooed/abstracted way, to avoid leading questions or keywords that would encourage the LLM to regurgitate the common approaches?

An impressive thing about LLMs is how well they can work with arbitrary problems, so long as the problems are not too far out of the training distribution.

An example:
I asked ChatGPT 4o:
"A widget = 14
A phro = -3
A midge = 5
An attached wrok multiplies value by 0.5

What is the value of: A widget with an attached wrok plus three phros and two midges, and three wroks by themselves?"

Solving this sort of (simple for a human) word problem requires combining the domains of basic math and conceptual relations between objects (i.e. <object> can have <relation> to another <object>, which interacts with the properties of that thing), as well as the ability to make assumptions about missing information (no explicit value was assigned to solitary wroks).

GPT answer:

"To find the value of the given expression, let's break it down step by step.

1. **Value of a widget with an attached wrok:**
  - A widget is worth 14.
  - An attached wrok multiplies the value by 0.5.
  - So, the value of a widget with an attached wrok is .

2. **Value of three phros:**
  - A phro is worth -3.
  - So, three phros are .

3. **Value of two midges:**
  - A midge is worth 5.
  - So, two midges are .

4. **Value of three wroks by themselves:**
  - Since a wrok by itself is not attached to anything and we don't have a specified standalone value for a wrok, we can assume it doesn't add any value by itself. So, three wroks by themselves are worth .

Now, we need to add all these values together:

First, let's handle the addition of 7 and -9:

Then, add the result to 10:

And finally, add the result to 0:

So, the value of a widget with an attached wrok plus three phros and two midges, and three wroks by themselves is ."
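
For what it's worth, GPT's step-by-step arithmetic matches a direct calculation. A minimal sketch (the variable names are mine, and the assumption that a solitary wrok contributes 0 mirrors the one GPT made):

```python
# Direct calculation of the made-up word problem, mirroring GPT's reading.
widget, phro, midge = 14, -3, 5
wrok_multiplier = 0.5

widget_with_wrok = widget * wrok_multiplier   # 14 * 0.5 = 7
three_phros = 3 * phro                        # -9
two_midges = 2 * midge                        # 10
three_solitary_wroks = 3 * 0                  # no standalone value given; assume 0

total = widget_with_wrok + three_phros + two_midges + three_solitary_wroks
print(total)  # 8.0
```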

Is this logical reasoning? Depends on how you define it. It's certainly useful. How much of the thinking we humans do is 'mere' pattern matching with variable inputs? I suspect a large amount.

3 comments

Generally folks here aren't going to want to list off all the capabilities improvements they can think of. But as a way of forecasting it without having to get into the actual gears-level modeling: 10 years ago, what could artificial neural networks do?

O O

I think the typical response from a skeptic here would be that we may be nearing the end of a sigmoid curve.

O O

Not exactly an answer, but have you read about cases of the incredible plasticity of the human brain? There is a person out there who gradually lost 90% of their brain to fluid leakage and could still largely function. They didn't even notice it until much later. There are more examples of functioning people who had half their brain removed as children. And just like our scaling laws, those people as kids just learned more slowly, and to a lesser depth, but still learned.

This tells me the brain actually isn’t that complex, and features aren’t necessarily localized to certain regions. The mass of neurons there, if primed properly, will create intelligence.

It's also clear from the above that the brain has a ton of redundancy built in, likely to account for the fact that it sits in a moving vessel subject to external attacks. There are far fewer negative selection pressures on an Nvidia GPU, and it also has a larger energy budget.