There seems to be a growing fraction of people who consider LLMs to be AGI. And it makes sense. When the term AGI was established, this is what was meant: a machine that can tackle a wide range of problems and communicate in natural language, very different from all the examples of narrow AI.
It also prevents continuous goalpost moving. Will there ever be a point where the last step towards AGI has obviously just been made? Or will the complaints about limitations just slowly fade away?
However, most people do not consider LLMs to be AGI, as far as I can see, for one or both of the following reasons:
LLMs are not broadly human level in cognitive ability.
LLMs are not cognitively complete, i.e., they don't seem to have all the human cognitive faculties in the proportions that humans do.
The first point gestures towards TAI - transformative AI: AI systems that can automate a large part of the economy because they have reached a broadly human or superhuman level of cognitive ability. But TAI does not have to be AGI; it need not be particularly general.
The second point describes the obvious and the less obvious limitations of LLMs compared to the human mind, many of which are being engineered away as we speak.

I think it makes sense to pull these concepts apart and to stop arguing about the term AGI. LLMs are AGI - they are artificial, general and intelligent. They just aren't broadly human level, and neither are they cognitively complete.