
I have an intuition, and I may be heterodox here, that LLMs on their own are not sufficient, no matter how powerful and knowledgeable they get. Put differently, the reasons that powerful LLMs are profoundly unsafe are primarily social: e.g. they will be hooked up to the internet to make iterative refinements to themselves; or they will be run continuously, allowing their simulacra to act; etc. Someone will build a system using an LLM as a component that kicks things off.

I'm not making an argument for safety here; after all, the main reason nukes are dangerous is that people might use them, which is also a social reason.

I'm asking because I have not seen this view explicitly discussed and I would like to get people's thoughts.


1 Answer

Ben


I am no expert, but I agree with you. They are cool, and they could be a component in something. But they seem like they are only doing part of the "intelligence thing".

Eliezer seems to think they can do more: https://www.lesswrong.com/posts/qkAWySeomp3aoAedy/?commentId=KQxaMGHoXypdQpbtH.

I don't know if anyone else has spoken about this, but since thinking about LLMs a little I am starting to feel like there is something analogous to a small LLM (an SLM?) embedded somewhere as a component in humans. I think I see it when someone gets asked (in person) a question, and they start giving an answer immediately, then suddenly interrupt themselves to give the opposite answer. Usually the trend is that the first answer was the "social answer", something like "In this situation the thing my character does is to agree enthusiastically that the project you are clearly super excited about is cool, and tell you I will work on it full steam." Then some other part of the self kicks in: "Wait, after 30 seconds of consideration I have realised that this idea can never work. Let me prove it to you." At least to me it even feels like that: there is some "conversation continuer" component. Obviously the build-up of an AI doesn't have to mirror that of a human intelligence, but if we want to build something "human level" then it stands to reason that it would end up with specialized components for the same sorts of things humans have specialized components for.

But they seem like they are only doing part of the "intelligence thing".

I want to be careful here; there is some evidence to suggest that they are doing (or at least capable of doing) a huge portion of the "intelligence thing", including planning, induction, and search, and even more if you include minor external capabilities like storage.

I don't know if anyone else has spoken about this, but since thinking about LLMs a little I am starting to feel like there is something analogous to a small LLM (an SLM?) embedded somewhere as a component in humans

I know...