I tried to find some concrete exposition in the paper of what the authors mean by key words such as “organism”, “agent”, and so on, but to me the whole paper is fog. Not AI-generated fog, as far as I can tell, but a human sort of fog, the fog of philosophers.
Then I found this in the last paragraph of section 3:
The problem is that such algorithmic systems have no freedom from immediacy, since all their outputs are determined entirely—even though often in intricate and probabilistic ways—by the inputs of the system. There are no actions that emanate from the historicity of internal organization.
Well, that just sinks it. All the LLMs have bags of “historicity of internal organization”, that being their gigabytes of weights, learned from their training, not to mention the millions of tokens' worth of context window that one might call “short-term historicity of internal organization”.
The phrase “historicity of internal organization” seems to be an obfuscated way of saying “memory”.
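To make the point concrete, here is a minimal, purely illustrative Python sketch of my own (a toy, not an actual LLM and not anything from the paper): a model whose output depends both on fixed “weights” set at training time and on an accumulated context, i.e. the long-term and short-term kinds of internal history described above.

```python
# Purely illustrative toy, not an actual LLM: the output depends on fixed
# "weights" (long-term history, set by training) and on an accumulated
# context (short-term history, set by prior inputs).

class ToyModel:
    def __init__(self, weights):
        self.weights = weights   # frozen after "training"
        self.context = []        # grows as the model is used

    def respond(self, token):
        self.context.append(token)
        # The output is a function of the whole context, not just the latest input.
        return sum(self.weights.get(t, 0) for t in self.context)

model = ToyModel(weights={"hello": 1, "world": 2})
print(model.respond("hello"))  # -> 1
print(model.respond("hello"))  # -> 2: same input, different output, because the context differs
```

The same input produces different outputs depending on what came before, which is exactly the “memory” reading of the phrase.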
Thanks. So would you say I am right in my concern about the paper? Or is it fog for other reasons entirely? [I haven't yet read the link, so I don't yet know exactly what “fog” means in this context.]
"Frontiers" is known to publish various sorts of garbage. If somebody comes to you to argue some controversial point with article from Frontiers, you can freely assume it to be wrong.
I've read the abstract of this paper, “How Organisms Come to Know the World: Fundamental Limits on Artificial General Intelligence”. It says:
This sounds very strange to me. I have basically zero technical knowledge of AI, just the general ideas I've gathered from reading. But I thought one of its main characteristics is that it learns by itself (or from what we feed it), so that, in the context of the abstract, one precisely does not need to “predefine a list of such uses”. In particular, self-directed learning AIs are famous for coming up with totally unpredictable ways of achieving their goals, aren't they? I guess in the language of the abstract this would mean that they are not treated algorithmically... I don't think anybody would make such an error, much less be able to publish it. So, what am I missing?
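For what it's worth, here is a toy Python sketch of the “not predefined” point, my own illustration and not anything from the paper: a tiny tabular Q-learning agent in which the rule for reaching the goal is never written into the program but emerges from trial and error.

```python
import random

# Toy tabular Q-learning on a 5-state corridor: the agent must reach state 4.
# No "list of uses" is predefined; the goal-reaching policy emerges from
# trial and error. Illustrative only -- not taken from the paper.

N_STATES = 5
ACTIONS = [-1, +1]                       # step left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for _ in range(500):                     # training episodes
    s = 0
    while s != N_STATES - 1:
        if random.random() < epsilon:    # occasional random exploration
            a = random.choice(ACTIONS)
        else:                            # otherwise act greedily
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

# The learned policy ("always move right") was never written down anywhere above.
print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)})
```

The printed policy is discovered by the agent, not enumerated by the programmer, which is the sense in which learned behaviour is not a predefined list.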
I searched LW and the EA Forum and didn't find any post or comment about this paper, so I thought I would post a question.