Just listened to this.
It sounds like Harnad is stating outright that there's nothing an LLM could do that would make him believe it's capable of understanding.
At that point, when someone is so fixed in their worldview that no amount of empirical evidence could move them, there really isn't any point in having a dialogue.
It's just unfortunate that, being a prominent academic, he'll instill these views into plenty of young people.
Yes, there's an empirical way to make me (or anyone) believe an LLM is understanding: ground it in the capacity to pass the robotic version of the Turing Test, i.e., walk the walk, not just talk the talk, Turing-indistinguishable from a real, understanding person (for a lifetime, if need be). A mere word-bag in a vat, no matter how big, can't do that.
I think he was just talking about ChatGPT at that point, but I don't recall exactly what he said.
Cross-posted from New Savanna.
Stevan Harnad: AI's Symbol Grounding Problem, The Gradient podcast, August 31, 2023
Outline:
The podcast site also has links to Harnad’s webpages and to five selected articles. One of them in particular, about the structure of dictionaries, interested me. Here’s the citation, abstract, and a link:
Philippe Vincent-Lamarre, Alexandre Blondin Massé, Marcos Lopes, Mélanie Lord, Odile Marcotte, Stevan Harnad. The Latent Structure of Dictionaries. Topics in Cognitive Science 8 (2016) 625–659. DOI: 10.1111/tops.12211. (Open Access)
Finally, somewhere latish in the conversation Harnad made an incisive remark about the vexed issue of whether or not LLMs really understand language. The issue, he remarked, is not whether or not they understand language as we do, but how they can do so much without such understanding. YES, a thousand times yes.
He also noted that he enjoys working with, what was it? ChatGPT. So do I, so do I. And I haven’t the slightest suspicion, worry, or hope that it might be sentient. It is what it is.