Well, I certainly agree that there are important aspects of human languages that arise from our experience of being embodied in particular ways, and that without some sort of model embedding the results of that kind of experience we're not going to get very far in automating the understanding of human language.
But it sounds like you're suggesting that it's not possible to construct such a model within a "disembodied" algorithmic system, and I'm not sure why that should be true.
Then again, I'm not really sure what precisely is meant here by "disembodied algorithmic system" or "ROBOT".
For example, is a computer executing a software emulation of a humanoid body interacting with an emulated physical environment a disembodied algorithmic system, or an AI ROBOT (or neither, or both, or it depends on something)? How would I tell, for a given computer, which kind of thing it was (if either)?
I agree that Searle believes in magic, but "intentionality" is not magic (see: almost anything Dennett has written).
This sounds interesting. Could you expand on this?
A list of references can be found in an earlier post in this thread.