AdeleneDawner comments on Language, intelligence, rationality - Less Wrong
Interesting question. My thought is that the 'compression mode' model of language is relevant here: the words themselves don't actually communicate very much, but rely on the recipient having an understanding of the world similar enough to the sender's to decode them. I'm not sure, but it seems at least plausible to me that English and other similar languages are compressed in a way that an AI could decode, but not very efficiently, and not necessarily in a way that we would want.
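The analogy can be made concrete with a literal compression scheme. This is just an illustrative sketch, not anything from the original discussion: zlib's preset-dictionary feature stands in for shared priors, and all the strings and variable names are made up for the example. The sender and receiver share a dictionary of phrases (the "understanding of the world"); messages compressed against it come out shorter, and only a receiver holding the same dictionary can expand them back.

```python
import zlib

# Shared "understanding of the world": phrases both parties already know.
# (Contents are arbitrary; this is only an illustration.)
shared_priors = b"the cat sat on the mat while the dog slept by the fire"

message = b"the dog sat on the mat while the cat slept by the fire"

# Sender compresses against the shared dictionary (the shared priors).
c = zlib.compressobj(zdict=shared_priors)
compressed = c.compress(message) + c.flush()

# Baseline: the same message compressed with no shared context.
c0 = zlib.compressobj()
baseline = c0.compress(message) + c0.flush()

# A receiver holding the same priors reconstructs the message exactly.
d = zlib.decompressobj(zdict=shared_priors)
decoded = d.decompress(compressed) + d.flush()

assert decoded == message
assert len(compressed) < len(baseline)  # shared priors -> shorter message
```

The point of the sketch is that the compressed stream is meaningless on its own: the information is split between the signal and the dictionary, which is roughly the claim being made about natural language and an AI that lacks our priors.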
ETA: If this is the case, conversational Lojban probably has the same problem, but Lojban appears to be extensible in ways that English is not, so it may do a better job of rising to the challenge by way of something like a specialized grammar.
i.e., language is something that works on the listener's priors, like all intersubjective things.