
AdeleneDawner comments on Language, intelligence, rationality - Less Wrong Discussion

Post author: curiousepic | 12 April 2011 05:04PM




Comment author: AdeleneDawner | 12 April 2011 06:04:26PM | 4 points

Interesting question. My thought is that the 'compression mode' model of language - that it doesn't actually communicate very much, but relies on the recipient having a similar enough understanding of the world to the sender's to decode it - is relevant here. I'm not sure, but it seems at least plausible to me that English and other similar languages are compressed in such a way that while an AI could decode them, it wouldn't do so very efficiently, and the result wouldn't necessarily be something that we would want.
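The 'compression mode' idea has a close analogue in ordinary data compression: a message can be encoded very compactly if sender and receiver already share a body of reference text, and it becomes undecodable without it. A minimal sketch using Python's `zlib` preset-dictionary support (the "shared context" text and message here are invented purely for illustration):

```python
import zlib

# Background knowledge both parties hold in advance - the "similar enough
# understanding of the world" that the comment describes.
shared_context = (b"the cat sat on the mat because cats like warm mats "
                  b"and the dog chased the cat across the warm mat")

message = b"the cat sat on the warm mat"

# Sender compresses the message against the shared dictionary.
compressor = zlib.compressobj(zdict=shared_context)
packed = compressor.compress(message) + compressor.flush()

# A receiver holding the same dictionary can decode it.
decompressor = zlib.decompressobj(zdict=shared_context)
decoded = decompressor.decompress(packed)
assert decoded == message

# A receiver without the shared context cannot: the stream declares that it
# needs a preset dictionary, and decompression fails.
naive = zlib.decompressobj()
try:
    naive.decompress(packed)
    print("decoded without shared context")
except zlib.error:
    print("decoding failed without the shared context")
```

The analogy is loose - natural language is lossier and vaguer than zlib - but it captures the structural point: the compressed signal carries little on its own, and most of the "meaning" lives in the dictionary both sides are assumed to share.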

ETA: If this is the case, conversational Lojban probably has the same problem, but Lojban appears to be extensible in ways that English is not, so it may do a better job of rising to the challenge by way of something like a specialized grammar.

Comment author: David_Gerard | 12 April 2011 07:38:39PM | 0 points

My thought is that the 'compression mode' model of language - that it doesn't actually communicate very much, but relies on the recipient having a similar enough understanding of the world to the sender's to decode it - is relevant here.

i.e., language is something that works on the listener's priors, like all intersubjective things.