All of jakej's Comments + Replies

Even leaving issues of quantum physics aside, macroscopic physical objects like humans are unlikely to be very compressible (information-wise, that is). The author might feel that the number of lead atoms in their 36 molar tooth is not part of their Kolmogorov string, but I would argue that it is certainly part of a complete description.

I don't know, just how compressible are we? I agree that the lead in my 36 molar is a part of my description, but anomalies such as these are always going to be the hardest part of compression, since noise is not compr...

I imagine there could be two different compression strategies that both happen to produce a result of the same length, but cannot be merged.

I think this is correct, but I see it as similar to chirality: multiple symmetric versions of the same essential information. It probably also depends on the description language you use, so in one language something might have multiple minimal versions, while in another it wouldn't.

Viliam
Yes, if there is no deep underlying reason why the two minimal descriptions should be the same, and it "just happened", I would assume that with a slightly different description language it would not happen. Even the "3A2B" vs "3ABB" example would stop working if encoding a number used a different number of bits than encoding a character.
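The tie-breaking point can be sketched with a toy decoder. The decoding rule and per-symbol bit costs below are assumptions made for illustration, not anything from the original example beyond the two strings themselves:

```python
# Toy description language: a digit means "repeat the next
# character that many times"; a bare character stands for itself.
def decode(desc: str) -> str:
    out, i = [], 0
    while i < len(desc):
        if desc[i].isdigit():
            out.append(desc[i + 1] * int(desc[i]))
            i += 2
        else:
            out.append(desc[i])
            i += 1
    return "".join(out)

# Two distinct descriptions of the same string, tied at length 4:
assert decode("3A2B") == "AAABB"
assert decode("3ABB") == "AAABB"
assert len("3A2B") == len("3ABB") == 4

# Hypothetical cost model: a digit costs 5 bits, a letter 3.
# The tie breaks, and only one description is now minimal.
def cost(desc: str) -> int:
    return sum(5 if c.isdigit() else 3 for c in desc)

assert cost("3A2B") == 16  # two digits + two letters
assert cost("3ABB") == 14  # one digit + three letters
```

Under equal symbol costs the two descriptions tie; as soon as digits and letters are priced differently, the "coincidental" tie disappears.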

To me, it really looks like brains and LLMs are both using embedding spaces to represent information. Embedding spaces ground symbols by automatically relating all concepts they contain, including the grammar for manipulating these concepts.

Bogdan Ionut Cirstea
There are some papers suggesting this could indeed be the case, at least for language processing e.g. Shared computational principles for language processing in humans and deep language models, Brain embeddings with shared geometry to artificial contextual embeddings, as a code for representing language in the human brain.