SilasBarta comments on Let's reimplement EURISKO! - Less Wrong
Hm, the abstract for that paper mentions that:
This is a really interesting point; it seems related to the idea that to be an expert in something, you need a vocabulary close to the domain in question.
It also immediately raises the question of what the expert vocabulary of vocabulary formation/acquisition is, i.e., the vocabulary of the domain of learning itself.
It doesn't seem that interesting to me: it's just a restatement that "data compression = data prediction". Having a vocabulary "close to the domain" simply means that the domain's common concepts are compactly expressed. Once you've maximally compressed a domain, you have discovered all of its regularities, and feeding even a short random string to the decompressor will yield something useful.
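To make "common concepts compactly expressed" concrete, here's a toy sketch (the sample text and the vocabulary are both made up for illustration) using zlib's preset-dictionary feature: handing the compressor a domain vocabulary up front lets it encode the same data in fewer bytes.

```python
import zlib

# Toy illustration: a preset dictionary of frequent domain phrases (zlib's
# zdict) acts like an "expert vocabulary" and shrinks the compressed output.
# Both the sample text and the vocabulary are made up for this sketch.
domain_text = b"the heuristic that mutates heuristics mutates the heuristic " * 20
vocabulary = b"the heuristic that mutates heuristics"

plain = zlib.compressobj(level=9)
plain_bytes = plain.compress(domain_text) + plain.flush()

expert = zlib.compressobj(level=9, zdict=vocabulary)
expert_bytes = expert.compress(domain_text) + expert.flush()

print("no vocabulary:  ", len(plain_bytes))
print("with vocabulary:", len(expert_bytes))  # often a bit smaller: the first
                                              # occurrence can reference the dictionary

# Decompression only works if the receiver shares the same vocabulary.
decoder = zlib.decompressobj(zdict=vocabulary)
assert decoder.decompress(expert_bytes) + decoder.flush() == domain_text
```

Note that the vocabulary here was handed to the compressor by fiat, which is exactly the part being glossed over.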
How do you find which concepts are common and how do you represent them? Aye, there's the rub.
So my guess would be that the expert vocabulary of vocabulary formation is the vocabulary of data compression. I don't know how to make any use of that, though, because the No Free Lunch Theorems seem to say that no compression algorithm is the best across all domains, and so there's no purely algorithmic way to find the best compressor for this universe.
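The practical fallback is just the empirical one: measure a few off-the-shelf compressors on a sample of the domain and keep whichever wins there. A minimal sketch, with a made-up stand-in for real domain data:

```python
import bz2
import lzma
import zlib

# Minimal sketch of the empirical fallback: since no single compressor is best
# across all domains, measure a few on a sample of the domain and keep the
# winner. The sample below is a made-up stand-in for real domain data.
sample = b"IF a concept is interesting THEN specialize it; " * 200

candidates = {
    "zlib": lambda d: zlib.compress(d, 9),
    "bz2":  lambda d: bz2.compress(d, 9),
    "lzma": lambda d: lzma.compress(d),
}

sizes = {name: len(fn(sample)) for name, fn in candidates.items()}
print(sizes)
print("best compressor for this sample:", min(sizes, key=sizes.get))
```

Of course this only tells you which of a fixed menu does best on the sample you happened to pick, not what the best compressor for the domain is.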
(ETA: multiple quick edits)