mgm45210

Agree with ws27a that it's hard to pick a certain point in the evolution of models and state that they now have a world model. But I think the focus on world models is missing the point somewhat. It makes much more sense to define understanding as the ability to predict what happens next than to define it as compression, which is just an artifact of data/model limitations. In that sense, validation error for prediction "is all you need." Relatedly, I don't get why we want to "incentivise building robust internal algorithms and world models" -- if we formulate a goal-based objective instead of a prediction objective, the model is still going to find the best way of solving the problem given its size, and it will compromise on its world-model representation if that helps it get closer to the goal. Natural intelligence does very much the same...
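
To make the "validation error for prediction is all you need" point concrete, here is a minimal sketch (my own illustration, not from the original post) of scoring a model purely by its held-out next-token prediction loss. The toy add-one-smoothed bigram predictor and the function names are hypothetical; the point is only that "understanding" is operationalised as nothing more than low prediction error on unseen data.

```python
import math
from collections import Counter, defaultdict

def train_bigram(tokens, vocab):
    """Fit a toy add-one-smoothed bigram model: next-token counts given the current token."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1

    def predict(prev):
        # Smoothed distribution over the vocabulary for the next token.
        total = sum(counts[prev].values()) + len(vocab)
        return {t: (counts[prev][t] + 1) / total for t in vocab}

    return predict

def validation_nll(predict, tokens):
    """Average negative log-likelihood of held-out next-token predictions.
    Lower is better: this is the 'validation error for prediction' criterion."""
    nll = 0.0
    for prev, nxt in zip(tokens, tokens[1:]):
        nll -= math.log(predict(prev)[nxt])
    return nll / (len(tokens) - 1)

train = "the cat sat on the mat the cat ate".split()
held_out = "the cat sat on the mat".split()
vocab = set(train) | set(held_out)
model = train_bigram(train, vocab)
print(f"held-out next-token NLL: {validation_nll(model, held_out):.3f}")
```

Under this framing, nothing about the model's internals is evaluated directly; whatever internal representation (world model or otherwise) best reduces the held-out loss is what gets selected for.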