Alexis Farmer

One problem is the assumption that grammar is distinct from language and language is distinct from reality. Large language models do have an understanding of the world, and it's not correct to say they have no concept of truth. To judge whether sentence A is more probable than sentence B, a model needs some grasp of what configurations of objects and events those sentences represent in reality, and which of those is more likely. Ilya Sutskever, one of the main researchers behind GPT, believes that with enough data these models will figure everything out: they should model physics, chemistry, biology, any kind of rules, just by looking for patterns in the input. Chomsky's objections come down to belief. I think it's a belief in the uniqueness of humans and an unwillingness to accept that, at some level, our thinking is merely mechanistic.
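To make the "sentence A vs. sentence B" point concrete, here is a toy sketch of my own (not from the comment, and far simpler than a real LLM): a bigram language model with add-one smoothing, trained on a four-sentence corpus, that assigns each sentence a log-probability by chaining conditional word probabilities. Comparing those scores is the same mechanism, scaled down enormously, that lets a large model prefer a plausible word order over an implausible one.

```python
import math
from collections import defaultdict

# Tiny illustrative corpus (invented for this sketch).
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
    "the dog chased the cat",
]

# Count bigrams, with <s> and </s> marking sentence boundaries.
counts = defaultdict(lambda: defaultdict(int))
vocab = set()
for sentence in corpus:
    words = ["<s>"] + sentence.split() + ["</s>"]
    vocab.update(words)
    for prev, word in zip(words, words[1:]):
        counts[prev][word] += 1

V = len(vocab)

def log_prob(sentence: str) -> float:
    """Sum of log P(word | previous word), with add-one smoothing
    so unseen bigrams still get a small nonzero probability."""
    words = ["<s>"] + sentence.split() + ["</s>"]
    total = 0.0
    for prev, word in zip(words, words[1:]):
        num = counts[prev][word] + 1
        den = sum(counts[prev].values()) + V
        total += math.log(num / den)
    return total

a = "the cat sat on the mat"
b = "the mat sat on the cat"
# The attested word order scores higher than the scrambled one.
print(log_prob(a) > log_prob(b))  # → True
```

A real LLM replaces the bigram table with a neural network conditioned on the whole preceding context, but the comparison it makes between candidate continuations is the same kind of probability judgment.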