I think this is real, in the sense that they got the results they are reporting and this is a meaningful advance. Too early to say if this will scale to real world problems but it seems super promising, and I would hope and expect that Waymo and competitors are seriously investigating this, or will be soon.
Having said that, it's totally unclear how you might apply this to LLMs, the AI du jour. One of the main innovations in liquid networks is that they are continuous rather than discrete, which is good for very high-bandwidth tasks like vision. Our eyes are technically discrete in that retinal cells fire discretely, but I think the best interpretation of them at scale is much more like a continuous system. The same goes for hearing, where the AI analog is speech recognition.
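To make "continuous" a bit more concrete: in a liquid time-constant network, the hidden state is defined by an ODE that a solver integrates through time, rather than by a discrete layer-by-layer update. Below is a rough toy sketch of that idea (my own code, not the authors' implementation; the parameter names, sizes, and the fixed-step Euler solver are all illustrative):

```python
# Toy sketch of a liquid time-constant (LTC) cell. The hidden state x follows
#   dx/dt = -(1/tau + f(x, u)) * x + f(x, u) * A
# where f is a learned, bounded nonlinearity that modulates both the effective
# time constant and the drive toward A. Integrated with a fixed-step Euler solver.
# Names and sizes are illustrative, not the paper's implementation.
import numpy as np

class LTCCell:
    def __init__(self, n_inputs, n_neurons, tau=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0.0, 0.1, (n_neurons, n_inputs))    # input weights
        self.W_rec = rng.normal(0.0, 0.1, (n_neurons, n_neurons))  # recurrent weights
        self.b = np.zeros(n_neurons)
        self.A = np.ones(n_neurons)   # per-neuron target state
        self.tau = tau                # base time constant

    def _f(self, x, u):
        # Sigmoid keeps the gate positive and bounded, so the effective
        # time constant 1 / (1/tau + f) stays well-behaved.
        return 1.0 / (1.0 + np.exp(-(self.W_rec @ x + self.W_in @ u + self.b)))

    def step(self, x, u, dt=0.05):
        f = self._f(x, u)
        dx = -(1.0 / self.tau + f) * x + f * self.A
        return x + dt * dx  # one Euler step of the continuous-time dynamics

# Usage: a 19-neuron cell driven by a stream of 4-dimensional sensor readings.
cell = LTCCell(n_inputs=4, n_neurons=19)
x = np.zeros(19)
for u in np.random.default_rng(1).normal(size=(200, 4)):
    x = cell.step(x, u)
```

The state evolves in continuous time and can be sampled at whatever rate the input arrives, which fits a camera or microphone feed much more naturally than a sequence of discrete symbols.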
But language is not really like that. Words are basically discrete: usually you want to process text at the token level (~= words), sometimes at the level of wordpieces or even letters, but it's not that sensible to think of text as being continuous. So it's not obvious how to apply liquid NNs to text understanding/generation.
Research opportunity!
But it'll be a while, if ever, before continuous networks work for language.
Thanks for your answer! Very interesting.
I didn't know about the continuous nature of LNNs; I would have thought you needed different hardware (maybe an analog computer?) to handle continuous values.
Maybe it could work for generative networks for images or music, which seem less discrete than written language.
Then again...the output of an LLM is a stream of tokens (yeah?). I wonder what applications LTCs could have as a post-processor for LLM output? No idea what I'm really talking about though.
This is pure capabilities, and yes, it's a big deal.
If it works out-of-distribution, that's a huge deal for alignment! Especially if alignment generalizes farther than capabilities. Then you can just throw something like imitative amplification at it and it is probably aligned (assuming that "does well out-of-distribution" implies that the mesa-optimizers are tamed).
I have to dispute the idea that "fewer neurons" = "more human-readable". If each of those fewer neurons is doing a more complex job, the network won't necessarily be easier to interpret.
Definitely. The lower the neuron-to-'concepts' ratio, the more superposition is required to represent everything. That said, given the continuous-function nature of LNNs, these seem like the wrong abstraction for language. Image models? Maybe. Audio models? Definitely. Tokens and/or semantic data? That doesn't seem practical.
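As a toy illustration of that neuron-to-concept ratio point (my own sketch, not from the paper; the concept count and sizes are made up): if you pack many more concept directions than there are neurons, the directions are forced to overlap, and that interference is exactly what makes individual activations hard to read off.

```python
# Toy superposition demo: pack 200 random "concept" directions into a small
# vs. a larger number of neurons and measure how much they unavoidably overlap.
# Purely illustrative; the concept count and sizes are made up.
import numpy as np

rng = np.random.default_rng(0)
n_concepts = 200

for n_neurons in (19, 190):
    V = rng.normal(size=(n_concepts, n_neurons))
    V /= np.linalg.norm(V, axis=1, keepdims=True)   # unit concept directions
    cos = V @ V.T                                   # pairwise overlaps
    np.fill_diagonal(cos, 0.0)
    print(f"{n_neurons:>3} neurons, {n_concepts} concepts: "
          f"mean |overlap| = {np.abs(cos).mean():.2f}, "
          f"max |overlap| = {np.abs(cos).max():.2f}")
```

The fewer neurons per concept, the larger the overlaps, so a "readable" neuron in a 19-unit controller is only plausible if the task really requires very few concepts.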
I just skimmed the video, but it seems like there's more salesmanship than there is explanation of what the network is doing, how its capabilities would compare to using e.g. a small RNN, and how far it actually generalizes.
Remember that self-driving cars first appeared in the 1980s - lane-keeping is actually a very simple task if you only need 99% reliability. I don't think their demos are super informative about the utility of this architecture to complicated tasks.
So I'd be interested if you looked into it more and think that my first impression is unfair.
I came across this video by MIT CSAIL.
Here is the article they are talking about: https://www.science.org/doi/10.1126/scirobotics.adc8892
This team claims to have accomplished driving tasks that previously required 10,000 neurons using only 19, thanks to "liquid neural networks" inspired by worm neurology.
They say this innovation brings massive improvements in performance, especially in embedded systems, but also in interpretability, since the reduced number of neurons makes the system much more human-readable. In particular, the system's attention would be much easier to track, which would open the door to safety certifications for high-stakes applications.
Having tested driving and flying tasks across different conditions and environments, they also claim that their system is vastly better at zero-shot, out-of-distribution tasks.
So basically, they believe they have made very substantial steps in pretty much every dimension that matters, both for performance and for safety.
As far as I can tell these are very serious researchers, but doesn't that sound a bit too good to be true? I have no expertise in machine learning and I haven't seen any third-party opinions on this yet, so I'm having a hard time making up my mind.
I'd be curious to hear your takes!