All of mana's Comments + Replies

mana
41

2 days ago

And I might already have to adjust the timelines.

Nvidia's new Digits costs $3,000 and is the size of a Mac Mini. Two of them can supposedly run a 400B-parameter language model, which is crazy. So maybe the hardware issues aren't as persistent a problem for robotics.
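A quick back-of-the-envelope check on why two units could plausibly fit a model of that size (assumptions: roughly 405B parameters, 4-bit quantized weights, and the publicly stated 128 GB of unified memory per Digits unit; activation and KV-cache overhead is ignored here):

```python
# Rough memory estimate for serving a ~405B-parameter model across
# two Digits units. Assumptions (not official figures beyond the
# 128 GB/unit spec): 4-bit weight quantization, no overhead counted.
params = 405e9
bytes_per_param = 0.5                       # 4 bits = 0.5 bytes
weights_gb = params * bytes_per_param / 1e9 # ~202.5 GB of weights

memory_per_unit_gb = 128
total_memory_gb = 2 * memory_per_unit_gb    # 256 GB across two units

print(f"weights: {weights_gb:.1f} GB")          # 202.5 GB
print(f"fits in two units: {weights_gb < total_memory_gb}")  # True
```

So the weights alone leave ~50 GB of headroom across the pair, which is why the two-unit claim is at least arithmetically plausible at 4-bit precision, though real inference overhead would eat into that margin.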

Also, Hailuo now has a single-image reference mode that works like a LoRA. It's remarkably consistent for faces, even if the rest of the output is a bit quirky.

mana
21

Gemini 1206 Exp has a 2-million-token context window. Even if that isn't the effective context, it probably performs much better in that regard than GPT-4o and the like. I haven't tested it yet because I don't want to get rate-limited on AI Studio in case they monitor that.

Frankly, the "shorter" conversations I had, at a few tens of thousands of tokens, were already noticeably more consistent than before; e.g., it correctly referenced earlier responses much later in the conversation.

mana
20

Great calls for 2024; I'd say most are at least partially accurate.

However, looking at 2026, you underestimated the pace of text-to-video development, as I did. Given that Veo 2 can already generate sequences with cuts and keep the same subject consistent across clips, the 60-second consistency will probably be reached in 2025. As for quality above DALL-E 3's level, that has already been surpassed.

I'd say in late 2026 at the earliest, or more realistically late 2027 because of compute constraints, we'll see a product that can generate coherent feature-film-length video, optionally...

HunterJay
2
I agree, I definitely underestimated video. Before publishing, I had a friend review my predictions; they called out video as being too low, and I adjusted upward in response and still underestimated it. I'd now agree with 2026 or 2027 for coherent feature-film-length video, though I'm not sure it would be at feature-film artistic quality (including plot). I also agree with Her-like products in the next year or two!

Personally, I would still expect cloud compute to be used for robotics, but only in ways where latency doesn't matter (like a planning and reasoning system on top of a smaller local model, doing deeper analysis like "There's a bag on the floor by the door. Ordinarily it should be put away, but given that it wasn't there 5 minutes ago, it might be actively in use right now, so I should leave it..."). I'm not sure the privacy concerns will trump convenience, as with phones.

I also now think virtual agents will start to become a big thing in 2025 and 2026, doing some kinds of remote work, or sizable chunks of existing jobs, autonomously (while still not being able to automate most jobs end to end)!