It is a nice thought experiment, but I've noticed that many AI researchers are devoted to their work to a degree comparable to religious fanaticism (in either camp, really). I don't think a fat paycheck will make them stop their research so readily.
lol, I think Jason Crawford was coming at this from the opposite perspective of "this is already happening in lots of places and it's bad", rather than as a how-to manual. (But, I too am interested in it as a how-to manual)
Very insightful, thanks for the clarification, doom-laden as it is.
A nuclear reactor doesn't try to convince you, with speech or text, to behave in a way you would not have before interacting with it. And that is assuming your claim that 'current LLMs are not agentic' holds true, which seems doubtful.
As much as I agree that things are about to get really weird, that first diagram is a bit too optimistic. There is a limit to how much data humanity has available to train AI (here), and it seems doubtful we can learn to use data 1000x more effectively in such a short span of time. For all we know, there could be yet another AI winter coming - I don't think we will get that lucky, though.
While there is a limit to current text datasets, and expanding them with high-quality human-generated text would be expensive, I'm afraid that's not going to be a blocker.
Multimodal training already bypasses text-only limitations entirely. Beyond just extracting text tokens from YouTube, the video/audio itself could be used as training data, and its informational richness relative to text seems very high.
Further, as Gato demonstrates, there's nothing stopping one model from spanning hundreds of distinct tasks, and many of those tasks can come from ...
Thank you for posting this. Been 'levelling up' my maths for machine learning lately and this is just perfect.
Even if the assessment is pessimistic, it is invaluable to know that an idea is unlikely to succeed before you spend your only shot on it.
Thanks for the pointers, I will research them and reformulate my plan.
Our attention is one of the most valuable resources we have, and recent AI developments in NLP and machine vision are making us realize that it might very well be a fundamental component of intelligence itself.
This post brings this point to our attention (pun intended) by using video games as examples, and encourages us to optimize the way we use this limited resource to maximize information gain and to improve our cooperation skills by avoiding being 'sound absorbers'.
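Since the "attention" from NLP and machine vision alluded to above is the scaled dot-product attention used in transformers, here is a minimal NumPy sketch (purely illustrative, not from the post; the names and toy data are my own) of the idea that attention is a normalized budget the model spreads over its inputs:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Each query distributes a fixed attention "budget" (rows sum to 1) over the values,
    # weighted by how relevant each key is to that query.
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)  # query-key relevance
    weights = softmax(scores, axis=-1)              # normalized allocation
    return weights @ V, weights

# Toy example: 3 queries attending over 4 key/value pairs of dimension 8.
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 8)), rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
out, w = scaled_dot_product_attention(Q, K, V)
print(w.round(2))  # each row sums to 1: attention is a limited resource being shared out
```

The point of the sketch is just that attending more to one input necessarily means attending less to the others, which is the same trade-off the post describes for human attention.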