All of Antb's Comments + Replies

Our attention is one of the most valuable resources we have, and recent AI developments in NLP and machine vision suggest it may well be a fundamental component of intelligence itself.

This post brings that point to attention (pun intended) using video games as examples, and encourages us to optimize how we use this limited resource: to maximize information gain, and to improve our cooperation skills by avoiding becoming 'sound absorbers'.

Antb20

It is a nice thought experiment, but I've noticed that many AI researchers are devoted to their work to a degree comparable to religious fanaticism (in either camp, really). I don't think a fat paycheck will make them stop their research so readily.

Raemon148

lol, I think Jason Crawford was coming at this from the opposite perspective of "this is already happening in lots of places and it's bad", rather than as a how-to manual. (But, I too am interested in it as a how-to manual)

Antb30

Very insightful, thanks for the clarification, doomy as it is.

Antb20

A nuclear reactor doesn't try to convince you, through speech or text, to behave in a way you would not have before interacting with it. And that assumes your claim that 'current LLMs are not agentic' holds true, which seems doubtful.

3jaspax
ChatGPT also doesn't try to convince you of anything. If you explicitly ask it why it should do X, it will tell you, much like it will tell you anything else you ask it for, but it doesn't provide this information unprompted, nor does it weave it into unrelated queries.
Antb146

As much as I agree that things are about to get really weird, that first diagram is a bit too optimistic. There is a limit to how much data humanity has available for training AI (here), and it seems doubtful we can learn to use data 1,000x more effectively in such a short span of time. For all we know, yet another AI winter could be coming, though I don't think we will get that lucky.

6Vladimir_Nesov
This suggests that without much more data we don't get much better token prediction, but arguably the modern quality of token prediction is already more than sufficient for AGI; we've reached a token-prediction overhang. It's some other things that are missing, and they won't be resolved with better token prediction. (And it seems there are still ways of improving token prediction a fair bit, but again this is possibly irrelevant for timelines.)
porby1813

While there is a limit to the current text datasets, and expanding them with high-quality human-generated text would be expensive, I'm afraid that's not going to be a blocker.

Multimodal training already completely bypasses text-only limitations. Beyond just extracting text tokens from YouTube, the video/audio itself could be used as training data, and its informational richness relative to text seems very high.

Further, as Gato demonstrates, there's nothing stopping one model from spanning hundreds of distinct tasks, and many of those tasks can come from ...

Antb62

Thank you for posting this. I've been 'levelling up' my maths for machine learning lately, and this is just perfect.

Antb10

Even if it's pessimistic, it is invaluable to know that an idea is unlikely to succeed before you invest your only shot in it.

Thanks for the pointers, I will research them and reformulate my plan.

3Charlie Steiner
Some reading recommendations might be Learning The Prior and the AI Alignment Dataset project.