boazbarak

I was thinking of this as a histogram: the probability that the model solves the task at that level of quality.

I indeed believe that regulation should focus on deployment rather than on training.

See also my post https://www.lesswrong.com/posts/gHB4fNsRY8kAMA9d7/reflections-on-making-the-atomic-bomb

The Manhattan Project was all about taking something that's known to work in theory and solving all the Z_n's.

There is a general phenomenon in tech, expressed many times, of people over-estimating the short-term consequences of a technology and under-estimating the longer-term ones (e.g., "Amara's law").

I think that often it is possible to see that current technology is on track to achieve X, where X is widely perceived as the main obstacle to the real-world application Y. But once you solve X, you discover that there is a myriad of other "smaller" problems Z_1, Z_2, Z_3 that you need to resolve before you can actually deploy it for Y.

And of course, there is always a huge gap between demonstrating you solved X on some clean academic benchmark and needing to do so "in the wild". This is particularly an issue in self-driving, where errors can literally be deadly, but it arises in many other applications.

I do think that one lesson we can draw from self-driving is that there is a huge gap between full autonomy and "assistance" with human supervision. So, I would expect to see AI deployed as (increasingly sophisticated) "assistants" well before AI systems are actually able to function as "drop-in" replacements for current human jobs. This is part of the point I was making here.

Some things like that have already happened - bigger models are better at utilizing tools such as in-context learning and chain-of-thought reasoning. But again, whenever people plot any graph of such reasoning capabilities as a function of model compute or size (e.g., the BIG-Bench paper), the X axis is always logarithmic. For specific tasks, the dependence on log compute is often sigmoid-like (flat for a long time, then going up more sharply as a function of log compute), but as mentioned above, when you average over many tasks you get this type of linear dependence.
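To make that shape concrete, here is a minimal synthetic sketch (the thresholds and slopes are made up for illustration, not taken from any benchmark): model each task's success rate as a sigmoid in log compute, then average over many tasks whose thresholds are spread out.

```python
# Synthetic illustration only (no real benchmark data): each task's success
# rate is a sigmoid in log compute with its own threshold and sharpness;
# averaging many such curves gives a roughly linear aggregate in log compute.
import numpy as np

rng = np.random.default_rng(0)
log_compute = np.linspace(0, 10, 200)        # log10 of training compute, arbitrary units
thresholds = rng.uniform(1, 9, size=100)     # assumed per-task "take-off" points
sharpness = rng.uniform(1.0, 4.0, size=100)  # assumed per-task steepness

def task_score(lc, t, s):
    """Flat for a long time, then rises sharply around the task's threshold."""
    return 1.0 / (1.0 + np.exp(-s * (lc - t)))

per_task = np.stack([task_score(log_compute, t, s) for t, s in zip(thresholds, sharpness)])
aggregate = per_task.mean(axis=0)  # average score across tasks

# Individual rows of `per_task` look sigmoid-like; `aggregate` increases close
# to linearly in log compute over the range where the thresholds are spread.
```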

One can make all sorts of guesses, but based on the evidence so far, AIs have a different skill profile than humans. This means that if we think of any job which requires a large set of skills, then for a long period of time, even if AIs beat the human average in some of them, they will perform worse than humans in others.

I always thought the front was the other side, but looking at Google images you are right... I don't have time now to redraw this, but you'll just have to take it on faith that I could have drawn it on the other side 😀

> On the other hand, if one starts creating LLM-based "artificial AI researchers", one would probably create diverse teams of collaborating "artificial AI researchers" in the spirit of multi-agent LLM-based architectures... So, one would try to reproduce the whole teams of engineers and researchers, with diverse participants.

I think this can be an approach to create a diversity of styles, but not necessarily of capabilities. A bit of prompt engineering telling the model to pretend to be some expert X can help on some benchmarks, but the returns diminish very quickly. So you can have a model pretending to be this type of person or that, but it will still suck at Tic-Tac-Toe. (For example, GPT-4 doesn't recognize a winning move even when I tell it to play like Terence Tao.)

 

Regarding the existence of compact ML programs, I agree that it is not known. I would say, however, that the main benefit of architectures like transformers hasn't been so much to save on the total number of FLOPs as to organize these FLOPs so they are best suited for modern GPUs - that is, to ensure that the majority of the FLOPs are spent multiplying dense matrices.
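A rough back-of-the-envelope sketch of what I mean (the dimensions below are assumptions picked for illustration, not any particular model's):

```python
# Back-of-the-envelope FLOP breakdown for one transformer block. The dimensions
# are illustrative assumptions, not any specific model's. The point: nearly all
# FLOPs land in dense matrix multiplications, which is exactly the workload
# modern GPUs are optimized for.
d_model, seq_len, d_ff = 4096, 2048, 16384

def matmul_flops(m, k, n):
    # An (m x k) times (k x n) matrix product costs about 2*m*k*n FLOPs.
    return 2 * m * k * n

projections = 4 * matmul_flops(seq_len, d_model, d_model)  # Q, K, V, and output projections
mlp = 2 * matmul_flops(seq_len, d_model, d_ff)             # MLP up- and down-projection
attention = 2 * matmul_flops(seq_len, d_model, seq_len)    # QK^T and (attn weights) @ V, summed over heads
elementwise = 20 * seq_len * d_model                       # rough allowance for softmax, layernorm, activations

total = projections + mlp + attention + elementwise
matmul_share = (projections + mlp + attention) / total
print(f"share of FLOPs in dense matmuls: {matmul_share:.2%}")
```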
