I would be curious to hear people's views or reactions to Eureka Labs' planned AI training course.

On a very trivial level, I must confess to liking the name. When I saw the mention on MR, the first thing I thought of was the TV show Eureka, which I found quite entertaining and enjoyable.

More seriously, I am curious about what quality the course might be; I assume some here know something about Karpathy and how his company might present the information. I would also be interested in thoughts from a pure AI safety perspective. One might suspect some see this a bit as teaching anyone and everyone how to build a nuclear bomb or make TNT. But perhaps others see him as helping to get students thinking in terms of safety and alignment.

Update: I should probably also note that this likely doesn't move the dial much, given the existing availability of online AI training from many, many sources. So "big yawn" might be the popular response to the question.


mishka


This will likely be a very good AI capability-oriented course, approximately along these lines:

https://github.com/karpathy/LLM101n

Andrej is a great teacher who has created a lot of very useful pedagogical material for people who want to learn AI. I have used some of it to significant benefit: I even cite his famous 2015 essay "The Unreasonable Effectiveness of Recurrent Neural Networks" in some of my texts, and I have used his minGPT and nanoGPT to improve my understanding of decoder-only Transformers and to experiment with them.
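(For readers who haven't looked at minGPT or nanoGPT: the core idea those repos teach is causal self-attention, where each token may attend only to itself and earlier tokens. Here is a minimal single-head sketch in numpy, not code from either repo, just an illustration of the masking trick; the weight matrices are random placeholders.)

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def causal_self_attention(x, Wq, Wk, Wv):
    """Single-head causal self-attention over a (T, d) sequence.

    The upper-triangular mask sets attention scores to -inf for future
    positions, so after the softmax each token attends only to itself
    and earlier tokens -- the defining trait of a decoder-only model.
    """
    T, d = x.shape
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = (q @ k.T) / np.sqrt(d)
    future = np.triu(np.ones((T, T), dtype=bool), k=1)  # strictly above diagonal
    scores[future] = -np.inf
    return softmax(scores) @ v

# Toy usage with random weights (placeholders, not trained parameters).
rng = np.random.default_rng(0)
T, d = 4, 8
x = rng.normal(size=(T, d))
W = [rng.normal(size=(d, d)) for _ in range(3)]
out = causal_self_attention(x, *W)
print(out.shape)  # (4, 8)
```

A nice property to check for yourself: because of the mask, perturbing the last token changes only the last row of the output, which is exactly why these models can be trained on all positions in parallel.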

He is also a very strong practitioner, with an impressive track record, and I expect his new org will be successful in creating novel education-oriented AI.

Safety-wise, I think Andrej cares a lot about routine AI safety matters (being in charge of AI for Tesla Autopilot for many years makes one care about routine safety in a very visceral sense). I don't have a feel for his position on X-risk. I think he tends to be skeptical of AI regulation efforts.

The plan for their future AI course linked above does not seem to have any safety-oriented content whatsoever, but perhaps this might change if people who can create that kind of content eventually join the effort.

Nisan


This is the perfect time to start an AI + education project. AI today is not quite reliable enough to be a trustworthy teacher, while in the near future generic AI assistants will likely be smart enough to teach anything well (if they want to).

In the meantime, Eureka Labs faces an interesting alignment problem: Can they ensure that their AI teachers teach only true things? It will be tempting to make teachers that only seem to teach well. I hope they figure out how to navigate that!