I think even "a detailed picture of the topic of the lesson" can be too high of an expectation for many topics early on. (Ideally it wouldn't be, if things were taught well, but they often aren't.) A better goal would be to have just something you understand well enough that you can grab on to, that you can start building out from.
Like if the topic were a puzzle, it's fine if you don't have a rough sense of where every puzzle piece goes right away. It can be enough to have a few corner pieces in place that you then start building out from.
I recall having heard it claimed that a reason why financial crimes sometimes seem to have disproportionately harsh punishments relative to violent crimes is that financial crimes are more likely to actually be the result of a cost-benefit analysis.
Fantastic post. This has been frequently on my mind after reading it, and especially the surface/character layer split feels very distinct now that I have an explicit concept for it. At one point I asked a model to profile me based on some fiction I had co-written with it, and it managed to guess that I was Finnish from something I didn't think had any clues in that direction, which gave me a novel feeling of getting a glimpse into that vast alien ground layer.
The analogy to the character and player distinction in humans also feels very apt.
Don't most browsers come with spellcheck built in? At least Chrome automatically flags my typos.
Thanks. Still not convinced, but it will take me a full post to explain why exactly. :)
Though possibly some of this is due to a difference in definitions. When you say this:
what I consider AGI - which importantly is fully general in that it can learn new things, but will not meet the bar of doing 95% of remote jobs because it's not likely to be human-level in all areas right away
Do you have a sense of how long you expect it will take for such an AGI to go from "can learn new things" to "doing 95% of remote jobs"? If you e.g. expect that it might still take several years for the AGI to master most jobs once it has been created, then that might be more compatible with my model.
Hmm, some years back I was hearing the claim that self-driving cars work badly in winter conditions, and so are currently limited to the kinds of warmer climates where Waymo operates. I haven't checked whether that's still entirely accurate, but at least I haven't heard any news of progress on that front.
Thanks, this is the kind of comment that tries to break down things by missing capabilities that I was hoping to see.
Episodic memory is less trivial, but still relatively easy to improve from current near-zero-effort systems
I agree that it's likely to be relatively easy to improve from current systems, but just improving it is a much lower bar than getting episodic memory to actually be practically useful. So I'm not sure why this alone would imply a very short timeline. Getting things from "there are papers about this in the literature" to "actually sufficient for real-world problems" often takes a significant time, e.g.:
My general prior is that this kind of work - going from a conceptual prototype to a robust real-world application - can easily take anywhere from years to decades, especially once we move out of domains like games/math/programming and into ones that are significantly harder to formalize and test. Also, the more interacting components you have, the trickier it gets to test and train.
Thanks. I think this argument assumes that the main bottleneck to AI progress is something like research engineering speed, such that accelerating research engineering speed would drastically increase AI progress?
I think that makes sense as long as we are talking about domains like games / math / programming where you can automatically verify the results, but that something like the speed of real-world interaction becomes the bottleneck once we shift to more open domains.
Consider an AI being trained on a task such as “acting as the CEO for a startup”. There may not be a way to do this training other than to have it actually run a real startup, and then wait for several years to see how the results turn out. Even after several years, it will be hard to say exactly which parts of the decision process contributed, and how much of the startup’s success or failure was due to random factors. Furthermore, during this process the AI will need to be closely monitored in order to make sure that it does not do anything illegal or grossly immoral, slowing down its decision process and thus the whole training. And I haven’t even mentioned the expense of a training run where running just a single trial requires a startup-level investment (assuming that the startup won’t pay back its investment, of course).
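To make the "random factors" point a bit more concrete, here is a toy simulation sketch of my own (nothing in it comes from the discussion above; the success probabilities and trial counts are invented purely for illustration). Even before you get to assigning credit to individual decisions, just telling a somewhat better policy apart from a baseline by its noisy end-of-run outcomes takes a lot of very expensive trials:

```python
import random

# Toy numbers, invented for illustration: a "good" CEO policy succeeds 15% of
# the time, a "baseline" policy 10%. Each trial stands in for running one
# startup for several years and observing whether it succeeded.
GOOD_P, BASELINE_P = 0.15, 0.10

def successes(p, n):
    """Count successes over n simulated startups with per-startup success probability p."""
    return sum(random.random() < p for _ in range(n))

def prob_correct_ranking(n, repeats=2000):
    """Estimate how often n noisy trials per policy rank the better policy strictly higher."""
    wins = sum(successes(GOOD_P, n) > successes(BASELINE_P, n) for _ in range(repeats))
    return wins / repeats

if __name__ == "__main__":
    for n in (10, 50, 200, 1000):
        print(f"{n:4d} startups per policy -> better policy ranked higher "
              f"{prob_correct_ranking(n):.0%} of the time")
```

With each "trial" here taking years and a startup-level investment, even the smallest of these sample sizes looks prohibitive, and that is before trying to work out which individual decisions were responsible for the outcome.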
Of course, humans do not learn to be CEOs by running a million companies and then getting a reward signal at the end. Human CEOs come in with a number of skills that they have already learned from somewhere else that they then apply to the context of running a company, shifting between their existing skills and applying them as needed. However, the question of what kind of approach and skill to apply in what situation, and how to prioritize between different approaches, is by itself a skillset that needs to be learned... quite possibly through a lot of real-world feedback.
I think their relationship depends on whether crossing the gap requires grind or insight. If it's mostly about grind, then a good expert will be able to estimate it, but insight tends to be unpredictable by nature.
Another way of looking at my comment above would be that timelines of less than 5 years would imply that the remaining steps mostly require grind, while timelines of 20+ years would imply that some amount of insight is needed.
True!