Abstract
How could machines learn as efficiently as humans and animals? How could machines learn to reason and plan? How could machines learn representations of percepts and action plans at multiple levels of abstraction, enabling them to reason, predict, and plan at multiple time horizons? This position paper proposes an architecture and training paradigms with which to construct autonomous intelligent agents. It combines concepts such as configurable predictive world model, behavior driven through intrinsic motivation, and hierarchical joint embedding architectures trained with self-supervised learning.
Meta's Chief AI Scientist Yann LeCun lays out his vision for what an architecture for generally intelligent agents might look like.
I'm quite surprised by the lack of discussion of this paper. It is probably one of the most significant papers on AGI I've seen, as it outlines a concrete, practical path to its implementation, written by one of the most important researchers in the field.
There is not much discussion of the paper here on LessWrong yet, but there are a dozen or so comments about it on OpenReview: https://openreview.net/forum?id=BZ5a1r-kVsf