Just a tweet I saw:
Yann LeCun
Doomers: OMG, if a machine is designed to maximize utility, it will inevitably diverge
Engineers: calm down, dude. We only design machines that minimize costs. Cost functions have a lower bound at zero. Minimizing costs can't cause divergence unless you're really stupid.
Some commentary:
I think Yann LeCun is being misleading here. Minimizing a cost is the same problem as maximizing its negation, so the intuitive distinction between maximization and minimization does no real work; the distinction that matters is between convex optimization (where, e.g., every local optimum is a global optimum) and non-convex optimization. The problems people hope to solve with AGI are typically non-convex.
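As a sanity check on that point, here is a minimal sketch (the function `cost` and every constant in it are invented purely for illustration): gradient descent on a non-convex cost that is bounded below by zero can still settle at a strictly positive local minimum, and gradient ascent on `-cost` would trace the identical trajectory, so "we only minimize costs" does nothing by itself.

```python
# Toy non-convex cost, bounded below by zero: global minimum at x = 2 (cost 0),
# spurious local minimum near x ~= -0.81 (cost ~= 4.2). Invented for this sketch.
def cost(x):
    return (x - 2.0) ** 2 * ((x + 1.0) ** 2 + 0.5)

def grad(x, eps=1e-6):
    # Central-difference numerical gradient; fine for a toy 1-D example.
    return (cost(x + eps) - cost(x - eps)) / (2 * eps)

def minimize(x0, lr=0.01, steps=2000):
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

for x0 in (-3.0, 1.0):
    x_final = minimize(x0)
    print(f"start at {x0:+.1f} -> x = {x_final:+.3f}, cost = {cost(x_final):.3f}")

# Approximate output:
#   start at -3.0 -> x = -0.809, cost = 4.233   (stuck at a nonzero local minimum)
#   start at +1.0 -> x = +2.000, cost = 0.000   (reaches the global minimum)
#
# Running gradient ascent on -cost(x) from the same starting points gives the
# same trajectories: min-vs-max is not where the safety-relevant difficulty lives.
```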
Translating back to practical matters, you will presumably end up with some cost functions that never reach their lower bound of zero, simply because some desirable outcomes involve tradeoffs, resource limitations, or the like. If you backchain those costs through the causal structure of the real world, you get instrumental convergence for the standard reasons, just as you do when backchaining utilities.
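To make the tradeoff point concrete, here is a small sketch using `scipy.optimize.minimize` (the budget `R`, the two residual-cost terms, and the threshold of 3 are all invented for illustration): two desiderata compete for a shared resource, so below a certain budget the lower bound of zero is unreachable, and the best achievable cost strictly improves as the resource grows.

```python
from scipy.optimize import minimize

# Hypothetical toy model: spending a on one desideratum and b on another leaves
# residual costs max(0, 1 - a)^2 and max(0, 2 - b)^2, subject to a + b <= R.
# Driving the total cost to zero therefore requires a budget of R >= 3.
def best_cost(R):
    objective = lambda x: max(0.0, 1.0 - x[0]) ** 2 + max(0.0, 2.0 - x[1]) ** 2
    res = minimize(
        objective,
        x0=[R / 2, R / 2],
        bounds=[(0, None), (0, None)],
        constraints=[{"type": "ineq", "fun": lambda x: R - x[0] - x[1]}],
        method="SLSQP",
    )
    return res.fun

for R in (1.0, 2.0, 3.0, 4.0):
    print(f"budget R = {R:.0f}: best achievable cost ~= {best_cost(R):.3f}")

# Approximate output:
#   budget R = 1: best achievable cost ~= 2.000
#   budget R = 2: best achievable cost ~= 0.500
#   budget R = 3: best achievable cost ~= 0.000
#   budget R = 4: best achievable cost ~= 0.000
#
# Below R = 3 the zero lower bound is unreachable, and the optimal cost falls
# monotonically as the budget grows, so an optimizer that can also influence R
# is pushed toward acquiring more of the resource: the cost-minimization
# analogue of instrumental convergence.
```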
Very many things wrong with all of that:
This is very dumb; LeCun should know better, and I'm sure he *would* know better if he spent five minutes thinking about any of this.
I'm not sure he has coherent expectations, but I'd expect his vibe to be some combination of "RL doesn't currently work" and "fields generally implement safety standards".