Review

Introduction

In response to recent advances in machine learning and the subsequent public outcry over the safety of AI systems, deep learning pioneers Yann LeCun, Yoshua Bengio, and Geoffrey Hinton (henceforth “the pioneers”) have spoken up about what they perceive to be the likely dangers as AI progress continues. Hinton and Bengio have advocated for AI safety in the same vein as the alignment community, while LeCun staunchly remains unconvinced that AI could pose an existential risk to humanity. The actions of Hinton and Bengio have lent credence to the alignment community as a whole, as one of the main criticisms of the movement had been that few modern machine learning experts endorsed it. Anecdotally, a friend of mine who had previously been unconvinced by the alignment community’s arguments was swayed by Hinton’s words this past May. 

This context raises the central question of this post: how much credence should one lend to the opinions of these deep learning pioneers on AI risk over other classes of experts? Was it epistemically sound for my aforementioned friend to shift his views so drastically due to the input of a single expert? And more specifically, should we expect the intuitions and knowledge of experimentally successful AI pioneers to generalize to predicting outcomes of artificial superintelligence?

Background

Geoffrey Hinton received a PhD in artificial intelligence from the University of Edinburgh in 1978. The history of the backpropagation algorithm (the central optimization tool in deep learning) is somewhat murky, but it's generally agreed that Hinton, along with David Rumelhart and Ronald Williams, helped popularize it in the mid-1980s, most notably in their 1986 Nature paper. Hinton has primarily worked at the University of Toronto, where he was most notably a co-author on the 2012 AlexNet paper that kicked off the current boom in deep learning. He also co-authored the papers introducing dropout and layer normalization, techniques used throughout modern deep learning (both appear in the GPT family of models).

After completing a PhD in computer science at McGill University, Yoshua Bengio worked as a postdoc at Bell Labs, where he assisted Yann LeCun in using backpropagation with convolutional neural networks to build a handwritten check reader, one of the first big practical successes for neural networks. As a professor at the University of Montreal, he was the PI on the original GAN paper and introduced the attention mechanism to machine translation, paving the way for the first transformer paper.

Yann LeCun received his PhD from Université Pierre et Marie Curie (now part of Sorbonne University) in 1987 and joined Hinton's research group in Toronto as a postdoc afterwards. He then joined Bell Labs, where he led the aforementioned handwriting recognition project and pioneered an early network-pruning technique called Optimal Brain Damage. In 2003, LeCun became a professor at NYU, where he continued to work on computer vision, including applications in robotics. In 2013 he was appointed the first director of Facebook AI Research.

Evaluation

One reason to lend credence to these pioneers is their knowledge base within deep learning, built over 30-40 years each in the field. However, this only lends them as much credence as any other deep learning expert, and on the details of modern state-of-the-art models they fall behind the researchers actually building such models. The SimCLR paper with Hinton as PI is the only big paper that any of the pioneers have authored in the past five years. This gives us a baseline: the pioneers should at least be granted credence as general deep learning experts, though not as cutting-edge experimentalists.[1]

It’s also important to note that, within the broader class of AI researchers, the pioneers are among the few who actually got anywhere experimentally. When they began their careers, symbolic AI was still the dominant paradigm, and it was their intuitions about intelligence that guided them towards neural networks instead. From a 2023 interview: “‘My father was a biologist, so I was thinking in biological terms,’ says Hinton. ‘And symbolic reasoning is clearly not at the core of biological intelligence.’” Furthermore, they stuck with deep learning through a dry period from the late 1990s to the mid-2000s, when there were very few developments in the field. On this count, the pioneers should be granted credence as “AI researchers whose predictions were accurate.”[2]

However, the pioneers’ intuitions might still be misguided, as their initial inclination to work with neural networks seems to have been motivated by the wrong reasons: the efficacy of neural networks (probably) comes not from their nominal similarity to biological brains but rather from the richness of high-dimensional representations. This wasn't well understood when the pioneers entered the field; at the time, neural network approaches were rooted in cognitive science and neuroscience. While some concepts in deep learning have biological analogues, like convolutional layers, other tools like backpropagation bear little resemblance to how our brains learn. Therefore, one should be wary of awarding the pioneers too much credit for the later successes of neural networks.

And like pretty much everyone else, the pioneers did not predict the acceleration and course of AI progress within the last 3-5 years. While this does diminish their epistemic authority, it simultaneously reduces the credence one should have in any source trying to predict what the course of AI progress will look like. There are no real experts in this domain, at least not to the same level as the physicists predicting the capabilities of the atomic bomb, or even climatologists forecasting the rate at which the Earth will warm.

Thus, when looking to update one’s credence on AI safety by deferring to expert judgment, one should weigh the input of the three deep learning pioneers more heavily than most other sources, but in general the weight placed on any individual expert should be lower for AI x-risk forecasting than for most other problems. The friend mentioned at the beginning should not have updated so steeply, but it’s understandable why he (and the general public) would, because analogous experts in other domains carry much more epistemic authority.

 

  1.

    Relatedly, one should have very low prior credence in proposals where some lauded AI researcher puts forward a grand unified scheme behind neural networks/intelligence as a whole, like LeCun’s I-JEPA. The older generation of deep learning researchers, like the pioneers, Jürgen Schmidhuber, etc., haven’t made any monumental discoveries on the cutting edge for quite some time, so it’s unclear why their approaches should be lent substantial credence given the magnitude of the claims made about these systems.

  2.

    Using similar logic, one should a priori discount the claims of non-DL AI researchers like Judea Pearl (pro-risk) and Melanie Mitchell (anti-risk) relative to the pioneers, since Pearl’s and Mitchell’s approaches to AI (Bayesian networks and genetic algorithms, respectively) haven’t come anywhere near the capabilities of deep learning. It’s also doubtful that these approaches are compute-limited in the way that neural networks were for ~15 years, so we are unlikely to see large capability gains from them in the way that we saw a jump for deep learning.

Comments

the pioneers’ intuitions might still be misguided, as their initial inclination to work with neural networks seems to have been motivated by the wrong reasons: the efficacy of neural networks (probably) comes not from their nominal similarity to biological brains but rather from the richness of high-dimensional representations

But they wanted to imitate the brain, because of the brain's high capabilities. And they discovered neural network architectures with high capabilities. Do you think the brain's capabilities have nothing to do with the use of high-dimensional representations? 

That doesn't change the fact that the pioneers really only pursued neural networks because of their similarity to the actual structure of the brain, not from first-principles reasoning about how high dimensionality and gradient descent scale well with data, size, and compute (I understand this is a high bar, but this is part of why I don't think there are any real "experts"). And especially early in their careers, they were all mired in the neurological paradigm for thinking about neural networks.

Hinton, who got close to breaking free from this way of thinking when he published the backprop paper, ends it by saying "it is worth looking for more biologically plausible ways of doing gradient descent." In fact, his 2022 forward-forward algorithm shows his approach is still tied to biological plausibility. In a 2023 interview with the University of Toronto, he mentions that the reason he became concerned about superintelligence was that, while working on the FF algorithm, he realized that backpropagation was just going to be better than any optimization algorithm inspired by the brain.