(The following is a heavily edited version of an interview with Daniel Murfet. For a primer on his research in singular statistical learning theory and its relation to AI safety see the excellent new explainer by Jesse Hoogland: "Neural networks generalize because of this one weird trick".)

[special thanks to Eddie Jean for editing]

Alexander: All right, so maybe you can start by introducing yourself.

Daniel: Sure. I'm Daniel Murfet, an algebraic geometer by training. I've worked in a variety of areas including mathematical physics, logic, and now statistical learning theory.

I'm fascinated by the possibility that universal phenomena in learning machines can be understood using singularity theory, much as universal phenomena in physics can be understood using singularity theory. Maybe some of that will be useful in understanding the deep issues that are raised by the current era of machine learning, in which we train larger and larger models on larger and larger data sets and observe emergent capabilities that are surprising and maybe a bit concerning.

Alexander: How did you get interested in singular learning theory?

Daniel: Let me think. I learned some functional programming languages and, going down that rabbit hole, got interested in linear logic, which I then thought about for a number of years. After that I thought about the semantics of linear logic and how to put more geometry into it, which led me to differential linear logic. Chasing that rabbit hole in turn led me to think about learning proofs and programs from data.

At the same time, friends of mine in California were working on deep learning as engineers. This was when I was at UCLA, probably around 2013. That’s when I started thinking that deep learning was interesting, but I was kind of coming at it from this very logical angle.

I was quite interested in the Neural Turing Machine. The NTM was an attempt to build a differentiable version of a Turing machine that you could train end-to-end. I have an unpublished paper that melds differential linear logic and the NTM in some kind of crazy way.

This inspired me to rethink the Curry-Howard correspondence. If proofs are related to programs, and you have a new way of creating programs by learning them, then what does this mean on the proof-theoretic side? Does it illuminate the nature of proof in some way to view proofs not only as constructed by deduction rules, built up from conclusions in whatever way you like, but as constructed by a learning process?

So that was how I got into deep learning, through the NTM.

And then, I read the AlphaGo paper very carefully. There's something about the AlphaGo paper which I think is underappreciated, which is that you can turn off the Monte Carlo Tree Search part, and the pure convnet will still play Go at the level of a grandmaster. That's a striking observation, which leads you to the idea that perception is closer to reasoning than you might think. 

Alexander: Reasoning is pattern matching?

Daniel:  Perhaps. If you learn the right way of perceiving the problem, then on top of that you have to do very little reasoning.

So, I was quite struck—as someone interested in logic—by the idea that a lot of the hard work is simply done in the perceptual part. And thinking that through caused me to start taking deep learning seriously.

That led to a paper with collaborators on Deep RL with Transformers in it, which we wrote as a follow-up to a 2019 paper from DeepMind, “Deep Reinforcement Learning with Relational Inductive Biases”, which was maybe the first paper to combine self-attention with Deep RL.

I thought that paper was super cool because I was convinced by AlphaGo that somehow if you could just perceive things well, then on top of that you could build a very limited reasoning system and that would go a long way. And then along comes this paper and does exactly that. So I was convinced at that point that Deep RL + Transformers would do interesting things. 

After writing that paper I was familiar with that part of the deep learning literature, and with Transformers in particular. Around that time I noticed there was a paper out of Baidu, “Deep Learning Scaling is Predictable, Empirically”, which was the first paper, I think, talking about scaling laws in Transformer models. That was September 2019.

That was before the paper out of OpenAI on scaling laws that people usually cite. When that paper came out, I saw it and was immediately like, holy fuck.

As I said, given the centrality of perception to reasoning, I already suspected that it was possible to do human-level reasoning with deep learning-based systems. And then I saw this scaling law paper. You connect the dots on those two beliefs, and you suddenly have a different perspective on what the future’s gonna look like, right?

I mean I didn't anticipate how quickly it would happen, but I believed, I suppose, in a combination of those two things going pretty far. So I started thinking very seriously about switching my research agenda to think about that in some form.

And then I remembered this book of Watanabe’s, “Algebraic Geometry and Statistical Learning Theory”, that I'd seen years before. The first time I picked up Watanabe’s book… that was maybe six years ago. Yeah, I didn't get it at all. I couldn't really make anything of it. And it seemed just kind of wild to me at that time; I didn't know much about statistics or machine learning. But once I saw that scaling law paper, I was prepared to think more seriously about deep learning as a mathematician. Then it occurred to me to look again [at Watanabe’s book], and the second time I was prepared to grasp what it was talking about.

Now I can say that the two books that have influenced me the most, mathematically, are the EGA and Watanabe’s book. So it had a profound impact on me.

Alexander:  So, this moment when you connected the dots—the scaling laws, this idea of reasoning being related to perception, which maybe you can scale all the way to human-level reasoning—when was this? 2019?

Daniel: Yeah, this paper [Hestness et al.] came out in September 2019, and I think I read it when it came out.

Alexander: You had heard about AI safety back then?

Daniel: No, I think not. For a long time after having this realisation about scaling laws and so on, I just thought, “hey, this is going to be super cool.” When did Bostrom’s book come out?

Alexander: 2012… no, 2014.

Daniel: That's interesting. I think I read that when it came out. So, I suppose that means that I didn't take it very seriously.

When I first started looking at SLT, I thought of it as standing in the same relation to the emerging capabilities of AI as thermodynamics did to steam engines.

If you were sitting there in the industrial revolution watching steam engines be built, well, suddenly thermodynamics seems pretty interesting. So I thought about it in that vein. This is a very important transformative technology. As a mathematician, I wasn't necessarily interested in committing my life to making it work better or making it go faster, but I was curious to understand why it worked, because maybe that would be deep and interesting. Yeah, so I hadn't paid much attention to alignment or safety until maybe GPT-2, when I started reading about these topics. I don't remember specifically, to be honest. I'd read some of Eliezer Yudkowsky's posts years ago. But I think it wasn't until quite recently, maybe two years ago, partly because of the interest of my student Matthew Farrugia-Roberts, that I started thinking about this more carefully.

Alexander: What was your impression when you read Yudkowsky?

Daniel: Let me try and remember. It's hard not to see it in light of the way I currently think about it.

*long pause*

Actually, when was Gwern’s scaling hypothesis paper?

Alexander: 2020 I think. He continually updates his stuff… let me check. Yeah 2020.

Daniel: I think that was what jolted me out of my complacency. So, I think I had not really had the courage of my convictions before that.

I believed in the scaling laws, but it was the belief of an academic, unconnected to practical actions beyond my research agenda.

Alexander: Were there other people around who were concerned or interested in this stuff? Humans do a form of social reasoning as well, where we only take things seriously when others do.

Daniel: There was nobody around me here in Melbourne, mathematically or socially, who was paying attention to these issues. One of my friends, Adam, who runs the Disruption seminar at metauni and who has always been much more optimistic about these things, has recently started to share the same concerns around AI safety that I have. I would say he wasn't concerned about that before, in 2019–early 2020. At that time, almost every mathematician I knew thought deep learning was kind of stupid, a fad that would go away. I tried telling them about scaling laws and what I thought they might mean. Mostly they thought I was insane, you know, drinking the Google Kool-Aid.

Alexander: =D. And these were mathematicians working in learning theory, or were they more pure?

Daniel: No, there wasn’t really anyone in Melbourne working on learning theory. These were other pure mathematicians. Many of them had barely even heard of deep learning in 2020.

Alexander: Really? And they still haven’t?

Daniel: My new PhD student Zhongtian hadn’t heard about GPT-3 until three weeks ago, so I got to show him the Playground on the OpenAI website and blow his mind. I asked it to factor some polynomials and tell me their roots, and it got them right.

Alexander: I have to say that when I give demonstrations, it doesn't always work …

Look: the end of the world… any day now 

Just not when I'm prompting it.

Daniel: =D  yeah, you have to get lucky sometimes.

I think Gwern’s post got me started. I suppose I had cognitive dissonance about this issue. I believed in the importance of the scaling laws for a long time, but because the people around me didn't even take deep learning seriously, let alone the scaling laws, I felt so far out on a limb just believing all that that I wasn't able to connect the dots and see what the potential risks were.

It wasn't until Gwern’s post that I really started thinking about it.

That article had a big impact on me, just forcing me to sit there and think. It got me to realise that I was committing a fallacy that he points out very clearly in that article, which is thinking in terms of academic timelines and academic budgets, right? For example, thinking that it's expensive to train GPT-2. But in reality, it's not expensive, right? It's so cheap compared to the infrastructure costs of a chemical plant or a nuclear plant.

Alexander: So I'd like to circle back to AlphaGo. You told me before about your reaction to an interview with Lee Sedol, the Go world champion whom AlphaGo defeated 4-1. Could you talk more about that now?

Daniel: There was a press conference where they interviewed Lee Sedol, and he said something like: “Well it's sad to have lost but we will learn a lot from AlphaGo and this is a new era for Go.”

And the moment I heard that I thought: bullshit.

It was clear to me that there would be, in domain after domain of human experience, some human standing up and saying “hey, it's actually great we got our asses whooped—now we get to learn from the machines!”

That is not what they're thinking and that's not what's going to happen. Like, actually, their soul is crushed. What they're doing has lost all meaning for them, and they're going to go off and grow potatoes now.

I knew because I could put myself in his position, right? Because Go is very different from mathematics, but the motivations for doing it are not that different, in my opinion. I mean, I don’t know any professional Go players personally, so I’m just guessing. But it's this insanely focused thing you do from a very young age, it completely occupies your life, you find meaning in it — partly because you can show you're smarter than other people, but that’s not the only reason. At the level of Lee Sedol, one motivation – and he said this well before the AlphaGo events – is that he wants to find a new idea in Go that shapes the game for other people and pushes forward the human boundaries on what is possible. That’s a really beautiful idea. Worth spending your life on.

So I accept that to a large degree that is what's motivating him. And of course as mathematicians that's—at least, you would like to think, idealistically— also what motivates us much of the time.

Alexander: A shard of the platonic heavens?

Daniel: Yeah.

So that's at least what I read into Lee Sedol, that he's motivated by that. And so when the machine comes along and it's like, “Oh look at those cute ideas! Oh that's so nice!” SWOMP. “Here’s a better idea.” It’s maddening.

And in that moment if you’re Lee Sedol you think, okay, whatever new beautiful ideas there are, they're not coming from me, I'm in the kiddies corner now. And maybe we can all have fun playing Go, but the deep ideas won't be coming from me. So what am I doing here? I'll go spend time with my family now, thanks.

Alexander: Did that have an actual impact on you? Did it change something?

Daniel: Oh, it completely changed my attitude towards mathematics. Yeah.

It took a while to germinate, but that's ultimately the reason why I'm thinking about singular learning theory now, rather than topological field theory or bicategories or any of the other things I used to care about.

I've tried a little to communicate this to other mathematicians; I've given some colloquia on AlphaGo, but I think it was in some sense too early. This was in 2017, 2018, and people were just not ready to hear about it. I think they'll be ready soon.

I found mathematics meaningful because I thought of myself as putting another brick in a wall that had been assembled by humans for thousands of years and would continue being assembled by humans.

And it seemed to me that it made a difference when I added my brick to the wall. Now one has to be realistic: even for the biggest ideas, like quantum mechanics or general relativity or schemes or whatever, if the particular people who discovered those ideas hadn't thought of them, they would have come up eventually. That brick would have gone in the wall, whether they were born or not. But you can bring the date forward by working on things.

It matters to me that I’m bringing things meaningfully forward. If the world had one million Grothendiecks and one million Kontseviches then probably I wouldn't be a mathematician, because I’m not that excited about bringing the date forward by five minutes.

I'm not content to just work on lemmas and fill in details of programs that other people have mostly worked out. For the sacrifices of time and attention that it takes to do mathematics at a high level to be worth it, I actually want to do something, right? Not just entertain myself and put a little flourish on something that's essentially already finished.

And so at the time of AlphaGo I thought, okay. This is a long way, maybe, from a human-level intelligence, but the progress I’m seeing probably means that we’re gonna get there by the end of the century. And if that means that in a few decades my entire research career will be like 30 cents of electricity and five seconds of compute time... maybe I could do other things with my life. Math research isn't always that pleasant; it's hard, and exacts a toll on you. You could be doing other things.

Alexander: Like spending more time with your family?  

Daniel: Haha, well even before this soul searching I wasn’t a person that was locked in their office, 24 hours a day.

I started spending more of my attention on mathematics that seemed like it might be relevant. If a machine was going to do math, it started to seem more important to me to figure out what happens on the way to building that machine. That seemed more salient to me than the other things I cared about, knew about, could work on.

And it wasn't necessarily a conscious decision. I just noticed myself being less passionate about the old things. Right, so I just had to follow that. It's not like I stopped doing research, but my research interests reacted.

Alexander: This reminds me of this interview with Kontsevich.

Alexander: In light of those examples of how people, not that long ago, weren’t very receptive to these new ideas, what do you think, Dan? Is there now fertile ground to get serious mathematicians interested in AI alignment?

Daniel: I think the proof assistants have certainly become a much more mainstream thing. People are very aware of this. Kevin Buzzard has been saying publicly he thinks computers will be proving interesting theorems soon. 

Gowers and Buzzard are established people. It’s really getting through, especially to young people. I think older people couldn't care less—most of them think deep learning is stupid and that computers will never prove anything interesting. But I think, increasingly, younger people are completely convinced of this. They just see it as inevitable.

I think it's probably easier now to convince people that there are interesting mathematical problems in AI in general that have to do with deep learning and so on. But that's a separate thing to your question.

Alexander: Oh no that’s very related. Go on!

Daniel: I think it's simply a matter of presenting the open problems in a way that grabs attention. Even just the scaling laws themselves. That's such a profoundly interesting phenomenon that very few mathematicians have thought about. My impression is that physicists have paid some attention.

Usually the way it works is that physicists pay attention and then afterward mathematicians pay attention, so maybe it's just a matter of time. But that’s an opportunity where there's leverage to convince people this is interesting and deep. Right now the theory of deep learning looks a bit...  like it hasn't turned up deep enough problems to really gather the attention of mathematicians. If I think about talking to my friends and colleagues in mathematical physics or algebraic geometry, I wouldn't present anything from the theory of deep learning as it currently exists as having deep ideas you can think about.

I'm not saying people aren't doing something, or that some of the issues like double descent aren't kind of curious. But for someone who thinks about string theory, there's a high bar for what counts as deep and interesting.

Alexander:  What does it mean to be deep and interesting for you and for them in this context?

Daniel: Something that points to something universal.

Something that is more than just some particular architecture and some particular system we happen to be training in 2021. Right. And if something does qualify, it’s the scaling laws.

You see them for Transformers, but also for LSTMs, and they likely hold for a very large class of models. They hold across so many orders of magnitude that it's clear we've discovered a new class of universal phenomena. In comparison, a lot of the other things people are studying in deep learning theory seem tied to specific architectures or to the things we're doing right now. They might be interesting and useful now, but they don't necessarily give people from other fields a reason to look over, drop what they're doing, and start paying a lot of attention.
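[ed: for readers unfamiliar with the scaling laws being discussed, the empirical finding is, schematically, that test loss falls as a power law in model size (and, separately, in dataset size and compute). In notation along the lines of the OpenAI paper mentioned above, with $N$ the number of parameters,

$$L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N},$$

where $N_c$ and $\alpha_N$ are constants fitted to experiment; the specific values depend on the setup and are not quoted here.]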

Putting aside what it means socially or for AI safety, from a purely scientific point of view it's easy to convince people that this is something new under the sun.

Alexander: So you actually started getting interested in statistical learning theory and deep learning through logic, specifically differential linear logic. How do you look at that now?

Daniel: *long pause*

It's not completely worked out in my head.

There’s some work I did with James Wallbridge and James Clifton and later my student Tom Waring that lays out a point of view of proofs as singularities. More precisely, you can think of proofs in, say, intuitionistic linear logic or programs for a Turing machine as being singularities of some analytic function in a way that is reasonably natural. The properties of the proof are reflected by the geometry of the singularity. It might develop into a deep connection between proof theory and geometry, but it’s still too early to say.

This translation of proofs or programs into the setting of a learning problem gives some intuition. If you have a loss function for, say, a learning problem like training a neural network, it may have many zeros, many global minima, and they may compute the same function in essentially different ways. I think this is an important intuition, which is well illustrated by the geometry of a Universal Turing Machine.

Alexander: Seeing a function as an algorithm (“intensional”) versus just looking at the set-theoretic graph of that algorithm (“extensional”)?

Daniel: Yeah, that's right, exactly, so you can have two very different algorithms that compute the same function, and you can set up a correspondence between algorithms and learning problems, such that those two solutions become two different critical points or singularities of the same function. Two different global minima of your loss function.

Because they're different algorithms, those two different global minima have different geometry. 

Alexander: That's interesting because it seems to relate the intensional feature of how the algorithm runs with the sort of local neighbourhood of it when you try to learn it, which seems quite different.

Daniel: That's exactly right.

Alexander: These features seem unrelated, but you're saying they're actually very closely related??

Daniel: Yes! When you look at two different algorithms that compute the same function, you know they're different, but how exactly are they different? There's some intuitive sense of it, right? But when you translate them into geometry, you can say precisely how they're different. For example, their RLCTs (real log canonical thresholds -ed) differ. There is actually a difference in the local behaviour of the learning problem, as you said.

That's an intuition I found very useful, because almost certainly, that's a useful way of thinking about large Transformer models. So you have a large Transformer model, it knows many ways of computing the function it’s trying to compute. And they all coexist in some kind of equilibrium. Those different ways of computing are probably part of the reason why it’s able to work and generalise by making use of solutions to specialised subproblems. 

So, that connection between linear logic, proof theory and learning gave me a useful vocabulary and way of thinking about things: a Universal Turing Machine is like a parameter space, and a particular program on that Turing machine is like a particular global minimum of some loss function. Given that connection, it's perhaps not illegitimate to think of, say, a large Transformer model as being a big machine which is like a UTM, and different global minima as being codes for different Turing machines which solve particular problems.
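[ed: a standard toy example may make the point about different geometry at different minima concrete. Take a two-dimensional parameter $w = (w_1, w_2)$ and compare two losses that both vanish at the origin: $K_1(w) = w_1^2 + w_2^2$ and $K_2(w) = w_1^2 w_2^2$. The zero set of $K_1$ is a single point, while the zero set of $K_2$ is the union of the two coordinate axes, which is singular at the origin. The real log canonical threshold $\lambda$ can be read off from the largest pole $z = -\lambda$ of the zeta function $\zeta(z) = \int K(w)^z \, dw$ over a small neighbourhood of the origin: one finds $\lambda = 1$ for the regular case $K_1$ and $\lambda = 1/2$ for the singular case $K_2$. In singular learning theory this exponent, rather than the raw parameter count, governs the asymptotics of the Bayesian free energy, which is the precise sense in which two minima of the same loss can behave differently as solutions to the learning problem. The notation here is the editor's; see Watanabe's book for the general theory.]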

Alexander: I'd love to ask you more about this but I think there's one question, or one vein we should touch upon: risks from AI. What's your take now? How do you feel about that now?

Daniel: I read Bostrom's book when it came out. I was underwhelmed by it. I thought it was a kind of taxonomy, without too much content. And it didn't leave me that much more worried than I was before. I got the argument, I think: don't be the birds who raise an owl. You'll get eaten. But at that time I wasn't paying attention to deep learning and hadn't seen the scaling laws [ed- this was 2014 before scaling laws were known] and AGI seemed like a far-off prospect. It just seemed like it wasn't my department somehow. 

Then I saw the scaling laws and other aspects we talked about, and I started to believe that it was going to happen. And that, first of all, affected my research interests, but the impacts of that belief were still largely selfish, as in “poor me, maybe my very clever brain won't be so valuable anymore.”

I didn’t immediately revisit the argument about safety. Partly I was familiar enough with the field to be subject to a fallacy that has often been discussed, where experts underestimate the potential of a technology because they see how limited the current systems are. And I think this really did blind me because it just seemed kind of far-fetched to me that it would become…

Alexander: Actually agentic?

Daniel: Yeah, I couldn’t see how an AI system could come up with its own goals and pursue them in a way that was dangerous, for example.

Alexander: Not just a tool?

Daniel:  Right. This was my point of view on things until I saw in-context learning in GPT-3. Then I woke up. Also I should credit my student Matt Farrugia-Roberts who prompted me to read more of the safety and alignment literature—that's very recent. 

Alexander: So how do you feel about alignment now? Is it even possible? How hard is it, how possible is it? What's your current take?

Daniel: *long pause* 

I don't like my answer, but my honest assessment is that I don't think it's necessarily possible.

What I mean is, the high-level, back-of-the-napkin argument indicates that alignment may be impossible. The arguments for why an AGI could be dangerous and capable are easy and very simple: from scaling laws and some of the other things we've discussed, you can make a reasonable case that AGI is going to work, and as soon as there's a population of them, evolution means it's probably going to be dangerous to humans. It's just the basic argument that in a competitive system full of competent agents competing for resources, things will go badly for some of the players.

So it seems there is a very robust argument for AGI that is dangerous and unaligned and probably very difficult to stop.

Compared to that, the proposals for alignment seem like these very tricky, you've-got-to-get-lucky kinds of things. For example: somehow escape race dynamics, keep your singleton very tightly caged, make sure you don't have any guards talking to it at night who could be co-opted, etc., etc., etc.

Alexander: Acausal gods?

Daniel: Right. It just seems that you gotta get lucky somehow. I don't claim to be an expert on alignment, so you tell me, but that's my impression.

I don't know. What do you see? Is it like that?

Alexander: I think we have to get everything right. And we will.

Daniel: I'm not saying I'm necessarily pessimistic, but it does seem like a lot of hard work and non-obvious steps may be necessary to get from where we are to being in a position where it seems like alignment will be tractable.

Alexander: Can math help with alignment? Can deep math be useful?

Daniel: Yeah, I don't know. It's not yet obvious to me.

Alexander: In the platonic realm, in the timeless vault of mathematics… Is there a Grand Unified Theory of Intelligence?

Daniel: I think so, yeah, I mean that's why I'm interested in SLT. I think intelligence is as fundamental as physics, and as universal.

Alexander: Thank you for your time.


**********************************************************************************

links:

[1] MetaUni

[2] the Rising Sea


 

Comment:

The Lee Sedol bit omitted what made it relevant, so to fill in the context: if you go to the WP article for Lee Sedol, this is how the summary now ends:

On 19 November 2019 [ie. 3 years after losing to AlphaGo], Lee announced his retirement from professional play, stating that he could never be the top overall player of Go due to the increasing dominance of AI. Lee referred to them as being "an entity that cannot be defeated".