Sam Altman, CEO of OpenAI, was interviewed by Connie Loizos last week and the video was posted two days ago. Here are some AI safety-relevant parts of the discussion, with light editing by me for clarity, based on this automated transcript:
[starting in part two of the interview, which is where the discussion about AI safety is]
Connie: So moving on to AI, which is where you've obviously spent the bulk of your time since I saw you when we sat here three years ago. You were telling us what was coming, and we all thought you were being sort of hyperbolic, but you were dead serious. Why do you think that ChatGPT and DALL-E so surprised people?
Sam: I genuinely don't know. I've reflected on it a lot. We had the model for ChatGPT in the API for, I don't know, 10 months or something before we made ChatGPT. And I sort of thought someone was going to just build it or whatever, and that enough people had played around with it. Definitely, if you make a really good user experience on top of something... One thing that I very deeply believed was that the way people wanted to interact with these models was via dialogue. We kept telling people this, we kept trying to get people to build it, and people wouldn't quite do it. So we finally said, all right, we're just going to do it. But yeah, I think the pieces were there for a while.
One of the reasons I think DALL-E surprised people is that if you asked five or seven years ago, the kind of ironclad wisdom on AI was that first it comes for physical labor, truck driving, working in the factory, then this sort of less demanding cognitive labor, then the really demanding cognitive labor like computer programming, and then very last of all, or maybe never, because maybe it's like some deep human special sauce, was creativity. And of course, we can look now and say it really looks like it's going to go exactly the opposite direction. But I think that is not super intuitive, and so I can see why DALL-E surprised people. But I genuinely felt somewhat confused about why ChatGPT did.
One of the things we really believe is that the most responsible way to put this out in society is very gradually: to get people, institutions, policy makers familiar with it, thinking about the implications, feeling the technology, and getting a sense for what it can do and can't do very early, rather than dropping a super powerful AGI in the world all at once. And so we put GPT3 out almost three years ago, and then we put it into an API like two and a half years ago. And the incremental update from that to ChatGPT I felt should have been predictable, and I want to do more introspection on why I was sort of miscalibrated on that.
Connie: So you know you had talked when you were here about releasing things in a responsible way. What gave you the confidence to release what you have released already? I mean do you think we're ready for it? Are there enough guardrails in place?
Sam: We do have an internal process where we try to break things and study impacts. We use external auditors, we have external red teamers, we work with other labs, and we have safety organizations look at stuff.
There are societal changes that ChatGPT is going to cause or is causing. There's a big conversation going on now about the impact of this on education, academic integrity, all of that. But it's better to start these conversations now, while the stakes are still relatively low, rather than just putting out what the whole industry will have in a few years with no time for society to update -- that, I think, would be bad. Covid did show us, for better or for worse, that society can update to massive changes sort of faster than I would have thought, in many ways.
But I still think, given the magnitude of the economic impact we expect here, more gradual is better. And so putting out a very weak and imperfect system like ChatGPT, and then making it a little better this year, a little better later this year, a little better next year -- that seems much better than the alternative.
Connie: Can you comment on whether GPT4 is coming out in the first quarter or first half of the year?
Sam: It'll come out at some point when we are confident that we can do it safely and responsibly. I think in general we are going to release technology much more slowly than people would like. We're going to sit on it for much longer than people would like. And eventually people will be happy with our approach to this, but in the meantime I realize people want the shiny toy, and it's frustrating. I totally get that.
Connie: I saw a visual, and I don't know if it was accurate, but it showed GPT 3.5 versus, I guess, what GPT4 is expected to be. I saw that thing on Twitter...
Sam: The GPT4 rumor mill is like a ridiculous thing, I don't know where it all comes from. I don't know why people don't have better things to speculate on. I get a little bit of it, like, it's sort of fun, but it's been going for like six months at this volume. People are begging to be disappointed, and they will be. The hype is just like... we don't have an actual AGI, and I think that's sort of what is expected of us, and yeah, we're going to disappoint those people.
[skipping part]
Connie: You obviously have figured out a way to make some revenue, you're licensing your model...
Sam: We're very early.
Connie: So right now you're licensing to startups, you're early on, and people are sort of looking at the whole of what's happening out there and saying: you've got Google, which could potentially release things this year, and you have a lot of AI upstarts nipping at your heels. Are you worried about what you're building being commoditized?
Sam: To some degree, I hope it is. The future I would like to see is one where access to AI is super democratized, where there are several AGIs in the world that can help allow for multiple viewpoints and not have anyone get too powerful. The cost of intelligence and energy, because it gets commoditized, trends down and down and down, and the massive surplus there -- access to the systems, and eventually governance of the systems -- benefits all of us. So yeah, I sort of hope that happens. I think competition is good, at least until we get to AGI. I deeply believe in capitalism and competition to offer the best service at the lowest price, but that's not great from a business standpoint. That's fine; we'll be fine.
Connie: I also find it interesting that you say differing viewpoints, that these AGIs would have different viewpoints. They're all being trained on all the data that's available in the world, so how do we come up with differing viewpoints?
Sam: I think what is going to have to happen is society will have to agree and set some laws on what an AGI can never do, or what one of these systems should never do. One of the cool things about the path of the technology tree that we're on -- which is very different from before we came along, when it was sort of DeepMind having these games where agents play each other and try to deceive each other and kill each other and all that, which I think could have gone in a bad direction -- is that we now have these language models that can understand language. And so we can say, hey model, here's what we'd like you to do, here are the values we'd like you to align to. And we don't have it working perfectly yet, but it works a little, and it'll get better and better.
And the world can say, all right, here are the rules, here are the very broad, absolute bounds for a system. But within that, people should be allowed to have their AI do very different things. And so if you want the super never-offend, safe-for-work model, you should get that. And if you want an edgier one that is sort of creative and exploratory, but says some stuff you might not be comfortable with, or some people might not be comfortable with, you should get that.
I think there will be many systems in the world that have different settings of the values that they enforce. And really what I think -- and this will take longer -- is that you as a user should be able to write up a few pages of: here's what I want, here are my values, here's how I want the AI to behave. And it reads it and thinks about it and acts exactly how you want. Because it should be your AI, and it should be there to serve you and do the things you believe in. So that to me is much better than one system where one tech company says, here are the rules.
Connie: When we sat down it was right before your partnership with Microsoft, so when you say we're gonna be okay I wonder...
Sam: We're just going to build a fine business. Even if the competitive pressure pushes the price that people will pay per token down, we're gonna do fine. We also have this capped-profit model, so we don't have this incentive to just capture all of the infinite dollars out there anyway. As for generating enough money for our equity structure, yeah, I believe we'll be fine.
Connie: Well I know you're not crazy about talking about deal-making so we won't, but can you talk a little bit about your partnership with Microsoft, and how it's going?
Sam: It's great. They're the only tech company out there that I think I'd be excited to partner with this deeply. I think Satya is amazing as a CEO, but more than that as a human being, and he understands -- as do Kevin Scott and McHale [?], who we work with closely as well -- the stakes of what AGI means and why we need to have all the weirdness we do in our structure and in our agreement with them. So I really feel like it's a very values-aligned company. And there are some things they're very good at, like building very large supercomputers and the infrastructure we operate on, and putting the technology into products, and there are things we're very good at, like doing research.
[skipping part]
Connie: Your pact with Microsoft, does it preclude you from building software and services?
Sam: No -- we built -- I mean, we just did, as we talked about with ChatGPT. We have lots more cool stuff coming.
Connie: What about other partnerships other than with Microsoft?
Sam: Also fine. Yeah, in general, we are very much here to build AGI. Products and services are tactics in service of that. Partnerships too, but important ones. We really want to be useful to people. I think if we just build this in a lab and don't figure out how to get it out into the world, that's -- like, somehow -- we're really falling short there.
Connie: Well I wondered what you made of the fact that Google has said to its employees it's too imperfect, it could harm our reputation, we're not ready?
Sam: I hope when they launch something anyway you really hold them to that comment. I'll just leave it there.
[skipping part]
Connie: I also wanted to ask about Anthropic, a rival I guess, founded by a former...?
Sam: I think super highly of those people. Very very talented. And multiple AGIs in the world I think is better than one.
Connie: Well, what I was going to ask, and just for some background: it was founded by a former OpenAI VP of research who you, I think, met when he was at Google. But it is stressing an ethical layer as a kind of distinction from other players. And I just wondered if you think that systems should adopt a kind of common code of principles, and also whether that should be regulated?
Sam: Yeah, that was my earlier point. I think society should regulate what the wide bounds are, but then I think individual users should have a huge amount of liberty to decide how they want their experience to go. So I think it is a combination: society has decided free speech is not quite absolute -- you know, there are a few asterisks on the free speech rules -- and I think society will also decide language models are not quite absolute. But there is a lot of speech that is legal that you find distasteful, that I find distasteful, that he finds distasteful, and we all probably have somewhat different definitions of that, and I think it is very important that that is left to the responsibility of individual users and groups, not one company. And that the government governs, and does not dictate all of the rules.
[skipping part]
Question from an audience member: What is your best case scenario for AI and worst case? Or more pointedly what would you like to see and what would you not like to see out of AI in the future?
Sam: I think the best case is so unbelievably good that it's hard for me to even imagine. I can sort of think about what it's like when we make more progress discovering new knowledge with these systems than humanity has done so far, but in a year instead of seventy thousand. I can sort of imagine what it's like when we launch probes out to the whole universe and find out really everything going on out there. I can sort of imagine what it's like when we have just unbelievable abundance, and systems that can help us resolve deadlocks and improve all aspects of reality and let us all live our best lives. But I can't quite. I think the good case is just so unbelievably good that you sound like a really crazy person to start talking about it.
And the bad case -- and I think this is important to say -- is like lights out for all of us. I'm more worried about an accidental misuse case in the short term where someone gets a super powerful -- it's not like the AI wakes up and decides to be evil. I think all of the traditional AI safety thinkers reveal a lot more about themselves than they mean to when they talk about what they think the AGI is going to be like. But I can see the accidental misuse case clearly and that's super bad. So I think it's like impossible to overstate the importance of AI safety and alignment work. I would like to see much much more happening.
But I think it's more subtle than most people think. You hear a lot of people talk about AI capabilities and AI alignment as orthogonal vectors: you're bad if you're a capabilities researcher and you're good if you're an alignment researcher. It actually sounds very reasonable, but they're almost the same thing. Deep learning is just gonna solve all of these problems, and so far that's what the progress has been. And progress on capabilities is also what has let us make the systems safer, and vice versa, surprisingly. So I think none of the sort of sound-bite easy answers work.
Connie: Alfred Lynn [?] told me to ask you, and I was going to ask anyway, how far away do you think AGI is? He said Sam will probably tell you sooner than you thought.
Sam: The closer we get, the harder time I have answering, because I think it's going to be much blurrier, and much more of a gradual transition, than people think. Imagine a two-by-two matrix: short timelines until the AGI takeoff era begins or long timelines until it begins, and then a slow takeoff or a fast takeoff. The world I think we're heading to, and the safest world, the one I most hope for, is the short-timeline, slow-takeoff one. But I think people are going to have hugely different opinions about when and where you kind of declare victory on the AGI thing.
[skipping part]
Question from an audience member: So given your experience with safety at OpenAI and the conversation around it, how do you think about safety in other AI fields, like autonomous vehicles?
Sam: I think there are a bunch of safety issues for any new technology, and particularly for any narrow vertical of AI. We have learned a lot in the past seven or eight decades of technological progress about how to do really good safety engineering and safety systems management. And a lot of what we have learned about how to build safe systems and safe processes will translate. It will be imperfect, there will be mistakes, but we know how to do that.
I think the AGI safety stuff is really different, personally, and worthy of study as its own category. Because the stakes are so high and the irreversible situations are so easy to imagine, we do need to somehow treat that differently and figure out a new set of safety processes and standards.
[there's a bit more but it's not about AI safety]