They open with the Battle of the Board. Altman starts with how he felt rather than any details, and drops this nugget: "And there were definitely times I thought it was going to be one of the worst things to ever happen for AI safety." If he truly believed that, why did he not go down a different road? If Altman had come out strongly for a transition to Murati and a search for a new outside CEO, that presumably would have been fine for AI safety. This, then, is a confession that he was willing to put AI safety into play to keep power.
I don't have a great verbalization of why, but I want to register that I find this sort of attempted argument kind of horrifying.
Okay, then I can't guess why you find it horrifying, but I'm curious because I think you could be right.
There are realistic beliefs Altman could hold about what's good or bad for AI safety under which Zvi's conclusion would not follow. For instance:
Overall, the point is that it seems somewhat reckless and uncharitable to draw strong inferences about someone's ranking of priorities just because a single remark of theirs is in tension with the direction they pushed in a complicated political struggle.
FWIW, one thing I really didn't like about how he came across in the interview is that he seemed to be framing the narrative one-sidedly in an underhanded way, sneakily rather than out in the open. (Everyone tries to frame the narrative in some way, but it becomes problematic when people don't point out where their interpretation differs from others', because then listeners won't easily realize that there are claims they still need to evaluate and think about, rather than take for granted as something everyone else already agrees on.)
He was not highlighting the possibility that the other side's perspective still has validity; instead, he was sweeping that possibility under the carpet. He talked as though (implicitly, not explicitly) it is now officially established, or obviously true, that the board acted badly (Lex contributed to this by asking easy questions and not pushing back much). He focused a lot on the support he got during this hard time and on people saying good things about him (the comparison to hearing his own eulogy while still alive, highlighting that he thinks there's no doubt about his character), said somewhat condescending things about the former board (how he thinks they had good intentions, delivered in that slow voice and thoughtful tone, almost as though they had committed a crime), and then emphasized their lack of experience.
For contrast, here are things he could have said that would have made it easier for listeners to come to the right conclusions. (I think anyone who is morally scrupulous about whether they're in the right, in situations where many others speak up against them, would have highlighted these points a lot more, so the absence of these bits in Altman's interview tells us something.)
(Caveat: I didn't actually listen to the full interview, so I may have missed it if he did more signposting, perspective-taking, and "acknowledging that for-him-inconvenient hypotheses are now out there, are important if true, and are hard to dismiss entirely, at least for people without private info" than I would have gathered from skipping through segments of the interview and Zvi's summary.)
In reaction to what I wrote here, maybe it's a defensible stance to say, "ah, but that's just Altman being good at PR; it's simply bad PR for him to give any air of legitimacy to the former board's concerns."
I concede that, in some cases, when someone accuses you of something they're just playing dirty, and the best way to keep the accusation from sticking is not to engage with low-quality criticism. However, there are also situations where the concerns have enough legitimacy that sweeping them under the carpet doesn't help you seem trustworthy. In those cases, I find it extra suspicious when someone sweeps the concerns under the carpet and thereby misses the opportunity to add clarity to the discussion, make themselves more trustworthy, and help people form better views on what's the case.
Maybe that's a high standard, but I'd feel more reassured if the frontier of AI research were steered by someone who could talk about difficult topics, including uncertainty about their own suitability, in a more transparent and illuminating way.
This is great, thanks for filling in that reasoning. I agree that there are lots of plausible reasons Altman could've made that comment, other than disdain for safety.
Lex asks if the incident made Altman less trusting. Sam instantly says yes, that he thinks he is an extremely trusting person who does not worry about edge cases, and he dislikes that this has made him think more about bad scenarios. So perhaps this could actually be really good? I do not want someone building AGI who does not worry about edge cases, assumes things will work out, and trusts fate. I want someone paranoid about things going horribly wrong, someone who does not trust a damn thing without a good reason.
Eh... I think you and he are worried about different things.
Lex really asked all the right questions. I liked how he tried to trick Sam with Ilya and Q*:
It would have been easy for Sam to slip up and say something revealing, but he maintained his composure, staying very calm throughout the interview.
Last week Sam Altman spent two hours with Lex Fridman (transcript). Given how important it is to understand where Altman’s head is at and learn what he knows, this seemed like another clear case where extensive notes were in order.
Lex Fridman overperformed, asking harder questions than I expected and going deeper than I expected, and succeeded in getting Altman to give a lot of what I believe were genuine answers. The task is 'get the best interviews you can while still getting interviews,' and this could be close to the production possibilities frontier given Lex's skill set.
There was not one big thing that stood out given what we have already heard from Altman before. It was more the sum of little things, the opportunity to get a sense of Altman and where his head is at, or at least where he is presenting it as being. To watch him struggle to be as genuine as possible given the circumstances.
One thing that did stand out to me was his use of 'theatrical risk' as a way to dismiss concerns about potential loss of human control. I do think we are underinvesting in preventing loss-of-control scenarios driven by competitive dynamics, scenarios that lack bad actors and are far less theatrical than those typically focused on, but the overall characterization here seems like a strategically hostile move. I am sad about that, whereas I was mostly happy with the rest of the interview.
I will follow my usual format for podcasts: a numbered list of notes, each with a timestamp.
Was that the most valuable use of two hours talking with Altman? No, of course not. Two hours with Dwarkesh Patel would have been far juicier. But Altman is friends with Lex, willing to sit down with him and provide what is still a lot of good content, and likely to do so again. It is an iterated game. So I am very happy with what we did get. You can learn a lot just by watching.