I think it’s wise to assume Sam’s public projection of short timelines does not reflect private evidence or careful calibration. He is a known deceiver with exquisite political instincts and genuine eloquence, and it’s his job to be bullish, to keep the money and hype flowing, and to keep the talent incoming. One’s analysis of his words should begin with “what reaction is he trying to elicit from people like me, and how is he doing it?”
Agreed, but I am not sure what you are implying. Is it that Sam is not as concerned about the risks because the expected capabilities are lower than he publicly lets on, and the timelines are longer than indicated, and hence we should be less concerned as well?
On the one hand this is consistent with Sam’s family planning. On the other hand, other OpenAI employees who are less publicly involved, and who perhaps have less marginal utility from hype messaging, tell consistent stories (e.g. roon, https://nitter.poast.org/McaleerStephen/status/1875380842157178994#m).
The implication is that you absolutely can't take Altman at his bare word, especially when it comes to any statement he makes that, if true, would result in OpenAI getting more resources. Thus you need to a) apply some interpretative filter to everything Altman says, and b) listen to other people instead who don't have a public track record of manipulation like Altman.
the only way to appropriately address [long term risk from AI systems of incredible capabilities] is to ship product and learn.
The frustrating thing is that in some ways this is exactly right (humanity is okay at resolving problems iff we get frequent feedback) and in other ways exactly wrong (one major argument for AI doom is that you can't learn from the feedback of having destroyed the world).
This week, Altman offers a post called Reflections, and he has an interview in Bloomberg. There are a bunch of good and interesting answers in the interview about past events that I will either skip or have to condense a lot here, such as his going over his calendar and all the meetings he constantly has, so consider reading the whole thing.
The Battle of the Board
Here is what he says about the Battle of the Board in Reflections:
This is about as good a statement as one could expect Altman to make. I strongly disagree that this resulted in a stronger system of governance for OpenAI. And I think he has a much better idea of what happened than he is letting on, and there are several points where I thought ‘I see what you did there.’ But mostly I do appreciate what this statement aims to do.
From his interview, we also get this excellent statement:
And this, which I can’t argue with:
It is fair to say that ultimately, the structure as a non-profit did not work for Altman.
This also seems like the best place to highlight his excellent response about Elon Musk:
So far, so good.
Altman Lashes Out
Then we get Altman being less polite.
This is where his statements fail to line up with my understanding of what happened. Altman gave the board repeated in-public drop-dead deadlines, including, as he noted above, demanding that the entire board resign, with very clear public messaging that failure to do this would destroy OpenAI.
Maybe if Altman had quickly turned around and blamed the public actions on his allies acting on their own, I would have believed that, but he isn’t even trying that line out now. He’s pretending that none of that was part of the story.
In response to those ultimatums, facing imminent collapse and unable to meet Altman’s blow-it-all-up deadlines and conditions, the board tapped Emmett Shear as temporary CEO, who proved very willing to facilitate Altman’s return and then stepped aside only days later.
That wasn’t deception, and Altman damn well knows that now, even if he was somehow blinded to what was happening at the time. The board very much still had the intention of bringing Altman back. Altman and his allies responded by threatening to blow up the company within days.
Inconsistently Candid
Then the interviewer asks what the board meant by ‘consistently candid.’ He talks about the ChatGPT launch, which I mention a bit later on – where I do think he failed to properly inform the board, but I think that was one instance among many rather than a singular problem – and then Altman says, bold is mine:
There it is. They were ‘not happy’ with the way that he tried to get them off the board. I thank him for the candor of admitting that he was indeed trying to remove not only Helen Toner but also other board members.

I do think this was the primary issue. Why were they not happy, Altman? What did you do?
From what we know, it seems likely he lied to board members about each other in order to engineer a board majority.
Altman also outright says this:
That seems very clearly false. By all accounts, however much farther than ‘sneaky’ Altman did or did not go, Altman was absolutely being sneaky.
He also later mentions the issues with the OpenAI startup fund, where his explanation seems at best rather disingenuous and, dare I say it, sneaky.
On Various People Leaving OpenAI
Here is how he attempts to address all the high profile departures:
I agree that some of it was unavoidable and inevitable. I do not think this addresses people’s main concerns, especially that OpenAI has lost so many of its highest-level people, particularly over the last year, including almost all of its senior safety researchers all the way up to the cofounder level.
The Pitch
It is related to this claim, which I found a bit disingenuous:
I agree that was a powerful pitch.
But we know from the leaked documents, and we know from many people’s reports, that this was not the entire pitch. The pitch for OpenAI was that AGI would be built safely, and that Google DeepMind could not be trusted to be the first to do so. The pitch was that they would ensure that AGI benefited the world, that it was a non-profit, that it cared deeply about safety.
Many of those who left have said that these elements were key reasons they chose to join OpenAI. Altman is now trying to rewrite history to ignore these promises, and to pretend that the vision was ‘build AGI/ASI’ rather than ‘build AGI/ASI safely and ensure it benefits humanity.’
Great Expectations
I also found his ‘I expected ChatGPT to go well right from the start’ interesting. If Altman did expect it to do well, and in his words he ‘forced’ people to ship it when they didn’t want to because they thought it wasn’t ready, that provides different color than the traditional story.
It also plays into this from the interview:
It sounds like Altman is claiming he did think it was going to be a big deal, although of course no one expected the rocket to the moon that we got.
Accusations of Fake News
Then he says how much of a mess the Battle of the Board left in its wake:
Some combination of Altman and his allies clearly worked hard to successfully spread fake news during the crisis, placing it in multiple major media outlets, in order to influence the narrative and the ultimate resolution. A lot of this involved publicly threatening (and bluffing) that if they did not get unconditional surrender within deadlines on the order of a day, they would end OpenAI.
Meanwhile, the board made the fatal mistake of not telling its side of the story, out of some combination of legal and other fears and concerns, and not wanting to ultimately destroy OpenAI. Then, at Altman’s insistence, those involved left. And then Altman swept the entire ‘investigation’ under the rug permanently.

Altman now has the audacity to turn around and complain about what little the board said and leaked afterwards, calling it ‘fake news’ without details, and saying how they f***ed him and the company and were ‘gone and now he had to clean up the mess.’
OpenAI’s Vision Would Pose an Existential Risk To Humanity
What does he actually say about safety and existential risk in Reflections? Only this:
Then in the interview, he gets asked point blank:
I know that anyone who previously had a self-identified ‘Eliezer Yudkowsky fan fiction Twitter account’ knows better than to think all you can say about long-term risks is ‘ship products and learn.’
I don’t see the actions to back up even these words. Nor would I expect, if they truly believed this, for this short generic statement to be the only mention of the subject.
How can you reflect on the past nine years, say you have a direct path to AGI (as he will say later on), get asked point blank about the risks, and say only this about the risks involved? The silence is deafening.
I also flat out do not think you can solve the problems exclusively through this approach. The iterative development strategy has its safety and adaptation advantages. It also has disadvantages: it drives the race forward and, via a ‘boiling the frog’ issue, keeps too many people from noticing what is happening in front of them. My guess is that it has been net good for safety versus not doing it, at least up until this point.
That doesn’t mean you can solve the problem of alignment of superintelligent systems primarily by reacting to problems you observe in present systems. I do not believe the problems we are about to face will work that way.
And even if we are in such a fortunate world that they do work that way? We have not been given reason to trust that OpenAI is serious about it.
Getting back to the whole ‘vision thing’:
I suppose if ‘vision’ is simply ‘build AGI/ASI’ and everything else is tactics, then sure?
I do not think that was the entirety of the original vision, although it was part of it.
That is indeed the entire vision now. And they’re claiming they know how to do it.
Those who have ears, listen. This is what they plan on doing.
They are predicting AI workers ‘joining the workforce’ in earnest this year, with full AGI not far in the future, followed shortly by ASI. They think ‘4’ is conservative.
What are the rest of us going to do, or not do, about this?
I can’t help but notice Altman is trying to turn OpenAI into a normal company.
Why should we trust that structure in the very situation Altman himself describes? If the basic thesis is that we should put our trust in Altman personally, why does he think he has earned that trust?