OpenAI currently creates a massive amount of value for humanity and by default should be defended tooth and nail.
Interesting how perspectives differ on this. From what I see around me, if a bolt of lightning from God destroyed all AI technology tomorrow, there'd be singing and dancing in the streets.
What do they have against AI? Seems like the impact on regular people has been pretty minimal. Also, if GPT-4 level technology was allowed to fully mature and diffuse to a wide audience without increasing in base capability, it seems like the impact on everyone would be hugely beneficial.
They think (correctly) that AI will take away many jobs, and that AI companies care only about money and aren't doing anything to prevent or mitigate job loss.
In my non-tech circles people mostly complain about AI stealing jobs from artists, companies making money off of other people's work, etc.
People are also just scared of losing their own jobs.
Some people have strong negative priors toward AI in general.
When the GPT-3 API first came out, I built a little chatbot program to show my friends/family. Two people (out of maybe 15) flat out refused to put in a message because they just didn't like the idea of talking to an AI.
I think it's more of an instinctual reaction than something thought through. There's probably a deeper psychological explanation, but I don't want to speculate.
The leadership team–Mira, Brad, Jason, Che, Hannah, Diane, Anna, Bob, Srinivas, Matt, Lilian, Miles, Jan, Wojciech, John, Jonathan, Pat, and many more–is clearly ready to run the company without me. They say one way to evaluate a CEO is how you pick and train your potential successors; on that metric I am doing far better than I realized. It’s clear to me that the company is in great hands, and I hope this is abundantly clear to everyone. Thank you all.
I read this as saying to investors: "don't worry: even if the new board's review process does determine I should no longer be CEO and I leave, the company will continue to run very well". I don't think he thinks that's a likely outcome of the review, but what matters is what investors might expect.
Let that last paragraph sink in. The leadership team ex-Greg is clearly ready to run the company without Altman.
I'm struggling to interpret this, so your guesses as to what this might mean would be helpful. It seems he clearly wanted to come back - is he threatening to leave again if he doesn't get his way?
Also note Ilya not included in the leadership team.
While Ilya will no longer serve on the board, we hope to continue our working relationship and are discussing how he can continue his work at OpenAI.
This statement also really stood out to me - if there really was no ill will, why would they have to discuss how Ilya can continue his work? Clearly there's something more going on here. Sounds like Ilya's getting the knife.
Also, his statements in The Verge are so bizarre to me:
"SA: I learned that the company can truly function without me, and that’s a very nice thing. I’m very happy to be back, don’t get me wrong on that. But I come back without any of the stress of, “Oh man, I got to do this, or the company needs me or whatever.” I selfishly feel good because either I picked great leaders or I mentored them well. It’s very nice to feel like the company will be totally fine without me, and the team is ready and has leveled up."
2 business days away and the company is ready to blow up if you don't come back, and your takeaway is that it can function without you? I get that this is PR spin, but usually there's at least some plausibility to it.
Maybe these are all attempts to signal to investors that everything is fine, that even if Sam were to leave it would still all be fine. But at some point, if I'm an investor, I have to wonder whether, given how hard Sam is trying to make it look like everything is fine, things are very much not fine.
As of this morning, the new board is in place and everything else at OpenAI is otherwise officially back to the way it was before.
Events seem to have gone as expected. If you have read my previous two posts on the OpenAI situation, nothing here should surprise you.
Still seems worthwhile to gather the postscripts, official statements and reactions into their own post for future ease of reference.
What will the ultimate result be? We likely only find that out gradually over time, as we await both the investigation and the composition and behaviors of the new board.
I do not believe Q* played a substantive role in events, so it is not included here. I also do not include discussion here of how good or bad Altman has been for safety.
Sam Altman’s Statement
Here is the official OpenAI statement from Sam Altman. He was magnanimous towards all, the classy and also smart move no matter the underlying facts. As he has throughout, he has let others spread hostility, work the press narrative and shape public reaction, while he himself almost entirely offers positivity and praise. Smart.
Let that last paragraph sink in. The leadership team ex-Greg is clearly ready to run the company without Altman.
That means that whatever caused the board to fire Altman, whether or not Altman forced the board’s hand to varying degrees, if everyone involved had chosen to continue without Altman then OpenAI would have been fine. We can choose to believe or not believe Altman’s claims in his Verge interview that he only considered returning after the board called him on Saturday, and we can speculate on what Altman otherwise did behind the scenes during that time. We don’t know. We can of course guess, but we do not know.
He then talks about his priorities.
Research, then product, then board. Such statements cannot be relied upon, but this was as good as such a statement can be. We must keep watch and see if such promises are kept. What will the new board look like? Will there indeed be a robust independent investigation into what happened? Will Ilya and Jan Leike be given the resources and support they need for OpenAI’s safety efforts?
Altman gave an interview to The Verge. Like the board, he (I believe wisely and honorably) sidesteps all questions about what caused the fight with the board and looks forward to the inquiry. In Altman’s telling, it was not his idea to come back, instead he got a call Saturday morning from some of the board asking him about potentially coming back.
He says he is not focused on getting back on the board, but that the governance structure clearly has a problem that will take a while to fix.
Yes. It is good to see this highly reasonable timeline and expectations setting, as opposed to the previous tactics involving artificial deadlines and crises.
Murati confirms in the interview that OpenAI’s safety approach is not changing, and that this had nothing to do with safety.
Altman also made a good statement about Adam D’Angelo’s potential conflicts of interest, saying he actively wants customer representation on the board and is excited to work with him again. Altman also spent several hours with D’Angelo.
Bret Taylor’s Statement
We also have the statement from Bret Taylor. We know little about him, so reading his first official statement carefully seems wise.
Mostly this is Bret Taylor properly playing the role of chairman of the board, which tells us little other than that he knows the role well, which we already knew.
Microsoft will get only an observer on the board, other investors presumably will not get seats either. That is good news, matching reporting from The Information.
What does ‘enhance the governance structure’ mean here? We do not know. It could be exactly what we need, it could be a rubber stamp, it could be anything else. We do not know what the central result will be.
The statement on a review of recent events here is weaker than I would like. It raises the probability that the new board does not get or share a true explanation.
He mentions safety multiple times. Based on what I know about Taylor, my guess is he is unfamiliar with such questions, and does not actually know what that means in context, or what the stakes truly are. Not that he is dismissive or skeptical, rather that he is encountering all this for the first time.
Larry Summers’s Statement
Here is the announcement via Twitter from board member Larry Summers, which raises the bar in having exactly zero content. So we still know very little here.
Helen Toner’s Statement
Here is Helen Toner’s full Twitter statement upon resigning from the board.
Many outraged people continue to demand clarity on why the board fired Altman. I believe that most of them are thrilled that Toner and others continue not to share the details, and are allowing the situation outside the board to return to the status quo ante.
There will supposedly be an independent investigation. Until then, I believe we have a relatively clear picture of what happened. Toner’s statement hints at some additional details.
OpenAI Needs a Strong Board That Can Fire Its CEO
Roon gets it. The board needs to keep its big red button going forward, but still must account for its actions if it wants that button to stick.
The danger is that if we are not careful, we will learn the wrong lessons.
Quite so. From our perspective, the board botched its execution and its members made relatively easy rhetorical targets. That is true even if the board had good reasons for doing so. If the board had not botched its execution and had more gravitas? I think things go differently.
If after an investigation, Summers, D’Angelo and Taylor all decide to fire Altman again (note that I very much do not expect this, but if they did decide to do it), I assure you they will handle this very differently, and I would predict a very different outcome.
One of the best things about Sam Altman is his frankness that we should not trust him. Most untrustworthy people say the other thing. Same thing with Altman’s often very good statements about existential risk and the need for safety. When people bring clarity and are being helpful, we should strive to reward that, not hold it against them.
I also agree with Andrew Critch here, that it was good and right for the board to pull the plug on a false signal of supervision. If the CEO makes the board unable to supervise them, or otherwise moves against the board, then it is the duty of the board to bring things to a head, even if there are no other issues present.
Good background, potentially influential in the thinking of several board members including Helen Toner: Former OpenAI board member Holden Karnofsky’s old explanation of why and exactly how Nonprofit Boards are Weird, and how best to handle it.
Some Board Member Candidates
Eliezer Yudkowsky proposes Paul Graham for the board of OpenAI. I see the argument, especially because Graham clearly cares a lot about his kids. My worries are that he would be too steerable by Altman, and he would be too inclined to view OpenAI as essentially a traditional business, and let that overrule other questions even if he knew it shouldn’t.
If he were counted as an Altman ally, as he presumably should be, then he’s great. On top of the benefits to OpenAI, it would provide valuable insider information to Graham. Eliezer clarifies that his motivation is that he gives Graham a good chance of figuring out a true thing when it matters, which also sounds right.
Emmett Shear also seems like a clearly great consensus pick.
One concern is that the optics of the board matter. You would be highly unwise to choose a set of nine white guys. See Taylor’s statement about the need for diverse perspectives.
A Question of Valuation
Matt Levine covers developments since Tuesday, especially that the valuation of OpenAI in its upcoming sale did not change, as private markets can stubbornly refuse to move their prices. In my model, private valuations like this are rather arbitrary, based on what social story everyone involved can tell, everyone’s relative negotiating position, and what will generate the right momentum for the company, rather than on a fair estimate of value. Also, everyone involved is highly underinvested or overinvested, has no idea what fair value actually is, and mostly wants some form of social validation so they don’t feel too cheated on price. Thus investors often get away with absurdly low prices; other times they get tricked into very high ones.
Gary Marcus says OpenAI was never worth $86 billion. I not only disagree, I would (oh boy is this not investment advice!) happily invest at $86 billion right now if I had that ability (which I don’t) and thought that was an ethical thing to do. Grok very much does not ‘replicate most of’ GPT-4; GPT-4 is instead holding up quite well considering how long they sat on it initially.
OpenAI is nothing without its people. That does not mean they lack all manner of secret sauce. In valuation terms I am bullish. Would the valuation have survived without Altman? No, but in the counterfactual scenario where Altman was stepping aside due to health issues with an orderly succession, I would definitely have thought $86 billion remained cheap.
A Question of Optics
A key question in all this is the extent to which the board’s mistake was that its optics were bad. So here is a great example of Paul Graham advocating for excellent principles.
Bad optics can cause bad things to happen. So can claims that the optics are bad, or worries that others will think the optics are bad, or claims that you are generally bad at optics.
You have two responses.
Consider the options in light of recent events. We all want it to be one way. Often it is the other way.