Basically just the title; see the OpenAI blog post for more details.
Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.
In a statement, the board of directors said: “OpenAI was deliberately structured to advance our mission: to ensure that artificial general intelligence benefits all humanity. The board remains fully committed to serving this mission. We are grateful for Sam’s many contributions to the founding and growth of OpenAI. At the same time, we believe new leadership is necessary as we move forward. As the leader of the company’s research, product, and safety functions, Mira is exceptionally qualified to step into the role of interim CEO. We have the utmost confidence in her ability to lead OpenAI during this transition period.”
EDIT:
Also, Greg Brockman is stepping down from his board seat:
As a part of this transition, Greg Brockman will be stepping down as chairman of the board and will remain in his role at the company, reporting to the CEO.
The remaining board members are:
OpenAI chief scientist Ilya Sutskever, independent directors Quora CEO Adam D’Angelo, technology entrepreneur Tasha McCauley, and Georgetown Center for Security and Emerging Technology’s Helen Toner.
EDIT 2:
Sam Altman tweeted the following.
i loved my time at openai. it was transformative for me personally, and hopefully the world a little bit. most of all i loved working with such talented people.
will have more to say about what’s next later.
Greg Brockman has also resigned.
How surprising is this to alignment community professionals (e.g. people at MIRI, Redwood Research, or similar)? From an outside view, the volatility/flexibility and the movement away from pure growth and commercialization seem unexpected and could work to alignment researchers' benefit (although it's difficult to see the repercussions at this point). It's surprising to me because I don't know the inner workings of OpenAI, but I'm also surprised that it seems similarly surprising to the LW/alignment community.
Perhaps the insiders are still digesting and formulating a response, or want to keep hot takes to themselves for other reasons. If not, I'm curious whether there is actually so little information flowing between alignment communities and companies like OpenAI that this would be as surprising to insiders as it is to an outsider. For example, there seem to be many people at Anthropic who are directly in, or culturally aligned with, the LW/rationality community, and I expected the same to be true to a lesser extent for OpenAI.
I understood there was a real distance between the groups, but I still had a more connected model in my head, and it's challenged by this news and the response in the first day.
I'm at CHAI and it's shocking to me, but I'm not the most plugged-in person.