Two subtle aspects of the latest OpenAI announcement, https://openai.com/index/openai-board-forms-safety-and-security-committee/:
"A first task of the Safety and Security Committee will be to evaluate and further develop OpenAI's processes and safeguards over the next 90 days. At the conclusion of the 90 days, the Safety and Security Committee will share their recommendations with the full Board. Following the full Board's review, OpenAI will publicly share an update on adopted recommendations in a manner that is consistent with safety and security."
So they are saying that merely sharing the adopted safety and security recommendations might itself be hazardous. Hence they will share an update publicly, but that update will not necessarily disclose the full set of adopted recommendations.
"OpenAI has recently begun training its next frontier model and we anticipate the resulting systems to bring us to the next level of capabilities on our path to AGI."
What remains unclear is whether this next frontier model is a "roughly GPT-5-level model", or whether they already have a "GPT-5-level model" for internal use and this is their first "post-GPT-5 model".
A few days ago, Scott Alexander wrote a very interesting post covering the details of the political fight around SB 1047: https://www.astralcodexten.com/p/sb-1047-our-side-of-the-story
I learned a lot of new things reading it (which is remarkable given how much SB 1047-related material I had seen before).