I feel like there are two things going on here:
First, what they propose in return seems to be at odds with their stated purpose and view of the future. If AGI is 2-3 years away, then governmental bodies need to be building the administrative capacity for AI safety now rather than in 2-3 years' time, when it will take another 2-3 years to create the administrative organizations.
The idea that Anthropic or OpenAI or DeepMind should get to decide, on their own, the appropriate safety and security measures for frontier models seems unrealistic. It's going to end up being a set of regulations created by a government body - and Anthropic is probably better off participating in that process than trying to oppose it from the start.
Second, some of this seems to come from an unrealistic view of the future, where they don't seem to understand that as AGI approaches, they become in certain respects less, not more, influential and important: as AI ceases to be a niche technology, other power structures in society will exert more influence on its operation and distribution.
I'm 90% sure that the issue here was an inexperienced board and a Chief Scientist who didn't understand the human dimension of leadership.
Most independent board members have a lot of management experience and so understand that their actual power is less than their power on paper. They don't have day-to-day factual knowledge about the business of the company and don't have a good grasp of the relationships between employees. So, they normally look to management to tell them what to do.
Here, two of the board members lacked the organizational experience to know that this was the case; any normal board would have tried to take the temperature of the employees before removing the CEO. I think this shows that creating a board for OAI to oversee the development of AGI is an incredibly hard task, because its members need to understand both AGI and organizational dynamics.
Just a collection of other thoughts:
Also:
I feel like the introduction is written to position the document favorably with regulators.
I'm quite interested in what led to this approach and which parts of the company were involved in writing the document this way. The original version had some of this - but it wasn't as prominent - and didn't feel as polished in this regard.
- Open with Positive Framing
- Emphasize Anthropic's Leadership
- Emphasize Importance of Not Overregulating
- Emphasize Innovation (Again, Don't Overregulate)
- Emphasize Anthropic's Leadership (Again) / Industry Self-Regulation
- Don't Regulate Now (Again)
- We Care About Other Things You Care About (like Misinformation)