So, my take is that the long-term public benefit trust probably has 3 of the 5 board members now, since they have raised over $6 billion.
Here is the definition of the "Final Phase-In Date":
(VIII)(6)(iv) "Final Phase-In Date" means the earlier to occur of (I) the close of business on May 24, 2027 or (II) eight months following the date on which the Board of Directors determines that the Corporation has achieved $6 billion in Total Funds Raised;
From Google:
As of November 2024, Anthropic, an artificial intelligence (AI) startup, has raised a total of $13.7 billion in venture capital: [1, 2, 3]
Anthropic has a valuation of over $40 billion. The startup is in discussions to raise more funding at that valuation. [2, 3, 4]
[1] https://www.nytimes.com/2024/02/20/technology/anthropic-funding-ai.html
[3] https://www.instagram.com/tradedvc/p/DCr7_gnPUp-/
[4] https://www.youtube.com/watch?v=EtmOgUqQe7s
[5] https://www.forbes.com/profile/daniela-amodei/
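Just to make the "earlier of" mechanics concrete, here's a minimal Python sketch of how the Final Phase-In Date would fall out of (VIII)(6)(iv). The board determination date is a made-up placeholder, since I don't know when (or whether) the board formally made that determination, and I'm approximating "eight months" with 30-day months.
```python
from datetime import date, timedelta

# Sketch of the "Final Phase-In Date" logic from (VIII)(6)(iv):
# the earlier of (I) May 24, 2027, or (II) eight months after the Board
# determines that $6 billion in Total Funds Raised has been achieved.
fixed_deadline = date(2027, 5, 24)
board_determination = date(2024, 11, 1)  # hypothetical placeholder date

# Rough approximation of "eight months following"; the actual agreement
# presumably counts calendar months rather than 30-day blocks.
phase_in_via_funding = board_determination + timedelta(days=8 * 30)

final_phase_in_date = min(fixed_deadline, phase_in_via_funding)
print(final_phase_in_date)  # the earlier of the two triggers
```
If the funding threshold was crossed well before mid-2026, the funding trigger rather than the fixed 2027 date is what controls, which is why I think the phase-in has likely already happened.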
Just a collection of other thoughts:
I feel like the introduction is written to position the document positively with regulators.
I'm quite interested in what led to this approach and which parts of the company were involved in writing the document this way. The original version had some of this, but it wasn't as forward and didn't feel as polished in this regard.
Open with Positive Framing
As frontier AI models advance, we believe they will bring about transformative benefits for our society and economy. AI could accelerate scientific discoveries, revolutionize healthcare, enhance our education system, and create entirely new domains for human creativity and innovation.
Emphasize Anthropic's Leadership
In September 2023, we released our Responsible Scaling Policy (RSP), a first-of-its-kind public commitment
Emphasize Importance of Not Overregulating
This policy reflects our view that risk governance in this rapidly evolving domain should be proportional, iterative, and exportable.
Emphasize Innovation (Again, Don't Overregulate)
By implementing safeguards that are proportional to the nature and extent of an AI model’s risks, we can balance innovation with safety, maintaining rigorous protections without unnecessarily hindering progress.
Emphasize Anthropic's Leadership (Again) / Industry Self-Regulation
To demonstrate that it is possible to balance innovation with safety, we must put forward our proof of concept: a pragmatic, flexible, and scalable approach to risk governance. By sharing our approach externally, we aim to set a new industry standard that encourages widespread adoption of similar frameworks.
Don't Regulate Now (Again)
In the long term, we hope that our policy may offer relevant insights for regulation. In the meantime, we will continue to share our findings with policymakers.
We Care About Other Things You Care About (like Misinformation)
Our Usage Policy sets forth our standards for the use of our products, including prohibitions on using our models to spread misinformation, incite violence or hateful behavior, or engage in fraudulent or abusive practices
I feel like there are two things going on here:
But what they propose in return just seems to be at odds with their stated purpose and view of the future. If AGI is 2-3 years away, then various governmental bodies need to be building the administrative capacity for AI safety now rather than in 2-3 years' time, when it will take another 2-3 years to stand up those organizations.
The idea that Anthropic or OpenAI or DeepMind should get to decide, on their own, the appropriate safety and security measures for frontier models seems unrealistic. It's going to end up being a set of regulations created by a government body, and Anthropic is probably better off participating in that process than opposing it from the start.
I feel like some of this just comes from an unrealistic view of the future, where they don't seem to understand that as AGI approaches, in certain respects they become less influential and important, not more. As AI ceases to be a niche thing, other power structures in society will exert more influence over its operation and distribution.
I'm 90% sure that the issue here was an inexperienced board, with a Chief Scientist who didn't understand the human dimension of leadership.
Most independent board members have a lot of management experience and so understand that their actual power is less than their power on paper. They don't have day-to-day factual knowledge about the business of the company and don't have a good grasp of the relationships between employees. So, they normally look to management to tell them what to do.
Here, two of the board members lacked the organizational experience to know that this was the case. Any normal board would have tried to take the temperature of the employees before removing the CEO. I think this shows that creating a board for OAI to oversee the development of AGI is an incredibly hard task, because the members need to understand both AGI and how organizations actually work.
The Trust Agreement bit sounds like it makes sense to me.
Other thoughts:
Useful Documents