Campbell Hutcheson


I worried this was a loophole: "the Trust Agreement also authorizes the Trust to be enforced by the company and by groups of the company’s stockholders who have held a sufficient percentage of the company’s equity for a sufficient period of time." An independent person told me it's a normal Delaware law thing and it's only relevant if the Trust breaks the rules. Yay! This is good news, but I haven't verified it and I'm still somewhat suspicious (but this is mostly on me, not Anthropic, to figure out).

 

The Trust Agreement bit sounds right to me.

Other thoughts:

  • I don't know how the shareholders could call a vote to undo the Class T shares once the LTBT has appointed its 3 directors. This might be covered in the bylaws - I talked to ChatGPT a bit about it, and the bylaws sound like where the procedural rules around shareholder voting are likely to live.
  • I noticed that the Class T shares appear to be capable of being brought under the Voting Agreement. I'm a bit curious whether they have been and, if so, whether that gives anyone the power to direct how the Class T shares vote.
  • I read this John Morley article and I noticed that he wrote: "like the directors of the company, the Trustees must use their powers to ensure the company combines its pursuit of profit with the achievement of the company’s mission to develop and maintain advanced AI for the long-term benefit of humanity"
    • I'd be interested in knowing what the Trust Agreement says about "Trustees must use their powers to ensure the company combines its pursuit of profit..." - do the Trustees have an affirmative duty to ensure that the company turns a profit?
  • I was also curious why the trust is a "purpose trust" rather than a "charitable purpose trust" - e.g., whether they thought it wouldn't qualify as charitable.

Useful Documents

  • Trust Documents
  • Anthropic, PBC Bylaws
  • Amended and Restated Voting Agreement

So, my take is that the Long-Term Benefit Trust probably has its 3 of the 5 board seats now, since Anthropic has raised over $6 billion.

Here is the definition of the "Final Phase-In Date":

(VIII)(6)(iv) "Final Phase-In Date" means the earlier to occur of (I) the close of business on May 24, 2027 or (II) eight months following the date on which the Board of Directors determines that the Corporation has achieved $6 billion in Total Funds Raised;
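To make the timing mechanics concrete, here is a minimal sketch of that definition in Python. The determination date at the bottom is purely hypothetical - the charter gives the formula, not the actual date on which (or whether) the board made the $6 billion determination - and I'm approximating "eight months" as 240 days.

from datetime import date, timedelta

def final_phase_in_date(determination_date: date | None) -> date:
    """Earlier of (I) May 24, 2027, or (II) eight months after the board's
    determination that $6 billion in Total Funds Raised has been achieved."""
    hard_deadline = date(2027, 5, 24)
    if determination_date is None:
        return hard_deadline  # no determination yet: only the hard deadline applies
    eight_months_later = determination_date + timedelta(days=8 * 30)  # rough month arithmetic
    return min(hard_deadline, eight_months_later)

# Purely hypothetical example: if the board had made the determination on 2024-03-31,
# the eight-month clock would have run out around late November 2024.
print(final_phase_in_date(date(2024, 3, 31)))  # -> 2024-11-26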

From Google:

As of November 2024, Anthropic, an artificial intelligence (AI) startup, has raised a total of $13.7 billion in venture capital: [1, 2, 3]

  • 2021: $124 million from investors including Jaan Tallinn and the Center for Emerging Risk Research [1]
  • 2022: $580 million from Sam Bankman-Fried and his colleagues [1]
  • May 2023: $450 million from investors including Google and Salesforce [1]
  • August 2023: $100 million from two Asian telecoms [1]
  • Amazon: $8 billion, with a $1.25 billion first installment in September 2023 and a $2.75 billion extension in March 2024 [1, 2]

Anthropic has a valuation of over $40 billion. The startup is in discussions to raise more funding at that valuation. [2, 3, 4]

[1] https://www.nytimes.com/2024/02/20/technology/anthropic-funding-ai.html

[2] https://techcrunch.com/2024/11/22/anthropic-raises-an-additional-4b-from-amazon-makes-aws-its-primary-cloud-partner/

[3] https://www.instagram.com/tradedvc/p/DCr7_gnPUp-/

[4] https://www.youtube.com/watch?v=EtmOgUqQe7s

[5] https://www.forbes.com/profile/daniela-amodei/

Just a collection of other thoughts:

  • Why did Anthropic decide that the decision not to classify the new model as ASL-3 is a CEO / RSO call rather than a board of directors or LTBT call? Both of those bodies would be more independent.
    • My guess is that the feeling was that the LTBT would either have insufficient knowledge or be too slow; it would be interesting to get confirmation, though.
    • I haven't gotten to how the RSO is chosen, but if the RSO is appointed by the CEO / Board, then I think there are insufficient checks and balances; the RSO should be on a 3-year non-renewable, non-terminable contract or something similar.
  • The document doesn't feel portable: it is very centered on Anthropic and the transition from ASL-2 to ASL-3, and reads more like high-level commentary on that transition than something meant to travel. The original RSP felt more like something that could have been cleaned up into an industry standard (OAI's original preparedness framework honestly does a better job here).
  • The reference to existing security frameworks is helpful, but it seems like a grab bag (the reference to SOC 2 seems out of place, for instance; NIST 800-53 should be a much higher standard; and if SOC 2, why not ISO 27001?).
  • I think they removed the requirement to define ASL-4 before training an ASL-3 model?

Also:

I feel like the introduction is written to position the document positively with regulators.

I'm quite interested in what led to this approach and which parts of the company were involved in writing the document this way. The original version had some of this, but it wasn't as prominent and didn't feel as polished in this regard.

Open with Positive Framing 

As frontier AI models advance, we believe they will bring about transformative benefits for our society and economy. AI could accelerate scientific discoveries, revolutionize healthcare, enhance our education system, and create entirely new domains for human creativity and innovation.

Emphasize Anthropic's Leadership

In September 2023, we released our Responsible Scaling Policy (RSP), a first-of-its-kind public commitment

Emphasize Importance of Not Overregulating

This policy reflects our view that risk governance in this rapidly evolving domain should be proportional, iterative, and exportable.

Emphasize Innovation (Again, Don't Overregulate)

By implementing safeguards that are proportional to the nature and extent of an AI model’s risks, we can balance innovation with safety, maintaining rigorous protections without unnecessarily hindering progress.

Emphasize Anthropic's Leadership (Again) / Industry Self-Regulation 

To demonstrate that it is possible to balance innovation with safety, we must put forward our proof of concept: a pragmatic, flexible, and scalable approach to risk governance. By sharing our approach externally, we aim to set a new industry standard that encourages widespread adoption of similar frameworks.

Don't Regulate Now (Again)

In the long term, we hope that our policy may offer relevant insights for regulation. In the meantime, we will continue to share our findings with policymakers.

We Care About Other Things You Care About (like Misinformation)

Our Usage Policy sets forth our standards for the use of our products, including prohibitions on using our models to spread misinformation, incite violence or hateful behavior, or engage in fraudulent or abusive practices

I feel like there are two things going on here:

  • Anthropic considers itself the expert on AI safety and security and believes that it can develop better SSPs than the California government.
  • Anthropic thinks that the California government is too political and does not have the expertise to effectively regulate frontier labs.

But what they propose in return seems at odds with their stated purpose and view of the future. If AGI is 2-3 years away, then governmental bodies need to be building administrative capacity around AI safety now rather than in 2-3 years' time, at which point it will take another 2-3 years to stand up the administrative organizations.

The idea that Anthropic or OpenAI or DeepMind should get to decide, on their own, the appropriate safety and security measures for frontier models seems unrealistic. It's going to end up being a set of regulations created by a government body - and Anthropic is probably better off participating in that process than trying to oppose it at the outset.

I feel like some of this comes from an unrealistic view of the future: they don't seem to understand that as AGI approaches, they become in certain respects less influential and important, not more. As AI ceases to be a niche thing, other power structures in society will exert more influence over its operation and distribution.

I'm 90% sure that the issue here was an inexperienced board, together with a Chief Scientist who didn't understand the human dimension of leadership.

Most independent board members have a lot of management experience and so understand that their actual power is less than their power on paper. They don't have day-to-day factual knowledge about the business of the company and don't have a good grasp of the relationships between employees. So, they normally look to management to tell them what to do.

Here, two of the board members lacked the organizational experience to know that this was the case - any normal board would have tried to take the temperature of the employees before removing the CEO. I think this shows that creating a board for OAI to oversee the development of AGI is an incredibly hard task, because the members need to understand both AGI and the organization itself.