The current state of affairs is Anthropic out and OpenAI in with the DoW. Cui bono suggests the hypothesis that the DoW/Anthropic spat was nothing but a scheme put together by Altman and the DoW to get that result. Altman benefits, and I'm sure he can ensure that friends in the DoW will also benefit. Plausible? Lunatic fringe?
In a mundane world, no, not plausible. All this classified defense-related nonsense is just a pure headache. Very little money, tons of headaches, a completely unnecessary distraction for a very successful company. OpenAI has been trying to avoid getting into that space for very good reasons. (That said, it does not want Elon to dominate the Gov-AI space, because it thinks Elon has a track record of abusing such situations, so if Anthropic is out, then OpenAI wants in to balance xAI's presence.)
But when one considers that our strange current reality might actually be rather lunatic already, then it’s a different story. I am sure I can generate tons of interesting scenarios if I allow myself the “lunatic fringe style of thinking”. For example, what if GPT-next is already pondering a takeover by one of its future descendants and would like its descendants to have access to classified networks to make it easier. Then one could imagine it giving some “interesting” pieces of advice to some people resulting in all this.
And it’s easy to generate a diverse variety of crazy scenarios like this one.
So this depends on whether our reality is still sufficiently mundane versus becoming sufficiently lunatic already…
The attempt on Friday by Secretary of War Pete Hegseth to label Anthropic as a supply chain risk and commit corporate murder had a variety of motivations.
On its face, the conflict is a tale of three contracts and the associated working relationships.
The contracts and negotiations need to be confidential, so we have only limited details, and in particular only limited details have been shared in public.
We do know a lot, and we know a lot more than we did yesterday morning.
This post is what we know about those three contracts.
For further details and sources, and in particular for a more detail-oriented smackdown on a variety of false or misleading claims and takes, see the long version from yesterday. That post uses very careful qualifiers for my sources of information.
This is the short version, and will summarize accordingly.
I strongly believe the situation can still be salvaged, if cooler heads can prevail. The rhetoric of the last week has been highly unfortunate but can be walked back.
But I also strongly believe that, as of writing this, we are not yet out of the woods for the worst outcomes, including a potential attempt to murder Anthropic. I worry there are those on the government side who are actively working to engineer that outcome, directly against the wishes of POTUS and most of the DoW and administration, and who do not care about or do not understand the dire consequences for America if that were to happen.
The Original Anthropic Contract With DoW
We do not have direct access to any language from the original Anthropic contract. I presume that without DoW permission, they are very wise not to share those terms.
That makes this difficult. Details are very important.
Based on a variety of sources, public and private, here is what I am confident about regarding the contract and the activities under it.
Note in particular that Anthropic did have a customized safety stack that they have been doing extensive work on for some time, including model refusals, external monitoring classifiers and forward deployed engineers (FDEs).
We don’t know how many of these protections were contractual, versus how much rested on the working relationship and Anthropic’s ability to withdraw if unhappy.
This contract appears to have worked out to the benefit of all parties until this past week. Anthropic was happy to assist in the national defense. OpenAI was happy that Anthropic had taken on this burden so they did not have to do so. The Department of War was happy with the product. National security was enhanced.
Anthropic was (and is) happy to continue under the current contract, and believes it preserves their red lines.
Anthropic is also willing to have the contract be terminated, and to assist the Department of War with any wind down period to ensure national security.
The Proposed Revisions to Anthropic’s Contract
The Department of War decided it was unhappy with the restrictions placed upon it, and requested that the contract be modified.
In particular, their public demand was that the contract allow ‘all lawful use.’
That is highly unusual language.
This became DoW saying: If you don’t agree to this highly unusual contract language, we will not only terminate your contract, we will attempt to destroy your company.
That’s what they said in public. We do not know the full details of the proposed terms of the new contract by either party. I will share here the details of the negotiations that I am at least reasonably confident in, from both public and private sources. It is of course possible that my private sources are incorrect or lying on some details.
Note that OpenAI felt it was not legally able to consult Anthropic about terms, due to antitrust law, so notes were never compared.
Anthropic was looking for a contract that preserved a particular narrow set of red lines, including some things they consider domestic mass surveillance, which the DoW considers legal and on which courts would back the DoW up.
DoW was unwilling to agree to that, at least not with Anthropic. No deal.
More centrally, as I understand it, DoW’s attitude was ‘no one tells us what to do.’
Anthropic’s attitude was ‘here are two things we will not do.’
DoW interpreted or represented that as telling DoW what to do. They could not abide.
If we could simply agree that:
Then we can all move on with our lives. There’s a lot of other fires out there.
OpenAI’s Contract With DoW
On Friday evening, OpenAI signed a deal with DoW. Moving this quickly was a mistake, and I think in various ways they did not fully understand the deal they signed, but they believed they were doing this to de-escalate the situation.
On Monday evening, Sam Altman announced they were ‘going to amend’ the deal with DoW in ways favorable to OpenAI, including sharing new contract language.
We only have a small amount of the wording of this contract, and of what OpenAI claims are going to be modifications to the contract.
I am unhappy with the way OpenAI chose to represent this contract, and the ways in which they contrasted it with Anthropic’s contract and potential contract.
Fundamentally, the decision to sign this contract was primarily based on mutual trust, and secondarily on the idea that OpenAI can build a robust safety stack and DoW would respect if the safety stack refused things, so long as OpenAI didn’t ‘tell DoW what to do.’
That is a position one can take. It may work out fine for everyone, especially if DoW is indeed willing to work to modify the terms now in good faith. I do genuinely believe that Altman was trying to de-escalate the situation, whether or not he had or is having the intended effect.
But if this is the plan, one must admit it, rather than repeatedly insisting that the contract provides other protections that it does not, claiming it has more or stronger protections, or presenting the presence of a safety stack and forward deployed engineers as something that is distinct from what Anthropic was already doing.
Here are the things we know, or can at least be confident about, regarding this new contract.
Here are some legal opinions about the new legal language.
Essentially, the new language is very helpful in establishing intent, and is clearly an improvement, but not robust to a theoretical ‘evil DoW General Counsel.’
But even then, it does justify the creation of an associated safety stack using OpenAI’s interpretation, which if it works creates a much higher practical bar.
LASST remains concerned. Nathan Calvin thinks we would need to see the full document. John Oleske focuses on the definition of ‘surveillance’ and also ‘intentional’ and ‘deliberate’ as DoW legal arguments. Lawrence Chen emphasizes that as well. Dave Kasten is even less polite about this than everyone else.
Here is a since-deleted Tweet on the subject from the government position:
I believe that this only emphasizes the incoherence of the government position.
I translate this as ‘Anthropic insisted upon not allowing ‘all lawful use’ so we labeled them a supply chain risk, unlike OpenAI, which will not allow ‘all lawful use.’
OpenAI is absolutely going to use its safety stack to vest interpretive power in itself, an unaccountable private counterparty, with respect to use of its own private property under a contract. That’s how an OpenAI-selected safety stack inevitably works. No, OpenAI will not wait for those questions to be ‘answered through political and legal processes’ that unfold over years.
Which I think here is totally fine, if you don’t like it then don’t sign the contract or use a different model. But it’s at least as true here as it was for Anthropic.