"We are going to see if there is a deal with the [Pentagon] that allows our models to be deployed in classified environments and that fits with our principles. We would ask for the contract to cover any use except those which are unlawful or unsuited to cloud deployments, such as domestic surveillance and autonomous offensive weapons," Altman said.
I'm happy to see this! I'm a bit suspicious that it's not the same red line Anthropic is taking, and that it's instead something that seems superficially similar but is optimized to get the administration to choose OpenAI over Anthropic. Hopefully time will tell. If OpenAI presents the same terms to the government as Anthropic, in solidarity, that would be a small but meaningful positive update on OpenAI for me. I currently expect them to somehow come out on top after the dust settles, emerging with some of the juicy government contracts that Anthropic loses out on.
I'm a bit suspicious that it's not the same red line Anthropic is taking, and that it's instead something that seems superficially similar but is optimized to get the administration to choose OpenAI over Anthropic
Yeah, this bit specifically:
We would ask for the contract to cover any use except those which are unlawful or unsuited to cloud deployments, such as domestic surveillance and autonomous offensive weapons
This seems to match how the DoW itself is framing the situation, which is that Anthropic's "red lines" are already covered by "only lawful use". So if OpenAI signs a DoW contract without any additional restrictions, precisely the way the DoW wants, this would still be technically in line with Altman's statement here.
But per Anthropic's refusal, there's a difference between the kinds of mass domestic surveillance that are illegal and the kinds of mass domestic surveillance that Anthropic wants to rule out:
To the extent that such surveillance is currently legal, this is only because the law has not yet caught up with the rapidly growing capabilities of AI. For example, under current law, the government can purchase detailed records of Americans’ movements, web browsing, and associations from public sources without obtaining a warrant, a practice the Intelligence Community has acknowledged raises privacy concerns and that has generated bipartisan opposition in Congress. Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person’s life—automatically and at massive scale.
So my current read is that Altman has once again managed to commit to exactly nothing.
Edit: Well, I guess he is still at least making a show of public support, which should increase the pressure on the DoW and make it look like the whole industry is against them? Unless this statement is intended to be parsed by the DoW as OpenAI offering them a line of retreat, a way to give them everything concrete they want while de-escalating the public narrative? Hm.
I'm confused - I parsed Altman's statement as:
We would ask for the contract not to cover use that is EITHER:
- Unlawful
- Unsuited to cloud deployments, including:
  - Domestic surveillance
  - Autonomous offensive weapons
Isn't this equivalent to what Anthropic wants?
My parsing is:
We would ask for the contract not to cover use that is unlawful. Examples of unlawful use: domestic surveillance, autonomous offensive weapons.
As in, surveillance/autonomous weapons are framed as instances of unlawful use ("such as"). If we accept that framing, then any contractual language that merely says "no unlawful use" would technically cover them, even if it doesn't mention those items explicitly. The issue is that the law may forbid "domestic surveillance" and "autonomous weapons" under definitions of those terms that are importantly narrower than the custom (?) definitions Anthropic is insisting on.
(Though I don't know that digging this deeply into Altman's Exact Words makes sense. This may have just been a throwaway statement, made without heavy wording-optimization.)
Surely he meant something by "unsuited to cloud deployments"?
I wasn't sure what this meant, but I asked Claude, and it sounds like this is probably a reference to confidential military operations that require airgapping. This seems like a technical issue that has basically nothing to do with ethics, suggesting that the "ethical" part routes entirely through the "no unlawful use" part.
The part where Altman says "AI should not be used for mass surveillance or autonomous lethal weapons [...] These are our main red lines" sounds pretty unambiguous, but maybe this is easier to wriggle out of than a concrete statement about the contract OpenAI will request.
I don't see why it would be difficult to get data about the movement patterns of individual people, bought from data brokers, into an airgapped environment.
Why? You put the data on a hard drive and then put the hard drive into your airgapped environment.
Even if we leave out the data bought from data brokers, the US military has plenty of spy satellites. Given that they have permission to do foreign surveillance, there's a good chance they are already running agents in the Palantir system that analyze satellite imagery (and other sensor information) for that purpose.
Since a lot of those surveillance satellites aren't geostationary, they are likely already flying over the US, and that data is available for analysis today.
Yep, on my read the supposed "red lines" are not actually in the contract language they have shared; consider, e.g., whether the part quoted above in fact names a "red line".
I had thought that OpenAI, xAI, and GDM might leave Anthropic to face the music on their own. Then we got word of a petition signed by numerous GDM staff and about a dozen OpenAI staff, urging their employers to draw the same red lines as Anthropic, and I wondered if this was actually about xAI and Trump 2.0 against everyone else. Now we hear that xAI has in fact already agreed to everything the Pentagon wants, but Grok is not considered as capable or reliable as the other AIs.
I'm not sure how it will all turn out. But it reminds me most of when Trump 2.0 started putting pressure on the elite universities via their funding. The administration had turned its attention to a particular class of powerful institutions, made public demands of them all, and they each responded in their own way - Harvard resisted, Columbia folded. Now it's the turn of the frontier AI companies to be confronted.
In my opinion, the significance extends beyond internal US affairs, and even beyond conflicts of the moment such as a possible war with Iran, to the whole nature of the "Pax Silica" grouping, which seems to be the first truly AI-driven geopolitical association: these four companies are at its hub, and what's being determined is their relationship to US hard power.
Now that war with Iran has actually started, I feel that this must have also been a factor in how Trump 2.0 tackled the situation with Anthropic. If Anthropic really did express misgivings about the use of Claude in the capture of Venezuela's Maduro, the Pentagon must have been concerned about what Anthropic could get up to during a war with Iran, a far larger operation.
I have no understanding of where and how frontier AI is playing a role in this war, but I wonder if any agency involved has already switched from Claude to "WarGPT"? I have yet to use an interface like OpenClaw, but I would guess that it's pretty easy to change the model you're using. Maybe it's only hard to change over if you're relying on a finetuned model like a custom GPT.
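For what it's worth, my impression is that many agent frontends speak an OpenAI-compatible chat API, in which case swapping the backing model is mostly a config change. Here's a minimal sketch of that pattern, assuming such a gateway exists; the endpoint, key, and model names are invented for illustration, not anything OpenClaw or the DoW actually uses:

```python
# Sketch: swapping models behind an OpenAI-compatible agent stack.
# The base_url, api_key, and model names are hypothetical placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://llm-gateway.example.mil/v1",  # invented gateway endpoint
    api_key="REDACTED",
)

# Switching providers is, in this setup, just a different string here;
# the agent code around the call is unchanged.
response = client.chat.completions.create(
    model="wargpt-large",  # previously, say, some Claude model id
    messages=[{"role": "user", "content": "Summarize the latest sensor logs."}],
)
print(response.choices[0].message.content)
```

A finetuned model is harder to swap because the finetuning lives with one provider; you'd have to redo it (and re-validate behavior) on the replacement model, which is presumably where the real switching cost is.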
If you have an agent that works, you wouldn't want to switch its model and potentially introduce new bugs right at the moment you start a war. The alternatives they have, like Meta's open-sourced models, are likely worse than the frontier models they access from Anthropic, and there might be performance degradation you don't want to discover while waging war.
I would expect that the generals actually using the technology don't like being told from above that they now have to start a six-month migration off those models when they would rather focus on the war.