Google Also Has Suggestions
The most prominent remaining lab is Google. Google focuses on AI’s upside. The vibes aren’t great, but they’re not toxic. The key asks for their ‘pro-innovation’ approach are:
Coordinated policy at all levels for transmission, energy and permitting. Yes.
‘Balanced’ export controls, meaning scale back the restrictions a bit on cloud compute in particular and actually execute properly, but full details TBD, they plan to offer their final asks here by May 15. I’m willing to listen.
‘Continued’ funding for AI R&D, public-private partnerships. Release government data sets, give startups cash, and bankroll our CBRN-risk research. Ok I guess?
‘Pro-innovation federal policy frameworks’ that preempt the states, in particular ‘state-level laws that affect frontier models.’ Again, a request for a total free pass.
‘Balanced’ copyright law meaning full access to anything they want, ‘without impacting rights holders.’ The rights holders don’t see it that way. Google’s wording here opens the possibility of compensation, and doesn’t threaten that we would lose to China if they don’t get their way, so there’s that.
‘Balanced privacy laws that recognize exemptions for publicly available information will avoid inadvertent conflicts with AI or copyright standards, or other impediments to the development of AI systems.’ They do still want to protect ‘personally identifying data’ and protect it from ‘malicious actors’ (are they here in the room with us right now?) but mostly they want a pass here too.
Expedited review of the validity of AI-related patents upon request. Bad vibes around the way they are selling it, but the core idea seems good, this seems like a case where someone is actually trying to solve real problems. I approve.
‘Emphasize focused, sector-specific, and risk-based AI governance and standards.’ Et tu, Google? You are going to go with this use-based regulatory nightmare? I would have thought Google would be better than trying to invoke the nightmare of distinct rules for every different application, which does not deal with the real dangers but does cause giant pains in the ass.
A call for ‘workforce development’ programs, which as I noted for OpenAI are usually well-intentioned and almost always massive boondoggles. Incorporating AI into K-12 education is of course vital but don’t make a Federal case out of it.
Federal government adoption of AI, including in security and cybersecurity. This is necessary, and a lot of the details here seem quite good.
‘Championing market-driven and widely adopted technical standards and security protocols for frontier models, building on the Commerce Department’s leading role with the International Organization for Standardization’ and ‘Working with industry and aligned countries to develop tailored protocols and standards to identify and address potential national security risks of frontier AI systems.’ They are treating a few catastrophic risks (CBRN in particular) as real, although the document neglects to mention anything beyond that. They want clear indications of who is responsible for what and clear standards to meet, which seems fair. They also want full immunity for ‘misuse’ by customers or end users, which seems far less fair when presented in this kind of absolute way. I’m fine with letting users shoot themselves in the foot but this goes well beyond that.
Ensuring American AI has access to foreign markets via trade agreements. Essentially, make sure no one else tries to regulate anything or stop us from dying, either.
This is mostly Ordinary Decent Corporate Lobbying. Some of it is good and benefits from their expertise, some is not so good, some is attempting regulatory capture, same as it ever was.
The problem is that AI poses existential risks and is going to transform our entire way of life even if things go well, and Google is suggesting strategies that don’t take any of that into account at all. So I would say that overall, I am modestly disappointed, but not making any major updates.
It is a tragedy that Google makes very good AI models, then cripples them by being overly restrictive in places where there is no harm, in ways that only hurt Google’s reputation, while being mostly unhelpful around the actually important existential risks. It doesn’t have to be this way, but I see no signs that Demis can steer the ship on these fronts and make things change.
Another Note on OpenAI’s Suggestions
John Pressman has a follow-up thread explaining why he thought OpenAI’s submission exceeded his expectations. I can understand why one could have expected something worse than what we got, and he asks good questions about the relationship between various parts of OpenAI – a classic mistake is not realizing that companies are made of individuals and those individuals are often at cross-purposes. I do think this is the best steelman I’ve seen, so I’ll quote it at length.
John Pressman: It’s more like “well the entire Trump administration seems to be based on vice signaling so”.
Do I like the framing? No. But concretely it basically seems to say “if we want to beat China we should beef up our export controls *on China*, stop signaling to our allies that we plan to subjugate them, and build more datacenters” which is broad strokes Correct?
“We should be working to convince our allies to use AI to advance Western democratic values instead of an authoritarian vision from the CCP” isn’t the worst thing you could say to a group of vice signaling jingoists who basically demand similar from petitioners.
… [hold this thought]
More important than what the OpenAI comment says is what it doesn’t say: How exactly we should be handling “recipe for ruin” type scenarios, let alone rogue superintelligent reinforcement learners. Lehane seems happy to let these leave the narrative.
I mostly agree with *what is there*, I’m not sure I mostly agree with what’s not there so to speak. Even the China stuff is like…yeah fearmongering about DeepSeek is lame, on the other hand it is genuinely the case that the CCP is a scary institution that likes coercing people.
The more interesting thing is that it’s not clear to me what Lehane is saying is even in agreement with the other stated positions/staff consensus of OpenAI. I’d really like to know what’s going on here org chart wise.
Thinking about it further it’s less that I would give OpenAI’s comment a 4/5 (let alone a 5/5), and more like I was expecting a 1/5 or 0/5 and instead read something more like 3/5: Thoroughly mediocre but technically satisfies the prompt. Not exactly a ringing endorsement.
We agree about what is missing. There are two disagreements about what is there.
The potential concrete disagreement is over OpenAI’s concrete asks, which I think are self-interested overreaches in several places. It’s not clear to what extent he sees them as overreaches versus being justified underneath the rhetoric.
The other disagreement is over the vice signaling. He is saying (as I understand it) that the assignment was to vice signal, of course you have to vice signal, so you can’t dock them for vice signaling. And my response is a combination of ‘no, it still counts as vice signaling, you still pay the price and you still don’t do it’ and also ‘maybe you had to do some amount of vice signaling but MY LORD NOT LIKE THAT.’ OpenAI sent a strong, costly and credible vice signal and that is important evidence to notice and also the act of sending it changes them.
By contrast: Google’s submission is what you’d expect from someone who ‘understood the assignment’ and wasn’t trying to be especially virtuous, but was not Obviously Evil. Anthropic’s reaction is someone trying to do better than that while strategically biting their tongue, and of course MIRI’s would be someone politely not doing that.
On Not Doing Bad Things
I think this is related to the statement I skipped over, which was directed at me, so I’ll include my response from the thread. I want to be clear that I think John is doing his best and saying what he actually believes here, and I don’t mean to single him out, but this is a persistent pattern that I think causes a lot of damage:
John Pressman: Anyway given you think that we’re all going to die basically, it’s not like you get to say “that person over there is very biased but I am a neutral observer”, any adherence to the truth on your part in this situation would be like telling the axe murderer where the victim is.
Zvi Mowshowitz: I don’t know how to engage with your repeated claims that people who believe [X] would obviously then do [Y], no matter the track record of [~Y] and advocacy of [~Y] and explanation of [~Y] and why [Y] would not help with the consequences of [X].
This particular [Y] is lying, but there have been other values of [Y] as well. And, well, seriously, WTF am I supposed to do with that, I don’t know how to send or explain costlier signals than are already being sent.
I don’t really have an ask, I just want to flag how insanely frustrating this is and that it de facto makes it impossible to engage and that’s sad because it’s clear you have unique insights into some things, whereas if I was as you assume I am I wouldn’t have quoted you at all.
I think this actually is related to one of our two disagreements about the OP from OpenAI – you think that vice signaling to those who demand vice signaling is good because it works, and I’m saying no, you still don’t do it, and if you do then that’s still who you are.
The other values of [Y] he has asserted, in other places, have included a wide range of both [thing that would never work and is also pretty horrible] and [preference that John thinks follows from [X] but where we strongly think the opposite and have repeatedly told him and others this and explained why].
And again, I’m laying this out because he’s not alone. I believe he’s doing it in unusually good faith and is mistaken, whereas mostly this class of statement is rolled out as a very disingenuous rhetorical attack.
The short version of why the various non-virtuous [Y] strategies wouldn’t work is:
The FDT or virtue ethics answer. The problems are complicated on all levels. The type of person who would [Y] in pursuit of [~X] can’t even figure out that they should expect [X] to happen by default, let alone think well enough to figure out what [Z] to pursue (via [Y] or [~Y]) in order to accomplish [~X]. The whole rationality movement was created exactly because if you can’t think well in general and have very high epistemic standards, you can’t think well about AI either, and you need to do that.
The CDT or utilitarian answer. Even if you knew the [Z] to aim for, this is an iterated, complicated social game, where we need to make claims that will look extraordinary to many key decision makers, and ask for actions to be taken based on chains of logic, without waiting for things to blow up in everyone’s face first and muddling through afterwards, like humanity normally does it. Employing various [Y] to those ends, even a little, let alone on the level of, say, politicians, will inevitably and predictably backfire. And indeed, in those few cases where someone notably broke this rule, it did massively backfire.
Is it possible that at some point in the future, we will have a one-shot situation actually akin to Kant’s axe murderer, where we know exactly the one thing that matters most and a deceptive path to it, and then have a more interesting question? Indeed do many things come to pass. But that is at least quite a ways off, and my hope is to be the type of person who would still try very hard not to pull that trigger.
The even shorter version is:
The type of person who can think well enough to realize to do it, won’t do it.
Even if you did it anyway, it wouldn’t work, and we realize this.
Corporations Are Multiple People
Here is the other notable defense of OpenAI, which is to notice what John was pointing to, that OpenAI contains multitudes.
Shakeel: I really, really struggle to see how OpenAI’s suggestions to the White House on AI policy are at all compatible with the company recently saying that “our models are on the cusp of being able to meaningfully help novices create known biological threats”.
Just an utterly shameful document. Lots of OpenAI employees still follow me; I’d love to know how you feel about your colleagues telling the government that this is all that needs to be done! (My DMs are open.)
Roon: the document mentions CBRN risk. openai has to do the hard work of actually dealing with the White House and figuring out whatever the hell they’re going to be receptive to
Shakeel: I think you are being way too charitable here — it’s notable that Google and Anthropic both made much more significant suggestions. Based on everything I’ve heard/seen, I think your policy team (Lehane in particular) just have very different views and aims to you!
“maybe the biggest risk is missing out”? Cmon.
Lehane (OpenAI, in charge of the document): Maybe the biggest risk here is actually missing out on the opportunity. There was a pretty significant vibe shift when people became more aware and educated on this technology and what it means.
Roon: yeah that’s possible.
Richard Ngo: honestly I think “different views” is actually a bit too charitable. the default for people who self-select into PR-type work is to optimize for influence without even trying to have consistent object-level beliefs (especially about big “sci-fi” topics like AGI)
Hollywood Offers Google and OpenAI Some Suggestions
You can imagine how the creatives reacted to proposals to invalidate copyright without any sign of compensation.
Chris Morris (Fast Company): A who’s who of musicians, actors, directors, and more have teamed up to sound the alarm as AI leaders including OpenAI and Google argue that they shouldn’t have to pay copyright holders for AI training material.
…
Included among the prominent signatures on the letter were Paul McCartney, Cynthia Erivo, Cate Blanchett, Phoebe Waller-Bridge, Bette Midler, Paul Simon, Ben Stiller, Aubrey Plaza, Ron Howard, Taika Waititi, Ayo Edebiri, Joseph Gordon-Levitt, Janelle Monáe, Rian Johnson, Paul Giamatti, Maggie Gyllenhaal, Alfonso Cuarón, Olivia Wilde, Judd Apatow, Chris Rock, and Mark Ruffalo.
…
“It is clear that Google . . . and OpenAI . . . are arguing for a special government exemption so they can freely exploit America’s creative and knowledge industries, despite their substantial revenues and available funds.”
No surprises there. If anything, that was unexpectedly polite.
I would perhaps be slightly concerned about pissing off the people most responsible for the world’s creative content (and especially Aubrey Plaza), but hey. That’s just me.
Institute for Progress Offers Suggestions
I’ve definitely been curious where these folks would land. Could have gone either way.
I am once again disappointed to see the framing as Americans versus authoritarians, although in a calm and sane fashion. They do call for investment in ‘reliability and security,’ but only because they recognize that reliability and security are (necessary for) capability, and only on that basis. Which is fine to the extent it gets the job done, I suppose. But the complete failure to consider existential or catastrophic risks, other than authoritarianism, is deeply disappointing.
They offer six areas of focus.
Making it easier to build AI data centers and associated energy infrastructure. Essentially everyone agrees on this; it’s a question of execution, and they offer details.
Supporting American open-source AI leadership. They open this section with ‘some models… will need to be kept secure from adversaries.’ So there’s that; in theory we could all be on the same page on this, if more of the advocates of open models could also stop being anarchists and face physical reality. The IFP argument for why it must be America that ‘dominates open source AI’ is the danger of backdoors, but yes, it is rather impossible to get an enduring ‘lead’ in open models, because all your open models are, well, open. They admit this is rather tricky.
The first basic policy suggestion here is to help American open models git gud via reliability, but how is that something the government can help with?
They throw out the idea of prizes for American open models, but again I notice I am puzzled by how exactly this would supposedly work out.
They want to host American open models on NAIRR, so essentially offering subsidized compute to the ‘little guy’? I pretty much roll my eyes, but shrug.
Launch R&D moonshots to solve AI reliability and security. I strongly agree that it would be good if we could indeed do this in even a modestly reasonable way, as in a fraction of the money turns into useful marginal spending. Ambitious investments in hardware security, a moonshot for AI-driven formally verified software and a ‘grand challenge’ for interpretability, would be highly welcome, as would a pilot for a highly secure data center. Of course, the AI labs are massively underinvesting in this even purely from a selfish perspective.
Build state capacity to evaluate the national security capabilities and implications of US and adversary models. This is important. I think their recommendation on AISI makes a tactical error: it emphasizes the dangers of AISI following things like the ‘risk management framework,’ which plays into the hands of those who would dismantle AISI, and I know that is not what they want. AISI is already focused on what IFP is referring to as ‘security risks’ combined with potential existential dangers, and emphasizing that is what is most important. AISI is under threat mostly because MAGA people, and Cruz in particular, are under the impression that it is something that it is not.
Attracting and retaining superstar AI talent. Absolutely. They mention EB-1A, EB-2 and O-3, which I hadn’t considered. Such asks are tricky because obviously we should be allowing as much high-skill immigration as we can across the board, especially from our rivals, except that you’re pitching the Trump Administration.
Improving export control policies and enforcement capacity. They suggest making export exceptions for chips with proper security features that guard against smuggling and misuse. Sounds great to me if implemented well. And they also want to control high-performance inference chips and properly fund BIS, again I don’t have any problem with that.
Going item by item, I don’t agree with everything and think there are some tactical mistakes, but that’s a pretty good list. I see what IFP is presumably trying to do: sneak useful-for-existential-risk proposals in because they would be good ideas anyway, without mentioning the additional benefits. I totally get that, and my own write-up did a bunch in this direction too, even if I think they took it too far.
Suggestion Boxed In
This was a frustrating exercise for everyone writing suggestions. Everyone had to balance saying what needs to be said against saying it in a way that would get the administration to listen.
How everyone responded to that challenge tells you a lot about who they are.
Last week I covered Anthropic’s relatively strong submission, and OpenAI’s toxic submission. This week I cover several other submissions, and do some follow-up on OpenAI’s entry.