Man With a Plan
The primary Man With a Plan this week for government-guided AI prosperity was UK Prime Minister Keir Starmer, with a plan coming primarily from Matt Clifford. I’ll be covering that soon.
Today I will be covering the other Man With a Plan, Sam Altman, as OpenAI offers its Economic Blueprint.
Do you hear yourselves? The mask on race and jingoism could not be more off, or firmly attached, depending on which way you want to set up your metaphor. If a movie had villains talking like this people would say it was too on the nose.
Somehow the actual documents tell that statement to hold its beer.
Oh the Pain
The initial exploratory document is highly disingenuous, trotting out stories of the UK requiring people to walk in front of cars waving red flags and talking about ‘AI’s main street,’ while threatening that if we don’t attract the $175 billion in AI funding now awaiting investment, it will flow to China-backed projects. They even talk about creating jobs… by building data centers.
The same way some documents scream ‘an AI wrote this,’ others scream ‘the authors of this post are not your friends and are talking their book with some mixture of politics-talk and corporate-speak in the most cynical way you can imagine.’
I mean, I get it, playas gonna play, play, play, play, play. But can I ask OpenAI to play with at least some style and grace? To pretend to pretend not to be doing this, a little?
As opposed to actively inserting so many Fnords their document causes physical pain.
The full document starts out in the same vein. Chris Lehane, their Vice President of Global Affairs, writes an introduction as condescending as I can remember, and that plus the ‘where we stand’ repeat the same deeply cynical rhetoric from the summary.
In some sense, it is not important that the way the document is written makes me physically angry and ill in a way I endorse – to the extent that if it doesn’t set off your bullshit detectors and reading it doesn’t cause you pain, then I notice that there is at least some level on which I shouldn’t trust you.
But perhaps that is the most important thing about the document? That it tells you about the people writing it. They are telling you who they are. Believe them.
This is related to the ‘truesight’ that Claude sometimes displays.
As I wrote that, I was only on page 7, and hadn’t even gotten to the actual concrete proposals.
The actual concrete proposals are a distinct issue. I was having trouble reading through to find out what they are because this document filled me with rage and made me physically ill.
It’s important to notice that! I read documents all day, often containing things I do not like. It is very rare that my body responds by going into physical rebellion.
No, the document hasn’t yet mentioned even the possibility of any downside risks at all, let alone existential risks. And that’s pretty terrible on its own. But that’s not even what I’m picking up here, at all. This is something else. Something much worse.
Worst of all, it feels intentional. I can see the Fnords. They want me to see them. They want everyone to implicitly know they are being maximally cynical.
Actual Proposals
All right, so if one pushes through to the second half and the actual ‘solutions’ section, what is being proposed, beyond ‘regulating us would be akin to requiring someone to walk in front of every car waving a red flag, no literally’?
The top-level numbered statements describe what they propose; I attempted to group and separate the proposals for better clarity. The nested statements (a, b, etc.) are my reactions.
In a section where they actually say words with meanings rather than filling it with Fnords, they say the Federal Government should:
A lot of the components here are things basically everyone should agree upon.
Then there are the parts where, rather than going hand-in-hand with an attempt to not kill everyone and ensure against catastrophes, the document attempts to ensure that no one else tries to stop catastrophes or prevent everyone from being killed. Can’t have that.
For AI Builders
They also propose that AI ‘builders’ could:
I mean, sure, those seem good, and we should have an antitrust exemption to allow actions like this, along with one that allows them to coordinate, slow down, or pause in the name of safety if it comes to that, too. Not that this document mentions that.
Think of the Children
Sigh, here we go. Their solutions for thinking of the children are:
Content Identification
And then, I feel like I need to fully quote this one too:
Infrastructure Week
Finally, we get to ‘infrastructure as destiny,’ an area where we mostly agree on what is to actually be done, even if I despise a lot of the rhetoric they’re using to argue for it.
Paying Attention
When we get down to the actual asks in the document, a majority of them I actually agree with, and most of them are reasonable, once I was able to force myself to read the words intended to have meaning.
There are still two widespread patterns to note within the meaningful content.
The real vision, the thing I will take away most, is in the rhetoric and presentation, combined with the broader goals, rather than the particular details.
OpenAI now actively wants to be seen as pursuing this kind of obviously disingenuous jingoistic and typically openly corrupt rhetoric, to the extent that their statements are physically painful to read – I dealt with much of that around SB 1047, but this document takes that to the next level and beyond.
OpenAI wants no enforced constraints on their behavior, and they want our money.
OpenAI are telling us who they are. I fully believe them.