Bloomberg confirms that OpenAI has promised not to cancel vested equity under any circumstances, and to release all employees from one-directional non-disparagement agreements.
They don't actually say "all," and I haven't seen anyone confirm that all employees received this email. It seems possible (and perhaps likely) to me that many high-profile safety people did not receive it, especially since withholding it would presumably be in Sam's interest, and since I haven't seen them claiming otherwise. And we wouldn't know: those who are still under the contract can't say anything. If OpenAI only sent the email to some former employees, they can come away with headlines like "OpenAI releases former staffers from agreement," which is true, without giving away their whole hand. Perhaps I'm being too pessimistic, but I am under the impression that we're dealing with a quite adversarial player, and until I see hard evidence otherwise this is what I'm assuming.
CNBC reports:
The memo, addressed to each former employee, said that at the time of the person’s departure from OpenAI, “you may have been informed that you were required to execute a general release agreement that included a non-disparagement provision in order to retain the Vested Units [of equity].”
“Regardless of whether you executed the Agreement, we write to notify you that OpenAI has not canceled, and will not cancel, any Vested Units,” stated the memo, which was viewed by CNBC.
The memo said OpenAI will also not enforce any other non-disparagement or non-solicitation contract items that the employee may have signed.
“As we shared with employees, we are making important updates to our departure process,” an OpenAI spokesperson told CNBC in a statement.
“We have not and never will take away vested equity, even when people didn’t sign the departure documents. We’ll remove nondisparagement clauses from our standard departure paperwork, and we’ll release former employees from existing nondisparagement obligations unless the nondisparagement provision was mutual,” said the statement, adding that former employees would be informed of this as well.
A handful of former employees have publicly confirmed that they received the email.
Asya: is the above sufficient to allay the suspicion you described? If not, what kind of evidence are you looking for (that we might realistically expect to get)?
Prerat: Everyone should have a canary page on their website that says “I’m not under a secret NDA that I can’t even mention exists” and then if you have to sign one you take down the page.
Does this work? Sounds like a good idea.
While I am not a lawyer, this concept does appear to hold some merit. A similar strategy, known as a "warrant canary," is used by organizations focused on civil rights. Essentially, it's a method by which a communications service provider aims to implicitly inform its users that it has been served with a government subpoena, despite legal prohibitions on revealing the subpoena's existence. The idea is that there are very strong protections against compelled speech, especially against compelled untrue speech (e.g. being made to keep publishing the canary despite having received a subpoena).
The Electronic Frontier Foundation (EFF) seems to believe that warrant canaries are legal.
I feel like these would be more effective if standardized, dated and updated. Should we also mention gag orders? Something like this?
As of June 2024, I have signed no contracts or agreements whose existence I cannot mention.
As of June 2024, I am not under any kind of gag order whose existence I cannot mention.
Last updated June 2024. I commit to updating at least annually.
Could LessWrong itself be compelled even if the user cannot? Should we include PGP signatures or something?
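On the PGP question: one common answer is to clearsign the canary with your own key, so that even if the host were compelled, it could not forge a fresh "all clear" without that key. A minimal sketch of what that might look like, assuming GnuPG is installed and you already have a signing key (the filename, wording, and helper function here are illustrative, not a vetted scheme):

```python
import datetime
import subprocess

# Illustrative wording only; substitute whatever canary text you actually commit to.
CANARY_TEMPLATE = """\
As of {date}, I have signed no contracts or agreements whose existence I cannot mention.
As of {date}, I am not under any kind of gag order whose existence I cannot mention.
Last updated {date}. I commit to updating at least annually.
"""

def write_signed_canary(path: str = "canary.txt") -> None:
    """Write a dated canary and clearsign it with the default GnuPG key."""
    text = CANARY_TEMPLATE.format(date=datetime.date.today().strftime("%B %Y"))
    with open(path, "w") as f:
        f.write(text)
    # Produces canary.txt.asc: the text wrapped in an armored signature block.
    # A host cannot alter or refresh it without the private key; readers check it
    # with `gpg --verify canary.txt.asc`.
    subprocess.run(["gpg", "--clearsign", path], check=True)

if __name__ == "__main__":
    write_signed_canary()
```

This still shares the warrant canary's basic limitation: it only signals by going stale or disappearing, and it does not help if you could somehow be compelled to keep signing false statements.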
The wording of that canary is perhaps less precise and less broad than you wanted it to be in many possible worlds. Given obvious possible inferences one could reasonably make from the linguistic pragmatics - and what's left out - you are potentially passively representing a(n overly?) polarized set of possible worlds you claim to maybe live in, and may not have thought about the full ramifications of that.
That's fair that mine's not that precise. I've copied Habryka's one instead. (My old one is in a footnote for posterity[1].)
Non-disclosure agreements I have signed: Around 2017 I signed an NDA when visiting the London DeepMind offices for lunch, one covering sharing any research secrets, which was required of all guests before we were allowed access to the building. I do not believe I have ever signed another NDA (nor a non-disparagement agreement).
Is anyone else shocked that no one before Daniel refused to sign?
I guess I shouldn't be coming to this conclusion in 2024 but holy cow are people greedy.
That seems like a rather uncharitable take. Even if you're mad at the company, would you (at least (~falsely) assuming this all may indeed be standard practice and not as scandalous as it turned out to be) really be willing to pay millions of dollars for the right to e.g. say more critical things on Twitter, that in most cases extremely few people will even care about? I'm not sure if greed is the best framing here.
(Of course the situation is a bit different for AI safety researchers in particular, but even then, there's not that much actual AI (safety) related intel that even Daniel was able to share that the world really needs to know about; most of the criticism OpenAI is dealing with now is on this meta NDA/equity level)
As a trust fund baby who likes to think I care about the future of humanity, I can confidently say that I would at least consider it, though I'd probably take the money.
It seems increasingly plausible that it would be in the public interest to ban non-disparagement clauses more generally going forward, or at least set limits on scope and length (although I think nullifying existing contracts is bad and the government should not do that and shouldn’t have done it for non-competes either.)
I concur.
It should be noted, though: we can spend all day taking apart these contracts and applying pressure publicly, but real change will have to come from the courts. I await an official judgment to see the direction of this issue. Arguably, the outcome there is more important for any alignment initiative run by a company than its technical goals are (at the moment).
How do you reconcile keeping genuine cognitohazards away from the public while also maintaining accountability and employee health? Is there a middle ground that justifies the existence of NDAs and non-disparagement clauses?
The LessWrong Review runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2025. The top fifty or so posts are featured prominently on the site throughout the year.
Hopefully, the review is better than karma at judging enduring value. If we have accurate prediction markets on the review results, maybe we can have better incentives on LessWrong today. Will this post make the top fifty?
2 months ago, Zvi wrote:
I too like Sam Altman, his writing and the way he often communicates. I would add I am a big fan of his medical and fusion efforts. He has engaged for real with the ideas that I consider most important; even if he has a different opinion, I know he takes the concerns seriously. Most of all, I would emphasize: He strives for what he believes is good.
Zvi also previously praised Altman for the effect Altman had on AI safety and his understanding of it. I'd be interested in a retrospective on what went wrong with Zvi's evaluation process.
Followed immediately by:
I also have very strong concerns that we are putting a person whose highest stats are political maneuvering and deception, who is very high in power seeking, into this position. By all reports, you cannot trust what this man tells you.
Yes, but Zvi's earlier posts were more positive about Altman. I just picked a relatively recent post, written after the board fired him.
Multi-voiced AI narration of this post. Every unique quoted person gets their own voice to distinguish them:
https://askwhocastsai.substack.com/p/openai-fallout-by-zvi-mowshowitz
Previously: OpenAI: Exodus (contains links at top to earlier episodes), Do Not Mess With Scarlett Johansson
We have learned more since last week. It’s worse than we knew.
How much worse? In which ways? With what exceptions?
That’s what this post is about.
The Story So Far
For years, employees who left OpenAI consistently had their vested equity explicitly threatened with confiscation and the lack of ability to sell it, and were given short timelines to sign documents or else. Those documents contained highly aggressive NDA and non disparagement (and non interference) clauses, including the NDA preventing anyone from revealing these clauses.
No one knew about this until recently, because until Daniel Kokotajlo everyone signed, and then they could not talk about it. Then Daniel refused to sign, Kelsey Piper started reporting, and a lot came out.
Here is Altman’s statement from May 18, with its new community note.
Evidence strongly suggests the above post was, shall we say, ‘not consistently candid.’
The linked article includes a document dump and other revelations, which I cover.
Then there are the other recent matters.
Ilya Sutskever and Jan Leike, the top two safety researchers at OpenAI, resigned, part of an ongoing pattern of top safety researchers leaving OpenAI. The team they led, Superalignment, had been publicly promised 20% of secured compute going forward, but that commitment was not honored. Jan Leike expressed concerns that OpenAI was not on track to be ready for even the safety needs of the next generation of models.
OpenAI created the Sky voice for GPT-4o, which evoked consistent reactions that it sounded like Scarlett Johansson, who voiced the AI in the movie Her, Altman’s favorite movie. Altman asked her twice to lend her voice to ChatGPT. Altman tweeted ‘her.’ Half the articles about GPT-4o mentioned Her as a model. OpenAI executives continue to claim that this was all a coincidence, but have taken down the Sky voice.
(Also six months ago the board tried to fire Sam Altman and failed, and all that.)
A Note on Documents from OpenAI
The source for the documents from OpenAI that are discussed here, and the communications between OpenAI and its employees and ex-employees, is Kelsey Piper in Vox, unless otherwise stated.
She went above and beyond, and shares screenshots of the documents. For superior readability and searchability, I have converted those images to text.
Some Good News But There is a Catch
OpenAI has indeed made a large positive step. They say they are releasing former employees from their nondisparagement agreements and promising not to cancel vested equity under any circumstances.
Bloomberg confirms that OpenAI has promised not to cancel vested equity under any circumstances, and to release all employees from one-directional non-disparagement agreements.
And we have this confirmation from Andrew Carr.
Tanner Lund: Is this legally binding?
I notice they are also including the non-solicitation provisions as not enforced.
(Note that certain key people, like Dario Amodei, plausibly negotiated two-way agreements, which would mean theirs would still apply. I would encourage anyone in that category who is now free of the clause, even if they have no desire to disparage OpenAI, to simply say ‘I am under no legal obligation not to disparage OpenAI.’)
These actions by OpenAI are helpful. They are necessary.
They are not sufficient.
First, the statement is not legally binding, as I understand it, without execution of a new agreement. No consideration was given, the statement is informal, and it is unclear whether its author has authority in the matter.
Even if it was binding as written, it says they do not ‘intend’ to enforce. Companies can change their minds, or claim to change them, when circumstances change.
It also does not mention the ace in the hole, which is the ability to deny access to tender offers, or other potential retaliation by Altman or OpenAI. Until an employee has fully sold their equity, they are still in a bind. Even afterwards, a company with this reputation cannot be trusted to not find other ways to retaliate.
Nor does it mention the clause under which OpenAI claims the right to repurchase shares at 'fair market value,' noting that their official 'fair market value' of the shares is $0. Altman's statement does not mention this at all, including the possibility that it has already happened.
I mean, yeah, I also would in many senses like to see them try that one, but this does not give ex-employees much comfort.
Then there is the problem of taking responsibility. OpenAI is at best downplaying what happened. Certain statements sure look like lies. To fully set things right, one must admit responsibility. Truth and reconciliation requires truth.
Here is Kelsey with the polite version.
If I were an ex-employee, no matter what else I would do, I would absolutely sell my equity at the next available tender opportunity. Why risk it?
Indeed, here is a great explanation of the practical questions at play. If you want to fully make it right, and give employees felt freedom to speak up, you have to mean it.
Jacob Hilton is giving every benefit of the doubt to OpenAI here. Yet he notices that the chilling effects will be large.
Was this an own goal? Kelsey initially thought it was; then it is explained why the situation is not so clear cut as that.
There are big advantages to being generally seen as highly vindictive, as a bad actor willing to do bad things if you do not get your way. Often that causes people to proactively give you what you want and avoid threatening your interests, with no need to do anything explicit. Many think this is how one gets power, and that one should side with power and with those who act in such fashion.
There also is quite a lot of value in controlling the narrative, and having leverage over those close to you, that people look to for evidence, and keeping that invisible.
What looks like a mistake could be a well-considered strategy, and perhaps quite a good bet. Most companies that use such agreements do not have them revealed. If it was not for Daniel, would not the strategy still be working today?
And to state the obvious: If Sam Altman and OpenAI lacked any such leverage in November, and everyone had been free to speak their minds, does it not seem plausible (or if you include the board, rather probable) that the board’s firing of Altman would have stuck without destroying the company, as ex-employees (and board members) revealed ways in which Altman had been ‘not consistently candid’?
How Blatant Was This Threat?
Oh my.
It does not get more explicit than that.
I do appreciate the bluntness and honesty here, of skipping the nominal consideration.
It Sure Looks Like Executives Knew What Was Going On
What looks the most implausible are claims that the executives did not know what was going on regarding the exit agreements and legal tactics until February 2024.
Kelsey Piper’s Vox article is brutal on this, and brings the receipts. The ultra-restrictive NDA, with its very clear and explicit language of what is going on, is signed by COO Brad Lightcap. The notices that one must sign it are signed by (now departed) OpenAI VP of people Diane Yoon. The incorporation documents that include extraordinary clawback provisions are signed by Sam Altman.
There is also the question of how this language got into the exit agreements in the first place, and also the corporate documents, if the executives were not in the loop. This was not a 'normal' type of clause that lawyers sneak in without consulting you, the kind you might miss because you do not read the documents you are signing.
Pressure Tactics Continued Through the End of April 2024
OpenAI claims they noticed the problem in February, and began updating in April.
Kelsey Piper showed language of this type in documents as recently as April 29, 2024, signed by OpenAI COO Brad Lightcap.
The documents in question, presented as standard exit ‘release of claims’ documents that everyone signs, include extensive lifetime non disparagement clauses, an NDA that covers revealing the existence of either the NDA or the non disparagement clause, and a non-interference clause.
Here is what it looked like for someone to finally decline to sign.
Some potential ambiguity, huh. What a nice way of putting it.
Even if we accepted on its face the claim that this was unintentional and unknown to management until February, which I find highly implausible at best, that is no excuse.
Again, even if you are somehow telling the truth here, what about after the catch?
Two months is more than enough time to stop using these pressure tactics, and to offer ‘clarification’ to employees. I would think it was also more than enough time to update the documents in question, if OpenAI intended to do that.
They only acknowledged the issue, and only stopped continuing to act this way, after the reporting broke. After that, the ‘clarifications’ came quickly. Then, as far as we can tell, the actually executed new agreements and binding contracts will come never. Does never work for you?
The Right to an Attorney
Here we have OpenAI’s lawyer refusing to extend a unilaterally imposed seven day deadline to sign the exit documents, discouraging the ex-employee from consulting with an attorney.
I had the opportunity to talk to someone whose job involves writing up and executing employment agreements of the type used here by OpenAI. They reached out, before knowing about Kelsey Piper's article, specifically because they wanted to make the case that what OpenAI did was mostly standard practice, within the realm of the acceptable. If you get equity you should expect to sign a non-disparagement clause, and they explicitly said they would be surprised if Anthropic was not doing it as well.
They did not think that OpenAI then interpreting 'release of claims' as 'you can never say anything bad about us ever for any reason or tell anyone that you agreed to this' was also fair game.
Their argument was that if you sign something like that without talking to a lawyer first, that is on you. You have opened the door to any clause. Never mind what happens in practice when you raise objections and consult lawyers during onboarding at a place like OpenAI, as if it would be unheard of for a company to treat that as a red flag or rescind your offer.
That is very much a corporate lawyer’s view of what is wise and unwise paranoia, and what is and is not acceptable practice.
Even that lawyer said that a 7 day exploding period was highly unusual, and that it was seriously not fine. A 21 day exploding period is not atypical for an exploding contract in general, but that gives time for a lawyer to be consulted. Confining it to a week is seriously messed up.
It also is not what the original contract said, which was that you had 60 days. As Kelsey Piper points out, no you cannot spring a 7 day period on someone when the original contract said 60.
Nor was it a threat they honored when called on it; they always extended, with this as an example:
And they very clearly tried to discourage ex-employees from consulting a lawyer.
Even if all of it is technically legal, there is no version of this that isn’t scummy as hell.
The Tender Offer Ace in the Hole
Control over tender offers means that ultimately anyone with OpenAI equity who wants to use that equity for anything any time soon (or before AGI comes around) is going to need OpenAI's permission. OpenAI very intentionally makes that conditional, and holds it over everyone as a threat.
When employees pushed back on the threat to cancel their equity, Kelsey Piper reports that OpenAI instead changed to threatening to withhold participation in future tenders. Without participation in tenders, shares cannot be sold, making them of limited practical value. OpenAI is unlikely to pay dividends for a long time.
In other words, if you ever violate any 'applicable company policies,' or realistically if you do anything we sufficiently dislike, or we want to retain our leverage over you, we won't let you sell your shares.
This makes sense, given the original threat is on shaky legal ground and actually invoking it would give the game away even if OpenAI won.
No matter what other leverage they are giving up under pressure, the ace stays put.
‘Regardless of where they work’ is very much not ‘regardless of what they have signed’ or ‘whether they are playing nice with OpenAI.’ If they wanted to send a different impression, they could have done that.
The Old Board Speaks
The answer to that is, presumably, the article in the Economist by Helen Toner and Tasha McCauley, former OpenAI board members. Helen says they mostly wrote this before the events of the last few weeks, which checks with what I know about deadlines.
The content is not the friendliest, but unfortunately, even now, the statements continue to be non-specific. Toner and McCauley sure seem like they are holding back.
We also know they are holding back because there are specific things we can be confident happened that informed the board’s actions, that are not mentioned here. For details, see my previous write-ups of what happened.
To state the obvious, if you stand by your decision to remove Altman, you should not allow him to return. When that happened, you were two of the four board members.
It is certainly a reasonable position to say that the reaction to Altman’s removal, given the way it was handled, meant that the decision to attempt to remove him was in error. Do not come at the king if you are going to miss, or the damage to the kingdom would be too great.
But then you don’t stand by it. What one could reasonably say is, if we still had the old board, and all of this new information came to light on top of what was already known, and there was no pending tender offer, and you had your communications ducks in a row, then you would absolutely fire Altman.
Indeed, it would be a highly reasonable decision, now, for the new board to fire Altman a second time based on all this, with better communications and its new gravitas. That is now up to the new board.
OpenAI Did Not Honor Its Public Commitments to Superalignment
OpenAI famously promised 20% of its currently secured compute for its superalignment efforts. That was not a lot of their expected compute budget given growth in compute, but it sounded damn good, and was substantial in practice.
Fortune magazine reports that OpenAI never delivered the promised compute.
This is a big deal.
OpenAI made one loud, costly and highly public explicit commitment to real safety.
That promise was a lie.
You could argue that ‘the claim was subject to interpretation’ in terms of what 20% meant or that it was free to mostly be given out in year four, but I think this is Obvious Nonsense.
It was very clearly either within their power to honor that commitment, or they knew at the time of the commitment that they could not honor it.
OpenAI has not admitted that they did this, offered an explanation, or promised to make it right. They have provided no alternative means of working towards the goal.
This was certainly one topic on which Sam Altman was, shall we say, ‘not consistently candid.’
Indeed, we now know many things the board could have pointed to on that, in addition to any issues involving Altman’s attempts to take control of the board.
This is a consistent pattern of deception.
The obvious question is: Why? Why make a commitment like this then dishonor it?
Who is going to be impressed by the initial statement, and not then realize what happened when you broke the deal?
Indeed, if you think no one can check or will find out, then it could be a good move. You make promises you can’t keep, then alter the deal and tell people to pray you do not alter it any further.
That’s why all the legal restrictions on talking are so important. Not this fact in particular, but that one’s actions and communications change radically when you believe you can bully everyone into not talking.
Even Roon, he of ‘Sam Altman did nothing wrong’ in most contexts, realizes those NDA and non disparagement agreements are messed up.
It is the last two sentences where we disagree. I sincerely hope I am wrong there.
OpenAI Messed With Scarlett Johansson
The Washington Post reported a particular way they did not mess with her.
The story also has some details about ‘building the personality’ of ChatGPT for voice and hardcoding in some particular responses, such as if it was asked to be the user’s girlfriend.
Jang no doubt can differentiate Sky and Johansson under the ‘pictures of Joe Biden eating sandwiches’ rule, after spending months on this. Of course you can find differences. But to say that the two sound nothing alike is absurd, especially when so many people doubtless told her otherwise.
As I covered last time, if you do a casting call for 400 voice actors who are between 25 and 45, and pick the one most naturally similar to your target, that is already quite a lot of selection. No, they likely did not explicitly tell Sky’s voice actress to imitate anyone, and it is plausible she did not do it on her own either. Perhaps this really is her straight up natural voice. That doesn’t mean they didn’t look for and find a deeply similar voice.
Even if we take everyone in that post’s word for all of that, that would not mean, in the full context, that they are off the hook, based on my legal understanding, or my view of the ethics. I strongly disagree with those who say we ‘owe OpenAI an apology,’ unless at minimum we specifically accused OpenAI of the things OpenAI is reported as not doing.
Remember, in addition to all the ways we know OpenAI tried to get or evoke Scarlett Johansson, OpenAI had a policy explicitly saying that voices should be checked for similarity against major celebrities, and they have said highly implausible things repeatedly on this subject.
Another OpenAI Employee Leaves
Gretchen Krueger resigned from OpenAI on May 14th, and thanks to OpenAI's new policies, she can say some things. So she does, pointing out that OpenAI's failures to take responsibility run the full gamut.
The responsibility issues extend well beyond superalignment.
OpenAI Tells Logically Inconsistent Stories
A pattern in such situations is telling different stories to different people. Each of the stories is individually plausible, but they can’t live in the same world.
Ozzie Gooen explains the OpenAI version of this, here in EA Forum format (the below is a combination of both):
When You Put it Like That
A survey was done. You can judge for yourself whether or not this presentation was fair.
Thus, this question overestimates the impact, as it comes right after telling people such facts about OpenAI:
As usual, none of this means the public actually cares. ‘Increases the case for’ does not mean increases it enough to notice.
People Have Thoughts
Individuals paying attention are often… less kind.
Here are some highlights.
[links to two past articles of his discussing OpenAI unkindly.]
Ravi Parikh: If a company is caught doing multiple stupid & egregious things for very little gain
It probably means the underlying culture that produced these decisions is broken.
And there are dozens of other things you haven’t found out about yet.
Jonathan Mannhart (reacting primarily to the Scarlett Johansson incident, but centrally to the pattern of behavior): I’m calling it & ramping up my level of directness and anger (again):
OpenAI, as an organisation (and Sam Altman in particular) are often just lying. Obviously and consistently so.
This is incredible, because it’s absurdly stupid. And often clearly highly unethical.
Joe Weisenthal: I don’t have any real opinions on AI, AGI, OpenAI, etc. Gonna leave that to the experts.
But just from the outside, Sam Altman doesn’t ~seem~ like a guy who’s, you know, doing the new Manhattan Project. At least from the tweets, podcasts etc. Seems like a guy running a tech co.
Andrew Rettek: Everyone is looking at this in the context of AI safety, but it would be a huge story if any $80bn+ company was behaving this way.
Danny Page: This thread is important and drives home just how much the leadership at OpenAI loves to lie to employees and to the public at large when challenged.
Seth Burn: Just absolutely showing out this week. OpenAI is like one of those videogame bosses who looks human at first, but then is revealed to be a horrific monster after taking enough damage.
0.005 Seconds: Another notch in the "Altman lies like he breathes" column.
Ed Zitron: This is absolutely merciless, beautifully dedicated reporting, OpenAI is a disgrace and Sam Altman is a complete liar.
Keller Scholl: If you thought OpenAI looked bad last time, it was just the first stage. They made all the denials you expect from a company that is not consistently candid: Piper just released the documents showing that they lied.
Paul Crowley: An argument I’ve heard in defence of Sam Altman: given how evil these contracts are, discovery and a storm of condemnation was practically inevitable. Since he is a smart and strategic guy, he would never have set himself up for this disaster on purpose, so he can’t have known.
Ronny Fernandez: What absolute moral cowards, pretending they got confused and didn’t know what they were doing. This is totally failing to take any responsibility. Don’t apologize for the “ambiguity”, apologize for trying to silence people by holding their compensation hostage.
I have, globally, severely downweighted arguments of the form ‘X would never do Y, X is smart and doing Y would have been stupid.’ Fool me [quite a lot of times], and such.
There is a Better Way
There is of course an actually better way, if OpenAI wants to pursue that. Unless things are actually much worse than they appear, all of this can still be turned around.
Should You Consider Working For OpenAI?
OpenAI says it should be held to a higher standard, given what it sets out to build. Instead, it fails to meet the standards one would set for a typical Silicon Valley business. Should you consider working there anyway, to be near the action? So you can influence their culture?
Let us first consider the AI safety case, and assume you can get a job doing safety work. Does Daniel Kokotajlo make an argument for entering the belly of the beast?
Even better, Daniel then gets to keep his equity, whether or not OpenAI lets him sell it. My presumption is they will let him given the circumstances; I've created a market.
Most people who attempt this lack Daniel’s moral courage. The whole reason Daniel made a difference is that Daniel was the first person who refused to sign, and was willing to speak about it.
Do not assume you will be that courageous when the time comes, under both bribes and also threats, explicit and implicit, potentially both legal and illegal.
Similarly, your baseline assumption should be that you will be heavily impacted by the people with whom you work, and the culture of the workplace, and the money being dangled in front of you. You will feel the rebukes every time you disrupt the vibe, the smiles when you play along. Assume that when you dance with the devil, the devil don’t change. The devil changes you.
You will say ‘I have to play along, or they will shut me out of decisions, and I won’t have the impact I want.’ Then you never stop playing along.
The work you do will be used to advance OpenAI’s capabilities, even if it is nominally safety. It will be used for safety washing, if that is a plausible thing, and your presence for reputation management and recruitment.
Could you be the exception? You could. But you probably won’t be.
In general, ‘if I do not do the bad thing then someone else will do the bad thing and it will go worse’ is a poor principle.
Do not lend your strength to that which you wish to be free from.
What about ‘building career capital’? What about purely in your own self-interest? What if you think all these safety concerns are massively overblown?
Even there, I would caution against working at OpenAI.
That giant equity package? An albatross around your neck, used to threaten you. Even if you fully play ball, who knows when you will be allowed to cash it in. If you know things, they have every reason to not let you, no matter if you so far have played ball.
The working conditions? The nature of upper management? The culture you are stepping into? The signs are not good, on any level. You will hold none of the cards.
If you already work there, consider whether you want to keep doing that.
Also consider what you might do to gather better information, about how bad the situation has gotten, and whether it is a place you want to keep working, and what information the public might need to know. Consider demanding change in how things are run, including in the ways that matter personally to you. Also ask how the place is changing you, and whether you want to be the person you will become.
As always, everyone should think for themselves, learn what they can, start from what they actually believe about the world and make their own decisions on what is best. As an insider or potential insider, you know things outsiders do not know. Your situation is unique. You hopefully know more about who you would be working with and under what conditions, and on what projects, and so on.
What I do know is, if you can get a job at OpenAI, you can get a lot of other jobs too.
The Situation is Ongoing
As you can see throughout, Kelsey Piper is bringing the fire.
There is no doubt more fire left to bring.
If you have information you want to share, on any level of confidentiality, you can also reach out to me. This includes those who want to explain to me why the situation is far better than it appears. If that is true I want to know about it.
There is also the matter of legal representation for employees and former employees.
What OpenAI did to its employees is, at minimum, legally questionable. Anyone involved should better know their rights even if they take no action. There are people willing to pay your legal fees, if you are impacted, to allow you to consult a lawyer.
Here Vilfredo’s Ghost, a lawyer, notes that a valid contract requires consideration and a ‘meeting of the minds,’ and common law contract principles do not permit surprises. Since what OpenAI demanded is not part of a typical ‘general release,’ and the only consideration provided was ‘we won’t confiscate your equity’ or deny you the right to sell it, the contract looks suspiciously like it would be invalid.
Matt Bruenig has a track record of challenging the legality of similar clauses, and has offered his services. He notes that rules against speaking out about working conditions are illegal under federal law, but if they do not connect to ‘working conditions’ then they are legal. Our laws are very strange.
It seems increasingly plausible that it would be in the public interest to ban non-disparagement clauses more generally going forward, or at least set limits on scope and length (although I think nullifying existing contracts is bad and the government should not do that, and shouldn’t have done it for non-competes either.)
This is distinct from non-disclosure in general, which is clearly a tool we need to have. But I do think that, at least outside highly unusual circumstances, ‘non-disclosure agreements should not apply to themselves’ is also worth considering.
Thanks to the leverage OpenAI still holds, we do not know what other information is out there, as of yet not brought to light.
Repeatedly, OpenAI has said it should be held to a higher standard.
OpenAI under Sam Altman has instead consistently failed to live up not only to the standards to which one must hold a company building AGI, but also to the standards to which one would hold an ordinary corporation. Its unique non-profit structure has proven irrelevant in practice, if this is insufficient for the new board to fire Altman.
This goes beyond existential safety. Potential and current employees and business partners should reconsider, if only for their own interests. If you are trusting OpenAI in any way, or its statements, ask whether that makes sense for you and your business.
Going forward, I will be reacting to OpenAI accordingly.
If that’s not right? Prove me wrong, kids. Prove me wrong.