The Information reports that OpenAI is close to finalizing its transformation into an ordinary public benefit corporation. OpenAI has tossed its cap over the wall on this, giving its investors the right to demand refunds with interest if the transition isn't finished within two years.

Microsoft very much wants this transition to happen. They would be the big winner, with an OpenAI that wants what is good for business. This also comes at a time when relations between Microsoft and OpenAI are fraying, and OpenAI is threatening to invoke its AGI clause to get out of its contract with Microsoft. That clause is exactly the kind of thing they're doubtless looking to get rid of as part of this.

The $37.5 billion question is, what stake will the non-profit get in the new OpenAI?

For various reasons that I will explore here, I think they should fight to get quite a lot. The reportedly proposed quarter of the company is on the low end even if it were purely compensation for the control premium, and the board's share of future profits is likely the bulk of the net present value of OpenAI's future cash flows.

But will they fight for fair value? And will they win?

Table of Contents

  1. The Valuation in Question.
  2. The Control Premium.
  3. The Quest for AGI is OpenAI’s Telos and Business Model.
  4. OpenAI’s Value is Mostly in the Extreme Upside.

The Valuation in Question

Rocket Drew (The Information): Among the new details: After the split is finalized, OpenAI is considering creating a new board for the 501(c)(3) charity that would be separate from the one that currently governs it, according to a person familiar with the plan.

If we had to guess, the current board, including CEO Sam Altman, will look for a board of directors for the nonprofit who will stay friendly to the interests of the OpenAI corporation.

After the restructuring, the nonprofit is expected to own at least a 25% stake in the for-profit—which on paper would be worth at least $37.5 billion.

We asked the California attorney general’s office, which has jurisdiction over the nonprofit, what the AG makes of OpenAI’s pending conversion. A spokesperson wrote us back to say the agency is “committed to protecting charitable assets for their intended purpose.”

There is a substantial chance the true answer is zero, as it seems Sam Altman intends to coup against the non-profit a third time, altering the deal further and replacing the board with whoever he wants, presumably giving him full control. What would California do about that?

There is also the question of what would happen with the US Federal Trade Commission inquiry into OpenAI and Microsoft potentially ‘distorting innovation and undermining fair competition,’ which to me looks highly confused but which they seem to be taking seriously.

No matter the outcome on the control front, it still leaves the question of how much of the company the nonprofit should get. You can’t (in theory) take assets out of a 501(c)(3) without paying fair market value. And the board has a fiduciary duty to get fair market value. California also says it will protect the assets, whatever that is worth. And the IRS will need to be satisfied with the amount chosen, or else.

There is danger the board won’t fight for its rights, not even for a fair market value:

Lynette Bye: In an ideal world, the charity’s board would bring in valuation lawyers to argue it out with the for-profit’s and investors’ lawyers, until they agree on how to divvy up the assets. But such an approach seems unlikely with the current board makeup. “I think the common understanding is they’re friendly to Sam Altman and the ones who were trying to slow things down or protect the non-profit purpose have left,” Loui said.

The trick is, they have a legal obligation to fight for that value, and Bret Taylor has said they are going to do so, although who knows how hard they will fight:

Thalia Beaty (AP): Jill Horwitz, a professor in law and medicine at UCLA School of Law who has studied OpenAI, said that when two sides of a joint venture between a nonprofit and a for-profit come into conflict, the charitable purpose must always win out.

“It’s the job of the board first, and then the regulators and the court, to ensure that the promise that was made to the public to pursue the charitable interest is kept,” she said.

Bret Taylor, chair of the OpenAI nonprofit’s board, said in a statement that the board was focused on fulfilling its fiduciary obligation.

“Any potential restructuring would ensure the nonprofit continues to exist and thrive, and receives full value for its current stake in the OpenAI for-profit with an enhanced ability to pursue its mission,” he said.

Even if they are friendly to Altman, that is different from willingly taking on big legal risks.

The good news is that, at a minimum, OpenAI and Microsoft have hired investment banks to negotiate with each other. Microsoft has Morgan Stanley, OpenAI has Goldman Sachs. So, advantage OpenAI. But that doesn’t mean that Goldman Sachs is arguing on behalf of the board.

The Control Premium

So would 25% of OpenAI represent ‘fair market value’ of the non-profit’s current assets, as required by law?

That question gets complicated, because OpenAI’s current structure is complicated.

Or, from the WSJ, here is where the money goes:

Any profits would go first to for-profit equity holders in various configurations, whose gains are capped, and then the rest would go back to the non-profit, except if the ‘AGI clause’ is invoked, in which case it all goes back to the non-profit.
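
As a minimal sketch of the waterfall's mechanics (the cap figure is Matt Levine's rough estimate discussed below; the profit numbers are illustrative assumptions, not OpenAI's actual, mostly non-public terms):

```python
def distribute_profits(total_profits: float,
                       investor_cap: float = 272e9,
                       agi_clause_invoked: bool = False) -> dict:
    """Toy model of the capped-profit waterfall described above.

    Capped equity holders are paid first, up to their aggregate cap;
    everything above the cap flows to the non-profit. If the AGI clause
    is invoked, everything flows to the non-profit.
    """
    if agi_clause_invoked:
        return {"investors": 0.0, "nonprofit": total_profits}
    to_investors = min(total_profits, investor_cap)
    return {"investors": to_investors, "nonprofit": total_profits - to_investors}

# A $1 trillion profit stream sends ~73% to the non-profit; $10 trillion, ~97%.
print(distribute_profits(1_000e9))   # {'investors': 272000000000.0, 'nonprofit': 728000000000.0}
print(distribute_profits(10_000e9))  # {'investors': 272000000000.0, 'nonprofit': 9728000000000.0}
```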

The board would also be giving up its control over OpenAI. It would go from 100% of the voting shares to 25%. Control typically commands a large premium. Control over OpenAI seems especially valuable in terms of the charitable purpose of the non-profit. One could even say in context that it is priceless, but that ship seems to have sailed.

According to Wikipedia, the control premium varies from 20% to 40% in business practice, depending on minority shareholders’ protections. In this case, it is clear that minority shareholders’ protections are currently extremely thin, so this would presumably mean at least a 40% premium. That’s 40% of the total baseline value of OpenAI, not the value of the non-profit’s share of the company. That’s on top of the value of their claims on the profits.
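
A back-of-the-envelope sketch, using the recent round's $157 billion valuation as a stand-in for the baseline value (an assumption; the true baseline value of all the claims on OpenAI is not public):

```python
baseline_value = 157e9  # recent round valuation, used as an assumed baseline
premium_rate = 0.40     # high end of the 20-40% range, given thin minority protections

control_premium = premium_rate * baseline_value
print(f"Control premium alone: ~${control_premium / 1e9:.0f}B")  # ~$63B

# Note: ~$63B already exceeds the ~$37.5B that a 25% stake is reportedly
# worth on paper, before counting the non-profit's profit claims at all.
```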

OpenAI could have chosen to sidestep the control issue by giving the board a different class of shares that allow it to comfortably retain control over OpenAI, but it is everyone’s clear intention to strip control away from the board.

Lynette Bye attempts to analyze the situation, noting that no one has much of a clue. She suggests one potential upper bound:

Lynette Bye: The biggest clue comes from OpenAI’s recent tax filing, which claims that OpenAI does not have any “controlled entities,” as defined by the tax code. According to Rose Chan Loui, the director of UCLA Law’s non-profit program, this likely means that the non-profit has the right to no more than 50% of the company’s future profits. If that alone were the basis for its share of the for-profit’s value, that would cap the non-profit’s share of the valuation at $78.5 billion.

Claude thinks it is more complicated than that. (For reference, $78.5 billion is 50% of the roughly $157 billion valuation from the latest round.) In either case, the filing likely reflected what was convenient to represent to the government and investors – you don’t want prospective investors realizing a majority of future profits belong to the board, if that were indeed the case.

Lynette also says experts disagree on whether the control premium requires fair market compensation. I think it very obviously does require it – control is a valuable asset, both because people value control highly, and because control is highly useful to the non-profit mission. Again, pay me.

The Quest for AGI is OpenAI’s Telos and Business Model

What makes stock in the future OpenAI valuable?

One answer, same as any other investment, is that ‘other people will pay for it.’

That’s a great answer. But ultimately, what are all those people paying for?

Two things.

  1. Control. That’s covered by the control premium.
  2. The Net Present Value of Future Cash Flows.

So what is the NPV of future cash flows? What is the probability distribution of various potential cash flows? What is stock in OpenAI worth right now, if you were never allowed to sell it to a ‘greater fool’ and it never transitioned to a B-corp or changed its payout rules?

Well, actually… you can argue that the answer is nothing.

[Image: OpenAI’s legal disclaimer to investors, black text on a pink rectangle, warning that it would be wise to view any investment ‘in the spirit of a donation’ and that OpenAI may never make a profit.]

Was that not clear enough?

Well, okay. Not actually nothing. Things could change, and you could get paid then.

But the situation is actually rather grim.

Sam Altman’s goal is to create safe AGI for the benefit of humanity. He says this over and over again. I disagree with his methods, but I do believe that is his central goal.

To the extent he also cares about other things, such as being the one to pick what it means to benefit humanity, I don’t think ‘maximizing profits’ is high on that list.

OpenAI’s explicit goal is also to create safe AGI for the benefit of humanity.

That is their business model. That is the ‘public benefit’ in the public benefit corporation. That is their plan. That is their telos.

Right now, OpenAI’s plan is:

  1. Spend a lot of money to develop AGI first.
  2. ???????? (ensure it is safe and benefits humanity, yes this should be step 1 not 2)
  3. Profit. Maybe. If that even means anything at that point. Sure, why not.

If that last sentence sounds weird, go read the pink warning label again.

OpenAI already has billions in revenue. It plausibly has reasonable unit economics.

Altman is still planning to plow every penny OpenAI makes selling goods and services, and more, back into developing AGI.

If he believes he can ensure AGI is safe and benefits humanity (I have big doubts, but he seems confident), then this is the correct thing for Altman to be doing, even from a pure business perspective. That’s where the real value lies, and the amount of money that can go into research, including compute and even electrical power, is off the charts.

If OpenAI actually turned a profit after its investments and research, or was even seriously pivoting into trying, then that would be a huge red flag, the same way it would have been for an early Amazon or Uber. It would be saying they didn’t see a better use of money than returning it to shareholders.

OpenAI’s Value is Mostly in the Extreme Upside

What are the likely fates for OpenAI, for a common sense understanding of AGI?

I believe that case #1 below is most of why OpenAI is valuable now: If OpenAI successfully builds safe AGI, it is worth many trillions, to the extent that one can put a cap on its value at all. If OpenAI fails to build safe AGI, it will be a pale shadow of that. (A rough expected-value sketch follows the list.)

  1. OpenAI charges headfirst to AGI, and succeeds in building it safely. Many in the industry expect this to happen soon, within only a few years (Altman said a few thousand days). The world transforms, and OpenAI goes from previously unprofitable due to reinvestment to an immensely profitable company, blowing past all of its profit caps. Even if they pay out the whole waterfall to the maximum, the vast majority of the money still flows to the non-profit.
  2. OpenAI charges headfirst into AGI, and succeeds in building it, but fails in ensuring it is safe. Tragedy ensues. OpenAI never turns a profit anyone gets to enjoy, whether or not humanity sticks around to recover.
  3. OpenAI charges headfirst into AGI, and fails, because someone else gets to AGI substantially first and builds on that lead. OpenAI never turns a profit, whether or not things turn out well for humanity.
  4. OpenAI charges headfirst into AGI, and fails, because no one develops AGI any time soon. OpenAI burns through its ability to raise money, and realizes its mission has failed. Talent flees. It attempts to pivot into an ordinary software company, up against a lot of competition, increasingly without much market power or differentiation as others catch up. OpenAI most likely ends up selling out to another tech company, perhaps with a good exit and perhaps not. It might melt away, as looked possible during last year’s crisis. Or perhaps it successfully pivots and does okay, but it’s not world changing.
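
Here is that sketch. Every probability and dollar figure below is an assumption chosen to illustrate the shape of the argument, not an estimate:

```python
# Toy scenario table: (case, assumed probability, assumed NPV of total profits).
scenarios = [
    ("1: safe AGI, transformative profits",     0.15, 10_000e9),
    ("2: unsafe AGI, no profits anyone enjoys", 0.25,      0.0),
    ("3: someone else gets there first",        0.30,      0.0),
    ("4: no AGI soon, ordinary software pivot", 0.30,   150e9),
]

expected_npv = sum(p * npv for _, p, npv in scenarios)
upside_share = scenarios[0][1] * scenarios[0][2] / expected_npv
print(f"Expected NPV: ~${expected_npv / 1e9:,.0f}B")       # ~$1,545B
print(f"Share of value from case #1: {upside_share:.0%}")  # ~97%
```

Under almost any assignment that gives case #1 non-trivial probability, it dominates the expected value, and case #1 is also the case where the profit caps bind and the non-profit’s residual claim pays out.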

If you thought the bulk of the value here is in #4, and a pivot to an ordinary technology company, then your model strongly disagrees with those who founded and built OpenAI, and with the expectations of its employees. I don’t think Altman or OpenAI have any intention of going down that road other than kicking and screaming, and it would represent a failure of the company’s vision and business model.

Even in case #4, we’re talking about what Matt Levine estimates as a current profit cap of up to about $272 billion. I am guessing that is low, given that later investors came in at higher valuations and we don’t know their caps. But even if we are generous, the result is the same.

If the company is worth $157 billion or more already (not counting the non-profit’s share!), it should be obvious that most future profits still likely flow to the non-profit. There is no such thing as a company heavily in growth mode, with very high variance in outcomes, worth well over $157 billion (a figure that doesn’t even include parts of the waterfall), that doesn’t end up making trillions rather often. If you don’t think OpenAI is going to make trillions reasonably often, and also pay them out, then you should want to sell your stake, and fast.
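
The same point can be made from the price side, as a rough sketch: for the capped claims alone to be worth $157 billion today, the market has to expect the caps to be reached fairly often. The toy two-point distribution below ignores discounting and intermediate outcomes:

```python
cap = 272e9          # Matt Levine's rough estimate of the aggregate profit cap
claim_value = 157e9  # current paper value of the capped, investor-side claims

# If profits are either ~zero or at/above the cap with probability p, then
# claim_value ≈ p * cap, which implies:
p_hit_cap = claim_value / cap
print(f"Implied odds of reaching the caps: ~{p_hit_cap:.0%}")  # ~58%

# Any fat tail above the cap (the trillions scenarios) then belongs almost
# entirely to the non-profit's residual claim.
```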

Do not be fooled into thinking this is an ordinary or mature business, or that AI is an ordinary or mature technology whose value is in various forms of mundane utility. OpenAI is shooting for the stars. As every VC in this spot knows, it is the extreme upside that matters.

That is what the nonprofit is selling. They shouldn’t sell it cheap.

The good news is that the people tasked with arguing this are, effectively, Goldman Sachs. It will be fascinating to see if suddenly they can feel the AGI. 


Comments
  1. OpenAI charges headfirst to AGI, and succeeds in building it safely. [...] The world transforms, and OpenAI goes from previously unprofitable due to reinvestment to an immensely profitable company.

You need a case where OpenAI successfully builds safe AGI, which may even go on to build safe ASI, and the world gets transformed... but OpenAI's profit stream is nonexistent, effectively valueless, or captures a much smaller fraction than you'd think of whatever AGI or ASI produces.

Business profits (or businesses) might not be a thing at all in a sufficiently transformed world, and it's definitely not clear that preserving them is part of being safe.

In fact, a radical change in allocative institutions like ownership is probably the best case, because it makes no sense in the long term to allocate a huge share of the world's resources and production to people who happened to own some stock when Things Changed(TM). In a transformed-except-corporate-ownership-stays-the-same world, I don't see any reason such lottery winners' portion wouldn't increase asymptotically toward 100 percent, with nobody else getting anything at all.

Radical change is also a likely case[1]. If an economy gets completely restructured in a really fundamental way, it's strange if the allocation system doesn't also change. That's never happened before.

Even without an overtly revolutionary restructuring, I kind of doubt "OpenAI owns everything" would fly. Maybe corporate ownership would stay exactly the same, but there'd be a 99.999995 percent tax rate.


  1. Contingent on the perhaps unlikely safe and transformative parts coming to pass.

In a transformed-except-corporate-ownership-stays-the-same world, I don't see any reason such lottery winners' portion wouldn't increase asymptotically toward 100 percent, with nobody else getting anything at all.

Well yeah, exactly.

Even without an overtly revolutionary restructuring, I kind of doubt "OpenAI owns everything" would fly. Maybe corporate ownership would stay exactly the same, but there'd be a 99.999995 percent tax rate.

Taxes enforced by whom?

Taxes enforced by whom?

Well, that's where the "safe" part comes in, isn't it?

I think a fair number of people would say that ASI/AGI can't be called "safe" if it's willing to wage war to physically take over the world on behalf of its owners, or to go around breaking laws all the time, or to thwart whatever institutions are supposed to make and enforce the laws. I'm pretty sure that even OpenAI's (present) "safety" department would have an issue if ChatGPT started saying stuff like "Sam Altman is Eternal Tax-Exempt God-King".

Personally, I go further than that. I'm not sure about "basic" AGI, but I'm pretty confident that very powerful ASI, the kind that would be capable of really total world domination, can't be called "safe" if it leaves really decisive power over anything in the hands of humans, individually or collectively, directly or via institutions. To be safe, it has to enforce its own ideas about how things should go. Otherwise the humans it empowers are probably going to send things south irretrievably fairly soon, and if they don't do so very soon they always still could, and you can't call that safe.

Yeah, that means you get exactly one chance to get "its own ideas" right, and no, I don't think that success is likely. I don't think it's technically unlikely to be able to "align" it to any particular set of values. I also don't think people or institutions would make good choices about what values to give it even if they could. AND I don't think anybody can prevent it from getting built for very long. I put more hope in it being survivably unsafe (maybe because it just doesn't usually happen to care to do anything to/with humans), or in intelligence just not being that powerful, or whatever. Or even in it just luckily happening to at least do something less boring or annoying than paperclipping the universe or mass torture or whatever.