Previously: OpenAI: Altman Returns, OpenAI: The Battle of the Board, OpenAI: Facts from a Weekend, additional coverage in AI#41.

We have new stories from The New York Times, from Time, from the Washington Post and from Business Insider.

All paint a picture consistent with the central story told in OpenAI: The Battle of the Board. They confirm key facts, especially Altman’s attempted removal of Toner from the board via deception. We also confirm that Altman promised to help with the transition when he was first fired, so we have at least one very clear cut case of Altman saying that which was not.

Much uncertainty remains, especially about the future, but past events are increasingly clear.

The stories also provide additional color and key details. This post is for those who want that, and to figure out what to think in light of the new details.

The most important new details are that NYT says the board proposed and was gung ho on Bret Taylor, and that D’Angelo suggested Summers, then grilled him together with Altman before they both agreed to him as the third board member. Also that the new board is remaining quiet while it investigates, echoing the old board, in defiance of the Altman camp and its wish to quickly clear his name.

The New York Times Covers Events

The New York Times finally gives its take on what happened, by Tripp Mickle, Mike Isaac, Karen Weise and the infamous Cade Metz (so treat all claims accordingly).

As with other mainstream news stories, the framing is that Sam Altman won, and this shows the tech elite and big money are ultimately in charge. I do not see that as an accurate description of what happened or its implications, yet both the tech elite and its media opponents want it to be true and are trying to make it true through the magician’s trick of saying that it is true, because often power resides where people believe it resides.

I know that at least one author did read my explanations of events, and also I talked to a Times reporter not on the byline to help make everything clear, so they don’t have the excuse that no one told them. Didn’t ultimately matter.

Paul Graham is quoted as saying Altman is drawn to power more than money, as an explanation for why Altman would work on something that does not make him richer. I believe Graham on this, but also I think there are at least three damn good other reasons to do it, making the decision overdetermined.

  1. If Altman wants to improve his own lived experience and those of his friends and loved ones, building safe AGI, or ensuring no one else builds unsafe AGI, is the most important thing for him to do. Altman already has all the money he will ever need for personal purposes, more would not much improve his life. His only option is to instead enrich the world, and ensure humanity flourishes and also doesn’t die. Indeed, notice the rest of his portfolio includes a lot of things like fusion power and transformational medical progress. Even if Altman only cares about himself, these are the things that make his life better – by making everyone’s life better.
  2. Power and fame and prestige beget money. Altman does not have relevant amounts of equity in OpenAI, but he has used his position to raise money, to get good deal flow, and in general to be where the money resides. If Altman decided what he cared about was cash, he could easily turn this into cash. To be clear, I do not at all begrudge him this in general. I am merely not a fan of some particular projects, like ‘build a chip factory in the UAE.’
  3. AGI is the sweetest, most interesting, most exciting challenge in the world. Also the most important. If you thought your contribution would increase the chance things went well, why would you want to be working on anything else?

Pretty much every version of Altman I can imagine would want to be doing this.

The key description of the safety issue is structured in a way that makes it easy to come away thinking this was a concern only of the outside board members, but both in reality and if you read the article carefully, this applies to the entire board (although we have some uncertainty about Brockman in particular):

They were united by a concern that A.I. could become more intelligent than humans.

Remember that this was and is the explicit goal of OpenAI, to safely create AI more intelligent than humans, also known as AGI. Altman signed the CAIS letter, although Brockman is not known to have done so. Altman has made the threat here very clear. Everyone involved understands the danger. Everyone is, to their credit, talking price.

The first piece of news is that we have at least one case in which we can be damn sure that Sam Altman lied to the board, in at least some important senses.

Shocked that he was being fired from a start-up he had helped found, Mr. Altman widened his eyes and then asked, “How can I help?” The board members urged him to support an interim chief executive. He assured them that he would.

Within hours, Mr. Altman changed his mind and declared war on OpenAI’s board.

I point this out because it is a common theory that Altman was a master of Exact Words and giving implications. That yes he was deceptive and misleading and played power games, but he was too smart to outright say that which was not.

So here he is, saying that which is not.

Did it matter? Maybe not. But maybe quite a lot, actually. This cooperation could have been a key factor driving the decision not to detail the issues with Altman, at least initially, when it would have worked. If Altman is going to cooperate, what he gets in return is that the mission continues and also whatever he did gets left unspecified.

The article waffles on whether or not Altman actually did declare war on the board that night. The statement above says so. Then they share a narrative of others driving the revolt, including Airbnb’s CEO Brian Chesky, the executives and employees, with Altman only slowly deciding to fight back.

It can’t be both. Which is it?

I assume the topline is correct. That Altman was fighting back the whole time. And that despite being willing to explicitly say that up top, Altman’s people sufficiently sculpted the media narrative to make the rest sound like events unfolded in a very different way. It is an absolute master class in narrative sculpting and media manipulation. They should teach this in universities. Chef’s kiss.

We have confirmation that Altman was not ‘consistently candid’ about the project to build chips in the UAE:

In September, Mr. Altman met investors in the Middle East to discuss an A.I. chip project. The board was concerned that he wasn’t sharing all his plans with it, three people familiar with the matter said.

For many obvious reasons, this is an area where the board would want to be informed, and any reasonable person in Altman’s position would know this, and norms say that this means they should be informed. But not informing them would not by default strictly violate the rules, as long as Altman honestly answered questions when asked. Did he, and to what extent? We don’t know.

Now we get into some new material.

Dr. Sutskever … believed that Mr. Altman was bad-mouthing the board to OpenAI executives, two people with knowledge of the situation said. Other employees have also complained to the board about Mr. Altman’s behavior.

In October, Mr. Altman promoted another OpenAI researcher to the same level as Dr. Sutskever, who saw it as a slight. Dr. Sutskever told several board members that he might quit, two people with knowledge of the matter said. The board interpreted the move as an ultimatum to choose between him and Mr. Altman, the people said.

Dr. Sutskever’s lawyer said it was “categorically false” that he had threatened to quit.

Another conflict erupted in October when Ms. Toner published a paper…

This frames Sutskever as having been in favor of firing Altman for some time. If this is true, the board’s sense of urgency, and its unwillingness to take time to plan and get its ducks in a row, makes even less sense. If they had been discussing the issue for months, if Ilya had been not only onboard but enthusiastic for a month, I don’t get it.

The post then goes over the incident over Toner’s ignored academic paper, for which Toner agreed to apologize to keep the peace.

“I did not feel we’re on the same page on the damage of all this,” Altman wrote.

We’re definitely not. Toner and I are on the page that this was trivial and obviously so. Altman was presenting it as a major deal.

Now we get to the core issue.

Mr. Altman called other board members and said Ms. McCauley wanted Ms. Toner removed from the board, people with knowledge of the conversations said. When board members later asked Ms. McCauley if that was true, she said that was “absolutely false.”

“This significantly differs from Sam’s recollection of these conversations,” an OpenAI spokeswoman said, adding that the company was looking forward to an independent review of what transpired.

Time magazine gives this version:

Time: Altman told one board member that another believed Toner ought to be removed immediately, which was not true, according to two people familiar with the discussions. 

Whatever other reasons did or did not exist, if Altman did say that, my model of such things is that he needed to be fired and it was the board’s job to fire him. And the board really should have said so, rather than speaking in generalities.

Multiple witnesses are saying to NYT that he said it. Altman denies it.

It seems clear Altman did use private conversations with board members to give false impressions and drum up support for getting Toner off the board, thereby giving Altman board control, using the paper as an excuse. The dispute is whether Altman did it using Exact Words, or whether he lied. Altman called his attempt ‘ham fisted’ which I believe is power player code for ‘got caught lying’ but could also apply to ‘got caught technically-not-lying while implicitly lying my ass off.’

NYT does seem to be saying the board did step up their description a bit:

NYT: The board members said that Mr. Altman had lied to the board, but that they couldn’t elaborate for legal reasons.

Use of the word ‘lied’ is an escalation. And this is a clear confirmation of lawyers.

We also have confirmation of zero PR people, because we have Toner’s infamous line. I know the logic behind it but I still cannot believe that she actually said it out loud given the context, seriously WTF:

Jason Kwon, OpenAI’s chief strategy officer, accused the board of violating its fiduciary responsibilities. “It cannot be your duty to allow the company to die,” he said, according to two people with knowledge of the meeting.

Ms. Toner replied, “The destruction of the company could be consistent with the board’s mission.”

You say ‘We have no intention of doing any such thing. The company is perfectly capable of carrying on without Altman. We have every intention of continuing on OpenAI’s mission, led by the existing executive team. Altman promised to help with the transition in the board meeting. If he instead chooses to attempt to destroy OpenAI and its mission, that is his decision. It also proves he was incompatible with our mission and we needed to remove him.’

OpenAI’s executives insisted that the board resign that night or they would all leave. Mr. Brockman, 35, OpenAI’s president, had already quit.

The support gave Mr. Altman ammunition.

This sounds highly contingent.

Also, the executives had now already made an explicit bluff, threatening to quit. The board called. The executives did not quit. Subsequent such threats become far less credible.

Skipping ahead a bit, they still tried this a second time.

By Nov. 19 [with the Microsoft offer in hand], Mr. Altman was so confident that he would be reappointed chief executive that he and his allies gave the board a deadline: Resign by 10 a.m. or everyone would leave.

Pro negotiation tip: Do not quickly pull this trick a second time once your first bluff gets called. It will not work. That is why you do not rush out the first bluff, and instead wait until your position is stronger.

Of course the board called the second bluff, appointing Emmett Shear.

The next piece of good information came before that deadline was set, which is that Bret Taylor was actually seen as a fair arbiter approved by both sides rather than being seen as in the Altman camp.

Yet even as the board considered bringing Mr. Altman back, it wanted concessions. That included bringing on new members who could control Mr. Altman. The board encouraged the addition of Bret Taylor, Twitter’s former chairman, who quickly won everyone’s approval and agreed to help the parties negotiate.

But also note that in this telling, it was the board that wanted concessions and in particular new board members rather than Altman. That directly contradicts other reports and does not make sense, unless you read it as ‘contingent on the old board agreeing to resign, they wanted concessions.’ As in, the board was going to hand over its control of OpenAI, and they wanted the concession of ‘we agree on who we give it to, and what those people agree will happen.’ At best, I find this framing bizarre.

Larry Summers was a suggestion of D’Angelo, in some key original reporting:

To break the impasse, Mr. D’Angelo and Mr. Altman talked the next day. Mr. D’Angelo suggested former Treasury Secretary Lawrence H. Summers, a professor at Harvard, for the board. Mr. Altman liked the idea.

Mr. Summers, from his Boston-area home, spoke with Mr. D’Angelo, Mr. Altman, Mr. Nadella and others. Each probed him for his views on A.I. and management, while he asked about OpenAI’s tumult. He said he wanted to be sure that he could play the role of a broker.

Mr. Summers’s addition pushed Mr. Altman to abandon his demand for a board seat and agree to an independent investigation of his leadership and dismissal.

So both sides talked to Summers, and were satisfied with his answers.

This week, Mr. Altman and some of his advisers were still fuming. They wanted his name cleared.

“Do u have a plan B to stop the postulation about u being fired its not healthy and its not true!!!” Mr. Conway texted Mr. Altman.

Mr. Altman said he was working with OpenAI’s board: “They really want silence but i think important to address soon.”

Overall this all makes me bullish on the new board. We might be in a situation with, essentially, D’Angelo and two neutral arbiters, albeit ones with gravitas and business connections. They are not kowtowing to Altman. Altman’s camp continues to fume (and somehow texts from Conway to Altman about it are leaking to NYT; there are not many places those can come from).

Gwern offers their summary here.

Time Makes Altman CEO of the Year

Time profiled Altman, calling him ‘CEO of the year,’ a title he definitely earned. I think this is the best very short description so far, nailing the game theory:

Meanwhile, the company’s employees and its board of directors faced off in “a gigantic game of chicken,” says a person familiar with the discussions.

Sources also note the side of Altman that seeks power, and is willing to be dishonest and manipulative in order to get it.

But four people who have worked with Altman over the years also say he could be slippery—and at times, misleading and deceptive. Two people familiar with the board’s proceedings say that Altman is skilled at manipulating people, and that he had repeatedly received feedback that he was sometimes dishonest in order to make people feel he agreed with them when he did not. These people saw this pattern as part of a broader attempt to consolidate power. “In a lot of ways, Sam is a really nice guy; he’s not an evil genius. It would be easier to tell this story if he was a terrible person,” says one of them. “He cares about the mission, he cares about other people, he cares about humanity. But there’s also a clear pattern, if you look at his behavior, of really seeking power in an extreme way.”

This is the first mainstream report that correctly identifies the outcome as unclear:

It’s not clear if Altman will have more power or less in his second stint as CEO.

In addition to his other good picks, we can add… Georgist land taxes? Woo-hoo!

Altman has advocated for a land-value tax—a classic Georgist policy—in recent meetings with world leaders, he says. 

That is the kind of signal no one ever fakes. There really is a lot to love.

Including his honesty. I don’t want to punish it, but also I want to leave this here.

“We definitely accelerated the race, for lack of a more nuanced phrase,” Altman says. 

Time describes the board’s initial outreach to Altman this way:

Altman characterizes it as a request for him to come back. “I went through a range of emotions. I first was defiant,” he says. “But then, pretty quickly, there was a sense of duty and obligation, and wanting to preserve this thing I cared about so much.” The sources close to the board describe the outreach differently, casting it as an attempt to talk through ways to stabilize the company before it fell apart.

I am not saying we know for sure that this is another case of Altman lying (to Time rather than the board, a much less serious matter), but his version of events does not compute. If the board was actively asking for Altman to outright return, I do not buy that this was Altman’s reaction.

I could buy either half of Altman’s story: that Altman was asked to return, or that Altman was defiant to the board’s request and only did it out of duty and obligation (because the board was initially requesting something else). I don’t buy both at once. It is entirely inconsistent with Paul Graham’s assessment of his character.

Washington Post Says Leaders Warned Altman was Abusive

The WaPo piece says that in the fall the board was approached by a small number of senior leaders at OpenAI, with concerns about Altman. In this telling, the board thought OpenAI stood to lose key leaders due to what they saw as Altman’s toxicity.

Now back at the helm of OpenAI, Altman may find that the company is less united than the waves of heart emojis that greeted his return on social media might suggest.

That is always true. No large group is ever fully united, no matter what the emojis say.

There are few concrete details. What details are offered sound like ordinary things that happen at a company. What is and is not abusive, in such a high-pressure and competitive environment, is in the eye of the beholder and highly context dependent. What is described here could reflect abuse, or it could reflect nothing of concern. What is concerning is that employees found it concerning enough to go to the board.

Beyond the one concrete detail of managers going to the board with such complaints, this did not teach us much. It seems like those concerns helped confirm the board’s model of Altman’s behavior, and helped justify the decision on the margin.

Business Insider Says Microsoft Letter Was a Bluff

Business Insider says that OpenAI employees really, really did not want to go to work at Microsoft. I wouldn’t either. The employees might have largely still seen it as the least bad alternative under some circumstances, if Altman didn’t want to start a new company. And remember, the letter said they ‘might’ do it, not that they all definitely would.

AI Safety Memes offers the following quotes:

“[The letter] was an audacious bluff and most staffers had no real interest in working for Microsoft.”

“Many OpenAI employees ‘felt pressured’ to sign the open letter.”

“Another OpenAI employee openly laughed at the idea that Microsoft would have paid departing staffers for the equity they would have lost by following Altman.” “It was sort of a bluff that ultimately worked.”

“The letter itself was drafted by a group of longtime staffers who have the most clout and money at stake with years of industry standing and equity built up, as well as higher pay. They began calling other staffers late on Sunday night, urging them to sign, the employee explained.”

Despite nearly everyone on staff signing up to follow Altman out the door, “No one wanted to go to Microsoft.” This person called the company “the biggest and slowest” of all the major tech companies.

“The bureaucracy of something as big as Microsoft is soul crushing.”

“Even though we have a partnership with Microsoft, internally, we have no respect for their talent bar,” the current OpenAI employee told BI. “It rubbed people the wrong way to entertain being managed by them.”

Beyond the culture clash between the two companies, there was another important factor at play for OpenAI employees: money. Lots of it was set to disappear before their eyes if OpenAI were to suddenly collapse under a mass exodus of staff.

“Sam Altman is not the best CEO, but millions and millions of dollars and equity are at stake,” the current OpenAI employee said.

Microsoft agreed to hire all OpenAI employees at their same level of compensation, but this was only a verbal agreement in the heat of the moment.

A scheduled tender offer, which was about to let employees sell their existing vested equity to outside investors, would have been canceled. All that equity would have been worth “nothing,” this employee said.

The former OpenAI employee estimated that, of the hundreds of people who signed the letter saying they would leave, “probably 70% of the folks on that list were like, ‘Hey, can we, you know, have this tender go through?'”

Some Microsoft employees, meanwhile, were furious that the company promised to match salaries for hundreds of OpenAI employees.

Roon responds that this was not accurate:

Roon: not to longpost, and I can only speak for myself, but this is a very inaccurate representation of the mood from an employee perspective.

– “employees felt pressured” -> at some point hundreds of us were in a backyard learning about the petition. people were so upset at the insanity of the board’s decisions that they were immediately fired up to sign this thing. the google doc literally broke from the level of concurrency of people all trying to sign at once. I recall many having intelligent nuanced conversations about the petition, the wording thereof, and in the end coming to the conclusion that it was the only path forward. Half the company had signed between the hours of 2 and 3am. That’s not something that can be accomplished by peer pressure.

– “it was about the money” -> at the time it sounded like signing the petition meant leaving all openai equity and starting fresh. We’re not idiots, everybody knows that the terms at newco would be up in the air at best, with a lot of bargaining chips on Microsoft’s side. People signed the petition because it was the right thing to do. You simply cannot work at the gutted husk of a company whose ultimate leadership you don’t respect.

– “no one wanted to go to Microsoft” -> you’d have to be out of your mind to prefer starting new on models and code and products being controlled by someone else rather than building in the company specifically designed to be the vehicle for safe AGI. It has nothing to do with the Microsoft talent bar or bureaucracy or brand. Not sure why some idiot leaker provocateur would frame it this way. Microsoft has been quite successful at acquiring companies under bespoke governance structures and letting them do their own thing (GitHub, LinkedIn). Even Microsoft’s own preferred outcome was continuity of OpenAI per the New Yorker article. I still bet if the board hadn’t changed their mind the company would have mostly reconstituted itself at Microsoft.

I trust that Roon is giving his honest recollection of his experience here. I also believe the two stories are more compatible than he realizes.

The employees wanted Altman back, or barring that a board and CEO they could trust, without which they would leave. But they mostly wanted OpenAI intact, and ideally to get paid, and were furious with the board. They didn’t want to go to Microsoft; we will never know how many would have actually done it versus stayed versus gone elsewhere or founded new companies, or how long the board had before that time bomb went off in earnest. My guess is that a lot of employees go to Microsoft if Altman stays there, but a lot also choose other paths.

My guess is that the majority of employees enthusiastically signed the letter, and also that those who didn’t want to sign felt pressured to do so anyway, which got a bunch of the later signatures on board. I know I would have felt pressured even if no one applied any pressure intentionally.

Wei Dai sees the employee actions at OpenAI and the signing of the petition as a kind of OpenAI cultural revolution that he did not think was possible at a place like that, and sees it as a huge negative update. I was less surprised, and also read less into the letter. There was good reason, from their perspective, to be outraged and demand Altman’s return. There was also good reason to sign the letter even if an individual employee did not support Altman – to hold the company together, and for internal political reasons even if no direct pressure was applied. Again, the letter said ‘may’ leave, so it did not commit you to anything.

Where Does That All Leave Us?

I will continue to link people to The Battle of the Board, which I believe remains the definitive synthesis of events. We now have additional details supporting and fleshing out that narrative, but they do not alter the central story.

I am sure I will continue to often have a weekly section on developments, but hopefully things will slow down from here.

As I wrote previously, we now await the board’s investigation, and the composition of the new board. If the new board has a clear majority with a strong commitment to existential safety and the mission, and has the gravitas and experience necessary to do the job of the board, that would be a very good outcome. Provided Altman did not do anything to render it impossible, I would be happy to see him stay under such supervision.

If that proves not possible after an investigation, we will see who we get instead. I worry it will not be better, but I also expect the company to then hold together, in a way it would not have if the board had not compromised, given how things had gone.

If the board instead ends up effectively captured by business interests and those who do not care about safety or OpenAI’s stated mission, that would be a catastrophe, whether or not Altman is retained.

If Altman ends up with effective board control and has free rein, then that is a highly worrisome outcome, and we get to find out to what extent Altman is truly aligned, wise and capable of resisting certain aspects of his nature, versus the temptation to build and scale and seek power. It could end up fine, or be disastrous.

Comments
mishka:

The board members urged him to support an interim chief executive. He assured them that he would.

Within hours, Mr. Altman changed his mind and declared war on OpenAI’s board.

I point this out because it is a common theory that Altman was a master of Exact Words and giving implications. That yes he was deceptive and misleading and played power games, but he was too smart to outright say that which was not.

So here he is, saying that which is not.

This does not seem to be a correct conclusion.

This says he promised to support an interim chief executive. Well, his relationship with Mira quickly turned out to be so great that the board chose to fire her as interim CEO for being too pro-Altman and to get someone else.

As far as we know, his relationship with Emmett was also cordial at all times during those 55 hours and 32 minutes.

And he did not promise to support the board.

So, in terms of Exact Words, he seems to be in the clear.

You say ‘We have no intention of doing any such thing. The company is perfectly capable of carrying on without Altman. We have every intention of continuing on OpenAI’s mission, led by the existing executive team. Altman promised to help with the transition in the board meeting. If he instead chooses to attempt to destroy OpenAI and its mission, that is his decision. It also proves he was incompatible with our mission and we needed to remove him.’

OpenAI's charter seems consistent with Toner's statement that "The destruction of the company could be consistent with the board’s mission." Here are some quotes:

We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.

...

We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project.

...

We will actively cooperate with other research and policy institutions; we seek to create a global community working together to address AGI’s global challenges.

https://openai.com/charter

Telling Toner to stay quiet about the charter seems like telling a fire captain to stay quiet about the fact that trainee firefighters may someday need to enter a burning building.

My feeling: It's not Toner's fault that she reminded people about the charter. It's everyone else's fault for staying quiet about it. It's like if on the last day of firefighter training, one of the senior firefighters leading the training said "btw, being a firefighter sometimes means running into a burning building to save someone" and everyone was aghast -- "you're not supposed to mention that! you're gonna upset the trainees and scare them away!"

The entire situation seems a little absurd to me. In my mind, effective firefighter training means psychologically preparing a trainee to enter a burning building from day one. (I actually made a comment about the value of pre-visualization in emergency situations about a year ago.) Maybe OpenAI execs should have been reviewing the charter with employees at every all-hands meeting, psychologically preparing them for the possibility that they might someday need to e.g. destroy the company. It feels unfair to blame Toner that things got to the point they did.

gwern:

Telling Toner to stay quiet about the charter seems like telling a fire captain to stay quiet about the fact that trainee firefighters may someday need to enter a burning building.

Or perhaps it would be more analogous to firefighting tactics historically, which, in the absence of extremely large mechanical pumps & water sources, frequently forced resorts to making firebreaks, which can involve bans, deliberate destruction, controlled burns, or in the case of fighting fires in cities, razing* entire streets down to the ground (using dynamite if you must).

No one wants to dynamite thousands of innocent peoples' homes, but if you are in the middle of the San Francisco earthquake and there is a city-wide fire raging and all the water pipes are busted (assuming you were lucky enough to be in a time & place that they ever existed)...

* if you like anime, the Short Peace anthology shows this in the "Combustible" short. It's spectacularly animated. Or, by sheer coincidence, I'm watching Promare about sci-fi fire fighting mecha, and the protagonist uses a traditional Japanese firefighting outfit with a hooked spear-like weapon to fight fire-aliens... and the original hook existed, of course, not to duel fire-aliens but to help demolish buildings in cities (whether or not they are on fire yet).

I think this is a good analogy. Though I think "one day you might have to dynamite a bunch of innocent people's homes to keep a fire from spreading, that's part of the job" is a good thing to have in the training if that's the sort of thing that's likely to come up.

Tangent:

AGI is the sweetest, most interesting, most exciting challenge in the world.

We usually concede this point and I don't even think it's true. Of course, even if I'm right, maybe we don't want to push in this direction in dialogue, because it would set the bad precedent of not defending ethics over coolness (and sooner or later something cool will be unethical). But let me discuss anyway.

Of course building AGI is very exciting, and it incentivizes some good problem-solving. But doing it through Deep Learning, the way OpenAI does, has an indelible undercurrent of “we don’t exactly know what’s going on, we’re just stirring the linear algebra pile”. Of course that can already be an incredibly interesting engineering problem, and it’s not like you don’t need a lot of knowledge and good intuitions to make these hacky things work. But I’m sure the aesthetic predispositions of many (and especially the more mathematically oriented) will line up way better with “actually understanding the thing”. From this perspective Alignment, Deep Learning Theory, Decision Theory, understanding value formation, etc. feel like fundamentally more interesting intellectual challenges. I share this feeling and I think many other people do. A lot of people have been spoiled by the niceness of math, and/or can’t stand the scientific shallowness of ML developments.

[anonymous]:

At the end of the day it has to work. And you need scale. Someone has to pay for larger scale research.

The best way to achieve the understanding you seek is to first expand the AI industry to hundreds of thousands of people and a trillion in annual R&D, until everything has a TPU in it.

Note that historically things like a detailed understanding of aerodynamics were achieved many decades after exploitation. The original aviation pioneers were dead by the time humans built large enough computers to model aerodynamics. Humans had already built millions of aircraft and optimized all the way to the SR-71.

What actually happens if OpenAI gets destroyed? Presumably most of the former employees get together and work on another AI company, maybe sign up directly w/ Microsoft. And are now utterly polarized against AI Safety.

Reading this post, here's my new galaxy-brain take on Altman:

If we're going to solve AI alignment, Altman is making sure we first have to solve Altman alignment. If we can't even get hyper-optimizing human Altman aligned, we have no hope of aligning AI.

While a fun quip to make, it has little relation to the physical reality. There are technologically possible ways of aligning AIs that have no analogy for aligning humans, and there are approaches to aligning humans that are not available for aligning AIs. Solving one problem, thus, doesn't necessarily have anything to do with solving the other: you can be able to align an AGI without being able to align a flesh-and-blood human, and you could develop the ability to robustly align humans yet end up not one step closer to aligning an AGI.

I mean, I can't say this whole debacle doesn't have a funny allegorical meaning in the context of AI Alignment and OpenAI's chances of achieving it. But it's a funny allegory, not exact correspondence.