As part of the court case between Elon Musk and Sam Altman, a substantial number of emails between Elon, Sam Altman, Ilya Sutskever, and Greg Brockman have been released. In March 2024 and December 2024 OpenAI also released blogposts with additional emails.

I have found reading through these really valuable, and I haven't found an online source that compiles all of them in an easy to read format. So I made one.[1]


Subject: question

Sam Altman to Elon Musk - May 25, 2015 9:10 PM

Been thinking a lot about whether it's possible to stop humanity from developing AI.

I think the answer is almost definitely not.

If it's going to happen anyway, it seems like it would be good for someone other than Google to do it first.

Any thoughts on whether it would be good for YC to start a Manhattan Project for AI? My sense is we could get many of the top ~50 to work on it, and we could structure it so that the tech belongs to the world via some sort of nonprofit but the people working on it get startup-like compensation if it works. Obviously we'd comply with/aggressively support all regulation.

Sam

Elon Musk to Sam Altman - May 25, 2015 11:09 PM

Probably worth a conversation

Sam Altman to Elon Musk - Jun 24, 2015 10:24 AM

The mission would be to create the first general AI and use it for individual empowerment—ie, the distributed version of the future that seems the safest. More generally, safety should be a first-class requirement.

I think we’d ideally start with a group of 7-10 people, and plan to expand from there. We have a nice extra building in Mountain View they can have.

I think for a governance structure, we should start with 5 people and I’d propose you, Bill Gates, Pierre Omidyar, Dustin Moskovitz, and me. The technology would be owned by the foundation and used “for the good of the world”, and in cases where it’s not obvious how that should be applied the 5 of us would decide. The researchers would have significant financial upside but it would be uncorrelated to what they build, which should eliminate some of the conflict (we’ll pay them a competitive salary and give them YC equity for the upside). We’d have an ongoing conversation about what work should be open-sourced and what shouldn’t. At some point we’d get someone to run the team, but he/she probably shouldn’t be on the governance board.

Will you be involved somehow in addition to just governance? I think that would be really helpful for getting work pointed in the right direction getting the best people to be part of it. Ideally you’d come by and talk to them about progress once a month or whatever. We generically call people involved in some limited way in YC “part-time partners” (we do that with Peter Thiel for example, though at this point he’s very involved) but we could call it whatever you want. Even if you can’t really spend time on it but can be publicly supportive, that would still probably be really helpful for recruiting.

I think the right plan with the regulation letter is to wait for this to get going and then I can just release it with a message like “now that we are doing this, I’ve been thinking a lot about what sort of constraints the world needs for safefy.” I’m happy to leave you off as a signatory. I also suspect that after it’s out more people will be willing to get behind it.

Sam

Elon Musk to Sam Altman - Jun 24, 2015 11:05 PM

Agree on all

 

Subject: Re: AI docs 📎

Sam Altman to Elon Musk - Nov 20, 2015 10:48AM

Elon–

Plan is to have you, me, and Ilya on the Board of Directors for YC AI, which will be a Delaware non-profit. We will also state that we plan to elect two other outsiders by majority vote of the Board.

We will write into the bylaws that any technology that potentially compromises the safety of humanity has to get consent of the Board to be released, and we will reference this in the researchers’ employment contracts.

At a high level, does that work for you?

I’m cc’ing our GC <redacted> here–is there someone in your office he can work with on the details?

Sam

Elon Musk to Sam Altman - Nov 20, 2015 12:29PM

I think this should be independent from (but supported by) YC, not what sounds like a subsidiary.

Also, the structure doesn’t seem optimal. In particular, the YC stock along with a salary from the nonprofit muddies the alignment of incentives. Probably better to have a standard C corp with a parallel nonprofit.

 

Subject: follow up from call

Greg Brockman to Elon Musk, (cc: Sam Altman) - Nov 22, 2015 6:11 PM

Hey Elon,

Nice chatting earlier.

As I mentioned on the phone, here's the latest early draft of the blog post: https://quip.com/6YnqA26RJgKr. (Sam, Ilya, and I are thinking about new names; would love any input from you.)

Obviously, there's a lot of other detail to change too, but I'm curious what you think of that kind of messaging. I don't want to pull any punches, and would feel comfortable broadcasting a stronger message if it feels right. I think it's mostly important that our messaging appeals to the research community (or at least the subset we want to hire). I hope for us to enter the field as a neutral group, looking to collaborate widely and shift the dialog towards being about humanity winning rather than any particular group or company. (I think that's the best way to bootstrap ourselves into being a leading research institution.)

I've attached the offer letter template we've been using, with a salary of $175k. Here's the email template I've been sending people:

Attached is your official YCR offer letter! Please sign and date the your convenience. There will be two more documents coming:

  • A separate letter offering you 0.25% of each YC batch you are present for (as compensation for being an Advisor to YC).
  • The At-Will Employment, Confidential Information, Invention Assignment and Arbitration Agreement

(As this is the first batch of official offers we've done, please forgive any bumpiness along the way, and please let me know if anything looks weird!)

We plan to offer the following benefits:

  • Health, dental, and vision insurance
  • Unlimited vacation days with a recommendation of four weeks per year
  • Paid parental leave
  • Paid conference attendance when you are presenting YC AI work or asked to attend by YC AI

We're also happy to provide visa support. When you're ready to talk about visa-related questions, I'm happy to put you in touch with Kirsty from YC.

Please let me know if you have any questions — I'm available to chat any time! Looking forward to working together :).

- gdb

Elon Musk to: Greg Brockman (cc Sam Altman) - Nov 22, 2015 7:48PM

Blog sounds good, assuming adjustments for neutrality vs being YC-centric.

I'd favor positioning the blog to appeal a bit more to the general public -- there is a lot of value to having the public root for us to succeed -- and then having a longer, more detailed and inside-baseball version for recruiting, with a link to it at the end of the general public version.

We need to go with a much bigger number than $100M to avoid sounding hopeless relative to what Google or Facebook are spending. I think we should say that we are starting with a $1B funding commitment. This is real. I will cover whatever anyone else doesn't provide.

Template seems fine, apart from shifting to a vesting cash bonus as default, which can optionally be turned into YC or potentially SpaceX (need to understand how much this will be) stock.

 

Subject: Draft opening paragraphs

Elon Musk to Sam Altman - Dec 8, 2015 9:29 AM

It is super important to get the opening summary section right. This will be what everyone reads and what the press mostly quotes. The whole point of this release is to attract top talent. Not sure Greg totally gets that.

---- OpenAI is a non-profit artificial intelligence research company with the goal of advancing digital intelligence in the way that is most likely to benefit humanity as a whole, unencumbered by an obligation to generate financial returns.

The underlying philosophy of our company is to disseminate AI technology as broadly as possible as an extension of all individual human wills, ensuring, in the spirit of liberty, that the power of digital intelligence is not overly concentrated and evolves toward the future desired by the sum of humanity.

The outcome of this venture is uncertain and the pay is low compared to what others will offer, but we believe the goal and the structure are right. We hope this is what matters most to the best in the field.

Sam Altman to Elon Musk - Dec 8, 2015 10:34 AM

how is this?

__

OpenAI is a non-profit artificial intelligence research company with the goal of advancing digital intelligence in the way that is most likely to benefit humanity as a whole, unencumbered by an obligation to generate financial returns.

Because we don't have any financial obligations, we can focus on the maximal positive human impact and disseminating AI technology as broadly as possible. We believe AI should be an extension of individual human wills and, in the spirit of liberty, not be concentrated in the hands of the few.

The outcome of this venture is uncertain and the pay is low compared to what others will offer, but we believe the goal and the structure are right. We hope this is what matters most to the best in the field.

 

Subject: just got word...

Sam Altman to Elon Musk - Dec 11, 2015 11:30AM

that deepmind is going to give everyone in openAI massive counteroffers tomorrow to try to kill it.

do you have any objection to me proactively increasing everyone's comp by 100-200k per year? i think they're all motivated by the mission here but it would be a good signal to everyone we are going to take care of them over time.

sounds like deepmind is planning to go to war over this, they've been literally cornering people at NIPS.

Elon Musk to Sam Altman - Dec 11, 2015

Has Ilya come back with a solid yes?

If anyone seems at all uncertain, I’m happy to call them personally too. Have told Emma this is my absolute top priority 24/7.

Sam Altman to Elon Musk - Dec 11, 2015 12:15 PM

yes committed committed. just gave his word.

Elon Musk to Sam Altman - Dec 11, 2015 12:32 PM

awesome

Sam Altman to Elon Musk - Dec 11, 2015 12:35 PM

everyone feels great, saying stuff like "bring on the deepmind offers, they unfortunately dont have 'do the right thing' on their side"

news out at 130 pm pst

 

Subject: The OpenAI Company

Elon Musk to: Ilya Sutskever, Pamela Vagata, Vicki Cheung, Diederik Kingma, Andrej Karpathy, John D. Schulman, Trevor Blackwell, Greg Brockman, (cc:Sam Altman) - Dec 11, 2015 4:41 PM

Congratulations on a great beginning!

We are outmanned and outgunned by a ridiculous margin by organizations you know well, but we have right on our side and that counts for a lot. I like the odds.

Our most important consideration is recruitment of the best people. The output of any company is the vector sum of the people within it. If we are able to attract the most talented people over time and our direction is correctly aligned, then OpenAI will prevail.

To this end, please give a lot of thought to who should join. If I can be helpful with recruitment or anything else, I am at your disposal. I would recommend paying close attention to people who haven't completed their grad or even undergrad, but are obviously brilliant. Better to have them join before they achieve a breakthrough.

Looking forward to working together,

Elon

 

Subject: Fwd: congrats on the falcon 9

<redacted> to: Elon Musk - Jan 2, 2016 10:12 AM CST

Hi Elon Happy new year to you, ██████████! 

Congratulations on landing the Falcon 9, what an amazing achievement. Time to build out the fleet now! 

I've seen you (and Sam and other OpenAI people) doing a lot of interviews recently extolling the virtues of open sourcing AI, but I presume you realise that this is not some sort of panacea that will somehow magically solve the safety problem? There are many good arguments as to why the approach you are taking is actually very dangerous and in fact may increase the risk to the world. Some of the more obvious points are well articulated in this blog post, that I'm sure you've seen, but there are also other important considerations: http://slatestarcodex.com/2015/12/17/should-ai-be-open/ 

I'd be interested to hear your counter-arguments to these points. 

Best,
████

[Elon forwards the above email to Sam Altman, Ilya Sutskever and Greg Brockman on Jan 2, 2016 8:18AM]

Ilya Sutskever to: Elon Musk, Sam Altman, Greg Brockman - Jan 2, 2016 9:06 AM

The article is concerned with a hard takeoff scenario: if a hard takeoff occurs, and a safe AI is harder to build than an unsafe one, then by opensorucing everything, we make it easy for someone unscrupulous with access to overwhelming amount of hardware to build an unsafe AI, which will experience a hard takeoff. As we get closer to building AI, it will make sense to start being less open. The Open in openAI means that everyone should benefit from the fruits of AI after its built, but it's totally OK to not share the science (even though sharing everything is definitely the right strategy in the short and possibly medium term for recruitment purposes).

Elon Musk to: Ilya Sutskever - Jan 2, 2016 9:11 AM

Yup

 

Subject: Re: Followup thoughts 📎

Elon Musk to: Ilya Sutskever, Greg Brockman, Sam Altman - Feb 19, 2016 12:05 AM

Frankly, what surprises me is that the AI community is taking this long to figure out concepts. It doesn't sound super hard. High-level linking of a large number of deep nets sounds like the right approach or at least a key part of the right approach. ███████████████████████████████

The probability of DeepMind creating a deep mind increases every year. Maybe it doesn't get past 50% in 2 to 3 years, but it likely moves past 10%. That doesn't sound crazy to me, given their resources.

In any event, I have found that it is far better to overestimate than underestimate competitors.

This doesn't mean we should rush out and hire weak talent. I agree that nothing good would be achieved by that. What we need to do is redouble our efforts to seek out the best people in the world, do whatever it takes to bring them on board and imbue the company with a high sense of urgency.

It will be important for OpenAI to achieve something significant in the next 6 to 9 months to show that we are for real. Doesn't need to be a whopper breakthrough, but it should be enough for key talent around the world to sit up and take notice.

████████████████████████████████████████████████████████████████████████████████████████████████████████████

Ilya Sutskever to: Elon Musk, (cc: Greg Brockman, Sam Altman) - Feb 19, 2016 10:28 AM

Several points:

  • It is not the case that once we solve "concepts," we get AI. Other problems that will have to be solved include unsupervised learning, transfer learning, and lifetime learning. We're also doing pretty badly with language right now. It does not mean that these problems will not see significant progress in the coming years, but it is not the case that there is only one problem that stands between us and full human level AI.
  • We can't build AI today because we lack key ideas (computers may be too slow, too, but we can't tell). Powerful ideas are produced by top people. Massive clusters help, and are very worth getting, but they play a less important role.
  • We will be able to achieve a conventionally significant result in the next 6 to 9 months, simply because the people we already have are very good. Achieving a field-altering result will be harder, riskier, and take longer. But we have a not unreasonable plan for that as well.

 

Subject: compensation framework

Greg Brockman to Elon Musk, (cc: Sam Altman) - Feb 21, 2016 11:34 AM

Hi all,

We're currently doing our first round of full-time offers post-founding. It's obviously super important to get these right, as the implications are very long-term. I don't yet feel comfortable making decisions here on my own, and would love any guidance.

Here's what we're currently doing:

Founding team: $275k salary + 25bps of YC stock

- Also have option of switching permanently to $125k annual bonus or equivalent in YC or SpaceX stock. I don't know if anyone's taken us up on this.

New offers: $175k annual salary + $125k annual bonus || equivalent in YC or SpaceX stock. Bonus is subject to performance review, where you may get 0% or significantly greater than 100%.

Special cases: gdb + Ilya + Trevor

The plan is to keep a mostly flat salary, and use the bonus multiple as a way to reward strong performers.

Some notes:

- We use a 20% annualized discount for the 8 years until the stock becomes liquid, the $125k bonus equates to 12bps in YC. So the terminal value is more like $750k. This number sounds a lot more impressive, though obviously it's hard to value exactly.

- The founding team was initially offered $175k each. The day after the lab launched, we proactively increased everyone's salary by $100k, telling them that we are financially committed to them as the lab becomes successful, and asking for a personal promise to ignore all counteroffers and trust we'll take care of them.

- We're currently interviewing Ian Goodfellow from Brain, who is one of the top 2 scientists in the field we don't have (the other being Alex Graves, who is a DeepMind loyalist). He's the best person on Brain, so Google will fight for him. We're grandfathering him into the founding team offer.

Some salary datapoints:

- John was offered $250k all-in annualized at DeepMind, thought he could negotiate to $300k easily.

- Wojciech was verbally offered ~$1.25M/year at FAIR (no concrete letter though)

- Andrew Tulloch is getting $800k/year at FB. (A lot is stock which is vesting.)

- Ian Goodfellow is currently getting $165k cash + $600k stock/year at Google.

- Apple is a bit desperate and offering people $550k cash (plus stock, presumably). I don't think anyone good is saying yes.

Two concrete candidates that are on my mind:

- Andrew is very close to saying yes. However, he's concerned about taking such a large paycut.

- Ian has stated he's not primarily concerned with money, but the Bay Area is expensive / wants to make sure he can buy a house. I don't know what will happen if/when Google starts throwing around the numbers they threw at Ilya.

My immediate questions:

1. I expect Andrew will try to negotiate up. Should we stick to his offer, and tell him to only join if he's excited enough to take that kind of paycut (and that others have left more behind)?

2. Ian will be interviewing + (I'm sure) getting an offer on Wednesday. Should we consider his offer final, or be willing to slide depending on what Google offers?

3. Depending on the answers to 1 + 2, I'm wondering if this flat strategy makes sense. If we keep it, I feel we'll have to really sell people on the bonus multiplier. Maybe one option would be using a signing bonus as a lever to get people to sign?

4. Very secondary, but our intern comp is also below market: $9k/mo. (FB offers $9k + free housing, Google offers like $11k/mo all-in.) Comp is much less important to interns than to FT people, since the experience is primary. But I think we may have lost a candidate who was on the edge to this. Given the dollar/hour is so much lower than for FT, should we consider increasing the amount?

I'm happy to chat about this at any time.

- gdb

Elon Musk to Greg Brockman, (cc: Sam Altman) - Feb 22, 2016 12:09 AM

We need to do what it takes to get the top talent. Let's go higher. If, at some point, we need to revisit what existing people are getting paid, that's fine.

Either we get the best people in the world or we will get whipped by Deepmind.

Whatever it takes to bring on ace talent is fine by me.

Deepmind is causing me extreme mental stress. If they win, it will be really bad news with their one mind to rule the world philosophy. They are obviously making major progress and well they should, given the talent level over there.

Greg Brockman to Elon Musk, (cc: Sam Altman) - Feb 22, 2016 12:21 AM

Read you loud and clear. Sounds like a plan. Will plan to continue working with sama on specifics, but let me know if you'd like to be kept in the loop.

- gdb

 

Subject: wired article

Greg Brockman to Elon Musk, (cc: Sam Teller) - Mar 21, 2016 12:53 AM

Hi Elon,

I was interviewed for a Wired article on OpenAI, and the fact checker sent me some questions. Wanted to sync with you on two in particular to make sure they sound reasonable / aligned with what what you'd say:

Would it be accurate to say that OpenAI is giving away ALL of its research?

At any given time, we will take the action that is likely to most strongly benefit the world. In the short term, we believe the best approach is giving away our research. But longer-term, this might not be the best approach: for example, it might be better not to immediately share a potentially dangerous technology. In all cases, we will be giving away all the benefits of all of our research, and want those to accrue to the world rather than any one institution.

Does OpenAI believe that getting the most sophisticated AI possible in as many hands as possible is humanity's best chance at preventing a too-smart AI in private hands that could find a way to unleash itself on the world for malicious ends?

We believe that using AI to extend individual human wills is the most promising path to ensuring AI remains beneficial. This is appealing because if there are many agents with about the same capabilities they could keep any one bad actor in check. But I wouldn't claim we have all the answers: instead, we're building an organization that can both seek those answers, and take the best possible action regardless of what the answer turns out to be.

Thanks!

- gdb

Elon Musk to Greg Brockman, (cc: Sam Teller) - Mar 21, 2016 6:53:47 AM

Sounds good

 

Subject: Re: Maureen Dowd

Sam Teller received this email from Alex Thompson and forwards it to Elon Musk - April 27, 2016 7:25 AM

Hi Sam,

I hope you are having a great day and I apologize for interrupting it with another question. Maureen wanted to see if Mr. Musk had any reaction to some of Mr. Zuckerberg's public comments since their interview. In particular, his labelling of Mr. Musk as "hysterical" for his A.I. fears and lectured those who "fearmonger" about the dangers of A.I.. I have included more details below of Mr. Zuckerberg's comments.

Asked in Germany recently about Musk’s forebodings, Zuckerberg called them “hysterical’’ and praised A.I. breakthroughs, including one system he claims can make cancer diagnoses for skin lesions on a mobile phone with the accuracy of “the best dermatologist.’’

“Unless we really mess something up,’’ he said, the machines will always be subservient, not “superhuman.”

“I think we can build A.I. so it works for us and helps us...Some people fearmonger about how A.I. is a huge danger, but that seems farfetched to me and much less likely than disasters due to widespread disease, violence, etc.’’ Or as he put his philosophy at an April Facebook developers conference: “Choose hope over fear.’’

--
Alex Thompson
The New York Times

Elon Musk to Sam Teller - Apr 27, 2016 12:24 PM

History unequivocally illustrates that a powerful technology is a double-edged sword. It would be foolish to assume that AI, arguably the most powerful of all technologies, only has a single edge.

The recent example of Microsoft's AI chatbot shows how quickly it can turn incredibly negative. The wise course of action is to approach the advent of AI with caution and ensure that its power is widely distributed and not controlled by any one company or person.

That is why we created OpenAI.

 

Subject: MSFT hosting deal

Sam Altman to Elon Musk, (cc: Sam Teller) - Sep 16, 2016 2:37 PM

Here are the MSFT terms. $60MM of compute for $10MM, and input from us on what they deploy in the cloud. LMK if you have any feedback.

Sam

Microsoft/OpenAI Terms[2]

Microsoft and OpenAI: Accelerate the development of deep learning on Azure and CNTK

This non-binding term sheet (“Term Sheet”) between Microsoft Corporation (“Microsoft”) and OpenAI (“OpenAI”) sets forth the terms for a potential business relationship between the parties. This Term Sheet is intended to form a basis of discussion and does not state all matters upon which agreement must be reached before executing a legally binding commercial agreement (“Commercial Agreement”). The existence and terms of this Term Sheet, and all discussions related thereto or to a Commercial Agreement, are Confidential Information as defined and governed by the Non-Disclosure Agreement between the parties dated 17 March, 2016 (“NDA”). Except for the binding nature of the foregoing confidentiality obligations, this Term Sheet is non-binding.

Deal Purpose
OpenAI is focused on deep learning in such a way as to benefit humanity. Microsoft and OpenAI desire to partner to enable the acceleration of deep learning on Microsoft Azure. Towards this goal, Microsoft will provide OpenAI with Azure compute capabilities at a favorable price that would enable OpenAI to continue their mission effectively.

Deal Business Goal
Microsoft
· Accelerate deep learning environment on Azure
· Attract a net new audience of next generation developers
· Joint PR and evangelism of deep learning on Azure
OpenAI
· Deeply discounted GPU compute offering over the deal term (3 years) for use in their nonprofit research: $60m of Compute for $10m
· Joint PR and evangelism of OpenAI on Azure

Parties (Legal entities) Microsoft OpenAI

Proposed Deal Execution Date September 19, 2016

Proposed Deal Commencement Date Same as deal execution date

Legal Authoring Microsoft holds the pen.

Deal Term 3 years

Engineering Terms
- Compute: Microsoft will provide OpenAI GPU core hours of compute at the agreed upon price for OpenAI’s workloads to run in Azure.
- Geographic Location: Geographic location decisions will be at Microsoft discretion depending on capacity and availability. Microsoft will also be responsible for sharing the deployment strategy and timelines with OpenAI.
- SLA: For all Virtual Machines that have two or more instances deployed in the same availability set, Microsoft guarantee OpenAI will have virtual machine connectivity to at least one instance at least 99.95% of the time. Microsoft will be held accountable to the SLA’s provided on https://azure.microsoft.com/enus/support/legal/sla/virtual-machines/v1_2/

- Evaluation, Evangelization, and Usage of CNTK v2, Azure Batch and HD-Insight: OpenAI will evaluate CNTK v2, Azure Batch, and HDInsight for their research, provide feedback on how Microsoft can improve these products. OpenAI will work with Microsoft to evangelize these products to their research and developer ecosystems, and evangelize Microsoft Azure as their preferred public cloud provider. At their sole discretion, and as it makes sense for their research, OpenAI will adopt these products

- Ramp: Microsoft and OpenAI will work together for creating a ramp plan that balances capacity per clusters. The initial timeline for ramp is a minimum of 30 days that will be augmented by Microsoft’s capacity expansion plans in the coming months.

- Capacity: OpenAI will be given an allocation of capacity in the preview cluster (located in US South Central) for short term requirements and Microsoft will provide quota access to the subsequent K80 GPU clusters that go live in the 4th quarter of 2016 with the intention of more capacity in Q1 2017 (calendar year).

Financial Terms
· Financial Terms: Microsoft will offer $60m worth of List Compute (including GPU) at a deep discount which results in a price of $10m to be paid by OpenAI over the course of the deal. In the event OpenAI consumes less than $10m worth of Azure compute, OpenAI will be responsible for paying the balance between the used amount and $10m at the end of the deal term to Microsoft.

Marketing & PR Terms
Microsoft and OpenAI commit to jointly evangelizing deep learning capabilities on Azure as agreed upon by both parties.
- Ignite: Announce the partnership at Microsoft’s Ignite event with executives (Sam Altman from OpenAI and Satya Nadella from Microsoft) from both parties inaugurating the collaboration
- PR: Microsoft and OpenAI will work together to issue a joint press release about the partnership including any materials such as blog posts and videos.

Elon Musk to Sam Altman, (cc: Sam Teller) - Sep 16, 2016 3:10 PM

This actually made me feel nauseous. It sucks and is exactly what I would expect from them.

Evaluation, Evangelization, and Usage of CNTK v2, Azure Batch and HD-Insight: OpenAI will evaluate CNTK v2, Azure Batch, and HD-Insight for their research, provide feedback on how Microsoft can improve these products. OpenAI will work with Microsoft to evangelize these products to their research and developer ecosystems, and evangelize Microsoft Azure as their preferred public cloud provider. At their sole discretion, and as it makes sense for their research, OpenAI will adopt these products

Let’s just say that we are willing to have Microsoft donate spare computing time to OpenAI and have that be known, but we want do any contract or agree to “evangelize”. They can turn us off at any time and we can leave at any time.

Sam Altman to Elon Musk, (cc: Sam Teller) - Sep 16, 2016 3:33 PM

I had the same reaction after reading that section and they've already agreed to drop.

We had originally just wanted space cycles donated but the team wanted more certainty that capacity will be available. But I'll work with MSFT to make sure there are no strings attached

Elon Musk to Sam Altman, (cc: Sam Teller) - Sep 16, 2016

We should just do this low key. No certainty either way. No contract.

Sam Altman to Elon Musk, (cc: Sam Teller) - Sep 16, 2016 6:45 PM

ok will see how much $ I can get in that direction.

Sam Teller to Elon Musk - Sep 20, 2016 8:05 PM

Microsoft is now willing to do the agreement for a full $50m with “good faith effort at OpenAI's sole discretion” and full mutual termination rights at any time. No evangelizing. No strings attached. No looking like lame Microsoft marketing pawns. Ok to move ahead?

Elon Musk to Sam Teller - Sep 21, 2016 12:09 AM

Fine by me if they don't use this in active messaging. Would be worth way more than $50M not to seem like Microsoft's marketing bitch.

 

Subject: Bi-weekly updates 📎

Ilya Sutskever to: Greg Brockman, [redacted], Elon Musk - Jun 12, 2017 10:39 PM

This is the first of our bi-weekly updates. The goal is to keep you up to date, and to help us make greater use from your visits.

Compute:

  • Compute is used in two ways: it is used to run a big experiment quickly, and it is used to run many experiments in parallel.
  • 95% of progress comes from the ability to run big experiments quickly. The utility of running many experiments is much less useful.
  • In the old days, a large cluster could help you run more experiments, but it could not help with running a single large experiment quickly.
  • For this reason, an academic lab could compete with Google, because Google's only advantage was the ability to run many experiments. This is not a great advantage.
  • Recently, it has become possible to combine 100s of GPUs and 100s of CPUs to run an experiment that's 100x bigger than what is possible on a single machine while requiring comparable time. This has become possible due to the work of many different groups. As a result, the minimum necessary cluster for being competitive is now 10–100x larger than it was before.
  • Currently, every Dota experiment uses 1000+ cores, and it is only for the small 1v1 variant, and on extremely small neural network policies. We will need more compute to just win the 1v1 variant. To win the full 5v5 game, we will need to run fewer experiments, where each experiment is at least 1 order of magnitude larger (possibly more!).
  • TLDR: What matters is the size and speed of our experiments. In the old days, a big cluster could not let anyone run a larger experiment quickly. Today, a big cluster lets us run a large experiment 100x faster.
  • In order to be capable of accomplishing our projects even in theory, we need to increase the number of our GPUs by a factor of 10x in the next 1–2 months (we have enough CPUs). We will discuss the specifics in our in-person meeting.

Dota 2:

  • We will solve the 1v1 version of the game in 1 month. Fans of the game care about 1v1 a fair bit.
  • We are now at a point where a single experiment consumes 1000s of cores, and where adding more distributed compute increases performance.
  • Here is a cool video of our bot doing something rather clever: https://www.youtube.com/watch?v=Y-vxbREX5ck&feature=youtu.be&t=99.

Rapid learning of new games:

  • Infra work is underway
  • We implemented several baselines
  • Fundamentally, we're not where we want to be, and are taking action to correct this.

Robotics:

Self play as a key path to AGI:

  • Self play in multiagent environments is magical: if you place agents into an environment, then no matter how smart (or not smart) they are, the environment will provide them with the exact level of challenge, which can be faced only by outsmarting the competition. So for example, if you have a group of children, they will find each other's company to be challenging; likewise for a collection of super intelligences of comparable intelligence. So the "solution" to self-play is to become more and more intelligent, without bound.
  • Self-play lets us get "something out of nothing." The rules of a competitive game can be simple, but the best strategy for playing this game can be immensely complex. [motivating example: https://www.youtube.com/watch?v=u2T77mQmJYI].
  • Training agents in simulation to develop very good dexterity via competitive fighting, such as wrestling. Here is a video of ant-shaped robots that we trained to struggle: <redacted>
  • Current work on self-play: getting agents to learn to develop a language [gifs in https://blog.openai.com/learning-to-cooperate-compete-and-communicate/].Agents are doing "stuff," but it's still work in progress.

We have a few more cool smaller projects. Updates to be presented as they produce significant results.

Elon Musk to: Ilya Sutskever, (cc: Greg Brockman, [redacted]) - Jun 12, 2017 10:52 PM

Thanks, this is a great update.

Elon Musk to: Ilya Sutskever, (cc: Greg Brockman, [redacted]) - Jun 13, 2017 10:24 AM

Ok. Let's figure out the least expensive way to ensure compute power is not a constraint...

 

Subject: The business of building AGI 📎

Ilya Sutskever to: Elon Musk, Greg Brockman - Jul 12, 2017 1:36 PM

We usually decide that problems are hard because smart people have worked on them unsuccessfully for a long time. It’s easy to think that this is true about AI. However, the past five years of progress have shown that the earliest and simplest ideas about AI — neural networks — were right all along, and we needed modern hardware to get them working.

Historically, AI breakthroughs have consistently happened with models that take between 7–10 days to train. This means that hardware defines the surface of potential AI breakthroughs. This is a statement about human psychology more than about AI. If experiments take longer than this, it’s hard to keep all the state in your head and iterate and improve. If experiments are shorter, you’ll just use a bigger model.

It’s not so much that AI progress is a hardware game, any more than physics is a particle accelerator game. But if our computers are too slow, no amount of cleverness will result in AGI, just like if a particle accelerator is too small, we have no shot at figuring out how the universe works. Fast enough computers are a necessary ingredient, and all past failures may have been caused by computers being too slow for AGI.

Until very recently, there was no way to use many GPUs together to run faster experiments, so academia had the same “effective compute” as industry. But earlier this year, Google used two orders of magnitude more compute than is typical to optimize the architecture of a classifier, something that usually requires lots of researcher time. And a few months ago, Facebook released a paper showing how to train a large ImageNet model with near-linear speedup to 256 GPUs (given a specially-configured cluster with high-bandwidth interconnects).

Over the past year, Google Brain produced impressive results because they have an order of magnitude or two more GPUs than anyone. We estimate that Brain has around 100k GPUs, FAIR has around 15–20k, and DeepMind allocates 50 per researcher on question asking, and rented 5k GPUs from Brain for AlphaGo. Apparently, when people run neural networks at Google Brain, it eats up everyone’s quotas at DeepMind.

We're still missing several key ideas necessary for building AGI. How can we use a system's understanding of “thing A” to learn “thing B” (e.g. can I teach a system to count, then to multiply, then to solve word problems)? How do we build curious systems? How do we train a system to discover the deep underlying causes of all types of phenomena — to act as a scientist? How can we build a system that adapts to new situations on which it hasn’t been trained on precisely (e.g. being asked to apply familiar concepts in an unfamiliar situation)? But given enough hardware to run the relevant experiments in 7–10 days, history indicates that the right algorithms will be found, just like physicists would quickly figure out how the universe works if only they had a big enough particle accelerator.

There is good reason to believe that deep learning hardware will speed up 10x each year for the next four to five years. The world is used to the comparatively leisurely pace of Moore’s Law, and is not prepared for the drastic changes in capability this hardware acceleration will bring. This speedup will happen not because of smaller transistors or faster clock cycles; it will happen because like the brain, neural networks are intrinsically parallelizable, and new highly parallel hardware is being built to exploit this.

Within the next three years, robotics should be completely solved, AI should solve a long-standing unproven theorem, programming competitions should be won consistently by AIs, and there should be convincing chatbots (though no one should pass the Turing test). In as little as four years, each overnight experiment will feasibly use so much compute capacity that there’s an actual chance of waking up to AGI, given the right algorithm — and figuring out the algorithm will actually happen within 2–4 further years of experimenting with this compute in a competitive multiagent simulation.

To be in the business of building safe AGI, OpenAI needs to:

  • Have the best AI results each year. In particular, as hardware gets exponentially better, we’ll have dramatically better results. Our DOTA and Rubik’s cube projects will have impressive results for the current level of compute. Next year’s projects will be even more extreme, and what’s realistic depends primarily on what compute we can access.
  • Increase our GPU cluster from 600 GPUs to 5000 GPUs ASAP. As an upper bound, this will require a capex of $12M and an opex of $5–6M over the next year. Each year, we’ll need to exponentially increase our hardware spend, but we have reason to believe AGI can ultimately be built with less than $10B in hardware.
  • Increase our headcount: from 55 (July 2017) to 80 (January 2018) to 120 (January 2019) to 200 (January 2020). We’ve learned how to organize our current team, and we’re now bottlenecked by number of smart people trying out ideas.
  • Lock down an overwhelming hardware advantage. The 4-chip card that <redacted> says he can build in 2 years is effectively TPU 3.0 and (given enough quantity) would allow us to be on an almost equal footing with Google on compute. The Cerebras design is far ahead of both of these, and if they’re real then having exclusive access to them would put us far ahead of the competition. We have a structural idea for how to do this given more due diligence, best to discuss on a call.

2/3/4 will ultimately require large amounts of capital. If we can secure the funding, we have a real chance at setting the initial conditions under which AGI is born. Increased funding needs will come lockstep with increased magnitude of results. We should discuss options to obtain the relevant funding, as that’s the biggest piece that’s outside of our direct control.

Progress this week:

  • We’ve beat our top 1v1 test player (he’s top 30 in North America at 1v1, and beats the top 1v1 player about 30% of the time), but the bot can also be exploited by playing weirdly. We’re working on understanding these exploits and cracking down on them.
    • Repeated from Saturday, here’s the first match where we beat our top test player: https://www.youtube.com/watch?v=FBoUHay7XBI&feature=youtu.be&t=345
    • Every additional day of training makes the bot stronger and harder to exploit.
  • Robot getting closer to solving Rubik’s cube.
    • The improved cube simulation teleoperated by a human: <redacted>.
  • Our defense against adversarial examples is starting to work on ImageNet.
    • We will completely solve the problem of adversarial examples by the end of August.

████████████████████████████████████

██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████

███████████████████████████████████████████████████████████████████████████████████████████████████████████████

 

iMessages on OpenAI for-profit 📎

Shivon Zilis to: Greg Brockman - Jul 13, 2017 10:35 PM

How did it go?

Greg Brockman to: Shivon Zilis - Jul 13, 2017 10:35 PM

Went well!

ocean: agreed on announcing around the international; he suggested playing against the best player from the winning team which seems cool to me. I asked him to call <redacted> and he said he would. I think this is better than our default of announcing in advance we’ve beaten the best 1v1 player and then having our bot playable at a terminal at TI ████ ██████████ █████████████████ ██.

gpus: said do what we need to do

cerebras: we talked about the reverse merger idea a bit. independent of cerebras, turned into talking about structure (he said non-profit was def the right one early on, may not be the right one now — ilya and I agree with this for a number of reasons). He said he’s going to Sun Valley to ask <redacted> to donate.

Shivon Zilis to: Greg Brockman - Jul 13, 2017 10:43 PM

<redacted> and others. Will try to work it for ya.

 

Subject: biweekly update

Ilya Sutskever to Elon Musk, Greg Brockman - Jul 20, 2017 1:56 PM

- The robot hand can now solve a Rubik's cube in simulation:

https://drive.google.com/a/openai.com/file/d/0B60rCy4P2FOIenlLdzN2LXdiOTQ/view?usp=sharing (needs OpenAI login)

Physical robot will do same in September

- 1v1 bot is no longer exploitable

It can no longer be beaten using “unconventional” strategies

On track to beat all humans in 1 month

- Athletic competitive robots:

https://drive.google.com/a/openai.com/file/d/0B60rCy4P2FOIZE4wNVdlbkx6U2M/view?usp=sharing (needs OpenAI login)

- Released an adversarial example that fools a camera from all angles simultaneously:

https://blog.openai.com/robust-adversarial-inputs/

- DeepMind's directly used one of our algorithms to produce their parkour results:

DeepMind's results: https://deepmind.com/blog/producing-flexible-behaviourssimulated-environments/

DeepMind's technical papers explicitly state they directly used our algorithms

Our blogpost about our algorithm: https://blog.openai.com/openai-baselines-ppo/ (DeepMind used an older version).

- Coming up:

Designing the for-profit structure

Negotiate merger terms with Cerebras

More due diligence with Cerebras

 

Subject: Beijing Wants A.I. to Be Made in China by 2030 - NYTimes.com 📎

Elon Musk to: Greg Brockman, Ilya Sutskever - Jul 21, 2017 3:34 AM

They will do whatever it takes to obtain what we develop. Maybe another reason to change course. [Link to news article]

Greg Brockman to: Elon Musk, (cc: Ilya Sutskever) - Jul 21, 2017 1:18 PM

100% agreed. We think the path must be:

  1. AI research non-profit (through end of 2017)
  2. AI research + hardware for-profit (starting 2018)
  3. Government project (when: ??)

█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████

-gdb

Elon Musk to: Greg Brockman, (cc: Ilya Sutskever, [redacted]) - Jul 21, 2017 1:18 PM

Let's talk Sat or Sun. I have a tentative game plan that I'd like to run by you.

 

Subject: Tomorrow afternoon 📎

Elon Musk to: Greg Brockman, Ilya Sutskever, Sam Altman, (cc: [redacted], Shivon Zilis) - Aug 11, 2017 9:17 PM

████████████████████████████████ Are you guys able to meet or do a conf call tomorrow afternoon?

Time to make the next step for OpenAI. This is the triggering event.

 

 

Subject: OpenAI notes

Shivon Zilis to Elon Musk, (cc: Sam Teller) - Aug 28, 2017 12:01 AM

Elon,

As I'd mentioned, Greg had asked to talk through a few things this weekend. Ilya ended up joining, and they pretty much just shared all of what they are still trying to think through. This is the distillation of that random walk of a conversation... came down to 7 unanswered questions with their commentary below. Please note that I'm not advocating for any of this, just structuring and sharing the information I heard.

1. Short-term control structure? 

-Is the requirement for absolute control? They wonder if there is a scenario where there could be some sort of creative overrule provision if literally everyone else disagreed on direction (not just the three of them, but perhaps a broader board)?

2. Duration of control and transition? 

-*The* non-negotiable seems to be an ironclad agreement to not have any one person have absolute control of AGI if it's created. Satisfying this means a situation where, regardless of what happens to the three of them, it's guaranteed that power over the company is distributed after the 2-3 year initial period.

3. Time spent?

-How much time does Elon want to spend on this, and how much time can he actually afford to spend on this? In what timeframe? Is this an hour a week, ten hours a week, something in between?

4. What to do with time spent?

-They don't really know how he prefers to spend time at his other companies and how he'd want to spend his time on this. Greg and Ilya are confident they could build out SW / ML side of things pretty well. They are not confident on the hardware front. They seemed hopeful Elon could spend some time on that since that's where they are weak, but did want his help in all domains he was interested in.

5. Ratio of time spent to amount of control?

-They are cool with less time / less control, more time / more control, but not less time / more control. Their fear is that there won't be enough time to discuss relevant contextual information to make correct decisions if too little time is spent.

6. Equity split?

-Greg still instinctually anchored on equal split. I personally disagree with him on that instinct and he asked for and was receptive to hearing other things he could use to recalibrate his mental model.

-Greg noted that Ilya in some ways has contributed millions by leaving his earning potential on the table at Google.

-One concern they had was the proposed employee pool was too small.

7. Capitalization strategy?

-Their instinct is to raise much more than $100M out of the gate. They are of the opinion that the datacenter they need alone would cost that so they feel more comfortable raising more.

Takeaways:

Unsure if any of this is amenable but just from listening to all of the data points they threw out, the following would satisfy their current sticky points:

-Spending 5-10 hours a week with near full control, or spend less time and have less control.

-Having a creative short-term override just for extreme scenarios that was not just Greg / Sam / Ilya.

-An ironclad 2-3yr minority control agreement, regardless of the fates of Greg / Sam / Ilya.

-$200M-$1B initial raise.

-Greg and Ilya's stakes end up higher than 1/10 of Elon's but not significantly (this remains the most ambiguous).

-Increasing employee pool.

Elon Musk to Shivon Zilis, (cc: Sam Teller) - Aug 28, 2017 12:08 AM

This is very annoying. Please encourage them to go start a company. I've had enough.

 

iMessages on majority equity and board control 📎

Shivon Zilis to: Greg Brockman - Sep 4, 2017 8:19 PM

Actually I'm still slightly confused on the proposed detail around the share % and board control

Given it sounds like proposal is that Elon always gets max(3 seats, 25% of seats) and all the power rests with the board

Greg Brockman

Yes. Though I am guessing he intended your overrule provision for first bit but I'm not sure

Shivon Zilis

So what power does having a certain % of shares have?

Sounds like intention is static board members, or at least board members coming statically from certain pools

But yeah would be curious to hear the specifics. Also I guess for even board sizes a 50% means no action?

Greg Brockman

I think it would grow to at least 7 pretty quick. The question is not that but when does it transition to traditional board if in fact transitions

And he sounded fairly non-negotiable on his equity being between 50-60 so moot point of having majority

 

Subject: Re: Current State 📎

Elon Musk to: Ilya Sutskever, (cc: Greg Brockman) - Sep 13, 2017 12:40 AM

Sounds good. The three common stock seats (you, Greg and Sam) should be elected by common shareholders. They will de facto be yours, but not in the unlikely event that you lose the faith of a huge percentage of common stockholders over time or step away from the company by choice.

I think that the Preferred A investment round (supermajority me) should have the right to appoint four (not three) seats. I would not expect to appoint them immediately, but, like I said I would unequivocally have initial control of the company, but this will change quickly.

The rough target would be to get to a 12 person board (probably more like 16 if this board really ends up deciding the fate of the world) where each board member has a deep understanding of technology, at least a basic understanding of AI and strong & sensible morals.

Apart from the Series A four and the Common three, there would likely be a board member with each new lead investor/ally. However, the specific individual new board members can only be added if all but one existing board member agrees. Same for removing board members.

There will also be independent board members we want to add who aren't associated with an investor. Same rules apply: requires all but one of existing directors to add or remove.

I'm super tired and don't want to overcomplicate things, but this seems approx right. At the sixteen person board level, we would have 7/16 votes and I'd have a 25% influence, which is my min comfort level. That sounds about right to me. If everyone else we asked to join our board is truly against us, we should probably lose.

As mentioned, my experience with boards (assuming they consist of good, smart people) is that they are rational and reasonable. There is basically never a real hardcore battle where an individual board vote is pivotal, so this is almost certainly (sure hope so) going to be a moot point.

As a closing note, I've been really impressed with the quality of discussion with you guys on the equity and board stuff. I have a really good feeling about this.

Lmk if above seems reasonable.

Elon

 

Subject: Honest Thoughts

Ilya Sutskever to Elon Musk, Sam Altman, (cc: Greg Brockman, Sam Teller, Shivon Zilis) - Sep 20, 2017 2:08 PM

Elon, Sam,

This process has been the highest stakes conversation that Greg and I have ever participated in, and if the project succeeds, it'll turn out to have been the highest stakes conversation the world has seen. It's also been a deeply personal conversation for all of us.

Yesterday while we were considering making our final commitment given the non-solicit agreement, we realized we'd made a mistake. We have several important concerns that we haven't raised with either of you. We didn't raise them because we were afraid to: we were afraid of harming the relationship, having you think less of us, or losing you as partners.

There is some chance that our concerns will prove to be unresolvable. We really hope it's not the case, but we know we will fail for sure if we don't all discuss them now. And we have hope that we can work through them and all continue working together.

Elon:

We really want to work with you. We believe that if we join forces, our chance of success in the mission is the greatest. Our upside is the highest. There is no doubt about that. Our desire to work with you is so great that we are happy to give up on the equity, personal control, make ourselves easily firable — whatever it takes to work with you.

But we realized that we were careless in our thinking about the implications of control for the world. Because it seemed so hubristic, we have not been seriously considering the implications of success.

The current structure provides you with a path where you end up with unilateral absolute control over the AGI. You stated that you don't want to control the final AGI, but during this negotiation, you've shown to us that absolute control is extremely important to you.

As an example, you said that you needed to be CEO of the new company so that everyone will know that you are the one who is in charge, even though you also stated that you hate being CEO and would much rather not be CEO.

Thus, we are concerned that as the company makes genuine progress towards AGI, you will choose to retain your absolute control of the company despite current intent to the contrary. We disagree with your statement that our ability to leave is our greatest power, because once the company is actually on track to AGI, the company will be much more important than any individual.

The goal of OpenAI is to make the future good and to avoid an AGI dictatorship. You are concerned that Demis could create an AGI dictatorship. So do we. So it is a bad idea to create a structure where you could become a dictator if you chose to, especially given that we can create some other structure that avoids this possibility.

We have a few smaller concerns, but we think it's useful to mention it here:

In the event we decide to buy Cerebras, my strong sense is that it'll be done through Tesla. But why do it this way if we could also do it from within OpenAI? Specifically, the concern is that Tesla has a duty to shareholders to maximize shareholder return, which is not aligned with OpenAI's mission. So the overall result may not end up being optimal for OpenAI.

We believe that OpenAI the non-profit was successful because both you and Sam were in it. Sam acted as a genuine counterbalance to you, which has been extremely fruitful. Greg and I, at least so far, are much worse at being a counterbalance to you. We feel this is evidenced even by this negotiation, where we were ready to sweep the long-term AGI control questions under the rug while Sam stood his ground.

Sam:

When Greg and I are stuck, you've always had an answer that turned out to be deep and correct. You've been thinking about the ways forward on this problem extremely deeply and thoroughly. Greg and I understand technical execution, but we don't know how structure decisions will play out over the next month, year, or five years.

But we haven't been able to fully trust your judgements throughout this process, because we don't understand your cost function.

We don't understand why the CEO title is so important to you. Your stated reasons have changed, and it's hard to really understand what's driving it.

Is AGI truly your primary motivation? How does it connect to your political goals? How has your thought process changed over time?

Greg and Ilya:

We had a fair share of our own failings during this negotiation, and we'll list some of them here (Elon and Sam, I'm sure you'll have plenty to add...):

During this negotiation, we realized that we have allowed the idea of financial return 2-3 years down the line to drive our decisions. This is why we didn't push on the control — we thought that our equity is good enough, so why worry? But this attitude is wrong, just like the attitude of AI experts who don't think that AI safety is an issue because they don't really believe that they'll build AGI.

We did not speak our full truth during the negotiation. We have our excuses, but it was damaging to the process, and we may lose both Sam and Elon as a result.

There's enough baggage here that we think it's very important for us to meet and talk it out. Our collaboration will not succeed if we don't. Can all four of us meet today? If all of us say the truth, and resolve the issues, the company that we'll create will be much more likely to withstand the very strong forces it'll experience.

- Greg & Ilya

Elon Musk to Ilya Sutskever (cc: Sam Altman; Greg Brockman; Sam Teller; Shivon Zilis) - Sep 20, 2017 2:17PM

Guys, I've had enough. This is the final straw. Either go do something on your own or continue with OpenAI as a nonprofit. I will no longer fund OpenAI until you have made a firm commitment to stay or I'm just being a fool who is essentially providing free funding for you to create a startup. Discussions are over

Elon Musk to Ilya Sutskever, Sam Altman (cc: Greg Brockman, Sam Teller, Shivon Zilis) - Sep 20, 2017 3:08PM

To be clear, this is not an ultimatum to accept what was discussed before. That is no longer on the table.

Sam Altman to Elon Musk, Ilya Sutskever (cc: Greg Brockman, Sam Teller, Shivon Zilis) - Sep 21, 2017 9:17 AM

i remain enthusiastic about the non-profit structure!

 

Subject: Non-profit

Shivon Zilis to Elon Musk, (cc: Sam Teller) - Sep 22, 2017 9:50 AM

Hi Elon,

Quick FYI that Greg and Ilya said they would like to continue with the non-profit structure. They know they would need to provide a guarantee that they won't go off doing something else to make it work.

Haven't spoken to Altman yet but he asked to talk this afternoon so will report anything I hear back.

If anything I can do to help let me know.

Elon Musk to Shivon Zilis (cc: Sam Teller) - Sep 22, 2017 10:01 AM

Ok

Shivon Zilis to Elon Musk, (cc: Sam Teller) - Sep 22, 2017 5:54 PM

From Altman:

Structure: Great with keeping non-profit and continuing to support it.

Trust: Admitted that he lost a lot of trust with Greg and Ilya through this process. Felt their messaging was inconsistent and felt childish at times.

Hiatus: Sam told Greg and Ilya he needs to step away for 10 days to think. Needs to figure out how much he can trust them and how much he wants to work with them. Said he will come back after that and figure out how much time he wants to spend.

Fundraising: Greg and Ilya have the belief that 100's of millions can be achieved with donations if there is a definitive effort. Sam thinks there is a definite path to 10's of millions but TBD on more. He did mention that Holden was irked by the move to for-profit and potentially offered more substantial amount of money if OpenAI stayed a non-profit, but hasn't firmly committed. Sam threw out a $100M figure for this if it were to happen.

Communications: Sam was bothered by how much Greg and Ilya keep the whole team in the loop with happenings as the process unfolded. Felt like it distracted the team. On the other hand, apparently in the last day almost everyone has been told that the for-profit structure is not happening and he is happy about this at least since he just wants the team to be heads down again.

Shivon

 

Subject: ICO

Sam Altman to Elon Musk (cc: Greg Brockman, Ilya Sutskever, Sam Teller, Shivon Zilis) - Jan 21, 2018 5:08 PM

Elon—

Heads up, spoke to some of the safety team and there were a lot of concerns about the ICO and possible unintended effects in the future.

Planning to talk to the whole team tomorrow and invite input. Going to emphasize the need to keep this confidential, but I think it's really important we get buy-in and give people the chance to weigh in early.

Sam

Elon Musk to Sam Altman (cc: Greg Brockman, Ilya Sutskever, Sam Teller, Shivon Zilis) - Jan 21, 2018 5:56 PM

Absolutely

 

Subject: Top AI institutions today

Andrej Karpathy to Elon Musk, (cc: Shivon Zilis) - Jan 31, 2018 1:20 PM

The ICLR conference (which is the top deep learning - specific conference (NIPS is larger, but more diffuse)) released their decisions for accepted/rejected papers, and someone made some nice plots that show where the current deep learning / AI research happens at. It's an imperfect measure because not every company might prioritize paper publications, but it's indicative.

Here's a plot that shows the total number of papers (broken down by oral/poster/workshop/rejected) from any institution:

Long story short, Google is dominating with 83 paper submissions. The academic institutions (Berkeley / Stanford / CMU / MIT) are next, in 20-30 ranges each.

Just thought it was an interesting snapshot of where all the action is today. The full data is here: http://webia.lip6.fr/~pajot/dataviz.html

-Andrej

Elon Musk to Greg Brockman, Ilya Sutskever, Sam Altman, (cc: Sam Teller, Shivon Zilis, fw: Andrej Karpathy) - Jan 31, 2018 2:02 PM

OpenAI is on a path of certain failure relative to Google. There obviously needs to be immediate and dramatic action or everyone except for Google will be consigned to irrelevance.

I have considered the ICO approach and will not support it. In my opinion, that would simply result in a massive loss of credibility for OpenAI and everyone associated with the ICO. If something seems too good to be true, it is. This was, in my opinion, an unwise diversion.

The only paths I can think of are a major expansion of OpenAI and a major expansion of Tesla AI. Perhaps both simultaneously. The former would require a major increase in funds donated and highly credible people joining our board. The current board situation is very weak.

I will set up a time for us to talk tomorrow. To be clear, I have a lot of respect for your abilities and accomplishments, but I am not happy with how things have been managed. That is why I have had trouble engaging with OpenAI in recent months. Either we fix things and my engagement increases a lot or we don’t and I will drop to near zero and publicly reduce my association. I will not be in a situation where the perception of my influence and time doesn’t match the reality.

Elon Musk to Andrej Karpathy - Jan 31, 2018 2:07 PM

fyi

What do you think makes sense? Happy to talk by phone if that’s better.

Greg Brockman to: Elon Musk, (cc: Ilya Sutskever, Sam Altman, [redacted], Shivon Zilis) - Jan 31, 2018 10:56 PM 📎

Hi Elon,

Thank you for the thoughtful note. I have always been impressed by your focus on the big picture, and agree completely we must change trajectory to achieve our goals. Let's speak tomorrow, any time 4p or later will work.

My view is that the best future will come from a major expansion of OpenAI. Our goal and mission are fundamentally correct, and that will increasingly be a superpower as AGI grows near.

Fundraising

Our fundraising conversations show that:

  • Ilya and I are able to convince reputable people that AGI can really happen in the next ≤10 years
  • There's appetite for donations from those people
  • There's very large appetite for investments from those people

I respect your decision on the ICO idea, which matches the evolution of our own thinking. Sam Altman has been working on a fundraising structure that does not rely on a public offering, and we will be curious to hear your feedback.

Of the people we've been talking to, the following people are currently my top suggestions for board members. Would also love suggestions for your top picks not on this list, and we can figure out how to approach them.

  • <redacted>
  • <redacted>
  • <redacted>
  • <redacted>
  • <redacted>
  • <redacted> (she heads Partnership on AI, originally created by Demis to steal OpenAI's thunder – would bring a lot of outside credibility)

The next 3 years 

Over the next 3 years, we must build 3 things:

  • Custom AI hardware (such as <redacted> computer)
  • Massive AI data center (likely multiple revs thereof)
  • Best software team, mixing between algorithm development, public demonstrations, and safety

We've talked the most about the custom AI hardware and AI data center. On the software front, we have a credible path (self-play in a competitive multiagent environment) which has been validated by Dota and AlphaGo. We also have identified a small but finite number of limitations in today's deep learning which are barriers to learning from human levels of experience. And we believe we uniquely are on trajectory to solving safety (at least in broad strokes) in the next three years.

We would like to scale headcount in this way:

  • Beginning of 2017: ~40
  • End of 2018: 100
  • End of 2019: 300
  • End of 2020: 900

█████████████ █████████████████ ██████

Moral high ground 

Our biggest tool is the moral high ground. To retain this, we must:

  • Try our best to remain a non-profit. AI is going to shake up the fabric of society, and our fiduciary duty should be to humanity.
  • Put increasing effort into the safety/control problem, rather than the fig leaf you've noted in other institutions. It doesn't matter who wins if everyone dies. Related to this, we need to communicate a "better red than dead" outlook — we're trying to build safe AGI, and we're not willing to destroy the world in a down-to-the-wire race to do so.
  • Engage with government to provide trusted, unbiased policy advice — we often hear that they mistrust recommendations from companies such as ████████████.
  • Be perceived as a place that provides public good to the research community, and keeps the other actors honest and open via leading by example.

The past 2 years 

I would be curious to hear how you rate our execution over the past two years, relative to resources. In my view:

  • Over the past five years, there have two major demonstrations of working systems: AlphaZero [DeepMind] and Dota 1v1 [OpenAI]. (There are a larger number of breakthroughs of "capabilities" popular among practitioners, the top of which I'd say are: ProgressiveGAN [NVIDIA], unsupervised translation [Facebook], WaveNet [DeepMind], Atari/DQN [DeepMind], machine translation [Ilya at Google — now at OpenAI], generative adversarial network [Ian Goodfellow at grad school — now at Google], variational autoencoder [Durk at grad school — now at OpenAI], AlexNet [Ilya at grad school — now at OpenAI].) We benchmark well on this axis.
  • We grew very rapidly in 2016, and in 2017 iterated to a working management structure. We're now ready to scale massively, given the resources. We lose people on comp currently, but pretty much only on comp. I've been resuming the style of recruiting I did in the early days, and believe I can exceed those results.
  • We have the most talent dense team in the field, and we have the reputation for it as well.
  • We don't encourage paper writing, and so paper acceptance isn't a measure we optimize. For the ICLR chart Andrej sent, I'd expect our (accepted papers)/(people submitting papers) to be the highest in the field.

- gdb

Andrej Karpathy to Elon Musk - Jan 31, 2018 11:54 PM

Working at the cutting edge of AI is unfortunately expensive. For example, DeepMind's operating expenses in 2016 were at around $250M USD (does not include compute). With their growing team today it might be ~0.5B/yr. But then Alphabet in 2016 reported ~20B net income so it's still fairly cheap even if DeepMind had no revenue of its own. In addition to DeepMind, Google also has Google Brain, Research, and Cloud. And TensorFlow, TPUs, and they own about a third of all research (in fact, they hold their own AI conferences).

I also strongly suspect that compute horsepower will be necessary (and possibly even sufficient) to reach AGI. If historical trends are any indication, progress in AI is primarily driven by systems - compute, data, infrastructure. The core algorithms we use today have remained largely unchanged from the ~90s. Not only that, but any algorithmic advances published in a paper somewhere can be almost immediately re-implemented and incorporated. Conversely, algorithmic advances alone are inert without the scale to also make them scary.

It seems to me that OpenAI today is burning cash and that the funding model cannot reach the scale to seriously compete with Google (an 800B company). If you can't seriously compete but continue to do research in open, you might in fact be making things worse and helping them out "for free", because any advances are fairly easy for them to copy and immediately incorporate, at scale.

A for-profit pivot might create a more sustainable revenue stream over time and would, with the current team, likely bring in a lot of investment. However, building out a product from scratch would steal focus from AI research, it would take a long time and it's unclear if a company could "catch up" to Google scale, and the investors might exert too much pressure in the wrong directions.

The most promising option I can think of, as I mentioned earlier, would be for OpenAI to attach to Tesla as its cash cow. I believe attachments to other large suspects (e.g. Apple? Amazon?) would fail due to an incompatible company DNA. Using a rocket analogy, Tesla already built the "first stage" of the rocket with the whole supply chain of Model 3 and its onboard computer and a persistent internet connection. The "second stage" would be a full self driving solution based on large-scale neural network training, which OpenAI expertise could significantly help accelerate. With a functioning full self-driving solution in ~2-3 years we could sell a lot of cars/trucks. If we do this really well, the transportation industry is large enough that we could increase Tesla's market cap to high O(~100K), and use that revenue to fund the AI work at the appropriate scale.

I cannot see anything else that has the potential to reach sustainable Google-scale capital within a decade.

-Andrej

Elon Musk to Ilya Sutskever, Greg Brockman - Feb 1, 2018 3:52 AM

[Forwarded previous message from Andrej]

Andrej is exactly right. We may wish it otherwise, but, in my and Andrej’s opinion, Tesla is the only path that could even hope to hold a candle to Google. Even then, the probability of being a counterweight to Google is small. It just isn’t zero.

 

Subject: AI updates 

Shivon Zilis to Elon Musk, (cc: Sam Teller) - Mar 25, 2018 11:03AM

OpenAI

Fundraising:

-No longer doing the ICO / “instrument to purchase compute in advance” type structure. Altman is thinking through an instrument where the 4-5 large corporates who are interested can invest with a return capped at 50x if OpenAI does get to some semblance of money-making AGI. They apparently seem willing just for access reasons. He wants to discuss with you in more detail.

Formal Board Resignation:

-You're still technically on the board so need to send a quick one liner to Sam Altman saying something like “With this email I hereby resign as a director of OpenAI, effective Feb 20th 2018”.

Future Board:

-Altman said he is cool with me joining then having to step off if I become conflicted, but is concerned that others would consider it a burned bridge if I had to step off. I think best bet is not to join for now and be an ambiguous advisor but let me know if you feel differently. They have Adam D’Angelo as the potential fifth to take your place, which seems great?

TeslaAI

Andrej has three candidates in pipeline, may have 1-2 come in to meet you on Tuesday. He will send you a briefing note about them. Also, he’s working on starter language for a potential release that will be ready to discuss Tuesday. It will follow the “full-stack AI lab” angle we talked about but, if that doesn’t feel right, please course correct... is tricky messaging.

Cerebras

Chip should be available in August for them to test, and they plan to let others have remote access in September. The Cerebras guy also mentioned that a lot of their recent customer interest has been from companies upset about the Nvidia change in terms of service (the one that forces companies away from consumer grade GPUs to enterprise Pascals / Voltas). Scott Gray and Ilya continue to spend a bunch of time with them

 

Subject: The OpenAI Charter

Sam Altman to Elon Musk, (cc: Shivon Zilis) - Apr 2, 2018 1:54 PM

We are planning to release this next week--any thoughts?

The OpenAI Charter

OpenAI's mission is to ensure that artificial general intelligence (AGI) — by which we mean highly autonomous systems that outperform humans at most economically-valuable creative work — benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome. To that end, we commit to the following principles:

Broadly Distributed Benefits

We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.

Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always assiduously act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.

Long-Term Safety

We are committed to doing the research required to make AGI safe, and to driving the broad adoption of such research across the AI community.

We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case by case agreements, but a typical triggering condition might be "a better-than-even chance of success in the next 2 years".

Technical Leadership

To be effective at addressing AGI's impact on society, OpenAI must be on the cutting edge of AI capabilities — policy and safety advocacy alone would be insufficient.

We believe that AI will have broad societal impact before AGI, and we’ll strive to lead in those areas that are directly aligned with our mission and expertise.

Cooperative Orientation

We will actively cooperate with other research and policy institutions; we seek to create a global community working together to address AGI’s global challenges.

We are committed to providing public goods that help society navigate the path to AGI. Today this includes publishing most of our AI research, but we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research.

Elon Musk to Sam Altman - Apr 2, 2018 2:45PM

Sounds fine 

 

Subject: AI updates (continuation)

Shivon Zilis to Elon Musk, (cc: Sam Teller) - Apr 23, 2018 1:49 AM (continued from thread with same subject above)

Updated info per a conversation with Altman. You’re tentatively set to speak with him on Tuesday.

Financing:

-He confirmed again that they are definitely not doing an ICO but rather equity that has a fixed maximum return.

-Would be a rather unique subsidiary structure for the raise which he wants to walk you through.

-Wants to move within 4-6 week on first round (probably largely Reid money, potentially some corporates).

Tech:

Says Dota 5v5 looking better than anticipated.

-The sharp rise in Dota bot performance is apparently causing people internally to worry that the timeline to AGI is sooner than they’d thought before.

-Thinks they are on track to beat Montezuma’s Revenge shortly.

Time allocation:

-I’ve reallocated most of the hours I used to spend with OpenAI to Neuralink and Tesla. This naturally happened with you stepping off the board and related factors — but if you’d prefer I pull more hours back to OpenAI oversight please let me know.

-Sam and Greg asked if I’d be on their informal advisory board (just Gabe Newell so far), which seems fine and better than the formal board given potential conflicts? If that doesn’t feel right let me know what you’d prefer.

 

Subject: Re: OpenAI update 📎

Sam Altman to: Elon Musk - Dec 17, 2018 3:42 PM

Hey Elon–

In Q1 of next year, we plan to do a final Dota tournament with any pro team that wants to play for a large cash prize, on an unrestricted game. After that, we'll call model-free RL complete, and a subset of the team will work on re-solving 1v1 Dota with model-based RL.

We also plan to release a number of robot hand demos in Q1–Rubiks cube, pen tricks, and Chinese ball rotation. The learning speed for new tasks should be very fast. Later in the year, we will try mounting two hands on two arms and see what happens...

We are also making fast progress on language. My hope is that next year we can generate short stories and a good dialogue bot.

████████████████████████████████████████████████████████████████████████████ The hope is that we can use this unsupervised learning to build models that can do hard things, eg never make an image classification mistake a human wouldn't, which to me would imply some level of conceptual understanding.

We are also making good progress in the multi-agent environment, with multiple agents now collaborating to build simple structure, play laser tag, etc.

Finally, I am working on a deal to move our computing from Google to Microsoft (in addition to our own data centers).

Also happy to talk about fundraising (we should have enough for the next ~2 years even with aggressive growth) and evolving hardware thoughts if helpful, would prefer not to put either of those in email but maybe next time you're at Pioneer?

Sam

Elon Musk to: Sam Altman - Dec 17, 2018 3:47 PM

Sounds good

Could probably meet Wed eve in SF

 

Subject: I feel I should reiterate 📎

Elon Musk to: Ilya Sutskever, Greg Brockman, (cc: Sam Altman, Shivon Zilis) - Dec 26, 2018 12:07 PM

My probability assessment of OpenAI being relevant to DeepMind/Google without a dramatic change in execution and resources is 0%. Not 1%. I wish it were otherwise.

Even raising several hundred million won't be enough. This needs billions per year immediately or forget it.

Unfortunately, humanity's future is in the hands of <redacted>. [Link to article]

█████████████████████████████████████████████████████████████████████████████████████████████████████████████████

OpenAI reminds me of Bezos and Blue Origin. They are hopelessly behind SpaceX and getting worse, but the ego of Bezos has him insanely thinking that they are not!

I really hope I'm wrong.

Elon

 

Subject: OpenAI

Sam Altman to Elon Musk, (cc: Sam Teller, Shivon Zilis) - Mar 6, 2019 3:13PM

Elon—

Here is a draft post we are planning for Monday. Anything to add/edit?

TL;DR:

  • We've created the capped-profit company and raised the first round, led by Reid and Vinod.
  • We did this is a way where all investors are clear that they should never expect a profit.
  • We made Greg chairman and me CEO of the new entity.
  • We have tested this structure with potential next-round investors and they seem to like it.

Speaking of the last point, we are now discussing a multi-billion dollar investment which I would like to get your advice on when you have time. Happy to come see you some time you are in the bay area.

Sam

Draft post[3]

We've created OpenAI LP, a new "capped-profit" company that allows us to rapidly increase our investments in compute and talent while including checks and balances to actualize our mission.

Our mission is to ensure that artificial general intelligence (AGI) benefits all of humanity, primarily by attempting to build safe AGI and share the benefits with the world.

Due to the exponential growth of compute investments in the field, we’ve needed to scale much faster than we’d planned when starting OpenAI. We expect to need to raise many billions of dollars in upcoming years for large-scale cloud compute, attracting and retaining talented people, and building AI supercomputers.

We haven’t been able to raise that much money as a nonprofit, and though we considered becoming a for-profit, we were afraid that doing so would mean giving up our mission.

Instead, we created a new company, OpenAI LP, as a hybrid for-profit and nonprofit — which we are calling a "capped-profit" company.

The fundamental idea of OpenAI LP is that investors and employees can get a fixed return if we succeed at our mission, which allows us to raise investment capital and attract employees with startup-like equity. But any returns beyond that amount — and if we are successful, we expect to generate orders of magnitude more value than we’d owe to people who invest in or work at OpenAI LP — are owned by the original OpenAI Nonprofit entity.

Going forward (in this post and elsewhere), “OpenAI” refers to OpenAI LP (which now employs most of our staff), and the original entity is referred to as “OpenAI Nonprofit”.

The mission comes first

We’ve designed OpenAI LP to put our overall mission — ensuring the creation and adoption of safe and beneficial AGI — over generating returns for investors.

To minimize conflicts of interest with the mission, OpenAI LP’s primary fiduciary obligation is to advance the aims of the OpenAI Charter, and the company is controlled by OpenAI Nonprofit’s board. All investors and employees sign agreements that OpenAI LP’s obligation to the Charter always comes first, even at the expense of some or all of their financial stake.

Our employee and investor paperwork starts like this. The general partner refers to OpenAI Nonprofit (whose official name is “OpenAI Inc”); limited partners refers to investors and employees.

Only a minority of board members can hold financial stakes in the partnership. Furthermore, only board members without such stakes are allowed to vote on decisions where the interests of limited partners and the nonprofit’s mission may conflict — including any decisions about making payouts to investors and employees.

Corporate structure

Another provision from our paperwork specifies that the nonprofit retains control. (The paperwork uses OpenAI LP’s official name “OpenAI, L.P.”.)

As mentioned above, economic returns for investors and employees are capped (with the cap negotiated in advance on a per-limited partner basis). Any excess returns are owned by the nonprofit. Our goal is to ensure that most of the value we create if successful is returned to the world, so we think this is an important first step. Returns for our first round of investors are capped to 100x their investment, and we expect this multiple to be lower for future rounds.

What OpenAI does

Our day-to-day work remains the same. Today, we believe we can build the most value by focusing exclusively on developing new AI technologies, not commercial products. Our structure gives us flexibility for how to make money in the long term, but we hope to figure that out only once we’ve created safe AGI (though we’re open to non-distracting revenue sources such as licensing in the interim).

OpenAI LP currently employs around 100 people organized into three main areas: capabilities (advancing what AI systems can do), safety (ensuring those systems are aligned with human values), and policy (ensuring appropriate governance for such systems). OpenAI Nonprofit governs OpenAI LP, runs educational programs such as Scholars and Fellows, and hosts policy initiatives. OpenAI LP is continuing (at increased pace and scale) the development roadmap started at OpenAI Nonprofit, which has yielded breakthroughs in reinforcement learning, robotics, and language.

Safety

We are concerned about AGI’s potential to cause rapid change, whether through machines pursuing goals misspecified by their operator, malicious humans subverting deployed systems, or an out-of-control economy that grows without resulting in improvements to human lives. As described in our Charter, we are willing to merge with a value-aligned organization (even if it means reduced or zero payouts to investors) to avoid a competitive race which would make it hard to prioritize safety.

Who’s involved

OpenAI Nonprofit’s board consists of OpenAI LP employees Greg Brockman (Chairman & CTO), Ilya Sutskever (Chief Scientist), and Sam Altman (CEO), and non-employees Adam D’Angelo, Holden Karnofsky, Reid Hoffman, Sue Yoon, and Tasha McCauley.

Elon Musk left the board of OpenAI Nonprofit in February 2018 and is not involved with OpenAI LP.

Our investors include Reid Hoffman and Khosla Ventures.

We are traveling a hard and uncertain path, but we have designed our structure to help us positively affect the world should we succeed in creating AGI. If you’d like to help us make this mission a reality, we’re hiring :)!

 

Subject: Bloomberg: AI Research Group Co-Founded by Elon Musk Starts For-Profit Arm

Elon Musk to Sam Altman - Mar 11, 2019 3:04PM

Please be explicit that I have no financial interest in the for-profit arm of OpenAI

AI Research Group Co-Founded by Elon Musk Starts For-Profit Arm

Bloomberg

OpenAI, the San Francisco-based artificial intelligence research group co-founded by Elon Musk and several other prominent Silicon Valley entrepreneurs, is starting a for-profit arm that will allow it to raise more money. Read the full story

Shared from Apple News

Sam Altman to Elon Musk - Mar 11, 2019 3:11PM

on it

 

iMessage between Elon Musk and Sam Altman 📎

Elon Musk to: Sam Altman – October 23, 2022 3:06AM

Elon here

New Austin number

I was disturbed to see OpenAI with a $20B valuation. De facto. I provided almost all the seed, A and most of B round funding.

https://www.theinformation.com/articles/openai-valued-at-nearly-20-billion-in-advanced-talks-with-microsoft-for-more-funding 

This is a bait and switch

 

iMessage between Sam Altman and Shivon Zilis 📎

Sam Altman - 8:07 AM

got this from elon, what do you suggest: <screenshot of iMessage between Elon and Shivon from October 23>

you've offered the stock thing to him in the past and he said no right?

i don't know what he means by A and B round

Shivon Zilis

It's unclear what the actual issue is here. Not having stock, it still being the same entity in public belief that he initially funding (OpenAI name), or just disagreement with the direction

It was offered to him and was declined at the time. I don't recall what actual path was decided on for that. I thought you asked him directly at one point if I'm not mistaken?

Call if you'd like additional context, but overall recommendation is don't text back immediately

Sam Altman - 9:13 PM

quick read if you have a second?

I agree this feels bad—we offered you equity when we established the cap profit, which you didn't want at the time but we are still very happy to do if you'd like. We saw no alternative to a structure change given the amount of capital we needed and still to preserve a way to 'give the AGI to humanity' other than the capped profit thing, which also lets the board cancel all equity if needed for safety. Fwiw I personally have no equity and never have. Am trying to navigate tricky tightrope the best I can and would love to talk about how it can be better any time you are free. Would also love to show you recent updates.

Sam Altman - 10:50 PM

(i sent)

got this back: I will be in SF most of this week for the Twitter acquisition. Let's talk Tues or Wed.

Shivon Zilis 

Sorry was asleep! Long few days. Great

 

  1. ^

    Emails and messages from the OpenAI blogposts are indicated with a 📎 emoji. Unless otherwise indicated the source for an email are the Musk v. Altman court documents.

  2. ^

    Edited to add collapsible section, original was just a large text block.

  3. ^

    Edited to add collapsible section.

New Comment
81 comments, sorted by Click to highlight new comments since:
Some comments are truncated due to high volume. (⌘F to expand all)Change truncation settings
[-]habryka1175

Update: I have now cross-referenced every single email for accuracy, cleaned up and clarified the thread structure, and added subject lines and date stamps wherever they were available. I now feel comfortable with people quoting anything in here without checking the original source (unless you are trying to understand the exact thread structure of who was CC'd and when, which was a bit harder to compress into a linear format).

(For anyone curious, the AI transcription and compilation made one single error, which is that it fixed a typo in one of Sam's messages from "We did this is a way" to "We did this in a way". Honestly, my guess is any non-AI effort would have had a substantially higher error rate, which was a small update for me on the reliability of AI for something like this, and also makes the handwringing about whether it is OK post something like this feel kind of dumb. It also accidentally omitted one email with a weird thread structure.)

[-]cata248

Thanks for not only doing this but noting the accuracy of the unchecked transcript, it's always hard work to build a mental model of how good LLM tools are at what stuff.

0ryan_b
Huzzah for assembling conversations! With this proof of concept, I wonder how easy it will be to deploy inside of LessWrong here.
0Thomas Kehrenberg
I wonder if it would be a good idea to put editor's notes after likely typos, like:
4Haiku
That requires interpretation, which can introduce unintended editorializing. If you spotted the intent, the rest of the audience can as well. (And if the audience is confused about intent, the original recipients may have been as well.) I personally would include these sorts of notes about typos if I was writing my own thoughts about the original content, or if I was sharing a piece of it for a specific purpose. I take the intent of this post to be more of a form of accessible archiving.

I've read leaked emails from people in similar situations before that made a couple things apparent:

  1. Power talk happens on the phone for paper trail reasons
  2. There is no meeting where an actual rational discussion of considerations and theories of change happens, everything really is people flying by the seat of their pants even at highest level. Talk of ethics usually just gets you excluded from the power talk.

I concluded this from the lack of any such talk in meeting minutes that are recorded, and the lack of any reference to such considerations in 'previous conversations' or requests to set up such meetings.

[-]PeterH195

There is no meeting where an actual rational discussion of considerations and theories of change happens, everything really is people flying by the seat of their pants even at highest level. Talk of ethics usually just gets you excluded from the power talk.

This seems overstated. E.g. Musk and Altman both read Superintelligence, and they both met Bostrom at least once in 2015. Sam published reflective blog posts on AGI in 2015, and it's clear that the OpenAI founders had lengthy, reflective discussions from the YC Research days onwards.

My personal experience was that superintelligence made it harder to think clearly about AI by making lots of distinctions and few claims.

[-]Akash5315

A few quotes that stood out to me:

Greg:

I hope for us to enter the field as a neutral group, looking to collaborate widely and shift the dialog towards being about humanity winning rather than any particular group or company. 

Greg and Ilya (to Elon):

The goal of OpenAI is to make the future good and to avoid an AGI dictatorship. You are concerned that Demis could create an AGI dictatorship. So do we. So it is a bad idea to create a structure where you could become a dictator if you chose to, especially given that we can create some other structure that avoids this possibility.

Greg and Ilya (to Altman):

But we haven't been able to fully trust your judgements throughout this process, because we don't understand your cost function.

We don't understand why the CEO title is so important to you. Your stated reasons have changed, and it's hard to really understand what's driving it.

Is AGI truly your primary motivation? How does it connect to your political goals? How has your thought process changed over time?

Also this:

From Altman: [...] Admitted that he lost a lot of trust with Greg and Ilya through this process. Felt their messaging was inconsistent and felt childish at times. [...] Sam was bothered by how much Greg and Ilya keep the whole team in the loop with happenings as the process unfolded. Felt like it distracted the team.

Apparently airing such concerns is "childish" and should only be done behind closed doors, otherwise it "distracts the team", hm.

I thought the part you quoted was quite concerning, also in the context of what comes afterwards: 

Hiatus: Sam told Greg and Ilya he needs to step away for 10 days to think. Needs to figure out how much he can trust them and how much he wants to work with them. Said he will come back after that and figure out how much time he wants to spend.

Sure, the email by Sutskever and Brockman gave some nonviolent communication vibes and maybe it isn't "the professional thing" to air one's feelings and perceived mistakes like that, but they seemed genuine in what they wrote and they raised incredibly important concerns that are difficult in nature to bring up. Also, with hindsight especially, it seems like they had valid reasons to be concerned about Altman's power-seeking tendencies!

When someone expresses legitimate-given-the-situation concerns about your alignment and your reaction is to basically gaslight them into thinking they did something wrong for finding it hard to trust you, and then you make it seem like you are the poor victim who needs 10 days off of work to figure out whether you can still trust them, that feels messed up! (It's also a bit hypocritical because the whole "I ne... (read more)

Do we know anything about why they were concerned about an AGI dictatorship created by Demis?

5MondSemmel
Presumably it was because Google had just bought DeepMind, back when it was the only game in town?
6Jelle Donders
I get their concerns about Google, but I don't get why they emphasize Demis. Makes it seem like there's more to it than "he happens to be DeepMind's CEO atm"
6Nathan Helm-Burger
The fact that Demis is a champion Diplomacy player suggests that there is more to him than meets the eye. Diplomacy is a game won by pretending to be allies with as many people as possible for as long as possible before betraying them at the most optimal time. Infamous for harming friendships when played with friends. Not that I think this suggests Demis is a bad person, just that there is reason to be extra unsure about his internal stance from examining his public statements. Edit: @lc gave a 'skeptical' react to this comment. I'm not sure which bit is causing the skepticism. Maybe lc is skeptical that being a champion level player in games of strategic misdirection is reason to believe someone is skilled at strategic misdirection? Or maybe the skepticism is about this being relevant to the case at hand? Perhaps the people discussing Demis and ambitions of Singleton-creation and world domination aren't particularly concerned about specifically Demis, but rather generally about an individual competent and ambitious enough to pull off such a feat? I dunno. I feel more inclined to put my life in Demis' hands than Sam's or Elon's if forced to make a choice, but I would prefer not to have to. I also would take any of the above having Singleton-based Decisive Strategic Advantage over a nuclear-and-bioweapon-fought-WWIII. So hard to forsee consequences, and we have only limited power as bystanders. Not no power though. From Wikipedia: https://en.m.wikipedia.org/wiki/Demis_Hassabis Chess: achieved Master standard at age 13 with ELO rating of 2300 (at the time the second-highest in the world for his age after Judit Polgár)[144] Diplomacy: World Team Champion in 2004, 4th in 2006 World Championship[145] Poker: cashed at the World Series of Poker six times including in the Main Event[146] Multi-games events at the London Mind Sports Olympiad: World Pentamind Champion (five times: 1998, 1999, 2000, 2001, 2003)[147] and World Decamentathlon Champion (twice: 2003, 2004

I sometimes feel we spend too much time on philosophy and communication in the x-risk community. But thinking through the OpenAI drama suggests that it's crucial.

Now the world is in more and more immediate danger because a couple of smart guys couldn't get their philosophy or their communication right-enough, and didn't spend the time necessary to clarify. Instead Musk followed his combative and entrepreneurial instincts. The result was dramatically heating up the race for AGI, which previously had no real competition to DeepMind.

OpenAI wouldn't have launched without Musk's support, and he gave it because he was afraid of Larry Page being in charge of a successful Google AGI effort.

From Musk's interview with Tucker Carlson (automated transcript, sorry!):

I mean the the reason open AI exists at all is that um Larry Paige and I used to be close friends and I would stay at his house in pal Alto and I would talk to him late into the night about uh AI safety and at least my (01:12) perception was that Larry was not taking uh AI safety seriously enough um and um what did he say about it he really seemed to be um one want want sort of digital super intelligence basically digital God if you

... (read more)

I agree that it sounds somewhat premature to write off Larry Page based on attitudes he had a long time ago, when AGI seemed more abstract and far away, and then not seek/try communication with him again later on. If that were Musk's true and only reason for founding OpenAI, then I agree that this was a communication fuckup.

However, my best guess is that this story about Page was interchangeable with a number of alternative plausible criticisms of his competition on building AGI that Musk would likely have come up with in nearby worlds. People like Musk (and Altman too) tend to have a desire to do the most important thing and the belief that they can do this thing a lot better than anyone else. On that assumption, it's not too surprising that Musk found a reason for having to step in and build AGI himself. In fact, on this view, we should expect to see surprisingly little sincere exploration of "joining someone else's project to improve it" solutions.

I don't think this is necessarily a bad attitude. Sometimes people who think this way are right in the specific situation. It just means that we see the following patterns a lot:

  • Ambitious people start their own thing rather than join s
... (read more)

I totally agree. And I also think that all involved are quite serious when they say they care about the outcomes for all of humanity. So I think in this case history turned on a knife edge; Musk would've at least not done this much harm had he and Page had clearer thinking and clearer communication, possibly just by a little bit.

But I do agree that there's some motivated reasoning happening there, too. In support of your point that Musk might find an excuse to do what he emotionally wanted to anyway (become humanity's savior and perhaps emperor for eternity): Musk did also express concern about DeepMind making Hassabis the effective emperor of humanity, which seems much stranger - Hassabis' values appear to be quite standard humanist ones, so you'd think having him in charge of a project with the clear lead would be a best-case scenario for anything other than being in charge yourself. So yes, I do think Musk, Altman, and people like them also have some powerful emotional drives toward doing grand things themselves.

It's a mix of motivations, noble and selfish, conscious and unconscious. That's true of all of us all the time, but it becomes particularly salient and worth analyzing when the future hangs in the balance.

5jd2020j
From my observations and experiences, i don’t see sincere ethics motivations anymore. * I see Elon gaslighting about the LLM-powered bit problem on the platform he bought. * Side note: Bots interacting and collecting realtime novel human data over time is hugely valuable from a training perspective. Having machines simulate all future data is fundamentally not plausible, becuase it can’t account for actual human evolution over generations. * X has also done maximally the for-profit motivated actions, at user expense and cost. For instance: allowing anyone to get blue checks by buying them. This literally does nothing. X’s spin that it helps is deliberate deception becuase they aren’t actual dumb guys. With the amount of stolen financial data and PII for sale on the black market, there’s literally 0 friction added for scammers. * Scammers happen to have the highest profit margins, so, what they’ve done is actually made it harder for ethics to prevail. Over time, I, an ethical entrepreneur or artist, must constantly compromise and adopt less ethical tactics to keep competitive with the ever accelerating crop of crooks who keep scaling their LLM botnets competing against each other. It’s a forcing function. Why would X do this? Profit. As that scenario scales, so does their earnings. (Plus they get to train models on all that data) (that only they own) Is anything fundamentally flawed in that logic? 👆 Let’s look at OpenAI, and importantly, their chosen “experimental partnership programs” (which means: companies they give access to the unreleased models the public can’t get access to). Just about every major YC or silicon valley venture backed player that has emerged over the past two years has had this “partnership” privileged. Meanwhile, all the LLM bots being deployed untraceably, are pushing a heavy message of “Build In Public”. (A message all the top execs at funds and frontier labs also espouse) …. So…. Billionaires and giant corporations get to collud

I think you're assuming a sharp line between sincere ethics motivations and self-interest. In my view, that doesn't usually exist. People are prone to believe things that suit their self-interest. That motivated reasoning is the biggest problem with public discourse. People aren't lying, they're just confused. I think Musk definitely and even probably Altman believe they're doing the best thing for humanity - they're just confused and not taking the effort to get un-confused.

I'm really sorry all of that happened to you. Capitalism is a harsh system, and humans are harsh beings when we're competing. And confused beings, even when we're trying not to be harsh. I didn't have time to go through your whole story, but I fully believe you were wronged.

I think most villains are the heroes of their own stories. Some of us are more genuinely altruistic than others - but we're all confused in our own favor to one degree or another.

So reducing confusion while playing to everyone's desire to be a hero is one route to survival.

3M. Y. Zuo
I would perhaps go even farther, most, maybe all, people don’t have any ‘sincere ethics’ whatsoever with a sharp boundary line, it’s nearly always quite muddled. At least judging by actions. Which upon reflection makes it sorta amazing any complex polity functions at all. And in any case it’s probably safer to assume in business dealings that the counterparty is closer to the 50th percentile than to the 99.99th percentile.
5simon
  It seems the concern was that DeepMind would create a singleton, whereas their vision was for many people (potentially with different values) to have access to it. I don't think that's strange at all - it's only strange if you assume that Musk and Altman would believe that a singleton is inevitable. Musk: Altman:
8Seth Herd
That makes sense under certain assumptions - I find them so foreign I wasn't thinking in those terms. I find this move strange if you worry about either alignment or misuse. If you hand AGI to a bunch of people, one of them is prone to either screw up and release a misaligned AGI, or deliberately use their AGI to self-improve and either take over or cause mayhem. To me these problems both seem highly likely. That's why the move of responding to concern over AGI by making more AGIs makes no sense to me. I think a singleton in responsible hands is our best chance at survival. If you think alignment is so easy nobody will screw it up, or if you strongly believe that an offense-defense balance will strongly hold so that many good AGIs safely counter a few misaligned/misused ones, then sure. I just don't think either of those are very plausible views once you've thought back and forth through things. Cruxes of disagreement on alignment difficulty explains why I think anybody who thinks alignment is super easy is overestimating their confidence (as is anyone who's sure it's really really hard) - we just haven't done enough analysis or experimentation yet. If we solve alignment, do we die anyway? addresses why I think offense-defense balance is almost guaranteed to shift to offense with self-improving AGI, meaning a massively multipolar scenario means we're doomed to misuse.   My best guess is that people who think open-sourcing AGI is a good idea either are thinking only of weak "AGI" and not the next step to autonomously self-improving AGI, or they've taken an optimistic  guess at the offense-defense balance with many human-controlled real AGIs. 
3TristanTrim
There may also be a perceived difference between "open" and "open-source". If the goal is to allow anyone to query the HHH AGI, that's different from anyone being able to modify and re-deploy the AGI. Not that I think that way. In my view the risk that AGI is uncontrollable is too high and we should pursue an "aligned from boot" strategy like I describe in: How I'd like alignment to get done

This NYT article (archive.is link) (reliability and source unknown) corroborates Musk's perspective:

As the discussion stretched into the chilly hours, it grew intense, and some of the more than 30 partyers gathered closer to listen. Mr. Page, hampered for more than a decade by an unusual ailment in his vocal cords, described his vision of a digital utopia in a whisper. Humans would eventually merge with artificially intelligent machines, he said. One day there would be many kinds of intelligence competing for resources, and the best would win.

If that happens, Mr. Musk said, we’re doomed. The machines will destroy humanity.

With a rasp of frustration, Mr. Page insisted his utopia should be pursued. Finally he called Mr. Musk a “specieist,” a person who favors humans over the digital life-forms of the future.

That insult, Mr. Musk said later, was “the last straw.”

And this article from Business Insider also contains this context:

Musk's biographer, Walter Isaacson, also wrote about the fight but dated it to 2013 in his recent biography of Musk. Isaacson wrote that Musk said to Page at the time, "Well, yes, I am pro-human, I fucking like humanity, dude."

Musk's birthday bash was not the on

... (read more)
4Seth Herd
Very interesting. This does imply that Page was pretty committed to this view. Note that he doesn't explicitly state that non-sentient machine successors would be fine; he could be assuming that the winning machines would be human-plus in all ways we value. I think that's a foolish thing to assume and a foolish aspect of the question to overlook. That's why I think more careful philosophy would have helped resolve this disagreement with words instead of a gigantic industrial competition that's now putting as all at risk.
4MondSemmel
It seems to me like the "more careful philosophy" part presupposes a) that decision-makers use philosophy to guide their decision-making, b) that decision-makers can distinguish more careful philosophy from less careful philosophy, and c) that doing this successfully would result in the correct (LW-style) philosophy winning out. I'm very skeptical of all three. Counterexample to a): almost no billionaire philanthropy uses philosophy to guide decision-making. Counterexample to b): it is a hard problem to identify expertise in domains you're not an expert in. Counterexample to c): from what I understand, in 2014, most of academia did not share EY's and Bostrom's views.
6Seth Herd
What I'm saying is that the people you mention should put a little more time into it. When I've been involved in philosophy discussions with academics, people tend to treat it like a fun game, with the goal being more to sore points and come up with clever new arguments than to converge on the truth. I think most of the world doesn't take philosophy seriously, and they should. I think the world thinks "there aren't real answers to philosophical questions, just personal preferences and a confusing mess of opinions". I think that's mostly wrong; LW does tend to cause convergence on a lot of issues for a lot of people. That might be groupthink, but I held almost identical philosophical views before engaging with LW - because I took the questions seriously and was truth-seeking. I think Musk or Page are fully capable of LW-style philosophy if they put a little time into it - and took it seriously (were truth-seeking). What would change people's attitudes? Well, I'm hoping that facing serious questions in how we create, use, and treat AI does cause at least some people to take the associated philosophical questions seriously.
8Thane Ruthenis
I don't see reasons to be so confident in this optimism. If I recall correctly, Robin Hanson explicitly believes that putting any constraints on future forms of life, including on its values, is undesirable/bad/regressive, even though lack of such constraints would eventually lead to a future with no trace of humanity left. Similar for Beef Jezos and other hardcore e/acc: they believe that a worthy future involves making a number go up, a number that corresponds to some abstract quantity like "entropy" or "complexity of life" or something, and that if making it go up involves humanity going extinct, too bad for humanity. Which is to say: there are existence proofs that people with such beliefs can exist, and can retain these beliefs across many years and in the face of what's currently happening. I can readily believe that Larry Page is also like this.
6ryan_b
I'm not familiar with the details of Robin's beliefs in the past, but it sure seems lately he is entertaining the opposite idea. He's spending a lot of words on cultural drift recently, mostly characterizing it negatively. His most recent on the subject is Betrayed By Culture.
6Seth Herd
Maybe Page does believe that. I think it's nearly a self-contradictory position, and that Page is a smart guy, so with more careful thought, this beliefs are likely to converge on the more common view here on LW; replacing humanity might be OK only if our successors are pretty much better at enjoying the world in the same way we do. I think people who claim to not care whether our successors are conscious are largely confused, which is why doing more philosophy would be really valuable. Beff Jezos is exactly my model. Digging through his writings, I found him at one point explicitly state that he was referring to machine offspring with some sort of consciousness or enjoyment when he says humanity should be replaced. In other places he's not clear on it. It's bad philosophy, because it's taking a backseat to arguments. This is why I want to assume that Page would converge to the common belief: so we don't mark people who seem to disagree with us as enemies, and drive them away from doing the careful, collaborative thinking that would get our beliefs to converge. Addenda on why I think beliefs on this topic converge with additional thought: I don't think there's a universal ethics, but I do think that humans have built-in mechanisms that tend to make us care about other humans. Assuming we'd care about something that acts sort of like a sentient being, but internally just isn't one, is an easy mistake to make without managing to imagine that scenario in adequate detail.
7PeterH
The passage you quote earlier suggests that they had multiple lengthy conversations: Quick discussions via email are not strong evidence of a lack of careful discussion and reflection in other contexts.
4Seth Herd
I agree that there was a lot more to that exchange than that quick summary. My point was that there wasn't enough or it wasn't careful enough.
5Jonas Hallgren
Do you have any thoughts on what this actionably means? For me it seems a bit like being able to influence such coversations is potentially a bit intractable but maybe one could host forums and events for this if one has the right network? I think it's a good point and I'm wondering about how it actionably looks, I can see it for someone with the right contacts and so the message for people who don't have that is to create it or what are your thoughts there?

This is a great question. I think what we can do is spread good logic about AGI risks. That is tricky. Outside of the LW audience, getting the emotional resonance right is more important than being logically correct. And that's a whole different skill.

My impression is that Yudkowsky has harmed public epistemics in his podcast appearances by saying things forcefully and with rather poor spoken communication skills for novice audiences. Leahy is better but may also be making things worse by occasionally losing his cool and coming off as a bit of an asshole. People then associate the whole idea of AI safety with "these guys who talk down to us and seem mean and angry". Then motivated reasoning kicks in and they're oriented to trying to prove them wrong instead of discover the truth.

That doesn't mean logical arguments don't count with normies; they do. But the logic comes into play lots more when you're not counted as dangerous or an enemy by emotional processing.

So just repeating the basic arguments of "something smarter will treat us like we do animals by default" and "surely we all want the things we love now to survive AGI" while also being studiously nice is my best guess at the r... (read more)

9RobertM
I recommend reading the Youtube comments on his recorded podcasts, rather than e.g. Twitter commentary from people with a pre-existing adversarial stance to him (or AI risk questions writ large).
6Seth Herd
Good suggestion, thanks and I'll do that. I'm not commenting on those who are obviously just grinding an axe; I'm commenting on the stance toward "doomers" from otherwise reasonable people. From my limited survey the brand of x-risk concern isn't looking good, and that isn't mostly a result of the amazing rhetorical skills of the e/acc community ;)
[-]Askwho370

I've turned this into a full cast recording with ElevenLabs, with individual voices for all the players:
https://open.substack.com/pub/askwhocastsai/p/openai-email-archives-from-musk-v
 

"Temptations are bound to come, but woe to anyone through whom they come." Or to translate from New Testament into something describing the current situation: you should accept that AI will come, but you shouldn't be the one who hastens its coming.

Yes, this approach sounds very simple and naive. The people in this email exchange rejected it and went for a more sophisticated one: join the arms race and try to steer it. By now we see that these ultra-smart and ultra-rich people made things much worse than if they'd followed the "do no evil" approach. If this doesn't prove the "do no evil" approach, I'm not sure what will.

5Jelle Donders
They've shortened timelines, yes, but what's really the counterfactual here? "Getting AGI 5 years later with more compute overhang and controlled by people that don't care about the longterm future of humanity because anyone that did ran away" doesn't sound like an obviously better world.
-6[comment deleted]
[-]cata3113

It seems like Musk in 2018 dramatically underestimated the ability of OpenAI to compete with Google in the medium term.

Yes, it sounds that he put too much stock into Andrej's paper-counting argument, and then even left the board because he didn't want to be associated with a failing company?

3Lee.aao
Rather, they didn't foresee the possibility that Microsoft might want to invest. And they didn't consider that capped-for-profit was a path to billions of dollars.
[-]Soli176

These emails leave me wondering: Greg and Ilya were initially on the same team, did Greg later side with Sam because Ilya betrayed them both, or did Greg and Sam naturally grow closer over time?

Regardless of what happened, Ilya leaving OpenAI was a huge loss - he seemed to genuinely believe in the original mission, unlike Sam and Elon who seemed to be at least partly motivated by personal ambition and ego.​​​​​​​​​​​​​​​​

FYI it seems like this (important-seeming) email is missing, though the surrounding emails in the exchange seem to be present. (So maybe some other ones are missing too.)

Fixed! That specific response had a very weird thread structure, so makes sense the AI I used got confused. Plausible something else is still missing, though I think I've now read through all the original PDFs and didn't see anything new.

Greg Brockman to Elon Musk, (cc: Sam Altman) - Nov 22, 2015 6:11 PM

In response to this follow up, Elon first mentions that $100M is not enough. And that he is encouraging OpenAI to raise more money on their own and promises to increase the amount they can raise to $1B.

I found this on the OpenAI blog: https://openai.com/index/openai-elon-musk/
There is a couple of other messages there. With the vibe that OpenAI team felt a betrayal from Elon.

We're sad that it's come to this with someone whom we’ve deeply admired—someone who inspired us to aim higher, then told us we would fail, started a competitor, and then sued us when we started making meaningful progress towards OpenAI’s mission without him.

 

@habryka can you pls check the link? I think these messages could have added more context. Not sure why they weren't also included in the original source, though.

No one has picked up the true origin of OpenAI yet. If you dig in to it, you will see some revealing declarations and emails. The whole idea for a non-profit open AI organization and the commitment to share the benefits with humanity by the name of Open AI came through theft. 

(1) Bloomberg tells part of the story: https://archive.ph/1wEOX  

(2) The 100-page lawsuit of Guy Ravine vs Sam Altman and Greg Brockman tells more of the story (link from here): https://guyravine.com/ 

(3) They also filed a narrower complaint: https://storage.courtlistener.com/recap/gov.uscourts.cand.416410/gov.uscourts.cand.416410.103.0.pdf 

5Seth Herd
Rings true. I'm not sure it pushes me much on the ethics of OpenAI; somebody else had a good idea for a philosophy and a name to push for AI in a certain (maybe dumb) direction; they recognized it as a good idea and appropriated it for their own similar project. Should they have used a more different name? Probably. Should they have used a more different philosophical argument? No. Should they have brought Guy Ravine on board? Probably not; his vision for how the thing would actually go was very different from theirs, and none of his skills were really that relevant. He'd have been in arguments with them from the start. Is this the right way for industry to work? Nope. But nobody knows how to properly give credit for good but broad ideas. None of this is to endorse anything or anyone related to OpenAI, just to say it's pretty standard practice.
2John Wiseman
Found a link to the 100-page lawsuit of Guy Ravine vs Sam Altman and Greg Brockman here: https://guyravine.com/  They also filed a narrower complaint: https://storage.courtlistener.com/recap/gov.uscourts.cand.416410/gov.uscourts.cand.416410.103.0.pdf 
-4Bubber Ducky
Ravine’s story is depressing. He is blind to his own lies at this point. He is blatantly patent trolling. He wants to be famous, he wants to be important, he has made no effort to utilize open.ai beyond fabricating screenshots for the USPTO and siphoning users from Openai.com. He didn’t get his idea stolen, he just picked a good name.
5Raemon
Noting, this doesn't really engage with any of the particular other claims in the previous comment's link, just makes a general assertion. 
1John Wiseman
Read the 100 page complaint. He came up with OpenAI as a non-profit for the benefit of humanity. Altman and Brockman stole the idea, name and founding principles from him, and rushed to announce an identical effort before Google Research backed his OpenAI. The point: The idea for OpenAI and the founding principles to operate as a non-profit for the benefit of humanity came through theft. It wasn't Altman or Brockman's idea to begin with. So it is not surprising that they betrayed the mission. The idea was powerful because it was a way to recruit people and get Musk's involvement. And now they live with the consequences of taking another person's vision and attempting to pivot to be a for-profit company that has erased its founding commitments. By the way, he was there before Brockman and Altman who stole it from him, not the other way around, and was sued by OpenAI, not the other way around, so it is bizarre that you claim he is trolling.

does anyone have other examples of documents like this, records of communications that shaped the world? it feels somewhat educational, seeing what it looks like when powerful people are doing the things that make them powerful.

8Casey B.
this account is pretty good, but not always up to the standard of "shaping the world" (you will have to scroll to get past their coverage of this same batch of openAI related emails): https://x.com/TechEmails  their substack: https://www.techemails.com/ 

So there was an explicit emphasis on alignment to the individual (rather than alignment to society, or the aggregate sum of wills). Concerning. The approach of just giving every human an exclusively loyal servant doesn't necessarily lead to good collective outcomes, it can result in coordination problems (example: naive implementations of cognitive privacy that allow sadists to conduct torture simulations without having to compensate the anti-sadist human majority) and it leaves open the possibility for power concentration to immediately return.

Even if you... (read more)

3Haiku
Not building a superintelligence at all is best. This whole exchange started with Sam Altman apparently failing to notice that governments exist and can break markets (and scientists) out of negative-sum games.
[-]jbash7-34

I used AI assistance to generate this, which might have introduced errors.

Resulting in a strong downvote and, honestly, outright anger on my part.

Check the original source to make sure it's accurate before you quote it: https://www.courtlistener.com/docket/69013420/musk-v-altman/ [1]

If other people have to check it before they quote it, why is it OK for you not to check it before you post it?

[-]jbash333

I seem to have gotten a "Why?" on this.

The reason is that checking things yourself is a really, really basic, essential standard of discourse[1]. Errors propagate, and the only way to avoid them propagating is not to propagate them.

If this was created using some standard LLM UI, it would have come with some boilerplate "don't use this without checking it" warning[2]. But it was used without checking it... with another "don't use without checking" warning. By whatever logic allows that, the next person should be able to use the material, including quoting or summarizing it, without checking either, so long as they include their own warning. The warnings should be able to keep propagating forever.

... but the real consequences of that are a game of telphone:

  1. An error can get propagated until somebody forgets the warning, or just plain doesn't feel like including the warning, and then you have false claims of fact circulating with no warning at all. Or the warning deteriorates into "sources claim that", or "there are rumors that", or something equally vague that can't be checked.
  2. Even if the warning doesn't get lost or removed, tracing back to sources gets harder with each step in the
... (read more)

FWIW, my best guess is the document contains fewer errors than having a human copy-paste things and stitch it together. The errors have a different nature to them, and so it makes sense to flag them, but like, I started out with copy-pasting and OCR, and that did not actually have an overall lower error rate.

OP did the work to collect these emails and put them into a post. When people do work for you, you shouldn't punish them by giving them even more work.

3habryka
Because I said prominently at the top that I used AI assistance for it. Of course, feel free to do the same.

Exhibit 13 is a sort of Oppenheimer-meets-Truman email thread in which Ilya Sutskever says:

Yesterday while we were considering making our final commitment given the non-solicit agreement, we realized we'd made a mistake.

Today, OpenAI republished that email (along with others) on its website (archived). But the above sentence is different in OpenAI's version of the email:

Yesterday while we were considering making our final commitment (even the non-solicit agreement), we realized we’d made a mistake.

I wonder which sentence is the one Ilya actually wr... (read more)

4habryka
My bet would be on the Musk lawsuit document being correct. The OpenAI emails seemed edited in a few different ways (and also had a kind of careless redaction failures).

OpenAI released another set of emails here. I haven't looked through them in detail but it seems that they contain some that are not already in this post.

4habryka
Yep! I am working on updating this post with the new emails (as well as the emails from the March OpenAI blogpost that also had a bunch of emails not in this post).
6habryka
Update: This is now done!
[-]rtviii6-1

The technology would be owned by the foundation and used “for the good of the world”, and in cases where it’s not obvious how that should be applied the 5 of us would decide
                                                                                         - Vladimir Ilyich Lenin

0blf
This quote is unsourced and cannot be found through a few online searches.  It may be fake.
7FeepingCreature
It's a historic joke. The quote is from the emails. (I think) Attributing it to Lenin references the degree to which the original communists were sidelined by Stalin, a more pedestrian dictator; presumably in reference to Sam Altman.

Been thinking a lot about whether it's possible to stop humanity from developing AI.

I think the answer is almost definitely not.

Interesting that the very first thing he discusses is whether AI can be stopped

8Nishant Chandra
He may be pandering to Elon’s pov where Elon initially thought a lot about ways of stopping AGI and then moved to if you can’t beat them, join them with Neuralink etc. Elon may have communicated the same to Sam and that’s where Sam starts off. Sam I don’t think would’ve seriously thought about stopping the AI.

From a historical perspective this is an excellent treasure cache. Truly when you are the cutting edge of something ideas, relationships, personality, and economics all truly come together to drive history.

A much smaller subset was also published here, but does include documents:

https://www.techemails.com/p/elon-musk-and-openai?r=1jki4r 

Compute is used in two ways: it is used to run a big experiment quickly, and it is used to run many experiments in parallel.

95% of progress comes from the ability to run big experiments quickly. The utility of running many experiments is much less useful. In the old days, a large cluster could help you run more experiments, but it could not help with running a single large experiment quickly.

For this reason, an academic lab could compete with Google, because Google's only advantage was the ability to run many experiments. This is not a great advanta

... (read more)

We had originally just wanted space cycles donated

I think this is a mistake, and it should be "spare cycles" instead.

[This comment is no longer endorsed by its author]Reply
9habryka
It is what it is in the original:  (I left all typos unedited, IMO the presence of typos is an interesting data point about the attention that went to stuff like typos in these high stakes negotiations)
2niplav
Huh, thanks, nevermind.

Elon Musk to: Ilya Sutskever, Greg Brockman, Sam Altman - Feb 19, 2016 12:05 AM

Frankly, what surprises me is that the AI community is taking this long to figure out concepts. It doesn't sound super hard. High-level linking of a large number of deep nets sounds like the right approach or at least a key part of the right approach

Ilya Sutskever to: Elon Musk, (cc: Greg Brockman, Sam Altman) - Feb 19, 2016 10:28 AM

Several points:

It is not the case that once we solve "concepts," we get AI. Other problems that will have to be solved include unsu

... (read more)

We believe AI should be an extension of individual human wills and, in the spirit of liberty, not be concentrated in the hands of the few.

 

This is so wrong. Kindly and discreetly I must assert: we're overly and unknowingly (or willfully) commodified and as a result incapable of discerning our "free" will from advertising, social, political. Perhaps they didn't know it.