This is a linkpost for https://ssi.inc/

[copy of the whole text of the announcement on ssi.inc, not an endorsement]

Safe Superintelligence Inc.

Superintelligence is within reach.

Building safe superintelligence (SSI) is the most important technical problem of our time.

We have started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence.

It’s called Safe Superintelligence Inc.

SSI is our mission, our name, and our entire product roadmap, because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI.

We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.

This way, we can scale in peace.

Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.

We are an American company with offices in Palo Alto and Tel Aviv, where we have deep roots and the ability to recruit top technical talent.

We are assembling a lean, cracked team of the world’s best engineers and researchers dedicated to focusing on SSI and nothing else.

If that’s you, we offer an opportunity to do your life’s work and help solve the most important technical challenge of our age.

Now is the time. Join us.

Ilya Sutskever, Daniel Gross, Daniel Levy

June 19, 2024

William_S

If anyone says "We plan to advance capabilities as fast as possible while making sure our safety always remains ahead." you should really ask for the details of what this means, how to measure whether safety is ahead. (E.g. is it "we did the bare minimum to make this product tolerable to society" vs. "we realize how hard superalignment will be and will be investing enough to have independent experts agree we have a 90% chance of being able to solve superalignment before we build something dangerous")

I think what he means is "try to be less unsafe than OpenAI while beating those bastards to ASI".

Come on now, there is nothing to worry about here. They are just going to "move fast and break things"...

I don't trust Ilya Sutskever to be the final arbiter of whether a Superintelligent AI design is safe and aligned. We shouldn't trust any individual, especially if they are the ones building such a system, to claim that they've figured out how to make it safe and aligned. At minimum, there should be a plan that passes review by a panel of independent technical experts. And most of this plan should be in place and reviewed before you build the dangerous system.

I don't trust Ilya Sutskever to be the final arbiter of whether a Superintelligent AI design is safe and aligned. We shouldn't trust any individual,

I'm not sure how I feel about the whole idea of this endeavour in the abstract. But as someone who doesn't know Ilya Sutskever and has only followed the public stuff, I'm pretty worried that he in particular is running it, especially if decision-making happens at the individual level, but even if it doesn't. Running this safely will likely require a lot of moral integrity and courage. The board drama made it look to me like Ilya disqualified himself from having enough of that.

Lightly held, because I don't know the details. But just from the public stuff I've seen, I don't see why I should believe that Ilya has sufficient moral integrity and courage for this project, even if he might "mean well" at the moment.

I do hope he will continue to contribute to the field of alignment research.

I am deeply curious who is funding this, considering that there will explicitly be no intermediate product. Only true believers with mind-boggling sums of money to throw around would invest in a company with no revenue source. Could it be Thiel? Who else is doing this in the AI space? I hope to see journalists exploring the matter.

Thiel has historically expressed disbelief about AI doom, and has been more focused on trying to prevent civilizational decline. From my perspective, he is more likely to fund an organization founded by people with accelerationist credentials than one founded by someone who took part in a failed coup attempt that, to him, would look like it was driven by a sincere belief that the alignment problem is extremely difficult.

I'd look for funds or VCs that are involved with Israel's tech sector at a strategic level. And who knows, maybe Aschenbrenner's new org is involved. 

O O

I see Elon throwing money into this. He originally recruited Sutskever and he’s probably(?) smart enough to diversify his AGI bets.

Elon diversifies in the sense of "personally micromanaging more companies", not in the sense of "backing companies he can't micromanage".

Weakly endorsed

“Curiously enough, the only thing that went through the mind of the bowl of petunias as it fell was Oh no, not again. Many people have speculated that if we knew exactly why the bowl of petunias had thought that we would know a lot more about the nature of the Universe than we do now.”

The Hitchhiker’s Guide To The Galaxy, Douglas Adams

I'm not even angry, just disappointed.

habryka

I am angry and disappointed.

safety always remains ahead

When was it ever ahead? I mean, to be sure that safety is ahead, you first need to make progress on safety comparable with progress on capabilities. And to do that, you shouldn't be advancing capabilities.

OpenAI board vs. Altman: Altman "was not consistently candid in his communications with the board".

Ilya's statement on leaving OpenAI:

After almost a decade, I have made the decision to leave OpenAI.  The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI that is both safe and beneficial under the leadership of @sama, @gdb, @miramurati and now, under the excellent research leadership of @merettm.  It was an honor and a privilege to have worked together, and I will miss everyone dearly.   So long, and thanks for everything.  I am excited for what comes next — a project that is very personally meaningful to me about which I will share details in due time.

So, Ilya, how come your next project is an OpenAI competitor? Were you perhaps not candid in your communications with the public? But then why should anyone believe anything about your newly announced organization's principles and priorities?

Paywalled. Would be fantastic if someone with access could summarise the most important bits.

It does not appear paywalled to me. The link that @mesaoptimizer posted is an archive, not the original bloomberg.com article.

We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.

In fairness, there's a high-integrity version of this that's net good:

  1. Accept plenty of capital.
  2. Observe that safety is not currently clearly ahead.
  3. Spend the next n years working entirely on alignment, until and unless it's solved.

This isn't the outcome I expect, and it wouldn't stop other actors from releasing catastrophically unsafe systems, but given that Ilya Sutskever has to the best of my (limited) knowledge been fairly high-integrity in the past, it's worth noting as a possibility. It would be genuinely lovely to see them use a ton of venture capital for alignment work.

I don't even get it. If their explicit plan is not to release any commercial products on the way, then they must think they can (a) get to superintelligence faster than DeepMind, OpenAI, and Anthropic, and (b) do so while developing more safety on the way -- presumably with fewer resources, a smaller team, and a head start for the competitors. How does that make any sense?

We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.

This is the galaxy-brained plan of literally every single AI safety company of note. 

Then again, maybe only the capabilities-focused ones become noteworthy.

[anonymous]

In the spirit of Situational Awareness, I'm curious how people are parsing some apparent contradictions:

  • OpenAI is explicitly pursuing AGI
  • Most/many people in the field (e.g. Leopold Aschenbrenner, who worked with Ilya Sutskever) presume that (approximately) when AGI is reached, we'll have automated software engineers and ASI will follow very soon
  • SSI is explicitly pursuing straight-shot superintelligence - the announcement starts off by claiming ASI is "within reach"
  • In his departing message from OpenAI, Sutskever said "I’m confident that OpenAI will build AGI that is both safe and beneficial...I am excited for what comes next - a project that is very personally meaningful to me about which I will share details in due time"
  • At the same time, Sam Altman said "I am forever grateful for what he did here and committed to finishing the mission we started together"

Does this point to increased likelihood of a timeline in which somehow OpenAI develops AGI before anyone else, and also SSI develops superintelligence before anyone else?

Does it seem at all likely from the announcement that by "straight-shot" SSI is strongly hinting that it aims to develop superintelligence while somehow sidestepping AGI (which they won't release anyway) and automated software engineers? 

Or is it all obviously just speculative talk/PR, not to be taken too literally, and we don't really need to put much weight on the differences between AGI/ASI for now? If that were the case, though, the announcement seems more specific than it needs to be.

One thing I find positive about SSI is their intent to not have products before superintelligence (note that I am not arguing here that the whole endeavor is net-positive). Not building intermediate products lessens the impact on race dynamics. I think it would be preferable if all the other AGI labs had a similar policy (funnily, while typing this comment, I got a notification about Claude 3.5 Sonnet...). The policy not to have any product can also give them cover to focus on safety research that is relevant for superintelligence, instead of doing some shallow control of the output of LLMs.

To reduce bad impacts from SSI, it would be desirable for SSI to also:

  • have a clearly stated policy to not publish their capabilities insights,
  • take security sufficiently seriously to be able to defend against nation-state actors that try to steal their insights.

Counterpoint: other labs might become more paranoid that SSI is ahead of them. I think your point is probably more correct than the counterpoint, but it's worth mentioning.

We are assembling a lean, cracked team

This team is going to be cracked.

O O

OpenAI is closed

StabilityAI is unstable

SafeSI is ...

LessWrong is ...

Is it MoreWrong or MoreRight?

Let's hope not!

Actually, we should hope that LW is very wrong about AI and that alignment is easy.

I’ve long taken to using GreaterWrong. Give it a try; it’s lighter and more featureful.

“Safe?” said Mr. Beaver. “Who said anything about safe? 'Course Aslan isn't safe. But he's good. He's the King, I tell you.”

Orwell was more prescient than we could have imagined.

Kyre

I’m worried about this cracked team.
