Judd Rosenblatt

CEO at AE Studio

We plan to announce further details in a later post.

Thanks, I appreciate your wanting these efforts not to be discouraged!

I agree there's certainly a danger of AI safety startups optimizing for what will appeal to investors (not just in risk appetite but in many other dangerous ways too) and Goodharting rather than focusing purely on the most impactful work.

VCs themselves tend not to think as long-term as they should (even for their own economic interests), but I'm hopeful we can build an ecosystem around AI safety where they do. The investors interested in AI safety will likely be inclined to think more long-term; the few early AI safety investors that exist today certainly are.

I do think it's crucial (and possible!) for founders in this space to be very thoughtful about their true long-term goals and incentives around alignment and to build the right structures around AI safety for-profit funding.

On your diversification point, for example, a windfall-trust-like structure through which all AI safety startups share in the value the others create could make a lot of sense, considering that even a tiny bit of equity in the biggest winners may quickly be worth more than our entire economy today.

Also, yeah, inadequate equilibria are unfortunate, but they apply to all orgs, not just startups. As we pointed out in the post above:

We think that as AI development and mainstream concern increase, there’s going to be a significant increase in safety-washing and incentives pushing the ecosystem from challenging necessary work towards pretending to solve problems. We think the way to win that conflict is by showing up, rather than lamenting other people’s incentives. This problem isn’t limited to business relationships; safety-washing is a known problem with nonprofits, government regulations, popular opinion, and so on. Every decision-maker is beholden to their stakeholders, and so decision quality is driven by stakeholder quality.

In fact, startups can be a powerful antidote to inadequate equilibria. Often the biggest opportunities for startups lie precisely in solving inadequate equilibria, especially by leveraging technology shifts and innovations, as electric cars did. Ideal new structures to facilitate and govern maximal AI safety innovation would help fast-track solutions to these inadequate equilibria. Established systems, in contrast, are more prone to yielding inadequate equilibria because of their resistance to change.

I also think we may be underestimating how much people will come together to solve these problems as they increasingly take them seriously. At LessOnline today, I heard an interesting discussion about how surprised AI safety people are that the general public seems so naturally concerned about AI safety upon first hearing about it.

This makes me hopeful we can create startups and new structures that help address inadequate equilibria and solve AI safety, and I think we ought to try.

Yes, you're right, and most startups do fail. That's how it works! 

Still, the biggest opportunities are often the ones with the lowest probability of success, and startups are the best structures for capitalizing on them. This paradigm may be a good fit for AI safety.

Ideally, we can engineer an ecosystem that produces enough successes to substantially advance AI safety. It seems to me that aggressively expanding the AI safety startup ecosystem is one of the highest-value interventions available right now.

Meanwhile, I strongly agree that AI-safety-driven startups should be B corps, especially if they're raising money.

This is a great point. I also notice that a decent number of people's risk models change frequently with the news, and that's not ideal either, since it makes them less likely to stick with a particular approach that depends on some risk model. In an ideal world, we'd have enough people pursuing enough approaches under most plausible risk models that it'd make little sense for anyone to consider switching. Maybe the best approximation available now is to discuss this less.

That would be great! And it's exactly the sort of thing we've dreamed about building at AE since the start.

Incidentally, I've practiced something like this (an inferior version) with my wife, and we've gotten good at speaking simultaneously and actually understanding multiple threads at the same time (though it seems to break down when one of the threads is particularly complex).

It seems like an MVP hyperphone could just be a software project and wouldn't explicitly require BCI (though it would certainly be enhanced by it). We would definitely consider building it, at least as a Same Day Skunkworks. Are you aware of any existing tool that's at all like this?
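
For concreteness, here's a minimal sketch of the data model a software-only MVP might sit on: a conversation as a tree of utterances, with named threads that can fork off any earlier point and be resumed later. Everything here (the `Hyperphone` class, its method names, the thread-naming scheme) is my own guess at a plausible shape, not anything specified in the hyperphone proposal:

```python
from dataclasses import dataclass, field
from typing import Optional
import itertools

_ids = itertools.count()


@dataclass
class Utterance:
    """One turn of speech; a conversation is a tree of these."""
    speaker: str
    text: str
    parent: Optional["Utterance"] = None
    children: list["Utterance"] = field(default_factory=list)
    uid: int = field(default_factory=lambda: next(_ids))


class Hyperphone:
    """Tracks several live conversation threads over a single utterance tree."""

    def __init__(self) -> None:
        self.root = Utterance("system", "<conversation start>")
        # Each thread name maps to that thread's current tip.
        self.threads: dict[str, Utterance] = {"main": self.root}

    def say(self, thread: str, speaker: str, text: str) -> Utterance:
        """Append an utterance to a thread and advance its tip."""
        tip = self.threads[thread]
        utt = Utterance(speaker, text, parent=tip)
        tip.children.append(utt)
        self.threads[thread] = utt
        return utt

    def fork(self, new_thread: str, from_utterance: Utterance) -> None:
        """Open a new thread branching off any earlier utterance."""
        self.threads[new_thread] = from_utterance

    def history(self, thread: str) -> list[str]:
        """Replay a thread root-to-tip, so a dropped thread can be picked back up."""
        out: list[str] = []
        node: Optional[Utterance] = self.threads[thread]
        while node is not None:
            out.append(f"{node.speaker}: {node.text}")
            node = node.parent
        return list(reversed(out))[1:]  # drop the synthetic root


if __name__ == "__main__":
    hp = Hyperphone()
    a = hp.say("main", "A", "Should safety startups pool equity?")
    hp.say("main", "B", "Like a windfall trust, you mean?")
    hp.fork("tangent", a)  # branch a second thread off an earlier point
    hp.say("tangent", "B", "Separate thread: what's the MVP here?")
    print("\n".join(hp.history("tangent")))
```

A tree plus named tips seems like about the smallest structure that supports both forking and resuming; a real tool would add persistence, a UI, and perhaps audio capture on top.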

You might also enjoy this blog post, which talks about how easily good ideas can be lost and why a tool like this could be so high value.

My favorite quotes from the piece:

1. "While ideas ultimately can be so powerful, they begin as fragile, barely formed thoughts, so easily missed, so easily compromised, so easily just squished."
2. "You need to recognize those barely formed thoughts, thoughts which are usually wrong and poorly formed in many ways, but which have some kernel of originality and importance and truth. And if they seem important enough to be worth pursuing, you construct a creative cocoon around them, a set of stories you tell yourself to protect the idea not just from others, but from your own self doubts. The purpose of those stories isn't to be an air tight defence. It's to give you the confidence to nurture the idea, possibly for years, to find out if there's something really there.
3. "And so, even someone who has extremely high standards for the final details of their work, may have an important component to their thinking which relies on rather woolly arguments. And they may well need to cling to that cocoon. Perhaps other approaches are possible. But my own experience is that this is often the case."

And another interesting one from the summit:

“There was almost no discussion around agents—all gen AI & model scaling concerns.

It’s perhaps because agent capabilities are mediocre today and thus hard to imagine, similar to how regulators couldn’t imagine GPT-3’s implications until ChatGPT.” - https://x.com/kanjun/status/1720502618169208994?s=46&t=D5sNUZS8uOg4FTcneuxVIg

Right now it seems to me that one of the highest impact things not likely to be done by default is substantially increased funding for AI safety.

I got https://www.pinkshoggoth.com/ after being inspired by "Pink Shoggoths: What does alignment look like in practice?"

Right now it's hosting a side project (that may wind up being replaced by new ChatGPT features). Feel free to DM me if you have a better use for it though!