Great write-up! I generally think postmortems and retrospectives are very valuable* and this one does a great job of presenting what you did and lessons learnt. I feel like the lessons you presented are both broadly correct and valuable to have described within the context of your real-world project.
I'm someone who was not in favor of some of your past plans, but having read this postmortem, I'm excited to see what you end up doing in the future. Good luck at the bank!
*I've been collecting a list of postmortem/retrospective posts on LessWrong and I'll be glad to add this one to it.
I've been collecting a list of postmortem/retrospective posts on LessWrong
Is this list publicly available? A search for 'postmortems' on your user page produced no results.
I wasn't thinking of it being publicly available yet, but I'm happy to share. The list is really a sample tag I've been testing with our in-development, early-stage tagging MVP. We probably won't release tagging for several months due to design complexity/risks (assuming we conclude it's the correct choice at all), however you can see this list I've been making here:
https://www.lesswrong.com/tag/postmortems
As you'll see, the UI isn't really complete.
"I’ll be spending my next 5-10 years preparing for a potential new venture" - I'd suggest being careful of swinging too far the other way. The problem with such long timelines is that it can be hard to maintain motivation over them.
When I declared RAISE, I knew maybe 20 rationalists in the Netherlands. I was a Bachelor’s student coming out of nowhere. I had maybe 10-15 hours per week to spend on this. I had no dedicated co-founders. I had no connections to funders. I didn’t have much of a technical understanding of AI Safety. Coming from this perspective, the project was downright quixotic...
Given this, why do you think the project felt like a good idea at the time?
I suppose I was naive about the amount of work that goes into creating an online course. I had been a student assistant, and my professor would meet with me and the other assistants to plan the entirety of the course a day before it started. Of course, that was different: there was already a syllabus, and the topic was well understood and well demarcated.
Also, I had visited Berkeley around that time, and word was out about a new prediction that the singularity was only 15 years ahead. I felt like I had no choice but to try and do something. Start moving mountains right there and then. Looking back, I suppose I was a little bit too impressed by the fad of the day.
A third reason is that, when starting out, the project was supposed to be relatively simple and limited in scope, not a full-blown charity, and every step towards making it bigger and drawing in more resources felt logical at the time.
But to be honest I'm not very good at knowing my true motivations.
I had visited Berkeley around that time, and word was out about a new prediction that the singularity was only 15 years ahead.
Can you say more about this?
Good post!
Maybe this is too nitpicky, but "the most impactful years of your life will be 100x more impactful than the average" is necessarily false, because your career is so short that those years will increase the average. For example, if you have a 50-year career and all of your impact happens during a period of two years, your average yearly impact is 2/50=1/25 times as high as your impact during those two years. However, "the most impactful years of your life will be 100x more impactful than the median" could be true.
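For what it's worth, the bound in this comment can be written out explicitly (a sketch, assuming yearly impact is nonnegative; the symbols $n$, $p$, $T$, $a$ are my own notation, not from the post): with a career of $n$ years, peak-year impact $p > 0$, total impact $T \ge p$, and yearly average $a = T/n$,

```latex
a \;=\; \frac{T}{n} \;\ge\; \frac{p}{n}
\qquad\Longrightarrow\qquad
\frac{p}{a} \;\le\; n .
```

So for the 50-year career in the example, the best year can be at most 50x the average, and 100x is indeed impossible; the median version survives because the median year's impact can be arbitrarily small (or zero).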
Good catch, fixed it.
100x is obviously a figure of speech. I'd love to see someone do some research into this and publish the actual numbers.
The number could easily be infinity; I have no problem imagining that most people have zero positive impact for more than half the years of their careers (even the ones that end up having some positive impact overall).
Thanks for writing this! I'm glad you've found a new trajectory, and it looks like you've done a decent amount to process and integrate RAISE not having worked out. Best of luck on the next chapter.
I'm sure that wasn't easy, congrats for going through with it and posting such a transparent write-up of your thinking!
Edit November 2021: there is now the Cambridge AGI Safety Fundamentals course, which promises to be successful. It is enlightening to compare that project with RAISE. Why is it succeeding where this one did not? I'm quite surprised to find that the answer isn't so much about more funding, more senior people to execute it, more time, etc. They're simply using existing materials instead of creating their own. This makes it orders of magnitude easier to produce the thing; you can just focus on the delivery. Why didn't I, or anyone around me, think of this? I'm honestly perplexed. It's worth thinking about.
Since June, RAISE has stopped operating. I’ve taken some time to process things, and now I’m wrapping up.
What was RAISE again
AI Safety is starved for talent. I saw a lot of smart people around me that wanted to do the research. Their bottleneck seemed to be finding good education (and hero licensing). The plan was to alleviate that need by creating an online course about AI Safety (with nice diplomas).
How did it go
We spent a total of ~2 years building the platform. It started out as a project based on volunteers creating the content. Initially, many people (more than 80) signed up to volunteer, but we did not manage to get most of them to show up consistently. We gradually pivoted to paying people instead.
We received a lot of encouragement for the project. Most of the enthusiasm came from people wanting to learn AI Safety. Robert Miles joined as a lecturer. When we reached out to some AI Safety researchers for suggestions on which topics to cover, we readily received helpful advice. We also received some funding from a couple of prominent AIS organizations who thought the project could be high value, at least in expectation.
The stream of funding was large enough to sustain about 1 FTE working for a relatively low wage. Obtaining it was a struggle: our runway was never longer than 2 months. This created a large attention sink that made it a lot harder to create things. Nearly all of my time was spent on overhead, while others were creating the content. I did not have the time to review much of it.
About 1 year into the project, we escaped this poverty trap by moving to the EA Hotel and starting a content development team there. We went up to about 4 FTE, and the production rate shot up, leading to an MVP relatively quickly.
How did it end
Before launch, the best way to secure funding seemed to be to just create the damn thing, make sure it’s good, and let it advocate for itself. After launch, a negative signal could not be dismissed as easily.
We got two clear negative signals: one from a major AIS research org (that has requested not to be named), and one from the LTF fund. The former declined to continue their experimental funding of RAISE. The latter declined a grant request. These were clear signals that people in the establishment of AI Safety did not deem the project worth funding, so I reached out for a conversation.
The question was this: “what version of RAISE would you fund?” The answer was roughly that, while they agreed strongly with the vision for RAISE, our core product sadly wasn’t coming together in a way that suggested it would be worth our continuing to work on it. I was tentatively offered a personal grant, on the condition that I spend it taking a step back to think hard and figure out what AI Safety needs (I ended up declining for career-strategic reasons).
In another conversation, an insider told us that AI Safety needs to grow in quality more than quantity. There is already a lot of low-quality research. We need AI Safety to be held to high standards. Lowering the bar for a research-level understanding will not solve that.
I decided to quit. I was out of runway, updated towards RAISE not being as important as I thought, and frankly I was also quite tired.
Lessons learned
These are directed towards my former self. YMMV.
Wrapping up
The RAISE Facebook group will be converted into a group for discussing the AI Safety pipeline in general. Let’s see if it will take off. If you think this discussion has merit, consider becoming a moderator.
The course material is still parked right here. Feel free to use it. If you would like to re-use some of it or maybe even pick up the production where it left off, please do get in touch.
Robert has received a grant from the LTF Fund, so he will continue to create high-quality educational content about AI Safety.
I enjoyed being a founder, and feel like I have a comparative advantage there. I’ll be spending my next 5-10 years preparing for a potential new venture. I’ll be building capital and a better model of what needs to be done. I have recently accepted an offer to work as a software developer at a Dutch governmental bank. My first workday was 2 weeks ago.
I would like to thank everyone who has invested significant time and effort and/or funding towards RAISE. I’m forever grateful for your trust. I would especially like to thank Chris van Merwijk, Remmelt Ellen, Rupert McCallum, Johannes Heidecke, Veerle de Goederen, Michal Pokorný, Robert Miles, Scott Garrabrant, Pim Bellinga, Rob Bensinger, Rohin Shah, Diana Gherman, Richard Ngo, Trent Fowler, Erik Istre, Greg Colbourn, Davide Zagami, Hoagy Cunningham, Philip Blagoveschensky, and Buck Shlegeris. Each one of you has really made an outsized contribution, in many cases literally saving the project.
If you have any project ideas and you’re looking for some feedback, I’ll be happy to be in touch. If you’re looking for a co-founder, I’m always open to a pitch.