After looking into the prototype course, I updated upwards on this project, as I think it is a decent introduction to Dylan's Off-Switch Game paper. Could I ask what other stuff RAISE wants to cover in the course? What other work on corrigibility are you planning to cover? (For example Dylan's other work, MIRI's work on this subject and Smitha Milli's paper?)
Could you also write more about who your course is targeting? Why does RAISE believe that the best way to fix the talent gap in AI safety is to help EAs change careers via introductory AI Safety material, instead of, say, making it easier for CS PhD students to do research on AI Safety-relevant topics? Why do we need to build a campus, instead of co-opting the existing education mechanisms of academia?
Finally, could you link some of the mind maps and summaries RAISE has created?
> After looking into the prototype course, I updated upwards on this project, as I think it is a decent introduction to Dylan's Off-Switch Game paper. Could I ask what other stuff RAISE wants to cover in the course? What other work on corrigibility are you planning to cover? (For example Dylan's other work, MIRI's work on this subject and Smitha Milli's paper?)
Thank you!
Expecting to know better after getting our hands dirty, we decided to take it one subfield at a time. We haven't decided which subfield to cover beyond Corrigibility, though a natural choice seems to be Value Learning.
We have identified 9 papers within/adjacent to Corrigibility:
> Could you also write more about who your course is targeting? Why does RAISE believe that the best way to fix the talent gap in AI safety is to help EAs change careers via introductory AI Safety material, instead of, say, making it easier for CS PhD students to do research on AI Safety-relevant topics? Why do we need to build a campus, instead of co-opting the existing education mechanisms of academia?
To do our views justice requires a writeup of its own, but I can give a stub. This doesn't necessarily represent the official view of RAISE, because that view doesn't exist yet, but let me just try to grasp at my intuition here:
First of all, I think both approaches are valid. There are people entrenched in academia who should be given the means to do good work. But there are also people outside of academia that could be given the means to do even better work.
Here's just a first stab at ways in which academia is inadequate:
But hey, these are problems anyone could be having, right? Now the real problem isn't any of these specific bugs. The real problem is that academia is an old bureaucratic institution with all kinds of entrenched interests, and patching it is hard. Even if you jump through all the hoops and do the politics and convince some people, you will hardly gain any traction. Baseline isn't so bad, but we could do so much better.
The real problem that I have with academia isn't necessarily its current form. It's the amount of optimization power you need to upgrade it.
> Finally, could you link some of the mind maps and summaries RAISE has created?
Sure! Here's the work we've done for Corrigibility. I haven't read all of it, so I do not necessarily endorse the quality of every piece. If you'd like to look at the script we used for the first lesson, go to "script drafts" and have a look at script F.
This is excellent, and I am happy that you are working on it.
We do have a rule of not having organizational announcements on the frontpage (for example, we moved both the MIRI and CFAR fundraising posts to the personal section) so I moved this to your personal blog.
Is this rule still in place?
Why do you have this rule? It seems to me like banning organizational announcements will make it much harder to get new initiatives off the ground.
I know that I wrote about this at length somewhere else, but I can't currently find it, so here is a super short summary:
Organizational announcements can still get a lot of traction on people's personal blog. Most people who have enough context to be interested in that kind of announcement have the personal blog filter for the frontpage turned off, so I think this doesn't actually hurt organizations very much (and overall I think it creates an environment of higher trust in which people are much less likely to feel spammed or inundated by requests for donations and organizational announcements, which I think is overall better for communicating about projects and getting early-stage projects off the ground).
Edited to add:
> It seems to me like banning organizational announcements will make it much harder to get new initiatives off the ground.
Incidentally, anyone in this space trying to get a new initiative off the ground may want to apply to SurvivalAndFlourishing.org's first funding round. (We'll be providing funding, as well as ~~fiscal sponsorship~~ some administrative support. Applications due by October 1st.)
[Edited to clarify that we won't provide full fiscal sponsorship. We will provide some administrative support via SEE (which is SAF's fiscal sponsor). Projects seeking long-term fiscal sponsorship may want to apply directly to SEE (perhaps after bootstrapping via SAF). See more details on our announcement page.]
Epic. I remember talking to some people about this at EA Global last year, and I'm excited to see that you've continued working on it and are ready to double down.
I've donated & shared this article on FB!
9 months ago, LessWrong Netherlands sat down to brainstorm on Actually Trying to make a difference in AI Safety.
Knowing AIS is talent-constrained, we felt that academia wasn’t fit to give the field the attention it deserves.
So we decided to take matters into our own hands, and make the road to AI safety excellence as easy as possible.
Fast forward 9 months, and project RAISE has finished its first pilot lesson.
We are of course grateful to all the volunteers who extended a helping hand. But to produce lesson material of the highest quality, we must professionalize.
That is why RAISE is seeking funds to establish itself as the next AIS charity.
Our vision
As quoted from here:
Within the LW community there are plenty of talented people that bear a sense of urgency about AI. They are willing to switch careers to doing research, but they are unable to get there. This is understandable: the path up to research-level understanding is lonely, arduous, long, and uncertain. It is like a pilgrimage.
One has to study concepts from the papers in which they first appeared. This is not easy. Such papers are undistilled. Unless one is lucky, there is no one to provide guidance and answer questions. Then should one come out on top, there is no guarantee that the quality of their work will be sufficient for a paycheck or a useful contribution.
The field of AI safety is in an innovator phase. Innovators are highly risk-tolerant and have a large amount of agency, which allows them to survive an environment with little guidance or supporting infrastructure. Let community organisers not fall for the typical mind fallacy, expecting risk-averse people to move into AI safety all by themselves.
Unless one is particularly risk-tolerant or has a perfect safety net, they will not be able to fully take the plunge.
Plenty of measures can be taken to make getting into AI safety more like an "It's a small world" ride:
- Let there be a tested path with signposts along the way to make progress clear and measurable.
- Let there be social reinforcement so that we are not hindered but helped by our instinct for conformity.
- Let there be high-quality explanations of the material to speed up and ease the learning process, so that it is cheap.
What we have done so far
The study group
The bulk of our work has been to take a subfield of AIS (corrigibility), gather all the papers we know of, and turn them into a set of scripts. We have devised an elaborate process for this involving summaries and mind maps. Another strategy would have been to simply copy the structure of existing papers, as in this early iteration, but we consider it a feature that ideas are individually recompiled and explained. Crafting a good explanation is a creative process: it adds "shortcut" inferences. And so we did.
For the most part, it’s been a success. Since its inception the group has created 9 summaries, 4 mind maps, 12 lecture scripts and 4 paper presentation recordings. It's already a rich store of material to draw from. The scripts are now being converted to lectures.
A software platform
I have met a local EA who runs a platform for teaching statistics, and it’s a close match to our needs. We may use it for free, and the devs are responsive to our feature requests. It will include a white-label option, and the domain name will be changed to something more neutral.
Filming
We enlisted Robert Miles (whom you might know from his YouTube channel) to shoot our lectures, and I visited him in London to help build a light board. The light board turned out to be a welcome solution to the setup problem, into which we had put considerable thought.
Prototype lesson
These developments culminated in our most tangible output: a prototype lesson. It shows a first glimpse of what the course will eventually look like.
What funding will change
Reduce turnover
While it has proved possible to run entirely on volunteers, doing so has also been a hurdle. Without the strong accountability mechanism of a contract, we have been suffering from high turnover. This has created some problems:
- The continuous organisational effort required to on-board new volunteers, distracting the team from other work
- An inability to plan ahead too far, not knowing what the study group attendance over a given period would be
- Quality control being somewhat intractable because it takes time to assess the general quality of someone's work
Of the capital we hope to receive, one of its main allocations will be hiring a content developer. They will oversee the course content development process with a team of volunteers who have proven (or promise) high dedication. Given the increased time spent and reduced overhead, we expect this setup to gain a lot more traction. See the pamphlet here.
(Strongly) increase course quality
With the higher net attentional resources that come from hiring someone, and with turnover reduced by separating out loyal volunteers, we can do proper quality control. We will also benefit more from learning effects for the same reasons: a core team that spends a lot of focused time on crafting good explanations might actually get uniquely good at it (that is, better than anyone who hasn't done dedicated practice).
(Strongly) increase course creation speed
Right now, the amount of work that goes into creating content is about 4 hours per volunteer per week. As we have learned, this is enough to compile a prototype lesson over the course of roughly 3 months. It is reasonable to assume that this time will go down with further iterations (not having to do much trailblazing), and the figure is somewhat misleading because roughly 60% of the work for about 6 or 7 more lessons has already been done. Still, the speed is not of the order of magnitude we would prefer. At this rate, we will be done with corrigibility in about 6 months, and with the whole of AI safety in 5+ years. This doesn't seem acceptable. The speed we would prefer, provided it doesn't hurt quality, is about one unit (like corrigibility) per (at most) 3 months, and the whole course in (at most) 2 years.
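To make that projection explicit, here is a rough back-of-envelope version of the arithmetic. The figure of roughly ten corrigibility-sized units for the whole course is an illustrative assumption of this writeup, not a settled curriculum plan:

```latex
% Back-of-envelope projection of course creation speed.
% Assumption (illustrative only): the full course consists of ~10 units comparable to Corrigibility.
\[
\text{current pace: } 10 \text{ units} \times 6\ \tfrac{\text{months}}{\text{unit}} = 60 \text{ months} \approx 5 \text{ years}
\]
\[
\text{target pace: } 10 \text{ units} \times 3\ \tfrac{\text{months}}{\text{unit}} = 30 \text{ months} \approx 2.5 \text{ years}
\]
% Hitting the stated two-year target would therefore require either somewhat faster
% per-unit work or several units being developed in parallel.
```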
Allow us to broaden our set of strategies
The ultimate vision of RAISE isn't a course, it's a campus. Our goal is to facilitate the training of new AIS researchers by whatever means necessary.
But we can only do as much as our organisational bandwidth allows, and right now it's purely taken up by the creation of a course.
Examples of such strategies are: a central online hub for study groups, creating a licensing center/talent agency that specializes in measuring AIS research talent, and partnering with the EA hotel to provide a free living space for high-performing students.
Projected spending
Of course, all of this is subject to change.
Our first target is $30,000 to cover expenses for the coming year. For that amount, we expect to:
Our second target is another $30,000, from which we expect to:
We aren't too sure what amount of funding to expect. Should our estimates be too low, returns will not start diminishing until well beyond $200,000.
Call to action
If you believe in us:
I would like to end with a pep talk:
What we are doing here isn’t hard. Courses at universities are often created on the fly by one person in a matter of weeks. They get away with it.
There is little risk. There is a lot of opportunity. If we do this well, we might just grow the number of AIS researchers by a significant fraction.
If that’s not impact, I don’t know what is.