I think there's an issue where Alice wants to be told what to do to help with AI safety. So then Bob tells Alice to do X, and then Alice does X, and since X came from Bob, the possibility of Alice helping manifest the nascent future paradigms of alignment is lost. Or Carol tells Alice that AI alignment is a pre-paradigmatic field and she should think for herself. So then Alice thinks outside the box, in the way she already knows how to think outside the box; empirically, most of most people's idea generation is surprisingly unsurprising. Again, this loses the potential for Alice to actually deal with what matters.
Not sure what could be done about this. One thing is critique. E.g., instead of asking "what are some ways this proposed FAI design could go wrong", asking "what is the deepest, most general way this must go wrong?", so that you can update away from that entire class of doomed ideas, and potentially have more interesting babble.
I'm interested in talking with anyone who is looking at the EU EA Hotel idea mentioned in the post. Also, I'm working with Rob Miles's community on a project to improve the pipeline: an interactive FAQ system called Stampy.
The goals of the project are to:
- Offer a one-stop-shop for high-quality answers to common questions about AI alignment.
- Let people answer questions in a way which scales, freeing up researcher time while allowing more people to learn from a reliable source.
- Make external resources easier to find by linking to them from a search engine which gets smarter the more it's used (see the toy sketch after this list).
- Provide a form of legitimate peripheral participation for the AI Safety community, as an on-boarding path with a flexible level of commitment.
- Encourage people to think, read, and talk about AI alignment while answering questions, creating a community of co-learners who can give each other feedback and social reinforcement.
- Provide a way for budding researchers to prove their understanding of the topic and ability to produce good work.
- Collect data about the kinds of questions people actually ask and how they respond, so we can better focus resources on answering them.
- Track reactions on messages so we can learn which answers need work.
- Identify missing external content to create.
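To make the "gets smarter the more it's used" and reaction-tracking goals above a little more concrete, here is a minimal Python sketch of the kind of feedback loop involved. It is purely illustrative and says nothing about Stampy's actual architecture; `FaqStore`, `record_reaction`, and the scoring weights are all hypothetical.

```python
# Toy illustration only: a hypothetical FAQ store whose ranking is nudged by
# recorded reactions. All names and scoring weights here are made up.
from collections import defaultdict


class FaqStore:
    def __init__(self):
        self.answers = {}                 # question -> canonical answer text
        self.feedback = defaultdict(int)  # question -> net reaction score

    def add(self, question, answer):
        self.answers[question] = answer

    def record_reaction(self, question, delta):
        """Record a thumbs-up (+1) or thumbs-down (-1) on an answer."""
        self.feedback[question] += delta

    def search(self, query, top_k=3):
        """Rank stored questions by word overlap, boosted by past feedback."""
        query_words = set(query.lower().split())

        def score(question):
            overlap = len(query_words & set(question.lower().split()))
            return overlap + 0.1 * self.feedback[question]

        ranked = sorted(self.answers, key=score, reverse=True)
        return [(q, self.answers[q]) for q in ranked[:top_k]]


store = FaqStore()
store.add("What is AI alignment?", "Getting AI systems to pursue the goals we intend.")
store.add("Why is alignment hard?", "Human values are hard to specify precisely.")
store.record_reaction("Why is alignment hard?", +1)  # a helpful-answer reaction
print(store.search("why is alignment so hard"))
```

A real system would use far better matching and feedback handling; the point is just that reactions and usage data flow back into which answers get surfaced.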
We're still working on it, but we would welcome feedback on what the site is like to use, as well as early adopters who want to help write and answer questions. You can join the public Discord, or message me for an invite to the semi-private patron one.
> Typically they have to do an undergraduate degree, then an ML master's, then a PhD. That's a long process.
Do you know what this was referring to? Is it referring specifically to orgs that focus on ML-based AI alignment work, or more generally to grant-makers who are trying to steer towards good AGI outcomes, etc.? This seems like a fine thing if you're looking for people to work under lead researchers on ML stuff, but seems totally insane if it's supposed to be a filter on people who should be fed and housed while they're trying to solve very difficult pre-paradigmatic problems.
Not exactly sure what I was trying to say here. Probably using the PhD as an example of a path to credentials.
> Here are some related things I believe:
Cool. I appreciate you making these things explicit.
If the bottleneck is young people believing that funding will somehow be there for them if they work on the really hard problems, then it seems pretty important for funders to signal that they would fund such people. By default, using credentials as a signal at all tells such young people that this funder is not able or willing to do something weird with their money. I think funders should probably be much more willing to say to someone with (a) a PhD and (b) boring ideas, "No, sorry, we're looking for people working on the hard parts of the problem".
AI safety is obviously super important and super interesting. So if high-potential young people aren't working on it, it's because they're being diverted from obviously super important and super interesting things. Why is that happening?
I conducted the following interviews to better understand how we might improve the AI safety pipeline. It is very hard to summarise them, as everyone has different ideas about what needs to be done; however, I suppose there is some value in knowing that in and of itself.
A lot of people think that we ought to grow the field more, but this is exactly the kind of proposal that we should definitely not judge by its popularity: given the number of people trying to break into the field, such a proposal was almost guaranteed to be popular. I suppose a more useful takeaway is that a lot of people are currently seriously pursuing proposals to run projects in this space.
Another observation is that the people most involved in the field seemed quite enthusiastic about the Cambridge AGI Safety Fundamentals course as a way of growing the field.
If you think you’d have interesting thoughts to share, please feel free to PM me and we can set up a meeting.
Disclaimers: These interviews definitely aren’t a random sample. Some of the people were just people I wanted to talk to anyway. I was also reluctant to contact really senior people as I didn’t want to waste their time.
I tended towards being inclusive, so the mere fact that I talked to someone or included some of what they shared shouldn’t be taken as an endorsement of their views or of them being any kind of authority.
Evan Hubinger (MIRI):
Buck Shlegeris (Redwood Research):
Someone involved in the Cambridge AGI Safety Fundamentals Course:
AI Safety Support - JJ Hepburn:
Adam Shimi (Mentor coordinator for AI Safety Camp):
Janus (Eleuther):
Richard (previously a funded AI Safety researcher):
Founder of an Anonymous EA Org:
Remmelt (AI Safety Camp):
Logan (Funded to research Vanessa's work):
Toon (RAISE, now defunct):
John Maxwell:
Anonymous:
Anonymous:
Anonymous:
Anonymous: