This list of field building ideas is inspired by Akash Wasil’s and Ryan Kidd’s similar lists. Like the projects on those lists, these rely on people with specific skills and field knowledge to be executed well.
None of these ideas is exclusively my own; they are the result of the CanAIries Winter Getaway, a two-week, unconference-style AGI safety retreat I organized in December 2022.
Events
Organize a global AGI safety conference
This should be self-explanatory: It is odd that we still don’t have an AGI safety conference that allows for networking and lends the field credibility.
There are a number of versions of this that might make sense:
- an EAG-style conference for people already in the community to network
- an academic-style conference engaging CS and adjacent academia
- an industry-heavy conference (maybe sponsored by AI orgs?)
- a virtual next-steps conference, e.g. for AGISF participants
Some people have tried this out at a local level: https://aisic2022.net.technion.ac.il
(If you decide to work on this: www.aisafety.global is available via EA domains, contact hello@alignment.dev)
Organize AGI safety professionals retreats
As far as I can see, most current AGI safety retreats are optimized for junior researchers: networking and learning opportunities for students and young professionals. Conferences, with their focus on talks and 1-on-1s, are useful for transferring knowledge, but don’t offer the extensive ideation that a retreat built around workshops and discussion rounds could.
Organizing a focused retreat for 60-80 senior researchers to debate the latest state of alignment research might be very valuable for memetic cross-pollination between approaches, organizations, and continents. It might also make sense to hold it on work days, so that people’s employers can send them. I suspect the optimal mix of participants would be around 80% researchers, with the rest funders, decision-makers, and the most influential field builders.
Information infrastructure
Start an umbrella AGI safety non-profit organization in a country where there is none
This would make it easier for people to join AGI safety research, and could offer a central exchange hub. Some functions of such an org could include:
- Serving as an employer of record for independent AGI safety researchers.
- Providing a central point for discussions, coworking, and publications. You probably want a virtual space to discuss, like a Discord or Slack, named after your country/area; list it on https://coda.io/@alignmentdev/alignmentecosystemdevelopment, then promote it so that people interested in the field can discover it. The Discord/Slack can then be used to host local-language online or in-person meetups.
A candidate for doing this mostly needs ops/finance skill, not a comprehensive overview of the AGI safety field.
Mind that form follows function: try to do this with as little administrative and infrastructure overhead as possible. Find out whether other orgs already offer the relevant services (for example, AI Safety Support offers ops infrastructure to other alignment projects, and national EA orgs like EA Germany offer employer-of-record services). Build MVPs before going big and ambitious.
In general, the cheap minimum version of this would be becoming an AGI Safety Coordinator.
Become an AGI Safety Coordinator
It would be useful to have a known role of Coordinator, and people filling it. Coordinators would not necessarily have decision power or direct impact; their job would be to know what everyone is doing in AGI safety, to collect, organize, and publish resources, and to help people figure out who to work and collaborate with. Ideally, they would also serve as a bridge between the so-far under-connected areas of AGI safety and AI policy.
Some members of AI Safety Support have been doing similar things, but they are mostly recognized by new members of the community and might not be utilized by the established organizations and people. A Coordinator would also be known to the established organizations and people.
Create a virtual map of the world where Coordinators can add themselves
This would make it much easier for people to find each other. https://eahub.org/, an attempt to gather *all* members of the EA community in one place, failed because buy-in was too costly for individuals.
Instead, we might want to have a database of key coordinators and information nodes in the community. A handful of people would be enough to maintain it, and it probably would never list more than ~200 people, grouped by location, as the go-to addresses for local knowledge.
Create and maintain a living document of AGI safety field building ideas
The minimum version of this would be a maintained list of ideas like the ones in Akash’s, Ryan’s, and this post.
Useful functions:
- Anyone can add new ideas
- People can tag themselves as interested in working on/funding a certain idea
- A way to filter by expected quality of ideas. A tremendous and underexplored model for doing this is the EigenKarma system a handful of people are currently developing. See here for a draft.
- A way to comment on ideas in order to improve them, or to flag ineffective/high-downside-risk ones
A sensible existing project to build this into would be Apart Research’s https://aisafetyideas.com/. While their interface is optimized for research proposals, the list under https://aisafetyideas.com/?categories=Field-Building might be a good minimum viable product for a field building version.
Other examples of living documents that might serve as inspiration for this: https://aisafety.world, https://aisafety.community
Funding
Make it easier for AGI safety endeavors to get funding from non-EA sources
Our primary funding sources suffered last year, and there are numerous foundations and investors out there happy to invest in potentially world-saving and/or profitable projects. Especially now, it might be high-leverage to collect knowledge and build infrastructure for tapping into these funds. I lack the local knowledge to give recommendations for tapping into funding sources within academia. However, here are four potential routes to non-academic funding sources:
1. Offer a service to proofread grant applications and give feedback. This can be extremely valuable for relatively little effort. Many people don't want to send their application to a random stranger, but maybe people know you from the EA Forum? Or you can simply offer to give feedback to people who already know you.
2. Identify more relevant funding sources and spread knowledge about them. https://www.futurefundinglist.com/ is a great example: it's a list of dozens of longtermist-adjacent funds, both inside and outside the community. (Though apparently it is not kept up to date: the FTX Future Fund is still listed as of Jan 19, 2023.)
Governments, political parties, and philanthropists often have nation-specific funds happy to subsidize projects. Expanding the Future Funding List further and finding/building similar national lists might be extremely valuable. For example, there is a whole book with funding sources for charity work in German.
3. Become a professional grant writer. A version of this that is affordable for new/small orgs and creates decent incentives and direct feedback for grant writers might be a prize-based arrangement: application writers get paid if and only if a grant gets through.
If you are interested in this and already have exceptional written communication skills, a reasonable starting point may be grant writing courses like https://philanthropyma.org/events/introduction-grant-writing-7.
4. Teach EAs the skills to communicate their ideas to grantmakers. Different grantmakers have different values and lingo. If you want to convince them to give you money, you have to convince them on their own terms. This is something many AGI safety field builders haven’t had to learn so far. Accordingly, a useful second step after becoming a grant writer yourself might be figuring out how to teach grant writing as effectively as possible to the relevant people. (A LessWrong/EA Forum sequence? Short trainings in pitching and grant writing?)
Write a guide for how to live more frugally, optimized to the needs of members of the AGI safety community
The more frugally people live, the less dependent they are on a day job. In addition, the same amount of grantmaker money could support a larger number of individuals. Accordingly, promoting frugality by writing an engaging guide on how to do efficient altruism might help us do more research per dollar earned and donated by community members.
Some resources that such a guide should contain:
- CEEALAR (formerly EA Hotel)
- Nonlinear’s EA Houses spreadsheet
- Some tips 80,000 Hours gathered
- Key learnings of the FIRE (Financial Independence, Retire Early) community
Potential downside risk: this route may be particularly attractive to relatively junior people with few connections to the established orgs. Being well-connected in the community is crucial, both for developing good ideas and for developing the network needed to get employed later. Accordingly, a good version of this guide would discourage people from compromising too strongly on closeness to other community members for the sake of frugality.
Outreach and onboarding
Run the Ops for more iterations of ML4Good
French AGI safety field building org Effiscience has run several iterations of the machine learning bootcamp ML4Good, which teaches technical knowledge as well as AGI safety fundamentals in order to produce more AGI safety researchers and research engineers. It has a proven track record of getting people involved and motivated to do more AGI safety work (see the writeup for details), and can dispatch instructors to teach these bootcamps. Thus, the constraint on scaling is having organizers run the operations work (promoting the event, handling registrations, securing an event location, …) for new iterations in various countries.
If interested, contact jonathan.claybrough[at]gmail.com
Set up a guest appearance of an AI safety researcher with exceptional outreach skills on a major Street Epistemology YouTube channel
To prevent a negative singularity, AGI safety research must move faster than capabilities research. Two routes toward that are a) speeding up AGI safety research and b) slowing down capabilities research. One way to do b) would be to get more capabilities researchers concerned about AGI safety. The general community consensus seems to be that successful outreach to capabilities researchers would be extremely valuable, and that unsuccessful outreach would be extremely dangerous. Accordingly, hardly anyone is working on this.
Street Epistemology is the atheist response to Christian street preachers. Street epistemologists use the Socratic method to help people question why they believe what they believe, often leading to updates in confidence. More info at https://streetepistemology.com/
Bringing more SE skills into the AGI safety community, or more capable Street Epistemologists into AGI safety, might help us make it sufficiently safe to do outreach to capabilities researchers. Bonus: Street Epistemologists only need just enough object-level knowledge of the topic at hand to be able to follow their conversation partner, not to argue against them. Accordingly, a solid understanding of SE and a basic background in machine learning might be enough to have useful and low-risk SE-style conversations with capabilities researchers.
A core route of memetic exchange within the SE community is a number of YouTube channels on which street epistemologists film themselves nudging strangers to examine their core beliefs. If an AGI safety researcher with great teaching skills were to appear as a conversation partner on one of these channels, that might get more Street Epistemologists concerned enough to join the AGI safety community and spread their memeplex.
Do workshops/outreach at good universities in EA-neglected and low/middle-income countries
(e.g. India, China, Japan, Eastern Europe, South America, Africa, …)
Talent is spread far more evenly across the globe than our outreach and recruitment strategies are. Expanding those to other countries might be a high-leverage opportunity to increase the talent inflow into AGI safety. For example, Morocco, Tunisia, and Algeria have good math universities.
One low-hanging fruit here might be to pay talented graduates for a fellowship at leading AGI Safety labs.
Improve the landscape of AGI safety curricula
Get an overview of the existing AGI safety curricula. Find out what’s missing, e.g. for particular learning styles/levels of seniority. Make it exist.
Publishing mediocre curricula is probably net negative at this point, because it draws attention away from the already existing good ones. What the alignment curricula landscape needs now is careful vetting, identifying gaps, and filling them with new, well-written, well-maintained curricula. In particular, we might need more curricula on AI governance, or on foundational concepts for field building, with curated resources on topics like MVP-building, project management, the existing infrastructure, etc.
For hints on what specifically is missing, this LW post on the 3-books-technique for learning a new skill might be a useful framework. Also, mind that different people have different learning styles: Some learn best through videos, others through text, audio, or practical exercises.
Some great examples for curricula:
- AGISF, and AGISF 201
- Levelling up in AI safety Research Engineering
- https://course.mlsafety.org/
- https://github.com/jacobhilton/deep_learning_curriculum
- https://readingwhatwecan.com/
Other
Support AGI safety researchers with your skills
There are countless ways to help AGI safety researchers through non-AGI safety-related skills so that they have more time and energy for their work. Make yourself easily findable and approachable.
Bonus points for creating infrastructure to enable this. One version would be a Google form/sheet where people can add their respective skills.
Existing services include:
- Free Health Coaching for AGI safety researchers
- EA Mental Health Navigator
Other skills that might be valuable:
- Software support, e.g. python support, pair programming
- Personal assistance
- Productivity coaching
- Tutoring (e.g. math topics, coding, neuroscience, …)
- Visa support
- Tax support
- …
Do this now:
- (1-5 min brainstorming) What skills do you have? Could they be used to support researchers?
Find new AGI safety community building bottlenecks
Survey people about their needs and biggest bottlenecks: people coming out of SERI MATS and similar programs, working researchers, and so on.
General Tips
- Trust your abilities! You might feel like there are other people who would do a better job than you in organizing the project. But: If the project isn’t being done, it looks like whoever could do it is busy doing even more important things.
- Get feedback! If people don’t coordinate, they might try the same thing twice or more often. In addition, especially outreach-related projects can have a negative impact. Things you might want to do if you consider working on outreach-related projects:
- Ask on the AI Alignment Slack.
- Write me a message here, and I’ll connect you to relevant people.
- Cooperate! Launching projects aiming for global optima sometimes works differently than the intuitions we built in competitive settings.
- Make use of the existing infrastructure: Building background infrastructure is costly. Instead of going freelance/founding a new org, consider asking existing orgs whether it makes sense for them to incorporate your projects. Examples include AI Safety Support, Apart Research, and Alignment Ecosystem Development, the team behind aisafety.info and other projects.
- Make it easy for people to propose improvements and collaborations to your project: have an “About” page, a “Suggest” button, an Admonymous account, …
- Delegate! As much as possible, as little as necessary.
- If you develop more ideas than you can execute on, write up lists like this one. You could also ask junior researchers/community builders whether they’d be up for picking up your dropped projects.
- If you have the necessary funds, consider hiring a PA via https://pineappleoperations.org/ to do Ops work you don’t have the slack for.
- Test your hypotheses! The Lean Startup approach offers a valuable framework for this. Consider reading some of the relevant literature. The 80/20 version is grokking this article by Henrik Kniberg: Making sense of MVP (Minimum Viable Product).
- “Ideas have no value; only execution and people do!” Mind the explore-exploit tradeoff and actually execute on the best option you currently have available. Collating this list was fun, but if all of us just make lists all day...
Thanks to the following people for their contributions and comments: Jonathan Claybrough, Swante Scholz, Nico Hillbrand, Magdalena Wache, Jordan Pieters, Silvio Martin.
My argument here is very related to what jacquesthibs mentions.
Right now it seems like the biggest bottleneck for the AI alignment field is senior researchers. There are tons of junior people joining the field, and there are many opportunities for junior people to upskill and do some programs for a few months (e.g. SERI MATS, MLAB, REMIX, AGI Safety Fundamentals, etc.). The big problem (in my view) is that there are not enough organizations to actually absorb all the rather "junior" people at the moment. My sense is that 80K and most programs encourage people to upskill and then try to get a job at a big organization (like DeepMind, Anthropic, OpenAI, Conjecture, etc.). Realistically though, these organizations can only absorb a few people a year. In my experience, it's extremely competitive to get a job at these organizations even if you're a more experienced researcher (e.g. having done a couple of years of research, a Ph.D., or similar). This means that while there are many opportunities for junior people to get a foothold in the field, there are actually very few paths that allow you to have a full-time career in it (this also applies to more experienced researchers who don't get into a big lab). So the bottleneck in my view is not having enough organizations, which is a result of not having enough senior researchers. Founding an org is super hard: you want experienced people, with good research taste and some kind of research agenda. So if a field doesn't have many senior people, it will be hard to find people to found those additional orgs.
Now, one career path that many people are currently taking is being an "independent researcher" funded through a grant. I would claim that this is currently the default path for any researcher who does not get a full-time position and wants to stay in the field. I believe there are people out there who will do great as independent researchers and actually contribute to solving problems (e.g. Marius Hobbhahn and John Wentworth talk about being independent researchers). I am, however, quite skeptical about most people doing independent research without any kind of supervision. I am not saying one can't make progress, but it's super hard to do without a lot of research experience, a structured environment, good supervision, etc. I am especially skeptical about independent researchers becoming great senior researchers if they can't work with, and learn from, people who are already very experienced. Intuitively, I can't think of another field in which junior people work independently without clear structures and supervision, so I feel like my skepticism is warranted.
In terms of career capital, being an independent researcher is also very risky. If your research fails, i.e. you don't get a lot of good output (papers, code libraries, or whatever), "having done independent research for a couple of years" will not sound great in your CV. As a comparison, if you somehow do a very mediocre Ph.D. with no great insights, but you do manage to get the title, at least you have that in your CV (having a Ph.D. can be pretty useful in many cases).
So overall I believe that decision makers and AI field builders should put their main attention on how we can "groom" senior researchers in the field and create more full-time positions through organizations. I don't claim to have the answers on how to solve this, but it does seem like the greatest bottleneck for field building in my opinion. The field has managed to get a lot more people excited about AI safety and willing to change their careers (though still far from enough people). Right now, however, I think many people are stuck as junior researchers: they have done some programs but cannot get full-time positions. Note that I am aware that some programs, such as SERI MATS, do in some sense have the ambition of grooming senior researchers. In practice, however, it still feels like there is a big gap right now.
My background (in case this is useful): I did ML research throughout my Bachelor's and Master's. I've worked at FAR AI on AI alignment for the last 1.5 years, so I was lucky to get a full-time position. I don't consider myself a "senior" researcher as defined in this comment, but I definitely have a lot of research experience in the field. From my own experience, it's pretty hard to find a new full-time position in the field, especially if you are also geographically constrained.
I'm not sure people seriously thought about this before, your perspective seems rather novel.
I think existing labs themselves are the best vehicle for grooming new senior researchers. Anthropic, Redwood Research, ARC, and probably other labs were all founded by ex-staff of then-existing labs (though maybe one shouldn't credit OpenAI with "grooming" Paul Christiano to senior level, but anyway).
It's unclear what field-building projects could incentivise labs ...