This is a short summary of my experience at the ML4Good UK bootcamp in September 2024. Two previous experience reports are linked at the bottom, but since the program is refined each time, I wanted to describe my own experience and add my two cents. This post may be useful if you are contemplating applying for the camp, or if you want to learn about AI safety field-building efforts. For context: I studied computer science, have been working as a software engineer for a few years, and have had a hobby interest in AI safety for about 2 years (e.g., I did the BlueDot Impact AI Safety Fundamentals course).
Overview of the program
The bootcamp is free (including room and board) and takes place over 10 days at CEEALAR in the UK. We had participants from all over Europe and from a variety of backgrounds, with most people about to finish or having just finished their degrees. Majors skewed towards computer science and maths-y degrees, but there were plenty of exceptions, and any background is welcome. Compared to previous iterations, the program density was somewhat reduced. Our days ran from 9:00 to 19:30 and usually looked something like this:
Time          Activity
9:00-11:00    Lectures, usually one technical + one conceptual
11:00-11:30   Break
11:30-13:00   Work on Jupyter notebooks in pairs or alone, applying the lecture content
13:00-14:00   Lunch + break
14:00-15:00   Lecture, technical or conceptual
15:00-16:30   Workshop applying the lecture contents, doing our own reading/research
16:30-17:00   Break
17:00-19:30   Discussions of selected AI safety topics, Q&A with TAs, events, feedback on the day
19:30-20:30   Dinner
Although attendance for each session was voluntary, nearly everyone chose to participate in all of them. We covered a wide range of topics. On the technical side: gradient descent/SGD, transformers, adversarial attacks, RL basics, RLHF, evals, and mechanistic interpretability. On the conceptual side: timelines and what they imply, threat models, risks from AI systems, proposed solutions for alignment, and AI governance. There was also a longer literature-review session on a topic of our choice, and the last 1.5 days were dedicated to a project we chose. My impression is that the bootcamp aims to expose you to a wide range of subfields in AI safety, so that you can continue researching or working on whatever you find interesting, rather than to make you an expert in any one of them. If you are already set on, say, mechanistic interpretability, you would spend 95% of your time at the bootcamp on other topics, so it might not be the best use of your time.
Additionally, ML4G places a key emphasis on what participants do after the program. We spent some time formulating our goals for the camp, had 1-on-1s to discuss career plans and get advice, and committed to concrete actions after the camp (like the writing group in which this post was created).
Things I liked
The camp is well-organized, and the TAs are amazing: knowledgeable, motivated, and always looking to improve their teaching and the way the camp is run.
The participants: the vibe was super fun, and having people from many different backgrounds led to interesting discussions.
The EA Hotel: while not exactly luxurious, it offers plenty of equipment and amenities, including a small gym you can do most exercises in, several instruments, a variety of games, interesting books, some workspaces, pretty good vegan food, and snacks.
Being exposed to, and more importantly practicing, some new ways of thinking. One of the earlier reports calls out Murphyjitsu and Hamming questions; I really enjoyed reasoning from first principles, explicitly naming cruxes in a discussion, "half-assing it with everything you've got", etc.
Things I would change
If I had a magic wand, I would not run this camp in Blackpool, but that is where the EA Hotel is.
Some technical sessions were likely too technical for parts of the group. It might be better to offer two parallel lectures: one for people with more background in a topic, and a more basic one.
Add a check-in mechanism for the prerequisites, to make it easier to actually complete them. Also, tweak the prerequisites a bit: move some RL material in, add more practical PyTorch/einops exercises, and cut some of the theory.
My personal experience
Coming into the camp, I wanted to connect with more people interested in AI safety, learn a few technical things in a group setting (e.g., gain a better understanding of transformers and RLHF), and find a suitable area of AI safety for me to work in. I'd say all of these were fulfilled: I learned about a few orgs I hadn't heard of before, got a good broad overview of the field, and received useful advice on my career plans.
One of my favorite aspects was the community within the cohort; we had lots of self-organized activities in our free time, such as swimming in the (rather cold) sea, playing music in the evening, or sitting together playing games. This also extended to the learning: people would pair up to work through the notebooks, explain concepts to each other, or help out participants who lacked the background for a certain topic.
Overall, ML4G is a great experience if you're anywhere from completely new to AI safety to "not exactly sure what I want to focus on". The camp is probably not right for you if you want to significantly deepen your technical mastery of one specific subfield. However, even if you already know which area you want to work in or learn more about, I recommend the camp for building a more well-rounded picture of AI safety and for checking that the work you plan to do is, by your own assessment, impactful.
Shoutouts
Many thanks to Lovkush A, Atlanta N and Mick Z for proofreading and many helpful comments.
Previous experience reports:
Report 1
Report 2