LASR Labs Spring 2025 applications are open!
Edit: Applications for this round are now closed! If you are interested in future rounds, you can express interest here.

TL;DR: Apply by October 27th to join a 13-week research programme in AI safety. You'll write a technical paper in a team of 3-4 with supervision from an experienced researcher. The programme is full-time in London. Apply to be a participant here. We're also looking for a programme manager, and you can read more about the role here.

London AI Safety Research (LASR) Labs (previously run as AI Safety Hub Labs) is an AI safety research programme focussed on reducing the risk of loss of control to advanced AI. We focus on action-relevant questions tackling concrete threat models. LASR participants are matched into teams of 3-4 and will work with a supervisor to write an academic-style paper, with support and management from LASR.

We expect LASR Labs to be a good fit for applicants looking to join technical AI safety teams in the next year. Alumni from previous cohorts have gone on to work at UK AISI, OpenAI's dangerous capabilities evals team, Leap Labs, and def/acc. Many more have continued working with their supervisors, are doing independent research, or are doing AI safety research in their PhD programmes. LASR will also be a good fit for someone hoping to publish in academia: four out of five groups in 2023 had papers accepted to workshops (at NeurIPS) or conferences (ICLR), and all of the 2024 cohort's groups have submitted papers to workshops or conferences.

Participants will work full-time and in person from the London Initiative for Safe AI (LISA) co-working space, a hub for researchers from organisations such as Apollo Research, BlueDot Impact, ARENA, and the MATS extension programme. The office will host various guest sessions, talks, and networking events.

Programme details:

The programme will run from the 10th of February to the 9th of May (13 weeks). You will receive an £11,000 stipend to cover living expenses in London, and we will a