We, the Center on Long-Term Risk, are looking for Summer Research Fellows to explore strategies for reducing suffering in the long-term future (s-risks) and work on technical AI safety ideas related to that. For eight weeks, fellows will be part of our team while working on their own research project...
The Center on Long-Term Risk (CLR) is accepting applications for the next iteration of our introductory fellowship. The CLR Fundamentals Program will introduce people to CLR’s research on s-risk reduction. Apply by Monday, January 19th, 23:59 GMT. What topics does the CLR Fundamentals Program discuss? The Fundamentals Program is designed to...
This is a brief overview of the Center on Long-Term Risk (CLR)’s activities in 2025 and our plans for 2026. We are hoping to raise $400,000 to meet our target budget in 2026. About us CLR works on addressing the worst-case risks from the development and deployment of advanced AI...
Summary: CLR is hiring for our Summer Research Fellowship. Join us for eight weeks to work on s-risk motivated empirical AI safety research. Apply here by Tuesday 15th April 23:59 PT. We, the Center on Long-Term Risk, are looking for Summer Research Fellows to explore strategies for reducing suffering in...
Summary I present an extension to my model of the optimal timing of spending on AGI safety, used to calculate the value of information about AGI timelines via its effect on one’s spending schedule. I show, using my best guess of the model parameters, that for an AI risk funder uncertain between a ‘short timelines’...
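The core idea behind the value of information here can be illustrated with a toy calculation (this is a minimal sketch, not the post’s actual model; all scenario probabilities and utilities below are made-up assumptions). A funder uncertain between timeline scenarios compares the expected utility of committing to one spending schedule now against the expected utility of choosing the best schedule after learning which scenario holds; the difference is the value of that information.

```python
# Toy value-of-information (VoI) calculation for a funder choosing a
# spending schedule under AGI-timeline uncertainty. All numbers are
# illustrative assumptions, not estimates from the linked model.

scenarios = {"short": 0.4, "long": 0.6}  # assumed P(timeline scenario)

# Assumed utility of each spending schedule under each scenario:
# spending fast pays off if timelines are short, spending slowly if long.
utility = {
    "spend_fast": {"short": 10.0, "long": 4.0},
    "spend_slow": {"short": 3.0, "long": 9.0},
}

def expected_utility(schedule):
    """Expected utility of committing to one schedule before learning."""
    return sum(p * utility[schedule][s] for s, p in scenarios.items())

# Without information: commit to the single best schedule in expectation.
eu_without = max(expected_utility(sch) for sch in utility)

# With (perfect) timeline information: pick the best schedule per scenario.
eu_with = sum(p * max(utility[sch][s] for sch in utility)
              for s, p in scenarios.items())

voi = eu_with - eu_without  # always non-negative
print(f"EU without info: {eu_without:.2f}")  # 6.60
print(f"EU with info:    {eu_with:.2f}")     # 9.40
print(f"Value of information: {voi:.2f}")    # 2.80
```

With these made-up numbers the funder would commit to the slow schedule absent information, so learning the timeline is worth 2.80 utility units; the post’s contribution is doing this comparison with a full spending-schedule model rather than two discrete options.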
Summary When should funders who want to increase the probability of AGI going well spend their money? We have created a tool to calculate the optimal spending schedule and tentatively conclude that funders collectively should be spending at least 5% of their capital each year on AI risk interventions and in...