All of Ryan Kidd's Comments + Replies

I'm open to this argument, but I'm not sure it's true under the Trump administration.

2Orpheus16
My understanding is that AGI policy is pretty wide open under Trump. I don't think he and most of his close advisors have entrenched views on the topic. If AGI is developed in this Admin (or we approach it in this Admin), I suspect there is a lot of EV on the table for folks who are able to explain core concepts/threat models/arguments to Trump administration officials. There are some promising signs of this so far. Publicly, Vance has engaged with AI2027. Non-publicly, I think there is a lot more engagement/curiosity than many readers might expect. This isn't to say "everything is great and the USG is super on track to figure out AGI policy" but it's more to say "I think people should keep an open mind– even people who disagree with the Trump Admin on mainstream topics should remember that AGI policy is a weird/niche/new topic where lots of people do not have strong/entrenched/static positions (and even those who do have a position may change their mind as new events unfold.)"
Ryan Kidd1710

Technical AI alignment/control is still impactful; don't go all-in on AI gov!

  • Liability incentivizes safeguards, even absent regulation;
  • Cheaper, more effective safeguards make it easier for labs to meet safety standards;
  • Concrete, achievable safeguards give regulation teeth.
0Katalina Hernandez
This is exactly the message we need more people to hear. What’s missing from most conversations is this: Frontier liability will cause massive legal bottlenecks soon, and regulations are nowhere near ready (not even in the EU with the AI Act). Law firms and courts will need technical safety experts, not just to inform regulation, but to provide expert opinions when opaque model behaviors cause harm downstream, often in ways that weren’t detectable during testing. The legal world will be forced to allocate responsibility in the face of emergent, stochastic failure modes. Without technical guidance, there are no safeguards to enforce, and no one to translate model failures into legal reasoning.
9Orpheus16
There are definitely still benefits to doing alignment research, but this only justifies the idea that doing alignment research is better than doing nothing. IMO the thing that matters (for an individual making decisions about what to do with their career) is something more like "on the margin, would it be better to have one additional person do AI governance or alignment/control?" I happen to think that given the current allocation of talent, on-the-margin it's generally better for people to choose AI policy. (Particularly efforts to contribute technical expertise or technical understanding/awareness to governments, think-tanks interfacing with governments, etc.) There is a lot of demand in the policy community for these skills/perspectives and few people who can provide them. In contrast, technical expertise is much more common at the major AI companies (though perhaps some specific technical skills or perspectives on alignment are neglected.) In other words, my stance is something like "by default, anon technical person would have more expected impact in AI policy unless they seem like an unusually good fit for alignment or an unusually bad fit for policy."
1Felix C.
What are your thoughts on the relative value of AI governance/advocacy vs. technical research? It seems to me that many of the technical problems are essentially downstream of politics; that intent alignment could be solved, if only our race dynamics were mitigated, regulation was used to slow capabilities research, and if it was given funding/strategic priority. 

Also, there's a good chance AI gov won't work, and labs will just have a very limited safety budget to implement their best-guess mitigations. Or maybe AI gov does work and we get a large budget; we still need to actually solve alignment.

Ryan Kidd100

Not sure this is interesting to anyone, but I compiled Zillow's data on 2021-2025 Berkeley average rent prices recently, to help with rent negotiation. I did not adjust for inflation; these are the raw averages at each time.

I definitely think that people should not look at my estimates and say "here is a good 95%-confidence upper bound on the number of employees in the AI safety ecosystem." I think people should look at my estimates and say "here is a good 95%-confidence lower bound on the number of employees in the AI safety ecosystem," because you can just add up the names. I.e., even if there were 10x the number of employees I estimated, I'm at least 95% confident that there are more than the estimate obtained by just counting names (obviously excluding the 10% fudge factor).

So, conduct a sensitivity analysis on the definite integral with respect to choices of integration bounds? I'm not sure this level of analysis is merited given the incomplete data and unreliable estimation methodology for the number of independent researchers. Like, I'm not even confident that the underlying distribution is a power law (instead of, say, a composite of power law and lognormal distributions, or a truncated power law), and the value of p(1) seems very sensitive to data in the vicinity, so I wouldn't want to rely on this estimate exc... (read more)

2Garrett Baker
I think again we are on the same page & this sounds reasonable, I just want to argue that "lower bound" and "upper bound" are less-than-informative descriptions of the uncertainty in the estimates.

By "upper bound", I meant "upper bound  on the definite integral ". I.e., for the kind of hacky thing I'm doing here, the integral is very sensitive to the choice of bounds . For example, the integral does not converge for . I think all my data here should be treated as incomplete and all my calculations crude estimates at best.

I edited the original comment to say "∞ might be a bad upper bound" for clarity.
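
For concreteness, here is a minimal sketch of the kind of bound-sensitivity check being discussed; the coefficient 399 and exponent 1.29 come from the fit quoted elsewhere in this thread, and the particular bound choices are purely illustrative assumptions.

```python
# Quick bound-sensitivity check on the fitted power law p(x) ~ 399 * x**(-1.29).
# Coefficient and exponent are the values quoted elsewhere in this thread; the
# bound choices below are illustrative.

def power_law_integral(a, b, coeff=399.0, alpha=1.29):
    """Closed-form integral of coeff * x**(-alpha) from a to b, for alpha > 1."""
    def antiderivative(x):
        return -coeff / (alpha - 1.0) * x ** (1.0 - alpha)
    upper = 0.0 if b == float("inf") else antiderivative(b)
    return upper - antiderivative(a)

for a, b in [(1, float("inf")), (1, 100), (1, 40), (2, float("inf"))]:
    print(f"a={a}, b={b}: {power_law_integral(a, b):,.0f}")
# a=1, b=inf -> ~1,376 (the ~1400 figure at 2 s.f.)
# a=1, b=100 -> ~1,014; a=1, b=40 -> ~904; a=2, b=inf -> ~1,125
# a=0 diverges, since x**(-1.29) is not integrable at 0.
```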

2Garrett Baker
Yeah, I think we're in agreement, I'm just saying the phrase "upper bound" is not useful compared to eg providing various estimates for various bounds & making a table or graph, and a derivative of the results wrt the parameter estimate you inferred.
Ryan Kidd*40

It's also worth noting that almost all of these roles are management, ML research, or software engineering; there are very few operations, communications, non-ML research, etc. roles listed, implying that these roles are paid significantly less.

Ryan Kidd*90

Apparently the headcount for US corporations follows a power-law distribution, apart from mid-sized corporations, which fit a lognormal distribution better. I fit a power law distribution to the data (after truncating all datapoints with over 40 employees, which created a worse fit), which gave p(x) ∼ 399x^(-1.29). This seems to imply that there are ~400 independent AI safety researchers (though note that p(x) is a probability density function and this estimate might be way off); Claude estimates 400-600 for comparison. Integrating this distributio... (read more)

2Garrett Baker
I've seen you use "upper bound" and "underestimate" twice in this thread, and I want to lightly push back against this phrasing. I think the likely level of variance in these estimates is much too high for them to be any sort of bounds or to informatively be described as "under" any estimate. That is, although your analysis of the data you have may imply, given perfect data, an underestimate or upper bound, I think your data is too imperfect for those to be useful descriptions with respect to reality. Garbage in, garbage out, as they say. The numbers are still useful, and if you have a model of the orgs that you missed, the magnitude of your over-counting, or a sensitivity analysis of your process, these would be interesting to hear about. But these will probably take the form of approximate probability distributions more than binary bigger-or-smaller-than-truth shots from the hip.

I decided to exclude OpenAI's nonprofit salaries as I didn't think they counted as an "AI safety nonprofit" and their highest paid current employees are definitely employed by the LLC. I decided to include Open Philanthropy's nonprofit employees, despite the fact that their most highly compensated employees are likely those under the Open Philanthropy LLC.

Ryan Kidd*473

As part of MATS' compensation reevaluation project, I scraped the publicly declared employee compensations from ProPublica's Nonprofit Explorer for many AI safety and EA organizations (data here) in 2019-2023. US nonprofits are required to disclose compensation information for certain highly paid employees and contractors on their annual Form 990 tax return, which becomes publicly available. This includes compensation for officers, directors, trustees, key employees, and highest compensated employees earning over $100k annually. Therefore, my data does not... (read more)
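
For readers who want to reproduce this kind of scrape, here is a rough sketch against ProPublica's public Nonprofit Explorer API. The endpoint paths follow ProPublica's published v2 API as I understand it, the example query and field names are assumptions worth verifying, and the per-employee compensation tables (Form 990 Part VII) generally have to be read from the linked filing documents rather than the summary JSON.

```python
# Rough sketch: look up nonprofits in ProPublica's Nonprofit Explorer and list
# their available Form 990 filings. Endpoint paths follow ProPublica's public
# v2 API; field names ("name", "ein", "tax_prd_yr", "totrevenue") are a best
# guess and should be checked against live responses.
import requests

BASE = "https://projects.propublica.org/nonprofits/api/v2"

def search_orgs(query):
    """Search Nonprofit Explorer for organizations matching the query string."""
    resp = requests.get(f"{BASE}/search.json", params={"q": query}, timeout=30)
    resp.raise_for_status()
    return resp.json().get("organizations", [])

def filings_for(ein):
    """Fetch the filing index (with links to the underlying 990 documents) for an EIN."""
    resp = requests.get(f"{BASE}/organizations/{ein}.json", timeout=30)
    resp.raise_for_status()
    return resp.json().get("filings_with_data", [])

for org in search_orgs("Machine Intelligence Research Institute"):
    print(org.get("name"), org.get("ein"))
    for filing in filings_for(org.get("ein"))[:5]:
        # Officer/key-employee compensation (Part VII) lives in the filing
        # documents themselves, not in these summary fields.
        print("  year:", filing.get("tax_prd_yr"), "total revenue:", filing.get("totrevenue"))
```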

4Ryan Kidd
It's also worth noting that almost all of these roles are management, ML research, or software engineering; there are very few operations, communications, non-ML research, etc. roles listed, implying that these roles are paid significantly less.
7Ryan Kidd
I decided to exclude OpenAI's nonprofit salaries as I didn't think they counted as an "AI safety nonprofit" and their highest paid current employees are definitely employed by the LLC. I decided to include Open Philanthropy's nonprofit employees, despite the fact that their most highly compensated employees are likely those under the Open Philanthropy LLC.

Thanks! I wasn't sure whether to include Simplex, or the entire Obelisk team at Astera (which Simplex is now part of), or just exclude these non-scaling lab hybrid orgs from the count (Astera does neuroscience too).

I counted total employees for most orgs. In the spreadsheet I linked, I didn't include an estimate for total GDM headcount, just that of the AI Safety and Alignment Team.

Most of the staff at AE Studio are not working on alignment, so I don't think it counts.

2Nathan Helm-Burger
Yeah, I think it's like maybe 5 people's worth of contribution from AE studio. Which is something, but not at all comparable to the whole company being full-time on alignment.

I was very inclusive. I looked at a range of org lists, including those maintained by 80,000 Hours and AISafety.com.

Ryan Kidd*74

Assuming that I missed 10% of orgs, this gives a rough estimate for the total number of FTEs working on AI safety or adjacent work at ~1000, not including students or faculty members. This is likely an underestimate, as there are a lot of AI safety-adjacent orgs in areas like AI security and robustness.

Ryan Kidd*223

I did a quick inventory on the employee headcount at AI safety and safety-adjacent organizations. The median AI safety org has 8 employees. I didn't include UK AISI, US AISI, CAIS, and the safety teams at Anthropic, GDM, OpenAI, and probably more, as I couldn't get accurate headcount estimates. I also didn't include "research affiliates" or university students in the headcounts for academic labs. Data here. Let me know if I missed any orgs!

8Nathan Helm-Burger
A couple thousand brave souls standing between humanity and utter destruction. Kinda epic from a storytelling narrative where the plucky underdog good guys win in the end.... but kinda depressing from a realistic forecast perspective. Can humanity really not muster more researchers to throw at this problem? What an absurd coordination failure.
9Ryan Kidd
Apparently the headcount for US corporations follows a power-law distribution, apart from mid-sized corporations, which fit a lognormal distribution better. I fit a power law distribution to the data (after truncating all datapoints with over 40 employees, which created a worse fit), which gave p(x) ∼ 399x^(-1.29). This seems to imply that there are ~400 independent AI safety researchers (though note that p(x) is a probability density function and this estimate might be way off); Claude estimates 400-600 for comparison. Integrating this distribution over x ∈ [1, ∞) gives ~1400 (2 s.f.) total employees working on AI safety or safety-adjacent work (∞ might be a bad upper bound, as the largest orgs have <100 employees).
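
For readers curious how a fit of this form can be produced, here is a toy sketch using a log–log least-squares fit on an org-headcount frequency table. The headcount list is invented for illustration and is not the spreadsheet data; a maximum-likelihood fit (e.g., via the powerlaw package) would be the more principled choice.

```python
# Toy illustration of fitting p(x) ~ C * x**(-alpha) to org headcounts via a
# least-squares line in log-log space on the frequency table. The headcounts
# below are made up; the thread's actual fit was p(x) ~ 399 * x**(-1.29).
from collections import Counter
import numpy as np

headcounts = [1] * 60 + [2] * 25 + [3] * 15 + [5] * 8 + [8] * 6 + [10] * 5 + [15] * 3 + [25] * 2 + [40]

freq = Counter(headcounts)
x = np.array(sorted(freq))
y = np.array([freq[v] for v in x], dtype=float)

slope, intercept = np.polyfit(np.log(x), np.log(y), 1)
C, alpha = float(np.exp(intercept)), float(-slope)
print(f"fit: p(x) ~ {C:.0f} * x^(-{alpha:.2f})")

# p(1) ~ C is the analogue of the ~400 independent-researcher figure, and for
# alpha > 1 the integral of C * x**(-alpha) over [1, inf) is C / (alpha - 1),
# the analogue of the thread's ~1400 figure.
print("p(1) ~", round(C))
print("integral over [1, inf) ~", round(C / (alpha - 1)))
```
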
5Rauno Arike
Leap Labs, Conjecture, Simplex, Aligned AI, and the AI Futures Project seem to be missing from the current list.
4Gunnar_Zarncke
I presume you counted total employees and not only AI safety researchers, which would be the more interesting number, esp. at GDM.
3Mike Vaiana
You can add AE Studio to the list.
4Alexander Gietelink Oldenziel
How did you determine the list of AI safety and adjacent organizations?
7Ryan Kidd
Assuming that I missed 10% of orgs, this gives a rough estimate for the total number of FTEs working on AI safety or adjacent work at ~1000, not including students or faculty members. This is likely an underestimate, as there are a lot of AI safety-adjacent orgs in areas like AI security and robustness.

I expect mech interp to be particularly easy to automate at scale. If mech interp has capabilities externalities (e.g., uncovering useful learned algorithms or "retargeting the search"), this could facilitate rapid performance improvements.

Ryan Kidd134

It seems plausible to me that if AGI progress becomes strongly bottlenecked on architecture design or hyperparameter search, a more "genetic algorithm"-like approach will follow. Automated AI researchers could run and evaluate many small experiments in parallel, covering a vast hyperparameter space. If small experiments are generally predictive of larger experiments (and they seem to be, a la scaling laws) and model inference costs are cheap enough, this parallelized approach might be 1) computationally affordable and 2) successful at overcoming the architecture bottleneck.
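
To make the shape of this concrete, here is a toy sketch of the parallel small-experiment loop: mutate candidate hyperparameter configs, score each with a cheap small-scale proxy run, and keep the best. The search space, mutation scheme, and proxy_score stand-in are all illustrative assumptions, not a claim about how labs actually run this.

```python
# Toy sketch of a "genetic algorithm"-style hyperparameter search: many cheap
# experiments evaluated in parallel, with selection on a small-scale proxy that
# is assumed (a la scaling laws) to predict large-scale performance.
import random
from concurrent.futures import ProcessPoolExecutor

SEARCH_SPACE = {
    "learning_rate": (1e-5, 1e-2),
    "width_mult": (0.5, 4.0),
    "depth_mult": (0.5, 4.0),
}

def sample_config():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in SEARCH_SPACE.items()}

def mutate(config, scale=0.2):
    """Jitter each hyperparameter multiplicatively, clipped to the search space."""
    return {
        k: min(hi, max(lo, config[k] * random.uniform(1 - scale, 1 + scale)))
        for k, (lo, hi) in SEARCH_SPACE.items()
    }

def proxy_score(config):
    """Stand-in for a small, cheap training run; higher is better."""
    return -((config["learning_rate"] - 3e-3) ** 2) - (config["width_mult"] - 2.0) ** 2

def next_generation(parents, n_children=64, workers=8):
    children = [mutate(random.choice(parents)) for _ in range(n_children)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        scores = list(pool.map(proxy_score, children))
    ranked = sorted(zip(scores, children), key=lambda pair: pair[0], reverse=True)
    return [cfg for _, cfg in ranked[: len(parents)]]

if __name__ == "__main__":
    population = [sample_config() for _ in range(8)]
    for _ in range(3):  # a few generations of cheap parallel experiments
        population = next_generation(population)
    print("best config found:", population[0])
```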

6Garrett Baker
My vague understanding is this is kinda what capabilities progress ends up looking like in big labs. Lots of very small experiments playing around with various parameters people with a track-record of good heuristics in this space feel should be played around with. Then a slow scale up to bigger and bigger models and then you combine everything together & "push to main" on the next big model run. I'd also guess that the bottleneck isn't so much on the number of people playing around with the parameters, but much more on good heuristics regarding which parameters to play around with.
5Ryan Kidd
I expect mech interp to be particularly easy to automate at scale. If mech interp has capabilities externalities (e.g., uncovering useful learned algorithms or "retargeting the search"), this could facilitate rapid performance improvements.

Apr 18, 11:59 pm PT :)

Hi! Yes, MATS is always open to newbies, though our bar has risen significantly over the last few years. AISF is great, but I would also recommend completing ARENA or ML4Good courses if you are pursuing a technical project, or completing an AI gov research project.

1matchaa
Thanks Ryan! I will look into suggested courses. 

It seems plausible to me that if AGI progress becomes strongly bottlenecked on architecture design or hyperparameter search, a more "genetic algorithm"-like approach will follow. Automated AI researchers could run and evaluate many small experiments in parallel, covering a vast hyperparameter space.

LISA's current leadership team consists of an Operations Director (Mike Brozowski) and a Research Director (James Fox). LISA is hiring for a new CEO role; there has never been a LISA CEO.

Ryan Kidd*5-12

How fast should the field of AI safety grow? An attempt at grounding this question in some predictions.

  • Ryan Greenblatt seems to think we can get a 30x speed-up in AI R&D using near-term, plausibly safe AI systems; assume every AIS researcher can be 30x’d by Alignment MVPs
  • Tom Davidson thinks we have <3 years from 20%-AI to 100%-AI; assume we have ~3 years to align AGI with the aid of Alignment MVPs
  • Assume the hardness of aligning TAI is equivalent to the Apollo Program (90k engineer/scientist FTEs x 9 years = 810k FTE-years); therefore, we need ~
... (read more)
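
Under the assumptions in the bullets above, the implied arithmetic is roughly as follows; this is a reconstruction under those stated assumptions, not a quote of the truncated comment's own figure.

```python
# Back-of-the-envelope under the bullets' stated assumptions (30x speed-up,
# ~3 years from 20%-AI to 100%-AI, Apollo-scale difficulty). A reconstruction
# of the implied arithmetic, not the original comment's figure.
apollo_fte_years = 90_000 * 9          # 810,000 FTE-years of equivalent effort
years_available = 3                    # window to align AGI with Alignment MVPs
speedup_per_researcher = 30            # Alignment-MVP multiplier
researchers_needed = apollo_fte_years / (years_available * speedup_per_researcher)
print(f"{researchers_needed:,.0f} boosted FTE researchers")  # -> 9,000
```
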
Buck128

I appreciate the spirit of this type of calculation, but think that it's a bit too wacky to be that informative. I think that it's a bit of a stretch to string these numbers together. E.g. I think Ryan and Tom's predictions are inconsistent, and I think that it's weird to identify 100%-AI as the point where we need to have "solved the alignment problem", and I think that it's weird to use the Apollo/Manhattan program as an estimate of work required. (I also don't know what your Manhattan project numbers mean: I thought there were more like 2.5k scientists/engineers at Los Alamos, and most of the people elsewhere were purifying nuclear material)

5Garrett Baker
There's the standard software engineer response of "You cannot make a baby in 1 month with 9 pregnant women". If you don't have a term in this calculation for the amount of research hours that must be done serially vs the amount of research hours that can be done in parallel, then it will always seem like we have too few people, and should invest vastly more in growth growth growth! If you find that actually your constraint is serial research output, then you still may conclude you need a lot of people, but you will sacrifice a reasonable amount of growth speed for attracting better serial researchers.  (Possibly this shakes out to mathematicians and physicists, but I don't want to bring that conversation into here)
Ryan Kidd*150

Crucial questions for AI safety field-builders:

  • What is the most important problem in your field? If you aren't working on it, why?
  • Where is everyone else dropping the ball and why?
  • Are you addressing a gap in the talent pipeline?
  • What resources are abundant? What resources are scarce? How can you turn abundant resources into scarce resources?
  • How will you know you are succeeding? How will you know you are failing?
  • What is the "user experience" of your program?
  • Who would you copy if you could copy anyone? How could you do this?
  • Are you better than the counterfactual?
  • Who are your clients? What do they want?
1Archimedes
Did you mean to link to my specific comment for the first link?
Ryan Kidd*30

And 115 prospective mentors applied for Summer 2025!

1Sheikh Abdur Raheem Ali
I count 55 mentors at https://www.matsprogram.org/mentors, implying a mentor acceptance rate of 47%.

When onboarding advisors, we made it clear that we would not reveal their identities without their consent. I certainly don't want to require that our advisors make their identities public, as I believe this might compromise the intent of anonymous peer review: to obtain genuine assessment, without fear of bias or reprisals. As with most academic journals, the integrity of the process is dependent on the editors; in this case, the MATS team and our primary funders.

It's possible that a mere list of advisor names (without associated ratings) would be sufficient to ensure public trust in our process without compromising the peer review process. We plan to explore this option with our advisors in future.

habryka153

Yeah, it's definitely a kind of messy tradeoff. My sense is just that the aggregate statistics you provided didn't have that many bits of evidence that would allow me to independently audit a trust chain.

A thing that I do think might be more feasible is to make it opt-in for advisors to be public. E.g. SFF only had a minority of recommenders be public about their identity, but I do still think it helps a good amount to have some names.

(Also, just for historical consistency: Most peer review in the history of science was not anonymous. Anonymous peer review... (read more)

Not currently. We thought that we would elicit more honest ratings of prospective mentors from advisors, without fear of public pressure or backlash, if we kept the list of advisors internal to our team, similar to anonymous peer review.

4Austin Chen
Makes sense, thanks. FWIW, I really appreciated that y'all posted this writeup about mentor selection -- choosing folks for impactful, visible, prestigious positions is a whole can of worms, and I'm glad to have more public posts explaining your process & reasoning.
Ryan Kidd279

I'm tempted to set this up with Manifund money. Could be a weekend project.

2Marius Hobbhahn
Go for it. I have some names in mind for potential experts. DM if you're interested. 
Ryan KiddΩ6120

How would you operationalize a contest for short-timeline plans?

Marius HobbhahnΩ92111

Something like the OpenPhil AI worldview contest: https://www.openphilanthropy.org/research/announcing-the-winners-of-the-2023-open-philanthropy-ai-worldviews-contest/
Or the ARC ELK prize: https://www.alignment.org/blog/prizes-for-elk-proposals/

In general, I wouldn't make it too complicated and accept some arbitrariness. There is a predetermined panel of e.g. 5 experts and e.g. 3 categories (feasibility, effectiveness, everything else). All submissions first get scored by 2 experts with a shallow judgment (e.g., 5-10 minutes). Maybe there is some "saving" ... (read more)

But who is "MIRI"? Most of the old guard have left. Do you mean Eliezer and Nate? Or a consensus vote of the entire staff (now mostly tech gov researchers and comms staff)?

Ryan Kidd374

On my understanding, EA student clubs at colleges/universities have been the main “top of funnel” for pulling people into alignment work during the past few years. The mix of people going into those clubs is disproportionately STEM-focused undergrads, and looks pretty typical for STEM-focused undergrads. We’re talking about pretty standard STEM majors from pretty standard schools, neither the very high end nor the very low end of the skill spectrum.

At least from the MATS perspective, this seems quite wrong. Only ~20% of MATS scholars in the last ~4 program... (read more)

Ryan Kidd1711

You could consider doing MATS as "I don't know what to do, so I'll try my hand at something a decent number of apparent experts consider worthwhile and meanwhile bootstrap a deep understanding of this subfield and a shallow understanding of a dozen other subfields pursued by my peers." This seems like a common MATS experience and I think this is a good thing.

Ryan Kidd110

Some caveats:

  • A crucial part of the "hodge-podge alignment feedback loop" is "propose new candidate solutions, often grounded in theoretical models." I don't want to entirely focus on empirically fleshing out existing research directions to the exclusion of proposing new candidate directions. However, it seems that, often, new on-paradigm research directions emerge in the process of iterating on old ones!
  • "Playing theoretical builder-breaker" is an important skill and I think this should be taught more widely. "Iterators," as I conceive of them, are capable
... (read more)
Ryan Kidd*4613

Alice is excited about the eliciting latent knowledge (ELK) doc, and spends a few months working on it. Bob is excited about debate, and spends a few months working on it. At the end of those few months, Alice has a much better understanding of how and why ELK is hard, has correctly realized that she has no traction on it at all, and pivots to working on technical governance. Bob, meanwhile, has some toy but tangible outputs, and feels like he's making progress.

 

I don't want to respond to the examples rather than the underlying argument, but it seems ... (read more)

4Noosphere89
Yeah, the worst-case ELK problem could well have no solution, but in practice alignment is solvable either by other methods or by having an ELK solution that does work on a large class of AIs like neural nets, so Alice is plausibly making a big mistake. A crux here is that I don't believe we will ever get no-go theorems, or even arguments to the standard level of rigor in physics, because I believe alignment has pretty lax constraints, so a lot of solutions can appear. The relevant sentence below:
Ryan Kidd110

Some caveats:

  • A crucial part of the "hodge-podge alignment feedback loop" is "propose new candidate solutions, often grounded in theoretical models." I don't want to entirely focus on empirically fleshing out existing research directions to the exclusion of proposing new candidate directions. However, it seems that, often, new on-paradigm research directions emerge in the process of iterating on old ones!
  • "Playing theoretical builder-breaker" is an important skill and I think this should be taught more widely. "Iterators," as I conceive of them, are capable
... (read more)
Ryan Kidd268

Obviously I disagree with Tsvi regarding the value of MATS to the proto-alignment researcher; I think being exposed to high quality mentorship and peer-sourced red-teaming of your research ideas is incredibly valuable for emerging researchers. However, he makes a good point: ideally, scholars shouldn't feel pushed to write highly competitive LTFF grant applications so soon into their research careers; there should be longer-term unconditional funding opportunities. I would love to unlock this so that a subset of scholars can explore diverse research directions for 1-2 years without 6-month grant timelines looming over them. Currently cooking something in this space.


@yanni kyriacos when will you post about TARA and Sydney AI Safety Hub on LW? ;)

1yanni kyriacos
SASH isn't official (we're waiting on funding). Here is TARA :) https://www.lesswrong.com/posts/tyGxgvvBbrvcrHPJH/apply-to-be-a-ta-for-tara
1Towards_Keeperhood
(haha cool. perhaps you could even PM Abram if he doesn't PM you. I think it would be pretty useful to speed up his agenda through this.)

Can you make a Manifund.org grant application if you need funding?

We don't collect GRE/SAT scores, but we do have CodeSignal scores and (for the first time) a general aptitude test developed in collaboration with SparkWave. Many MATS applicants have maxed out scores for the CodeSignal and general aptitude tests. We might share these stats later.

1Daniel Tan
FWIW from what I remember, I would be surprised if most people doing MATS 7.0 did not max out the aptitude test. Also, the aptitude test seems more like an SAT than anything measuring important procedural knowledge for AI safety. 

I don't agree with the following claims (which might misrepresent you):

  • "Skill levels" are domain agnostic.
  • Frontier oversight, control, evals, and non-"science of DL" interp research are strictly easier in practice than frontier agent foundations and "science of DL" interp research.
  • The main reason there is more funding/interest in the former category than the latter is due to skill issues, rather than worldview differences and clarity of scope.
  • MATS has mid researchers relative to other programs.
9johnswentworth
Y'know, you probably have the data to do a quick-and-dirty check here. Take a look at the GRE/SAT scores on the applications (both for applicant pool and for accepted scholars). If most scholars have much-less-than-perfect scores, then you're probably not hiring the top tier (standardized tests have a notoriously low ceiling). And assuming most scholars aren't hitting the test ceiling, you can also test the hypothesis about different domains by looking at the test score distributions for scholars in the different areas.
Ryan Kidd2-1

I don't think it makes sense to compare Google intern salaries with AIS program stipends this way, as AIS programs are nonprofits (with the associated salary cut) and are generally trying to select against people motivated principally by money. It seems like good mechanism design to pay less than tech internships, even if the technical bar is higher, given that value alignment is best selected for by looking for "costly signals" like salary sacrifice.

I don't think the correlation for competence among AIS programs is as you describe.

I think there some confounders here:

  • PIBBSS had 12 fellows last cohort and MATS had 90 scholars. The mean/median age of MATS Summer 2024 scholars was 27; I'm not sure what this was for PIBBSS. The median age of the 12 oldest MATS scholars was 35 (mean 36). If we were selecting for age (which is silly/illegal, of course) and had a smaller program, I would bet that MATS would be older than PIBBSS on average. MATS also had 12 scholars with completed PhDs and 11 in-progress.
  • Several PIBBSS fellows/affiliates have done MATS (e.g., Ann-Kathrin Dombrowski, Magdalena Wache,
... (read more)
3johnswentworth
I think this is less a matter of my particular taste, and more a matter of selection pressures producing genuinely different skill levels between different research areas. People notoriously focus on oversight/control/evals/specific interp over foundations/generalizable interp because the former are easier. So when one talks to people in those different areas, there's a very noticeable tendency for the foundations/generalizable interp people to be noticeably smarter, more experienced, and/or more competent. And in the other direction, stronger people tend to be more often drawn to the more challenging problems of foundations or generalizable interp. So possibly a MATS apologist reply would be: yeah, the MATS portfolio is more loaded on the sort of work that's accessible to relatively-mid researchers, so naturally MATS ends up with more relatively-mid researchers. Which is not necessarily a bad thing.

Are these PIBBSS fellows (MATS scholar analog) or PIBBSS affiliates (MATS mentor analog)?

2johnswentworth
Fellows.

Updated figure with LASR Labs and Pivotal Research Fellowship at current exchange rate of 1 GBP = 1.292 USD.

That seems like a reasonable stipend for LASR. I don't think they cover housing, however.

That said, maybe you are conceptualizing an "efficient market" that principally values impact, in which case I would expect the governance/policy programs to have higher stipends. However, I'll note that 87% of MATS alumni are interested in working at an AISI and several are currently working at UK AISI, so it seems that MATS is doing a good job of recruiting technical governance talent that is happy to work for government wages.

1johnswentworth
No, I meant that the correlation between pay and how-competent-the-typical-participant-seems-to-me is, if anything, negative. Like, the hiring bar for Google interns is lower than any of the technical programs, and PIBBSS seems-to-me to have the most competent participants overall (though I'm not familiar with some of the programs).

Note that governance/policy jobs pay less than ML research/engineering jobs, so I expect GovAI, IAPS, and ERA, which are more governance focused, to have a lower stipend. Also, MATS is deliberately trying to attract top CS PhD students, so our stipend should be higher than theirs, although lower than Google internships to select for value alignment. I suspect that PIBBSS' stipend is an outlier and artificially low due to low funding. Given that PIBBSS has a mixture of ML and policy projects, and IMO is generally pursuing higher variance research than MATS, I suspect their optimal stipend would be lower than MATS', but higher than a Stanford PhD's; perhaps around IAPS' rate.

2Ryan Kidd
That said, maybe you are conceptualizing an "efficient market" that principally values impact, in which case I would expect the governance/policy programs to have higher stipends. However, I'll note that 87% of MATS alumni are interested in working at an AISI and several are currently working at UK AISI, so it seems that MATS is doing a good job of recruiting technical governance talent that is happy to work for government wages.