Ryan Kidd

Give me feedback! :)

Ryan Kidd

Why does the AI safety community need help founding projects?

  1. AI safety should scale
    1. Labs need external auditors for the AI control plan to work
    2. We should pursue many research bets in case superalignment/control fails
    3. Talent leaves MATS/ARENA and sometimes struggles to find meaningful work for mundane reasons, not for lack of talent or ideas
    4. Some emerging research agendas don’t have a home
    5. There are diminishing returns at scale for current AI safety teams; sometimes founding new projects is better than joining an existing team
    6. Scaling lab alignment teams are bottlenecked by management capacity, so their talent cut-off is above the level required to do “useful AIS work”
  2. Research organizations (inc. nonprofits) are often more effective than independent researchers
    1. The “block funding model” is more efficient, as researchers can spend more time researching rather than seeking grants, managing, or performing other traditional PI duties that can be outsourced
    2. Open source/collective projects often need a central rallying point (e.g., EleutherAI, dev interp at Timaeus, selection theorems and cyborgism agendas seem too delocalized, etc.)
  3. There is (imminently) a market for for-profit AI safety companies, and value-aligned people should capture this free energy or let worse alternatives flourish
    1. If labs or API users are made legally liable for their products, they will seek out external red-teaming/auditing consultants to prove they “made a reasonable attempt” to mitigate harms
    2. If government regulations require labs to seek external auditing, there will be a market for many types of companies
    3. “Ethical AI” companies might seek out interpretability or bias/fairness consultants
  4. New AI safety organizations struggle to get funding and co-founders despite having good ideas
    1. AIS researchers are usually not experienced entrepreneurs (e.g., they don’t know how to write grant proposals for EA funders, pitch decks for VCs, manage/hire new team members, etc.)
    2. There are not many competent start-up founders in the EA/AIS community, and those who do join often don’t know where their help would be most impactful
    3. Creating a centralized resource for entrepreneurial education/consulting and co-founder pairing would solve these problems
Ryan Kidd

I am a Manifund Regrantor. In addition to general grantmaking, I have requests for proposals in the following areas:

Can you make a Manifund.org grant application if you need funding?

We don't collect GRE/SAT scores, but we do have CodeSignal scores and (for the first time) a general aptitude test developed in collaboration with SparkWave. Many MATS applicants have maxed-out scores on the CodeSignal and general aptitude tests. We might share these stats later.

I don't agree with the following claims (which might misrepresent you):

  • "Skill levels" are domain agnostic.
  • Frontier oversight, control, evals, and non-"science of DL" interp research is strictly easier in practice than frontier agent foundations and "science of DL" interp research.
  • The main reason there is more funding/interest in the former category than the latter is skill issues, rather than worldview differences and clarity of scope.
  • MATS has mid researchers relative to other programs.
Ryan Kidd

I don't think it makes sense to compare Google intern salaries with AIS program stipends this way, as AIS programs are nonprofits (with the associated salary cut) and are generally trying to select against people motivated principally by money. It seems like good mechanism design to pay less than tech internships, even if the technical bar for entry is higher, given that value alignment is best selected for by looking for "costly signals" like salary sacrifice.

I don't think the correlation for competence among AIS programs is as you describe.

I think there are some confounders here:

  • PIBBSS had 12 fellows last cohort and MATS had 90 scholars. The mean/median age of MATS Summer 2024 scholars was 27; I'm not sure what this was for PIBBSS. The median age of the 12 oldest MATS scholars was 35 (mean 36). If we were selecting for age (which is silly/illegal, of course) and had a smaller program, I would bet that MATS would be older than PIBBSS on average. MATS also had 12 scholars with completed PhDs and 11 with PhDs in progress.
  • Several PIBBSS fellows/affiliates have done MATS (e.g., Ann-Kathrin Dombrowski, Magdalena Wache, Brady Pelkey, Martín Soto).
  • I suspect that your estimation of "how smart do these people seem" might be somewhat contingent on research taste. Most MATS research projects are in prosaic AI safety fields like oversight & control, evals, and non-"science of DL" interpretability, while most PIBBSS research has been in "biology/physics-inspired" interpretability, agent foundations, and (recently) novel policy approaches (all of which MATS has supported historically).

Also, MATS is generally trying to further a different research portfolio than PIBBSS, as I discuss here, and has had substantial success in accelerating hires to AI scaling lab safety teams and research nonprofits, helping scholars found impactful AI safety organizations, and (I suspect) accelerating AISI hires.

Are these PIBBSS fellows (MATS scholar analog) or PIBBSS affiliates (MATS mentor analog)?
