Ryan Kidd

Give me feedback! :)

Comments

Ryan Kidd

Why does the AI safety community need help founding projects?

  1. AI safety should scale
    1. Labs need external auditors for the AI control plan to work
    2. We should pursue many research bets in case superalignment/control fails
    3. Talent leaves MATS/ARENA and sometimes struggles to find meaningful work for mundane reasons, not for lack of talent or ideas
    4. Some emerging research agendas don’t have a home
    5. There are diminishing returns at scale for current AI safety teams; sometimes founding new projects is better than joining an existing team
    6. Alignment teams at scaling labs are bottlenecked by management capacity, so their talent cut-off is above the level required to do “useful AIS work”
  2. Research organizations (inc. nonprofits) are often more effective than independent researchers
    1. The “block funding model” is more efficient, as researchers can spend more time researching rather than seeking grants, managing, or performing other traditional PI duties that can be outsourced
    2. Open source/collective projects often need a central rallying point (e.g., EleutherAI, dev interp at Timaeus, selection theorems and cyborgism agendas seem too delocalized, etc.)
  3. There is (imminently) a market for for-profit AI safety companies, and value-aligned people should capture this free energy or else let worse alternatives flourish
    1. If labs or API users are made legally liable for their products, they will seek out external red-teaming/auditing consultants to prove they “made a reasonable attempt” to mitigate harms
    2. If government regulations require labs to seek external auditing, there will be a market for many types of companies
    3. “Ethical AI” companies might seek out interpretability or bias/fairness consultants
  4. New AI safety organizations struggle to get funding and co-founders despite having good ideas
    1. AIS researchers are usually not experienced entrepreneurs (e.g., they don’t know how to write grant proposals for EA funders, pitch decks for VCs, manage/hire new team members, etc.)
    2. There are not many competent start-up founders in the EA/AIS community, and when they do join, they often don’t know where they can help most impactfully
    3. Creating a centralized resource for entrepreneurial education/consulting and co-founder pairing would solve these problems
Ryan Kidd

I am a Manifund Regrantor. In addition to general grantmaking, I have requests for proposals in the following areas:

Ryan Kidd

How fast should the field of AI safety grow? An attempt at grounding this question in some predictions.

  • Ryan Greenblatt seems to think we can get a 30x speed-up in AI R&D using near-term, plausibly safe AI systems; assume every AIS researcher can be 30x’d by Alignment MVPs
  • Tom Davidson thinks we have <3 years from 20%-AI to 100%-AI; assume we have ~3 years to align AGI with the aid of Alignment MVPs
  • Assume the hardness of aligning TAI is equivalent to the Apollo Program (90k engineer/scientist FTEs x 9 years = 810k FTE-years); with a 30x speed-up and ~3 years of calendar time, that is 810k / (30 x 3) ≈ 9k, so we need ~9k more AIS technical researchers
  • The technical AIS field is currently ~500 people; at the current growth rate of 28% per year, it will take 12 years to grow to 9k people (Oct 2036)
  • Alternatively, if we bound by the Manhattan Project (25k FTEs x 5 years = 125k FTE-years), this will take 6.5 years (Jul 2031)
  • Metaculus predicts weak AGI in 2026 and strong AGI in 2030; clearly, more talent development is needed if we want to make the Nov 2030 AGI deadline!
  • If we want to make the 9k researchers goal by Nov 2030 AGI deadline, we need an annual growth rate of 65%, 2.3x the current growth rate of 28%
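A minimal Python sketch of this arithmetic (the helper-function names and the ~6-year horizon to the Nov 2030 deadline are my own assumptions; the 65% figure in the last bullet corresponds to a slightly shorter horizon):

```python
import math

def years_to_target(current: float, target: float, annual_growth: float) -> float:
    """Years for a field growing at `annual_growth` (e.g., 0.28) to reach `target` people."""
    return math.log(target / current) / math.log(1 + annual_growth)

def required_growth(current: float, target: float, years: float) -> float:
    """Annual growth rate needed to go from `current` to `target` people in `years` years."""
    return (target / current) ** (1 / years) - 1

# Apollo-scale bound: 90k FTEs x 9 years = 810k FTE-years of alignment work.
# With a 30x AI speed-up and ~3 years of calendar time, that implies
# 810_000 / 30 / 3 = 9_000 researchers.
apollo_fte_years = 90_000 * 9
researchers_needed = apollo_fte_years / 30 / 3   # 9_000

current_field = 500
print(years_to_target(current_field, researchers_needed, 0.28))  # ~11.7 years, i.e., roughly Oct 2036
print(required_growth(current_field, researchers_needed, 6.0))   # ~0.62; a slightly shorter horizon gives ~65%
```

Swapping in different FTE-year totals or speed-up assumptions changes the target headcount and timelines accordingly.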
Ryan Kidd

Crucial questions for AI safety field-builders:

  • What is the most important problem in your field? If you aren't working on it, why?
  • Where is everyone else dropping the ball and why?
  • Are you addressing a gap in the talent pipeline?
  • What resources are abundant? What resources are scarce? How can you turn abundant resources into scarce resources?
  • How will you know you are succeeding? How will you know you are failing?
  • What is the "user experience" of your program?
  • Who would you copy if you could copy anyone? How could you do this?
  • Are you better than the counterfactual?
  • Who are your clients? What do they want?
Ryan Kidd

And 115 prospective mentors applied for Summer 2025!

When onboarding advisors, we made it clear that we would not reveal their identities without their consent. I certainly don't want to require that our advisors make their identities public, as I believe this might compromise the intent of anonymous peer review: to obtain genuine assessment, without fear of bias or reprisals. As with most academic journals, the integrity of the process is dependent on the editors; in this case, the MATS team and our primary funders.

It's possible that a mere list of advisor names (without associated ratings) would be sufficient to ensure public trust in our process without compromising anonymous peer review. We plan to explore this option with our advisors in the future.

Not currently. We thought that we would elicit more honest ratings of prospective mentors from advisors, without fear of public pressure or backlash, if we kept the list of advisors internal to our team, similar to anonymous peer review.

Ryan Kidd

I'm tempted to set this up with Manifund money. Could be a weekend project.
