Matt Putz

Regarding our career development and transition funding (CDTF) program: 

  • The default expectation for CDTF grants is that they’re one-off grants. My impression is that this is currently clear to most CDTF grantees (e.g., I think most of them don't reapply after the end of their grant period, and the program title explicitly says that it’s “transition funding”).
    • (When funding independent research through this program, we sometimes explicitly clarify that we're unlikely to renew by default).
  • Most of the CDTF grants we make have grant periods that are shorter than a year (with the main exception that comes to mind being PhD programs). I think that’s reasonable (esp. given that the grantees know this when they accept the funding). I’d guess most of the people we fund through this program are able to find paid positions after <1 year.

(I probably won't have time to engage further.)

Just wanted to flag quickly that Open Philanthropy's GCR Capacity Building team (where I work) has a career development and transition funding program.

The program aims to provide support—in the form of funding for graduate study, unpaid internships, self-study, career transition and exploration periods, and other activities relevant to building career capital—for individuals at any career stage who want to pursue careers that could help reduce global catastrophic risks (esp. AI risks). It’s open globally and operates on a rolling basis.

I realize that this is quite different from what lemonhope is advocating for here, but nevertheless thought it would be useful context for this discussion (and potential applicants).

Thanks for the feedback! I’ll forward it to our team.

I think I basically agree with you that from reading the RFP page, this project doesn’t seem like a central example of the projects we’re describing (and indeed, many of the projects we do fund through this RFP are more like the examples given on the RFP page). 

Some quick reactions:

  • FWIW, our team generally makes a lot of grants that are <$100k (much more so than other Open Phil teams).
  • I agree the application would probably take most people longer than the description that Gavin gave on Manifund. That said, I think it’s still relatively lean considering the distribution of projects we fund, though I agree it’s slightly long for projects as small as this one (but I think Gavin could have filled it out in <<2 days). For reference, this is our form.
  • Regarding turnaround time, my guess is for this project, we would have taken significantly less than 3 months, especially if they had indicated that receiving a decision was time-sensitive. For reference, the form currently says: 

We expect to make most funding decisions in 3 months or less (assuming prompt responses to any follow-up questions we may have), and we may or may not be able to accommodate requests for greater time-sensitivity. Applicants asking for over $500K should expect a decision to take the full 3 months (or more, in particularly complex cases), and apply in advance accordingly. We’ll let you know as soon as we can if we anticipate a longer than 3-month decision timeline. [emphasis in original]

  • For $500k+ projects, I think a 3-month turnaround time is more defensible, though I do personally wish we generally had faster response times. 

I work at Open Philanthropy, and I recently let Gavin know that Open Phil is planning to recommend a grant of $5k to Arb for the second project on your list: Overview of AI Safety in 2024 (they had already raised ~$10k by the time we came across it). Thanks for writing this post Austin — it brought the funding opportunity to our attention.

Like other commenters on Manifund, I believe this kind of overview is a valuable reference for the field, especially for newcomers. 

I wanted to flag that this project would have been eligible for our RFP for work that builds capacity to address risks from transformative AI. I worry that not all potential applicants are aware of the RFP or its scope, so I’ll take this opportunity to mention that this RFP’s scope is quite broad, including funding for: 

  • Training and mentorship programs
  • Events
  • Groups
  • Resources, media, and communications
  • Almost any other type of project that builds capacity for advanced AI risks (in the sense of increasing the number of careers devoted to these problems, supporting people doing this work, and sharing knowledge related to this work). 

More details at the link above. People might also find this page helpful, which lists all currently open application programs at Open Phil. 

There are two very similar pages: this one and https://www.lesswrong.com/tag/scoring-rules/

By "refining pure human feedback", do you mean refining RLHF ML techniques? 

I assume you still view enhancing human feedback as valuable? And also more straightforwardly just increasing the quality of the best human feedback?

Amazing! Thanks so much for making this happen so quickly.

To anyone who's trying to figure out how to get it to work on Google Podcasts, here's what worked for me (searching for the name didn't work; maybe this will change):

  1. Go to the Libsyn link.
  2. Click the RSS symbol and copy the link.
  3. Open Google Podcasts.
  4. Click the Library tab (bottom right) and go to Subscriptions.
  5. Click the symbol that looks like adding a link in the upper right.
  6. Paste the link and confirm.

Hey Paul, thanks for taking the time to write that up, that's very helpful!

Hey Rohin, thanks a lot, that's genuinely super helpful. Drawing analogies to "normal science" seems both reasonable and like it clears the picture up a lot.

I would be interested to hear opinions on this question: what fraction of people could plausibly produce useful alignment work?

Ignore the hurdle of "knowing about AI safety at all": assume they took some time to engage with it (e.g., they took the AGI Safety Fundamentals course), got some good mentorship (e.g., from one of you), and then decided to commit full-time (and got funding for that). The thing I'm trying to get at is more about having the mental horsepower, epistemics, creativity, and whatever other qualities are useful, or likely being able to get there after some years of training.

Also note that I mean direct useful work, not indirect meta things like outreach or being a PA to a good alignment researcher etc. (these can be super important, but I think it's productive to think of them as a distinct class). E.g. I would include being a software engineer at Anthropic, but exclude doing grocery-shopping for your favorite alignment researcher.

An answer could look like "X% of the general population" or "half the people who could get a STEM degree at Ivy League schools if they tried" or "a tenth of the people who win the Fields medal".

I think it's useful to have a sense of this for many purposes, incl. questions about community growth and the value of outreach in different contexts, as well as priors about one's own ability to contribute. Hence, I think it's worth discussing honestly, even though it can obviously be controversial (with some possible answers implying that most current AI safety people are not being useful).
