Matt Putz


I've updated toward the views Daniel expresses here and I'm now about halfway between Ajeya's views in this post and Daniel's (in geometric mean).
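For concreteness, here's a minimal reading of "halfway in geometric mean", assuming each view is summarized by a single median-timeline estimate (the numbers below are made up for illustration, not taken from either post):

t_mid = sqrt(t_A × t_D), i.e., the midpoint of log t_A and log t_D. For example, halfway between 8 years and 32 years in geometric mean is sqrt(8 × 32) = 16 years, rather than the arithmetic midpoint of 20.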

I'm curious what the biggest factors were that made you update?

Regarding our career development and transition funding (CDTF) program: 

  • The default expectation for CDTF grants is that they’re one-off grants. My impression is that this is currently clear to most CDTF grantees (e.g., I think most of them don't reapply after the end of their grant period, and the program title explicitly says that it’s “transition funding”).
    • (When funding independent research through this program, we sometimes explicitly clarify that we're unlikely to renew by default).
  • Most of the CDTF grants we make have grant periods that are shorter than a year (with the main exception that comes to mind being PhD programs). I think that’s reasonable (esp. given that the grantees know this when they accept the funding). I’d guess most of the people we fund through this program are able to find paid positions after <1 year.

(I probably won't have time to engage further.)

Just wanted to flag quickly that Open Philanthropy's GCR Capacity Building team (where I work) has a career development and transition funding program.

The program aims to provide support—in the form of funding for graduate study, unpaid internships, self-study, career transition and exploration periods, and other activities relevant to building career capital—for individuals at any career stage who want to pursue careers that could help reduce global catastrophic risks (esp. AI risks). It’s open globally and operates on a rolling basis.

I realize that this is quite different from what lemonhope is advocating for here, but nevertheless thought it would be useful context for this discussion (and potential applicants).

Thanks for the feedback! I’ll forward it to our team.

I think I basically agree with you that from reading the RFP page, this project doesn’t seem like a central example of the projects we’re describing (and indeed, many of the projects we do fund through this RFP are more like the examples given on the RFP page). 

Some quick reactions:

  • FWIW, our team generally makes a lot of grants that are <$100k (much more so than other Open Phil teams).
  • I agree the application would probably take most people longer to complete than the description that Gavin gave on Manifund. That said, I think it’s still relatively lean considering the distribution of projects we fund, though I agree it’s slightly long for a project as small as this one (but I think Gavin could have filled it out in <<2 days). For reference, this is our form.
  • Regarding turnaround time, my guess is that, for this project, we would have taken significantly less than 3 months, especially if they had indicated that receiving a decision was time-sensitive. For reference, the form currently says: 

We expect to make most funding decisions in 3 months or less (assuming prompt responses to any follow-up questions we may have), and we may or may not be able to accommodate requests for greater time-sensitivity. Applicants asking for over $500K should expect a decision to take the full 3 months (or more, in particularly complex cases), and apply in advance accordingly. We’ll let you know as soon as we can if we anticipate a longer than 3-month decision timeline. [emphasis in original]

  • For $500k+ projects, I think a 3-month turnaround time is more defensible, though I do personally wish we generally had faster response times. 

I work at Open Philanthropy, and I recently let Gavin know that Open Phil is planning to recommend a grant of $5k to Arb for the second project on your list: Overview of AI Safety in 2024 (they had already raised ~$10k by the time we came across it). Thanks for writing this post, Austin — it brought the funding opportunity to our attention.

Like other commenters on Manifund, I believe this kind of overview is a valuable reference for the field, especially for newcomers. 

I wanted to flag that this project would have been eligible for our RFP for work that builds capacity to address risks from transformative AI. I worry that not all potential applicants are aware of the RFP or its scope, so I’ll take this opportunity to mention that it’s quite broad, including funding for: 

  • Training and mentorship programs
  • Events
  • Groups
  • Resources, media, and communications
  • Almost any other type of project that builds capacity for advanced AI risks (in the sense of increasing the number of careers devoted to these problems, supporting people doing this work, and sharing knowledge related to this work). 

More details at the link above. People might also find this page helpful; it lists all currently open application programs at Open Phil. 

There are two very similar pages: this one and https://www.lesswrong.com/tag/scoring-rules/

By "refining pure human feedback", do you mean refining RLHF ML techniques? 

I assume you still view enhancing human feedback as valuable? And also more straightforwardly just increasing the quality of the best human feedback?

Amazing! Thanks so much for making this happen so quickly.

To anyone who's trying to figure out how to get it to work on Google Podcasts, here's what worked for me (searching for the name didn't work; maybe this will change?):

  • Go to the Libsyn link.
  • Click the RSS symbol.
  • Copy the link.
  • Go to Google Podcasts.
  • Click the Library tab (bottom right).
  • Go to Subscriptions.
  • Click the symbol that looks like adding a link in the upper right.
  • Paste the link and confirm.

Hey Paul, thanks for taking the time to write that up, that's very helpful!

Hey Rohin, thanks a lot, that's genuinely super helpful. Drawing analogies to "normal science" seems both reasonable and like it clears the picture up a lot.
