The Lightspeed application asks: “What impact will [your project] have on the world? What is your project’s goal, how will you know if you’ve achieved it, and what is the path to impact?”
LTFF uses an identical question, and SFF puts it even more strongly (“What is your organization’s plan for improving humanity’s long term prospects for survival and flourishing?”).
I’ve applied to all three of these at various points, and I’ve never liked this question. It feels like it wants a grand narrative of an amazing, systemic project that will measurably move the needle on x-risk. But I’m typically applying for narrowly defined projects, like “Give nutrition tests to EA vegans and see if there’s a problem”. I think this was a good project. I think this project is substantially more likely to pay off than underspecified alignment strategy research, and arguably has as good a long tail. But when I look at “What impact will [my project] have on the world?” the project feels small and sad. I feel an urge to make things up, and to express far more certainty about far more impact than I believe. Then I want to quit, because lying is bad but listing my true beliefs feels untenable.
I’ve gotten better at this over time, but I know other people with similar feelings, and I suspect it’s a widespread issue (I encourage you to share your experience in the comments so we can start figuring that out).
I should note that the pressure for grand narratives has a legitimate basis: funders are in fact looking for VC-style megahits. I think that narrow projects are underappreciated, but for purposes of this post that’s beside the point: I think many grantmakers are undercutting their own preferred outcomes by using questions that implicitly push for a grand narrative. I think they should probably change the form, but I also think we applicants can partially solve the problem by changing how we interact with the current forms.
My goal here is to outline the problem, gesture at some possible solutions, and create a space for other people to share data. I didn’t think about my solutions very long; I am undoubtedly missing a bunch, and what I do have still needs workshopping, but it’s a place to start.
More on the costs of the question
Pushes away the most motivated people
Even if you only care about subgoal G instrumentally, G may be best accomplished by people who care about it for its own sake. Community building (real building, not a euphemism for recruitment) benefits from knowing the organizer cares about participants and the community as people and not just as potential future grist for the x-risk mines.* People repeatedly recommended a community builder friend of mine apply for funding, but they struggled because they liked organizing for its own sake, and justifying it in x-risk terms felt bad.
[*Although there are also downsides to organizers with sufficiently bad epistemics.]
Additionally, if G is done by someone who cares about it for its own sake, then it doesn’t need to be done by someone who’s motivated by x-risk. Highly competent, x-risk-motivated people are rare and busy, and we should be delighted by opportunities to take things off their plates.
Vulnerable to grift
You know who’s really good at creating exactly the grand narrative a grantmaker wants to hear? People who feel no constraint to be truthful. You can try to compensate for this by looking for costly signals of loyalty or care, but those have their own problems.
Punishes underconfidence
Sometimes people aren’t grifting, they really really believe in their project, but they’re wrong. Hopefully grantmakers are pretty good at filtering out those people. But it’s fairly hard to correct for people who are underconfident, and impossible to correct for people who never apply because they’re intimidated.
Right now people try to solve the second problem by loudly encouraging everyone to apply to their grant. That creates a lot of work for evaluators, and I think it’s bad for the people with genuinely mediocre projects who will never get funding. You’re asking them to burn their time so that you don’t miss someone else’s project. Having a form that allows for uncertainty and modest goals is a more elegant solution.
Corrupts epistemics
Not that much. But I think it’s pretty bad if people are forced to choose between "play the game of exaggerating impact" and "go unfunded". Even if the game is in fact learnable, it's a bad use of their time and weakens the barriers to lying in the future.
Pushes projects to grow beyond their ideal scope
Recently I completed a Lightspeed application for a lit review on stimulants. I felt led by the form to create a grand narrative of how the project could expand, including developing a protocol for n of 1 tests so individuals could tailor their medication usage. I think that having that protocol would be great and I’d be delighted if someone else developed it, but I don’t want to develop it myself. I noticed the feature creep and walked it back before I submitted the form, but the fact that the form pushes this is a cost.
This one isn’t caused by the impact question alone. The questions asking about potential expansion are a much bigger deal, but would also be costlier to change. There are many projects and organizations where “what would you do with more money?” is a straightforwardly important question.
Rewards cultural knowledge independent of merit
There’s nothing stopping you from submitting a grant application with the theory of change “T will improve EA epistemics”, without justifying past that. I did that recently, and it worked. But I only felt comfortable doing that because I had a pretty good model of the judges and because it was a Lightspeed grant, which explicitly says they’ll ask you if they have follow-up questions. Without either of those I think I would have struggled to figure out where to stop explaining. Probably there are equally good projects from people with less knowledge of the grantmakers, and it’s bad that we’re losing those proposals.
Brainstorming fixes
I’m a grant-applier, not a grant-maker. These are some ideas I came up with over a few hours. I encourage other people to suggest more fixes, and grant-makers to tell us why they won’t work or what constraints we’re not aware of.
- Separate “why do you want to do this?” or “why do you think this is good?” from “how will this reduce x-risk?”. Just separating the questions will reduce the epistemic corruption.
- Give a list of common instrumental goals that people can treat as terminal for the purpose of this form. They still need to justify the chain between their action and that instrumental goal, but they don’t need to justify why achieving that goal would be good.
- E.g. “improve epistemic health of effective altruism community”, or “improve productivity of x-risk researchers”.
- This opens opportunities for goodharting, or for imprecise descriptions that leave you open to implementing bad versions of good goals. I think there are ways to handle this that end up being strongly net beneficial.
- I would advocate against “increase awareness” and “grow the movement” as goals. Growth is only generically useful when you know what you want the people to do. Awareness of specific things among specific people is a more appropriate scope.
- Note that the list isn’t exhaustive, and if people want to gamble on a different instrumental goal that’s allowed.
- Let applicants punt to others to explain the instrumental impact of what is to them a terminal goal.
- My community organizer friend could have used this. Many people encouraged them to apply for funding because they believed the organizing was useful to x-risk efforts. Probably at least a few were respected by grantmakers and would have been happy to make the case. But my friend felt gross doing it themselves, so it created a lot of friction in getting much-needed funding.
- Let people compare their projects to others. I struggle to say “yeah if you give me $N I will give you M microsurvivals”. How could I possibly know that? But it often feels easy to say “I believe this is twice as impactful as this other project you funded”, or “I believe this is in the nth percentile of grants you funded last year”.
- This is tricky because grants don’t necessarily mean a funder believes a project is straightforwardly useful. But I think there’s a way to make this doable.
- E.g. funders could give examples with percentiles. I think Open Phil did something like this in the last year, although I can’t find it now. The lower percentiles could be hypothetical, to avoid implicit criticism.
- Lightspeed’s implication that they’ll ask follow-up questions is very helpful. With other forms there’s a drive to cover all possible bases very formally, because I won’t get another chance. With Lightspeed it felt acceptable to say “I think X is good because it will lead to Y”, and let them ask me why Y is good if they don’t immediately agree.
- When asking about impact, lose the phrase “on the world”. The primary questions are what the goal is, how you’ll know if it’s accomplished, and what the feedback loops are. You can have an optional question asking for the effects of meeting the goal.
- I like the word "effects" more than "impact", which is a pretty loaded term within EA and x-risk.
- A friend suggested asking “why do you want to do this?”, and having “look, I just like organizing social gatherings” be an acceptable answer. I worry that this will end up being a fake question where people feel the need to create a different grand narrative about how much they genuinely value their project for its own sake, but maybe there’s a solution to that.
- Maybe have separate forms for large ongoing organizations and narrow projects done by individuals. There may not be enough narrow projects to justify this, and it might be infeasible to create separate forms for all types of applicants, but I think it’s worth playing with.
- [Added 7/2]: Ask for 5th/50th/99th/99.9th percentile outcomes, to elicit both dreams and outcomes you can be judged for failing to meet.
- [Your idea here]
I hope the forms change to explicitly encourage things like the above list, but I don’t think applicants need to wait. Grantmakers are reasonable people who I can only imagine are tired of reading mediocre explanations of why community building is important. I think they’d be delighted to be told “I’m doing this because I like it, but $NAME_YOU_HIGHLY_RESPECT wants my results” (grantmakers: if I’m wrong please comment as soon as possible).
Grantmakers: I would love it if you would comment with any thoughts, but especially what kinds of things you think people could do themselves to lower the implied grand-narrative pressure on applications. I'm also very interested in why you like the current forms, and what constraints shaped them.
Grant applicants: I think it will be helpful to the grantmakers if you share your own experiences, how the current questions make you feel and act, and what you think would be an improvement. I know I’m not the only person who is uncomfortable with the current forms, but I have no idea how representative I am.
Guesstimate might be a good example project. I use Guesstimate and love it. If I put myself in the shoes of its creator writing a grant application 6 or 7 years ago, I find it really easy to write a model-based application for funding and difficult to write a vision-based statement. It's relatively easy to spell out a model of what makes BOTECs hard and some ideas for making them easier. It's hard to say what better BOTECs will bring to the world. I think that the ~2016 grantmaker should have accepted "look, lots of people you care about do BOTECs and I can clearly make BOTECs better", without a more detailed vision of impact.
I think it's plausible grantmakers would accept that pitch (or that it was the pitch and they did accept it; maybe @ozziegooen can tell us?). Not every individual evaluator, but some, and as you say it's good to have multiple people valuing different things. My complaint is that I think the existing applications don't make it obvious that that's an okay pitch to make. My goal is some combination of "get the forms changed to make it more obvious that this kind of pitch is okay" and "spread the knowledge that this can work even if the form seems like it wants something else".
In terms of me personally... I think the nudges for vision have been good for me and the push/demands for vision have been bad. Without the nudges I probably am too much of a dilettante, and thinking about scope at all is good and puts me more in contact with reality. But the big rewards (in terms of money and social status) pushed me to fake vision and I think that slowed me down. I think it's plausible that "give Elizabeth money to exude rigor and talk to people" would have been a good[1] use of a marginal x-risk dollar in 2018.[2]
During the post-scarcity days of 2022 there was something of a pattern: people would offer me open-ended money, then ask for a few examples of projects I might do, then ask for those to be more legible with immediately obvious value, then ask me to fill out forms with the vibe that I was definitely going to do these specific things and that if I didn't I'd have committed moral fraud. So it ended up in the worst of all possible worlds, where I was being asked for a strong commitment without time to think through what I wanted to commit to. I inevitably ended up turning these down, and was doing so earlier and earlier in the process by the time the money tap was shut off. I think if I hadn't had the presence of mind to turn these down it would have been really bad, because not only would I have been committed to a multi-month plan I'd spent a few hours on, but I would have been committed to falsely presenting that time as freeform and following my epistemics.
Honestly I think the best thing for funding me and people like me[3] might be to embrace impact certificates/retroactive grant making. It avoids the problems that stem from premature project legibilization without leaving grantmakers funding a bunch of random bullshit. That's probably a bigger deal than wording on a form.
[1] Where by good I mean "more impactful in expectation than the marginal project funded".
[2] I have gotten marginal exclusive retreat invites on the theory that "look, she's not aiming very high[4] but having her here will make everyone a little more honest and a little more grounded in reality", and I think they were happy with that decision. TBC, this was a pitch someone else made on my behalf that I didn't hear about until later.
[3] Relevant features of this category: doing lots of small projects that don't make sense to lump together; scrupulous about commitments to the point that it's easy to create poor outcomes; enough runway that it doesn't matter when I get paid and I can afford to gamble on projects.
[4] Although the part where I count as "not ambitious" is a huge selection effect.