This is a summary made by Katja of points made by Alexander Berger during a conversation about GiveWell Labs and cause prioritization research on March 5th 2014.
GiveWell Labs
Focus
GiveWell Labs is trying to answer the same basic question as GiveWell: “what’s the best way to spend money?” However, GiveWell Labs is answering this question for larger amounts of money, which is less straightforward. Causes are a more useful unit than charities for very large donors. So instead of trying to figure out which charity one should give to this year, they are asking which program areas a foundation should work on.
GiveWell’s relationship with Good Ventures is a substantial reason for focusing on the needs of big donors, and GiveWell Labs research has been done in partnership with Good Ventures. The long-term vision is to have ongoing research into which areas that should be covered are not being covered, while providing support for a wide range of foundations working on problems they have previously identified as important.
Approach to research
GiveWell Labs primarily aggregates information, rather than producing primary research. It also puts a small amount of effort into publicizing its research.
Their research process focuses on answering these questions:
how important is the issue?
how tractable is it?
how crowded is it?
They attempt to answer these questions at increasing levels of depth for a variety of areas. It is not certain that these are the key criteria for determining the returns from a program area, but they seem correct intuitively.
Most research is done through speaking to experts (rather than e.g. reading research papers). The ‘importance’ question is the only one likely to have academic research on it.
The learning process
GiveWell Labs is prioritizing learning value and diversification at the moment, and not aiming to make decisions about cause priorities once and for all. Alexander would guess that the impact of GiveWell Labs’ current efforts is divided roughly equally between immediate useful research output and the value of trying this project and seeing how it goes.
In the time it has existed, GiveWell Labs has learned a lot. A big question at the moment is how much confidence to have in a cause before making the choice to dive into deeper research on it.
Spending money
Starting to spend money is probably a big part of diving deeper. Spending money is useful for learning more about an area for two reasons. Firstly, it makes you more credible. Secondly, it encourages people to make proposals. People don’t tend to have proposals readily formulated. They respond to the perception of concrete available funding. This means you will get a better sense of the opportunities if you are willing to spend money.
Transferability of learning
Alexander doesn’t know whether methodological insights discovered in one cause prioritization effort are likely to be helpful to others. One relevant factor is that people at GiveWell Labs have priors about what’s likely to be successful that are partly based on what they had learned before starting the process. But if you didn’t share those starting priors, you might not end up with the same current beliefs. This might be true in particular of lessons about explicit expected value calculations, and about how to weigh robustness or reliability against a high upside. If you don’t share the same prior, the lessons learned may not be very communicable.
Funding cause prioritization
Adding resources to GiveWell
An outside funder trying to donate to GiveWell Labs couldn’t change the distribution of effort between GiveWell’s conventional research and GiveWell Labs. It would also be hard to change the total amount of work done by donating. Donating would mainly change the amount of time GiveWell spends raising other funds, and the extent to which they depend on Good Ventures.
Other cause prioritization efforts
Projects like Katja’s cause prioritization shallow investigation are unlikely to be done by GiveWell.
Katja’s Structured Case on AI project is also unlikely to overlap, based on GiveWell’s current plans. If Alexander were working on something like this, he would initially try to aggregate the views of credible people effectively, rather than forming object-level views from the outset. For instance, he would like to know what would happen if Eliezer could sit down with highly credentialed AI researchers and try to convince them of his view. The AI Structured Case, on the other hand, is more directed at detailing object-level arguments.
Cause prioritization work can become fairly abstract. GiveWell Labs tries to keep it grounded in looking for concrete funding opportunities. Others may have comparative advantages in more philosophical investigations, which avoids overlap, but is also less likely to be informative to GiveWell Labs. GiveWell is unlikely to focus on prediction markets, though it’s not out of the question.
General considerations for funding such research
If others were going to do more concrete work, it is a hard question whether it would be better at this point for them to overlap with GiveWell Labs to provide a check, or avoid overlapping to provide broader coverage.
Answering high-level questions such as ‘how good is economic growth?’ doesn’t seem very decision-relevant in most cases. This is largely because these issues are hard to pin down, rather than because they are unlikely to make a large difference to evaluations if we could pin them down, though Alexander is also doubtful that they would make a large difference. For instance, Alexander doesn’t expect indirect effects of interventions to be large relative to immediate effects, while Holden Karnofsky (co-executive director of GiveWell) does, but their views on this do not seem to play a big role in their disagreements over what GiveWell Labs should prioritize.
When deciding what to do on cause prioritization, it is important to keep in mind how the work will actually affect anything: who will pay attention, and what decisions they will change as a result.
Similar projects
Nick Beckstead and Carl Shulman do similar work in their own time.
Alexander’s understanding is that the Copenhagen Consensus Center is doing something a bit different, especially around modeling cost-effectiveness estimates. They also seem to be less focused on influencing specific decisions.
Alexander is not aware of any obvious further people one should talk to that Katja has not thought of.
This post summarizes a conversation which was part of the Cause Prioritization Shallow, all parts of which are available here. Previous conversations in this series were with Owen Cotton-Barratt, Paul Christiano, Paul Penley, and Gordon Irlam.
Participants
Alexander Berger: Senior Research Analyst, GiveWell
Katja Grace: Research Assistant, Machine Intelligence Research Institute
Notes
Since this conversation, GiveWell Labs has become the Open Philanthropy Project.