Existential risk reduction is cool and high status, whereas averting global poverty is not.
What?? If this is true, please pass along the message to the Gates Foundation, the United Nations, the World Economic Forum, and... almost everyone else on the planet.
Yes, I was going to say... How can one possibly argue that certain speculative causes are too popular and this is because they play into common cognitive biases when the examples are the fringest of the fringe and funded approximately not at all?
Let's try another. The Machine Intelligence Research Institute (MIRI) thinks that someday artificially intelligent agents will become better than humans at making AIs. At this point, AI will build a smarter AI which will build an even smarter AI, and -- FOOM! -- we have a superintelligence. It's important that this superintelligence be programmed to be benevolent, or things will likely be very bad. And we can stop this bad event by funding MIRI to write more papers about AI, right?
Or how about this one? It seems like there will be challenges in the far future that will be very daunting, and if humanity handles them wrong, things will be very bad. But if people were better educated and had more resources, surely they'd be better at handling those problems, whatever they may be. Therefore we should focus on speeding up economic development, right?
These three examples are very common appeals to commonsense. But commonsense hasn't worked very well in the domain of finding optimal causes.
I wish I lived on a planet where these were 'very common appeals to commonsense'. I wonder how much a ticket there would cost?
I think it might be more for a select group of people. In the LW community, I have gotten the impression that existential risk is higher status than global poverty reduction - that's definitely the opinion of the high status people in this community. And maybe for the specific kind of nonconformist nerd who reads Less Wrong and is likely to come across this post, transhumanism and existential risk reduction have a "coolness factor" that global poverty reduction doesn't have.
You're definitely right about the wider world, but many people might only care about the opinions of the 100 or so members of their in-group.
This. Status matters within one's in-group or a group one wants to be accepted as an in-group member by.
I feel like you're just sneering at a very small point I made rather than actually engaging with it.
What I meant to say was (1) x-risk reduction is cooler and higher status in the effective altruist / LessWrong community and (2) this biases people at least a little bit. I'll edit the essay to reflect that.
Would you agree with (1)? What about (2)?
If you meant to say x-risk reduction is high-status in the EA/LW community, then yes, that makes a lot more sense than what you originally said.
But I'm not actually sure how true this is in the broader EA community. E.g. GiveWell and Peter Singer are two huge players in the EA community, each with larger communities than LW (by my estimate), and they haven't publicly advocated x-risk reduction. So my guess is that x-risk reduction is basically just high status in the LW/MIRI/FHI world, and maybe around CEA as well due to their closeness to FHI. To the extent that x-risk reduction is high-status in that world, we should expect a bias toward x-risk reduction, but that's a pretty small world. There's a much larger and more wealthy world outside that group which is strongly biased against caring about x-risk reduction, and for this and other reasons we should expect on net for Earth to pay way, way less attention to x-risk than is warranted.
Agreed.
Search for 'million donation' on news.google.com, first two pages:
Every time I hear a dollar amount on the news, I cringe at realizing how pathetic spending on existential risks is by comparison.
I agree that x-risk reduction is a lot less popular than, e.g., caring for the blind, but it doesn't follow that people are strongly biased against caring about x-risk reduction. Note that x-risk reduction is a relatively new cause (because the issues didn't become clear until relatively recently), whereas people have been caring for the blind for millennia. Under the circumstances, one would expect much more attention to go toward caring for the blind independently of whether people were biased against x-risk reduction specifically. I expect x-risk reduction to become more popular over time.
This post had an odd effect on me. I agreed with almost everything in it, as it matches my own logic and intuitions. Then I realized that I strongly disliked the logic in your anti-meat post, because it appeared so severely biased toward a predefined conclusion "eating meat is ethically bad". So, given the common authorship, I must face the possibility that the quality of the two posts is not significantly different, and it's my personal biases which make me think that it is. As a result, I am now slightly more inclined to consider the anti-meat arguments seriously and slightly less inclined to agree with the arguments from this post, even though the foggy future and the lack of feedback arguments make a lot of sense.
EDIT: Hmm, whatever shall I do with 1 Eliezer point and 1 Luke point...
+1 for correct, evenhanded use of the genetic heuristic (what we call the genetic fallacy when we agree with its usage).
My first reading: ‘We call the genetic heuristic “the genetic fallacy” when we agree with its usage.’
The intended reading: ‘We call the genetic fallacy “the genetic heuristic” when we agree with its usage.’
A few comments and requests for clarification:
Are you only talking about donations? Do you think it would also be a mistake to work on speculative causes? (That seems much different in that there are many more opportunities for learning from working on a cause than from donating to it.) I think you are on much stronger ground with claims about donations, though I think it can make sense for other people with the right kind of information to fund other opportunities. E.g., think about early GiveWell. I wouldn't want to buy into something which said giving to early GiveWell was a bad idea, even for people who had a lot of information about the project. I think some people may be in a similar position for funding some early EA orgs, and they shouldn't be discouraged from funding them.
What counts as a "speculative cause"? Meta-research? Political advocacy? Education for talented youth? Climate change? Prioritization work? Funding early GiveWell? Funding 80K? Anyone who says the thing they are doing is somehow improving very long-run outcomes? Anything that hasn't been recommended by GiveWell? Anything that hasn't been proven to work with RCT-quality evidence? Asking for a
Yeah, #3 is my own biggest question for the OP. If you care about the far future, then it seems like the case for MIRI+FHI's positive effect on the far future has been made more robustly than the case for AMF's effect on the far future has been made, even though it's still true that in general we are pretty ignorant about how to affect the far future in positive ways.
Your last point doesn't make much sense to me. I agree that we should be concerned about purchasing as much impact as we can, but the amount of impact you're purchasing from AMF is minuscule compared to the far future. It seems like your concern for something being 'proven' is skewing your choices.
It's like walking into a shop and buying the first TV (or whatever) that you see, despite it likely being expensive and not nearly as good as other ones, because you can at least see it, rather than doing a bit of looking on Amazon.
Would you play a lottery with no stated odds?
Imagine another thought experiment -- you're asked to play a lottery. You have to pay $2 to play, but you have a chance at winning $100. Do you play?
Of course, you don't know, because you're not given odds. Rationally, it makes sense to play any lottery where you expect to come out ahead on average. If the lottery is a coin flip, it makes sense to pay $2 for a 50/50 shot at winning $100, since you'd win $50 on average and come out ahead by $48 per play in expectation. With a sufficiently high reward, even a one-in-a-million chance is worth it: pay $2 for a 1/1M chance of winning $1B, and you'd expect to come out ahead by $998 per play.
But $2 for the chance to win $100, without knowing what the chance is? What if you had some rough bounds -- say you knew the odds were at least 1/150 and at most 1/10, though even those bounds could be off by a little? Would you accept that bet?
Such a bet seems intuitively uninviting to me, yet this is the bet that speculative causes offer me.
The reason not to play a lottery is because it is a zero-sum game in which the rules are set by the other agent; since you know that the other player's goal is to make ...
These were my thoughts when I read this.
A better analogy might be buying stock in a technology startup which is making a product completely unlike anything on the market now. It is certainly more risky than the sure thing, with lots of potential for losing your investment, but also has a much much higher potential payoff. This is generally the case in any sort of investing, whether it be investing in a charity or in a business -- the higher the risk, the higher the potential gain. The sure stuff generally has plenty of funding already -- the low hanging fruit has already been taken.
That being said, one should be on the lookout for good investing opportunities of both kinds -- charging more (in terms of expected payoff) for the riskier ones but not shunning either completely.
I agree with most of the points here. Short-term cost-effective charities are often more worthy of donation than speculative long-term high-uncertainty ones. I would prefer donating to GiveWell's top charities to funding US/China exchange programs.
Yet I still donate to MIRI. Why? Because it's a completely different beast.
I don't view MIRI as addressing far-future concerns. I view MIRI as addressing one very specific problem: we are on the path to AI, and we seem to be a lot closer to developing AI than we are to developing a perfect reflective preservable logical encoding of all human values.
There's a timer. That timer is getting uncomfortably low. And when it gets to zero, there's not a lot of death and a bad economy -- there's an extinction event.
If we had good reason to believe that the US and China will cross a threshold this century causing them to either blow up the world or collaborate and travel to the stars, based solely on the sentiment of each population towards the other, then you're damn right I'd fund exchange programs.
We don't have any evidence along those lines. There are a plethora of potential political catastrophes and uncountable factors that could ca...
James Shanteau found in "Competence in Experts: The Role of Task Characteristics"...
Good to see that paper being given an airing. But one important thing that must be done is to decompose the problems we're working on: some results may be more solid than others. I've shown that using expert opinion to establish AI timelines is nearly worthless. However you can still get some results about the properties of AIs (see for instance Omohundro's AI-drives paper), and these are far more solid (for one, they depend much more on arguments than on expertise). So we're in the situation of having no clue when and how AIs could emerge, but being fairly confident that there's a high risk if they do.
Compare for instance the economics of the iPhone. We failed to predict the iPhone ahead of time (continually predicting that these kinds of thing were just around the corner or in the far future), but the iPhone didn't escape the laws of economics and copying and competition. We can often say something about things, even if we must fail to say everything.
Benjamin Todd makes this point well in "Social Interventions Gone Wrong", where he provides a quiz with eight social programs and asks readers to guess whether they succeeded or failed.
Were the social interventions sampled randomly, or were they chosen for their counterintuitive outcomes?
Anyway, maybe if one wants to get good at reducing existential risk, the first thing to do is to start using PredictionBook and continue until one is good enough to have a reliable track record, then proceed from there.
"Conservative Orders of Magnitude" Arguments
I wanted to highlight this one, because as someone who isn't an expert in any of these fields it's an easy one for me to fall for.
My rule of thumb is whenever I find myself applying a fudge-factor of N to a system "to be safe," I should ask myself "why not 10N instead? why not 100N?"
If my only answer is incredulity, I should immediately halt and dump core; my processing has become corrupted.
"Wow factor" bias.
That's worth keeping in mind - that's certainly what pushed me into working for the FHI.
How do we choose between different proposals for "exploration"? For example the workshops that MIRI is currently hosting on a regular basis seem to be designed largely to learn how to most efficiently produce FAI-related research as well as how productive such research efforts can currently be. On the other hand, I suggest that at this stage we should devote more resources into what I call "Singularity Strategies" research, to better understand for example whether pushing for "FAI first" has positive or negative expected impac...
Selection bias. When trying to find trends in history that are favorable for affecting the far future, some examples can be provided. However, this is because we usually hear about the interventions that end up working, whereas all the failed attempts to influence the far future are never heard of again. This creates a very skewed sample that biases our thinking about how likely we are to succeed at influencing the far future.
When I was talking about trends in history, I was saying that certain factors could be identified which would systematically lead to ...
Question: Suppose MIRI was like one of the 8 charities listed above (i.e. intuitively plausible, but empirically useless). How would we know? How would this MIRI' be different from MIRI today?
I think this question is too vague. MIRI could turn out to be useless for any number of reasons, leading to different empirical disconfirmations. (A lot of these will look like the end of human life.) E.g.:
MIRI is useless because FAI research is very useful, but MIRI's basic methodology or research orientation is completely and irredeemably the wrong approach to FAI. Expected evidence: MIRI's research starts seeing diminishing returns, even as they attempt a wide variety of strategies; non-MIRI researchers make surprising amounts of progress into the issues MIRI is interested in; reputable third parties that assess MIRI's results consistently disagree with its foundational assumptions or methodology; the researchers MIRI attracts come to be increasingly seen by the academic establishment as irrelevant to FAI research or even as cranks, for substantive, mathy reasons.
MIRI is useless because FAI is impossible -- we simply lack the resources to engineer a benign singularity, no matter how hard we try. Expected evidence: Demonstrations that a self-modifying AGI can't have stable, predictable values; demonstrations that coding anything like indirect normativity is unfeasible; increa
Explicit reviews of MIRI as an organization aren't the only kind of review of MIRI. It also counts as a review of MIRI, at least weakly, if anyone competent enough to evaluate any of MIRI's core claims comes out in favor of (or opposition to) any of those claims, or chooses to work with MIRI. David Chalmers' The Singularity: A Philosophical Analysis and the follow-up collectively provide very strong evidence that analytic philosophers have no compelling objection to the intelligence explosion prediction, for example, and that a number of them share it. Reviews of MIRI's five theses and specific published works are likely to give us better long-term insight into whether MIRI's on the right track (relative to potential FAI researcher competitors) than a review focused on, e.g., MIRI's organizational structure or use of funding, both of which are more malleable than its basic epistemic methodology and outlook.
It's also important to keep in mind that the best way to figure out whether MIRI's useless is probably to fund MIRI. Give them $150M, earmarked for foundational research that will see clear results within a decade, and wait 15-20 years. If what they're doing is useless, it will b...
I don't think it's good practice to mix "wow factor" bias into that list. The list is mostly made up of terms drawn from the psychology literature: empirically demonstrated deviations from rational behaviour that are predicted by some mathematical model. For "wow factor" bias, by contrast, this article is the top search hit for the phrase, and no formal meaning has been assigned to it, never mind empirically demonstrated.
If existential risks are hugely important but we suck at predicting them, why not invest in schemes which improve our predictions?
I predict: 1,2,3,5
Huh?
That wouldn't be cool, it would be very Ted Kaczynski-ish.
Is not embarrassing yourself by looking a bit like Ted Kaczynski your only terminal value?
Even if you're making an AGI that can't be "proven friendly," it might still turn out to be friendly.
I suggest thinking inside the box more. The hypothetical is 'What if FAI is impossible?', not 'What if we can't prove that FAI is actual?'. But all your suggestions are attempts to resist that premise, not attempts to explore its consequences. One of the basic concerns of the EA mov...
Well, curing cancer might be more important than finding a cure for the common cold, but that doesn't necessarily mean you should be trying to cure cancer instead of trying to get rid of the common cold, unless of course you have some inner quality that makes you uniquely capable of curing cancer. There are other considerations.
Reducing existential risks is important. But suppose it is not as important as ending world poverty. There's also lot of uncertainty. It may be that no matter how hard we try, something will come out of the blue and kill us all (three hours...
Another note: increasing education and speeding up economic development is actually a very very important form of charity, as far as I can tell. So important that the government collects taxes which are used to provide public education and foreign aid. If there was no public education or foreign aid in the world today, I would strongly consider donating to educational charities and economic charities instead of GiveWell's current top charities.
Unless you think public education and foreign aid should be completely de-funded in order to save more lives then ...
I'll make the common sense observation that if population growth continues without any progress in space colonization or other highly speculative projects, the Malthusian trap will eventually again become an existential risk in one way or another, and environmental problems might be early signs of this.
Passing over unproven causes, we're left with promoting family planning and empowerment/education of women. Are there any GiveWell-endorsed charities with a proven track record of limiting population growth by these or any other means? How is this effectiven...
Since living in Oxford, one of the centers of the "effective altruism" movement, I've been spending a lot of time discussing the classic “effective altruism” topic -- where it would be best to focus our time and money.
Some people here seem to think that the most important things to focus our time and money on are speculative projects -- projects that promise a very high impact but involve a lot of uncertainty. One very common example is "existential risk reduction", or attempts to make a long-term future for humanity more likely, say by reducing the chance of events that would cause human extinction.
I do agree that the far future is the most important thing to consider, by far (see papers by Nick Bostrom and Nick Beckstead). And I do think we can influence the far future. I just don't think we can do it in a reliable way. All we have are guesses about what the far future will be like and guesses about how we can affect it. All of these ideas are unproven, speculative projects, and I don't think they deserve the main focus of our funding.
While I waffled in cause indecision for a while, I'm now going to resume donating to GiveWell's top charities, except when I have an opportunity to use a donation to learn more about impact. Why? My case is that speculative causes, or any cause with high uncertainty (reducing nonhuman animal suffering, reducing existential risk, etc.), require that we rely on our commonsense to evaluate them with naïve cost-effectiveness calculations, and this (1) is demonstrably unreliable, with a bad track record, (2) plays right into common biases, and (3) doesn’t make sense based on how we ideally make decisions. While it’s unclear what long-term impact a donation to a GiveWell top charity will have, the near-term benefit is quite clear and worth investing in.
Focusing on Speculative Causes Requires Unreliable Commonsense
How can we reduce the chance of human extinction? It just makes sense that if we fund cultural exchange programs between the US and China, each country will have more goodwill toward the other, and therefore they will be less likely to nuke each other. Since nuclear war would likely be very bad, it's of high value to fund cultural exchange programs, right?
Let's try another. The Machine Intelligence Research Institute (MIRI) thinks that someday artificially intelligent agents will become better than humans at making AIs. At this point, AI will build a smarter AI which will build an even smarter AI, and -- FOOM! -- we have a superintelligence. It's important that this superintelligence be programmed to be benevolent, or things will likely be very bad. And we can stop this bad event by funding MIRI to write more papers about AI, right?
Or how about this one? It seems like there will be challenges in the far future that will be very daunting, and if humanity handles them wrong, things will be very bad. But if people were better educated and had more resources, surely they'd be better at handling those problems, whatever they may be. Therefore we should focus on speeding up economic development, right?
These three examples are very common appeals to commonsense. But commonsense hasn't worked very well in the domain of finding optimal causes.
Can You Pick the Winning Social Program?
Benjamin Todd makes this point well in "Social Interventions Gone Wrong", where he provides a quiz with eight social programs and asks readers to guess whether they succeeded or failed.
I'll wait for you to take the quiz first... doo doo doo... la la la...
Ok, welcome back. I don't know how well you did, but success on this quiz is very rare, and this poses problems for commonsense. Sure, I'll grant you that Scared Straight sounds pretty suspicious. But the Even Start Family Literacy Program? It just makes sense that providing education to boost literacy skills and promote parent-child literacy activities should boost literacy rates, right? Unfortunately, it was wrong. Wrong in a very counter-intuitive way. There wasn't an effect.
GiveWell and Commonsense's Track Record of Failure
Commonsense actually has a track record of failure. GiveWell has been talking about this for ages. Every time GiveWell has found an intervention hyped by commonsense notions of high impact and looked into it further, they've ended up disappointed.
The first was the Fred Hollows Foundation. A lot of people had been repeating the figure that the Fred Hollows Foundation could cure blindness for $50. But GiveWell found that number suspect.
The second was VillageReach. GiveWell originally put them as their top charity and estimated them as saving a life for under $1000. But further investigation kept leading them to revise their estimate until ultimately they weren't even sure if VillageReach had an impact at all.
Third, there is deworming. Originally, deworming was announced as saving a year of healthy life (DALY) for every $3.41 spent. But when GiveWell dove into the spreadsheets behind that number, they found five errors. When the dust settled, the $3.41 figure turned out to be off by roughly a factor of 100; it was revised to $326.43.
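To see how a headline figure can drift by roughly two orders of magnitude, here is a toy illustration (hypothetical correction factors, not GiveWell's actual numbers): a cost-per-DALY estimate is a product of several uncertain parameters, so a handful of modest-looking errors compound multiplicatively.

```python
# Hypothetical correction factors, not GiveWell's actual numbers: five errors,
# each only ~2-3x on its own, multiply out to roughly two orders of magnitude.
correction_factors = [2.5, 2.5, 2.5, 2.5, 2.5]

combined = 1.0
for f in correction_factors:
    combined *= f

print(round(combined, 1))         # ~97.7x
print(round(3.41 * combined, 2))  # the headline cost-per-DALY moves accordingly
```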
Why should we expect this trend not to hold in other areas where the calculations are even looser and the numbers even less settled, like efforts devoted to speculative causes? Our only recourse is to fall back on interventions that have actually been studied.
People Are Notoriously Bad At Predicting the (Far) Future
Cost-effectiveness estimates also frequently require making predictions about the future. Existential risk reduction, for example, requires making predictions about what will happen in the far future and how your actions are likely to affect events hundreds of years down the road. Yet experts are notoriously bad at making these kinds of predictions.
James Shanteau found in "Competence in Experts: The Role of Task Characteristics" (see also Kahneman and Klein's "Conditions for Intuitive Expertise: A Failure to Disagree") that experts perform well when thinking about static stimuli, when making judgments about things rather than human behavior, and when feedback and objective analysis are available. Conversely, experts perform quite badly when thinking about dynamic stimuli, when making judgments about behavior, and when feedback and objective analysis are unavailable.
Predictions about existential risk reduction and the far future are firmly in the second category. So how can we trust our predictions about our impact on the far future? Our only recourse is to fall back on interventions that we can reliably predict, until we get better at prediction (or invest money in getting better at making predictions).
Even Broad Effects Require Specific Attempts
One potential resolution to this problem is to argue for “broad effects” rather than “specific attempts”. Perhaps it’s difficult to know whether a particular intervention will go well, and perhaps it’s a mistake to focus entirely on Friendly AI, but surely if we improved incentives and norms in academic work to better advance human knowledge (meta-research), improved education, or advocated for effective altruism, the far future would be much better equipped to handle threats.
I agree that these broad effects would make the far future better, and I agree that it’s possible to implement these broad effects and change the far future. The problem, however, is that it can’t be done in an easy or well-understood way. Any attempt to implement a broad effect would require a specific action with an unknown expectation of success and unknown cost-effectiveness. It’s definitely beneficial to advocate for effective altruism, but could this be done in a cost-effective way? A way that’s more cost-effective at producing welfare than AMF? How would you know?
In order to accomplish these broad effects, you’d need specific organizations and interventions to channel your time and money into. And by picking these specific organizations and interventions, you’re losing the advantage of broad effects and tying yourself to particular things with poorly understood impact and no track record to evaluate.
Focusing on Speculative Causes Plays Into Our Biases
We've now known for quite a long time that people are not all that rational. Instead, human thinking fails in very predictable and systematic ways. Some of these ways make us less likely to take speculative causes seriously, such as ambiguity aversion, the absurdity heuristic, scope neglect, and overconfidence bias.
But there’s also the other side of the coin, with biases that might make people reason poorly in favor of speculative causes like existential risk reduction:
Optimism bias. People generally think things will turn out better than they actually will. This could lead people to think that their projects will have a higher impact than they actually will, which would lead to higher estimates of cost-effectiveness than is reasonable.
Control bias. People like to think they have more control over things than they actually do, plausibly including control over the far future. Therefore, people are probably biased toward thinking they have more control over the far future than they actually do, leading to higher estimates of their ability to influence the future than is reasonable.
"Wow factor" bias. People seem attracted to more impressive claims. Saving a life for $2500 through a malaria bed net seems much more boring compared to the chance of saving the entire world by averting a global catastrophe. Within the Effective Altruist / LessWrong community, existential risk reduction is cool and high status, whereas averting global poverty is not. This might lead to more endorsement of existential risk reduction than is reasonable.
Conjunction fallacy. People have trouble assessing probability properly when there are many steps involved, each of which has a chance of not happening. A plan with ten steps, each with an independent 90% success rate, has only about a 35% chance of overall success (see the short sketch after this list). Focusing on the far future seems to require that a lot of largely independent events happen the way they are predicted to. This would mean people overestimate their chances of helping the far future, creating higher cost-effectiveness estimates than is reasonable.
Selection bias. When trying to find trends in history that are favorable for affecting the far future, some examples can be provided. However, this is because we usually hear about the interventions that end up working, whereas all the failed attempts to influence the far future are never heard of again. This creates a very skewed sample that biases our thinking about how likely we are to succeed at influencing the far future.
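To make the conjunction arithmetic concrete, here is a minimal sketch (hypothetical step counts and success rates, just illustrating how quickly chained probabilities fall off):

```python
# A minimal sketch of the conjunction arithmetic: the chance that every step
# of a multi-step plan succeeds falls off quickly, even when each individual
# step looks very likely on its own.

def chance_all_succeed(p_step, n_steps):
    """Probability that n independent steps all succeed."""
    return p_step ** n_steps

print(chance_all_succeed(0.9, 10))   # ~0.349, i.e. about a 35% chance overall
print(chance_all_succeed(0.95, 20))  # ~0.358, even 95%-likely steps compound away
```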
It’s concerning that there are numerous biases weighing both for and against speculative causes, and this means we must tread carefully when assessing their merits. However, I would strongly expect the biases in favor of speculative causes to be worse than those against them, because speculative causes lack the available feedback and objective evidence needed to help insulate against bias, whereas a focus on global health does not.
Focusing on Speculative Causes Uses Bad Decision Theory
Furthermore, not only is the case for speculative causes undermined by a bad track record and possible cognitive biases, but the underlying decision theory seems suspect in a way that's difficult to place.
Would you play a lottery with no stated odds?
Imagine another thought experiment -- you're asked to play a lottery. You have to pay $2 to play, but you have a chance at winning $100. Do you play?
Of course, you don't know, because you're not given odds. Rationally, it makes sense to play any lottery where you expect to come out ahead on average. If the lottery is a coin flip, it makes sense to pay $2 for a 50/50 shot at winning $100, since you'd win $50 on average and come out ahead by $48 per play in expectation. With a sufficiently high reward, even a one-in-a-million chance is worth it: pay $2 for a 1/1M chance of winning $1B, and you'd expect to come out ahead by $998 per play.
But $2 for the chance to win $100, without knowing what the chance is? What if you had some rough bounds -- say you knew the odds were at least 1/150 and at most 1/10, though even those bounds could be off by a little? Would you accept that bet?
Such a bet seems intuitively uninviting to me, yet this is the bet that speculative causes offer me.
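To make the lottery arithmetic concrete, here is a minimal sketch using only the numbers from the text (the bounded-but-unknown case just shows how wide the range of expected values is):

```python
# Expected net gain per play of a simple lottery: win probability * prize - ticket price.

def expected_net_gain(p_win, prize, ticket_price):
    return p_win * prize - ticket_price

# Coin-flip lottery: 50% chance of $100 for a $2 ticket.
print(expected_net_gain(0.5, 100, 2))             # 48.0

# Long-shot lottery: one-in-a-million chance of $1B for $2.
print(expected_net_gain(1e-6, 1_000_000_000, 2))  # 998.0

# Bounded-but-unknown odds: somewhere between 1/150 and 1/10.
for p in (1 / 150, 1 / 10):
    print(expected_net_gain(p, 100, 2))           # about -1.33 and 8.0
```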
"Conservative Orders of Magnitude" Arguments
In response to these considerations, I've seen people endorsing speculative causes look at their calculations and remark that even if their estimate were off by 1000x, or three orders of magnitude, they still would be on solid ground for high impact, and there's no way they're actually off by three orders of magnitude. However, Nate Silver's The Signal and the Noise: Why So Many Predictions Fail — but Some Don't offers a cautionary tale:
Silver points out that when estimating how safe mortgage-backed securities were, the difference between assuming defaults are perfectly uncorrelated and assuming they are perfectly correlated is a difference of 160,000x in your risk estimate -- or five orders of magnitude.
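As a toy version of that calculation (illustrative numbers in the spirit of Silver's example, not his exact model): take five mortgages that each default 5% of the time and ask how likely it is that all five default.

```python
# Five mortgages, each with a 5% chance of default. How likely is it that
# all five default, under the two extreme correlation assumptions?
p_default = 0.05
n = 5

p_uncorrelated = p_default ** n       # defaults perfectly uncorrelated
p_correlated = p_default              # defaults perfectly correlated

print(p_uncorrelated)                 # 3.125e-07, about 1 in 3,200,000
print(p_correlated)                   # 0.05, i.e. 1 in 20
print(p_correlated / p_uncorrelated)  # 160,000x difference in the risk estimate
```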
If these kinds of five-orders-of-magnitude errors are possible in a realm that has actual feedback and is moderately understood, how do we know the estimates for cost-effectiveness are safe for speculative causes that are poorly understood and offer no feedback? Again, our only recourse is to fall back on interventions that we can reliably predict, until we get better at prediction.
Value of Information, Exploring, and Exploiting
Of course, there still is one important aspect of this problem that has not been discussed -- value of information -- or the idea that sometimes it’s worth doing something just to learn more about how the world works. This is important in effective altruism too, where we focus specifically on “giving to learn”, or using our resources to figure out more about the impact of various causes.
I think this is actually really important and is not vulnerable to any of my previous arguments, because we’re not talking about impact, but rather learning value. Perhaps one could look to an "explore-exploit model", or the idea that we achieve the best outcome when we spend a lot of time exploring first (learning more about how to achieve better outcomes) before exploiting (focusing resources on achieving the best outcome we can). Therefore, whenever we have an opportunity to “explore” further or learn more about what causes have high impact, we should take it.
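As a loose analogy for exploring versus exploiting (a standard multi-armed bandit toy with made-up payoffs, not a model of charitable giving), an epsilon-greedy strategy spends a fraction of its budget trying options it knows little about and the rest on the best option found so far:

```python
import random

def epsilon_greedy(true_payoffs, rounds=1000, epsilon=0.1, seed=0):
    """Toy epsilon-greedy bandit: explore with probability epsilon, else exploit."""
    rng = random.Random(seed)
    n = len(true_payoffs)
    counts = [0] * n          # how many times each option has been tried
    estimates = [0.0] * n     # running estimate of each option's payoff
    total = 0.0
    for _ in range(rounds):
        if rng.random() < epsilon:
            arm = rng.randrange(n)                           # explore: learn about a random option
        else:
            arm = max(range(n), key=lambda i: estimates[i])  # exploit: back the current best
        reward = rng.gauss(true_payoffs[arm], 1.0)           # noisy observed payoff
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return total, estimates

total, estimates = epsilon_greedy([1.0, 2.0, 5.0])
print(round(total, 1), [round(e, 2) for e in estimates])
```

The point of the toy is just that some spending on learning is what lets the later "exploit" spending land on the right option at all.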
Learning in Practice
Unfortunately, in practice, I think these opportunities are very rare. Many organizations that I think are “promising” and worth funding further to see what their impact looks like do not have sufficiently good self-measurement in place to actually assess their impact or sufficient transparency to provide that information, therefore making it difficult to actually learn from them. And on the other side of things, many very promising opportunities to learn more are already fully funded. One must be careful to ensure that it’s actually one’s marginal dollar that is getting marginal information.
The Typical Donor
Additionally, I don’t think the typical donor is in a very good position to assess where there is high value of information or have the time and knowledge to act upon this information once it is acquired. I think there’s a good argument for people in the “effective altruist” movement to perhaps make small investments in EA organizations and encourage transparency and good measurement in their operations to see if they’re successfully doing what they claim (or potentially create an EA startup themselves to see if it would work, though this carries large risks of further splitting the resources of the movement).
But even that would take a very savvy and involved effective altruist to pull off. Assessing the value of information on more massive investments like large-scale research or innovation efforts would be significantly more difficult, beyond the talent and resources of nearly all effective altruists, and are probably left to full-time foundations or subject-matter experts.
GiveWell’s Top Charities Also Have High Value of Information
As Luke Muehlhauser mentions in "Start Under the Streetlight, Then Push Into the Shadows", lots of lessons can be learned only by focusing on the easiest causes first, even if we have strong theoretical reasons to expect that they won’t end up being the highest impact causes once we have more complete knowledge.
We can use global health cost-effectiveness considerations as practice for slowly and carefully moving into the more complex and less understood domains. There even are some very natural transitions, such as beginning to look at "flow through effects" of reducing disease in the third-world and beginning to look at how more esoteric things affect the disease burden, like climate change. Therefore, even additional funding for GiveWell’s top charities has high value of information. And notably, GiveWell is beginning this "push" through GiveWell Labs.
Conclusion
The bottom line is that when things look too good to be true, they usually are. Therefore, I should expect that the actual impact of speculative causes that make large promises will, upon a thorough investigation, turn out to be much lower.
And this has been true in other domains. People are notoriously bad at estimating the effects of causes in both the developed world and developing world, and those are the causes that are near to us, provide us with feedback, and are easy to predict. Yet, from the Even Start Family Literacy Program to deworming estimates, our commonsense has failed us.
Add to that the fact that we should expect ourselves to perform even worse at predicting the far future. Add to that optimism bias, control bias, "wow factor" bias, and the conjunction fallacy, which make it difficult for us to think realistically about speculative causes. And then add to that considerations in decision theory, and whether we would bet on a lottery with no stated odds.
When all is said and done, I'm very skeptical of speculative projects. Therefore, I think we should be focused on exploring and exploiting. We should do whatever we can to fund projects aimed at learning more, when those are available, but be careful to make sure they actually have learning value. And when exploring isn’t available, we should exploit what opportunities we have and fund proven interventions.
But don’t confuse these two concepts and fund causes intended for learning as if their impact value were already established. I’m skeptical about these causes actually being high impact, though I’m open to the idea that they might be and look forward to funding them in the future when they become better proven.
-
Followed up in: "What Would It Take To 'Prove' A Skeptical Cause" and "Where I've Changed My Mind on My Approach to Speculative Causes".
This was also cross-posted to my blog and to effective-altruism.com.
I'd like to thank Nick Beckstead, Joey Savoie, Xio Kikauka, Carl Shulman, Ryan Carey, Tom Ash, Pablo Stafforini, Eliezer Yudkowsky, and Ben Hoskin for providing feedback on this essay, even if some of them might strongly disagree with its conclusion.