Follow-up to: Why I'm Skeptical About Unproven Causes (And You Should Be Too)

-

My previous essay Why I'm Skeptical About Unproven Causes (And You Should Be Too) generated a lot of discussion here and on the Effective Altruist blog.  Some related questions came up a lot: what does it take to prove a cause?  What separates "proven" from "speculative" causes?  And how do you get a "speculative" cause to move into the "proven" column?  I've decided this discussion is important enough to merit elaboration at length, so I'm going to do that in this essay.

 

Proven Cause vs. Speculative Cause

My prime examples of proven causes are GiveWell's top charities.  These organizations -- the Against Malaria Foundation (AMF), GiveDirectly, and the Schistosomiasis Control Initiative (SCI) -- are rolling out programs that have received significant scientific scrutiny.  For example, delivering long-lasting insecticide-treated anti-malaria nets (what AMF does) has been studied in 23 different randomized controlled trials (RCTs).  GiveWell has also published thorough reviews of all three organizations (see the reviews for AMF, GiveDirectly, and SCI).

On the other hand, a speculative cause is one where the case rests entirely on intuition and speculation, with no scientific study.  For some of these causes, scientific study may even be impossible.

 

Now, I think 23 RCTs is a very high burden to meet.  Instead, we should recognize that being "proven" is not a binary yes or no, but rather a sliding scale.  Even AMF isn't fully proven -- there are still some areas of concern and potential weaknesses in the case for AMF.  Likewise, other organizations working in the same area, like Nothing But Nets, are nearly as proven, but lack the transparency and track record I'd need to be confident enough in them.  And AMF is a lot more proven than GiveDirectly, which is potentially more proven than SCI given recent developments in deworming research.

Ideally, we'd take a Bayesian approach, where we have a certain prior estimate of how cost-effective the organization is, and then update our cost-effectiveness estimate as additional evidence comes in.  For reasons I argued earlier and GiveWell has argued in "Why We Can't Take Expected Value Estimates Literally (Even When They're Unbiased)", "Maximizing Cost-Effectiveness Estimates via Critical Inquiry", and "Some Considerations Against More Investment in Cost-Effectiveness Estimates", I think our prior estimate should be quite skeptical (i.e. expect cost-effectiveness to be not as good as AMF / much closer to average than naïvely estimated) until proven otherwise.
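To make that concrete, here is a minimal sketch of what skeptical Bayesian updating could look like, written in Python with purely hypothetical numbers (the prior, the naïve estimate, and their uncertainties are illustrative assumptions of mine, not figures from GiveWell or anyone else):

```python
import math

# A minimal sketch of skeptical Bayesian updating (hypothetical numbers only).
# We work in log10 units of "cost-effectiveness relative to a proven charity"
# and assume both the prior and the estimate's error are roughly normal.

prior_mean = 0.0     # prior: about as cost-effective as the average proven charity
prior_sd = 0.5       # prior uncertainty: roughly a factor of 3 either way

estimate_mean = 2.0  # a naive estimate claiming ~100x the prior mean
estimate_sd = 1.5    # but the estimate is noisy and weakly evidenced

# Standard conjugate normal-normal update: a precision-weighted average.
prior_precision = 1 / prior_sd ** 2
estimate_precision = 1 / estimate_sd ** 2
posterior_mean = (prior_precision * prior_mean + estimate_precision * estimate_mean) / (
    prior_precision + estimate_precision
)
posterior_sd = math.sqrt(1 / (prior_precision + estimate_precision))

print(f"posterior multiple ~{10 ** posterior_mean:.1f}x "
      f"(log10 {posterior_mean:.2f} +/- {posterior_sd:.2f})")
# With these numbers, the ~100x claim shrinks to roughly 1.6x -- "much closer to
# average than naively estimated" -- and only stronger evidence moves it further.
```

The exact numbers don't matter; the point is that a skeptical prior does most of the work until the evidence becomes strong.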

 

Right now, I consider AMF, GiveDirectly, and SCI to be the only sufficiently proven interventions, but I'm open to other organizations entering this category.  Of course, this doesn't mean that all other organizations must be speculative -- instead, there is a middle ground of organizations that are neither speculative nor "sufficiently proven".

 

From Speculative to Proven

So how does a cause become proven?  Through more measurement.  I think this is best described through examples:

Vegan Outreach and The Humane League work to persuade people to reduce the amount of meat in their diets in order to avoid contributing to cruelty on factory farms.  They do this through leafleting and Facebook ads.  Naïve cost-effectiveness estimates suggest that, even under rather pessimistic assumptions, this kind of advocacy is very cost-effective, perhaps around $0.02 to $65.92 to reduce one year of suffering on a factory farm.
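For readers who want to see the shape of that calculation, here is a back-of-the-envelope sketch with purely illustrative placeholder numbers (these are not the inputs behind the $0.02 to $65.92 range, just the structure such an estimate takes):

```python
# Illustrative structure of a naive leafleting cost-effectiveness estimate.
# Every number below is a hypothetical placeholder, not an actual study result.

cost_per_leaflet = 0.20            # dollars to print and hand out one leaflet
conversion_rate = 0.01             # fraction of recipients who reduce meat consumption
years_change_lasts = 2.0           # average years the dietary change persists
animal_years_per_person_year = 5.0 # factory-farm animal-years averted per person-year

cost_per_conversion = cost_per_leaflet / conversion_rate                      # $20
suffering_years_averted = years_change_lasts * animal_years_per_person_year   # 10
cost_per_suffering_year = cost_per_conversion / suffering_years_averted       # $2

print(f"~${cost_per_suffering_year:.2f} per year of factory-farm suffering averted")
# The wide $0.02-$65.92 range in the text reflects how sensitive this kind of
# estimate is to each of these assumptions.
```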

But we can't be sure enough about this, and I don't think this estimate is reliable.  We can, however, make it better with additional study.  I think that if we ran three or so more studies that were relatively independent (taking place in different areas and run by different researchers), addressed current problems with the existing studies (like the lack of a control group), had longer time frames and larger sample sizes, and still pointed toward a conversion rate of 1% or more, then I would start donating to this kind of outreach instead, believing it to be "sufficiently proven".
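To give a sense of what "larger sample sizes" would actually mean here, below is a rough two-proportion sample-size calculation.  The baseline and treatment conversion rates are hypothetical assumptions (not figures from any actual leafleting study); the formula is the standard approximation for comparing two proportions:

```python
import math

# Rough sample-size sketch: how many people per arm would an RCT need to
# reliably detect a 1-percentage-point lift in conversions over a control group?
# The rates below are hypothetical assumptions, not real study data.

p_control = 0.01  # assumed conversion rate with no leaflet
p_treated = 0.02  # assumed conversion rate with a leaflet (1 point higher)
z_alpha = 1.96    # two-sided 5% significance level
z_beta = 0.84     # 80% power

variance = p_control * (1 - p_control) + p_treated * (1 - p_treated)
n_per_arm = (z_alpha + z_beta) ** 2 * variance / (p_treated - p_control) ** 2

print(f"~{math.ceil(n_per_arm)} participants per arm")
# With these assumptions, roughly 2,300 people per arm -- much larger than the
# early leafleting studies, which is part of why their estimates are unreliable.
```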

Another example could be 80,000 Hours, an organization that provides free careers advice and resources and encourages people to aim for higher-impact careers.  One could select a group of people who seem like good candidates for careers advice, give them all an initial survey asking specific questions about their current thoughts on careers, and then randomly accept or reject them for careers advising.  Then follow up with everyone a year or two later and see what careers they ended up in, how they got those jobs, and, for the group that got advising, how valuable the advising was in retrospect.  With continued follow-up, one could measure the difference in expected impact between the two groups and figure out how good 80K is at careers advice.
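A minimal sketch of that evaluation design might look like the following (the applicant list, the 50/50 randomization, and the "impact score" are hypothetical stand-ins for whatever outcome measure 80K settled on; the follow-up scores are simulated purely so the sketch runs):

```python
import random
import statistics

# Hypothetical sketch of the randomized careers-advice evaluation described above.

random.seed(0)
applicants = [f"applicant_{i}" for i in range(200)]
random.shuffle(applicants)
advised, control = applicants[:100], applicants[100:]   # randomly accept half

def follow_up_impact_score(person: str) -> float:
    """Placeholder for the score assigned after the 1-2 year follow-up survey.

    Simulated here (with a made-up +0.3 effect for the advised group) purely so
    the sketch runs; in practice it would come from the follow-up data.
    """
    return random.gauss(1.0, 0.5) + (0.3 if person in advised else 0.0)

advised_scores = [follow_up_impact_score(p) for p in advised]
control_scores = [follow_up_impact_score(p) for p in control]

effect = statistics.mean(advised_scores) - statistics.mean(control_scores)
print(f"Estimated effect of advising: {effect:+.2f} (in units of the impact score)")
```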

Perhaps even the Machine Intelligence Research Institute (MIRI) could benefit from more measurement.  The trouble is that it's working on a problem (making sure that advanced artificial intelligence goes well for humanity) so distant that it's difficult to get feedback.  But MIRI could still potentially assess the successes and failures of its attempts to influence the AI community, and it could still try to solicit more external reviews of its work from independent AI experts.  I'm not close enough to MIRI to know whether these would be good or bad ideas, but it seems plausible at first glance that even MIRI could be better measured.

 

And it wouldn't be too difficult to expand this to other areas.  For example, I think GiveWell's tracking of money moved is reliable enough, and their commitment to self-evaluation (and external review) strong enough, that I would strongly consider funding them before any of their top charities if they ever had room for more funding (which they currently do not -- they urge you to donate to their top charities instead).  Effective Animal Activism could do the same, and I think with even greater success, because if someone starts donating to animal charities after joining EAA, it seems moderately likely that few other things could have influenced them.

Of course, these forms of measurement have their problems, and no measurement -- not even two dozen RCTs -- will be perfect.  But some level of feedback and measurement is essential for avoiding our own biases and failures of naïve estimation.

 

The Proven and The Promising: My Current Donation Strategy

My current donation strategy is to separate organizations into three categories: proven, promising, and not promising.

Proven organizations are the ones that I talked about earlier -- AMF, GiveDirectly, and SCI.

Promising organizations are ones I think have a hope of becoming proven someday.  They're organizations running interventions that intuitively seem to have high upside (like 80,000 Hours in getting people into better careers and The Humane League in persuading people to become vegetarian), that have a good commitment to transparency and self-measurement (The Humane League shines here), and that offer opportunities for additional money to be converted into additional information about their impact.

My goal in donating would be to first ensure the survival of all promising organizations (make sure they have enough funding to stay around) and then to buy as much information from them as I can.  For example, I'd be interested in funding more studies of vegetarian outreach or making sure 80K has the money it needs to hire a new careers advisor.

Once these needs are met, I'll save a fair amount of my donations to meet future needs down the road.  But I'll also spend some on proven organizations to (a) achieve impact, (b) maintain the incentive for organizations to want to become proven, and (c) show public support for those organizations and for donating in general.

...Now I just need to actually get some more money.

-

(This was also cross-posted on my blog.)

I'd like to thank Jonas Vollmer for having the critical conversation with me that inspired this piece.

Comments

The proposed approach seems biased towards short-term impact because of its simple evaluability. It is unclear, for instance, what the long-term impact of AMF or related charities will be. If we make "proven" refer to long-term impact, no cause would fulfil the requirements and it would come down to evaluating the expected utility (long-term) of the "promising" causes.

Granted, short-term impact is impact, too. But those who accept the arguments that the far future likely dominates by many many orders of magnitude would need to be very certain about long-term impact being virtually impossible to assess at this point. Maybe this case can be made (and Peter Hurford certainly makes a good attempt in his previous post), but I am not yet convinced.

I think we should question the implicit suggestion here that we should focus on "proven" opportunities. The idea that we'll accomplish more if we focus on the best proven opportunities rather than the best unproven opportunities is itself a highly speculative claim, and one that has no firm grounding in common sense, though neither does the opposite view. Looking at track records seems to provide no clear evidence for this idea, and in some ways seems to push against it. My overall impression is that people doing the most promising unproven activities contributed a large share of the innovations and scientific breakthroughs that have made the world so much better than it was hundreds of years ago, despite being a small share of all human activity.

I think the key good idea here is that we should focus on rational giving rather than "proven" giving or "quantified" giving.

I agree with your claim that there is a strong case for gathering more information about many promising "unproven" causes and opportunities to do good.

A criticism I have of your posts is that you seem to view your typical audience member as somebody who stubbornly disagrees with your viewpoint, rather than as an undecided voter. More critically, you seem to view yourself as somebody capable of changing the former's opinion through (very well-written) restatements of the relevant arguments. But people like me want to know why previous discussions haven't yet resolved the issue even in discussions between key players. Because they should be resolvable, and posts like this suggest to me that at least some players can't even figure out why they aren't yet.

Ideally, we'd take a Bayesian approach, where we have a certain prior estimate of how cost-effective the organization is, and then update our cost-effectiveness estimate as additional evidence comes in. For reasons I argued earlier and GiveWell has argued in "Why We Can't Take Expected Value Estimates Literally (Even When They're Unbiased)", "Maximizing Cost-Effectiveness Estimates via Critical Inquiry", and "Some Considerations Against More Investment in Cost-Effectiveness Estimates", I think our prior estimate should be quite skeptical (i.e. expect cost-effectiveness to be not as good as AMF / much closer to average than naïvely estimated) until proven otherwise.

The Karnofsky articles have been responded to, with a rather in-depth follow-up discussion, in this post. It's hardly important to me that you don't consider existential risk charities to defeat expected-value criticisms, because Peter Hurford's head is not where I need this discussion to play out in order to convince me. At first glance, and after continued discussion, the arguments appear to me incredibly complex, and possibly too complex for many to even consider. In such cases, sometimes the correct answer demonstrates that the experts were overcomplicating the issue. In others, the laymen were overtrivializing it.

Those advocating existential risk reduction often argue as if their cause was unjustified exactly until the arguments started making sense. These arguments tend to be extremely high volume, and offer different conclusions to different audience members with different background assumptions. For those who have ended up advocating X-risk safety, the argument has ceased to be unclear in the epistemological sense, and its philanthropic value is proven.

I'd like to hear more from you, and to hear arguments laid out for your position in a way that allows me to accept them as relevant to the most weighty concerns of your opponents.

On Criticism of Me

A criticism I have of your posts is that you seem to view your typical audience member as somebody who stubbornly disagrees with your viewpoint, rather than as an undecided voter.

I think I am doing persuasive writing (i.e. advocating for my point of view), but I would model myself as talking to an undecided voter or at least someone open minded, not someone stubborn. I'm interested in what in my writing is coming across as indicating I expect a stubborn audience.

More critically, you seem to view yourself as somebody capable of changing the former's opinion through (very well-written) restatements of the relevant arguments.

I think that's the case, yes. But I'm not sure they're restatements so much as a synthesis of many arguments that had not previously been all in one place, along with some arguments that had never before been articulated in writing (as is the case in this piece).

But people like me want to know why previous discussions haven't yet resolved the issue even in discussions between key players. Because they should be resolvable, and posts like this suggest to me that at least some players can't even figure out why they aren't yet.

It's difficult to offer an answer to that question. I think one problem is many of these discussions haven't (at least as far as I know) taken place in writing yet.

I'd like to hear more from you, and to hear arguments laid out for your position in a way that allows me to accept them as relevant to the most weighty concerns of your opponents.

I'm confused. What's wrong with how they're currently laid out? Do you think there are certain arguments I'm not engaging with? If so, which ones?

~

On X-Risk Arguments

At first glance, and after continued discussion, the arguments appear to me incredibly complex, and possibly too complex for many to even consider. In such cases, sometimes the correct answer demonstrates that the experts were overcomplicating the issue. In others, the laymen were overtrivializing it.

I don't understand what you're saying here. It sounds like you're advocating for learned helplessness, but I don't think that's the case.

Those advocating existential risk reduction often argue as if their cause was unjustified exactly until the arguments started making sense.

What do you mean? Can you give me an example?

For those who have ended up advocating X-risk safety, the argument has ceased to be unclear in the epistemological sense, and its philanthropic value is proven.

I think that's equivocating two different definitions of "proven".

On Criticism of Me

I don't mean to be antagonistic here, and I apologize for my tone. I'd prefer my impressions to be taken as yet-another-data-point rather than a strongly stated opinion on what your writings should be.

I'm interested in what in my writing is coming across as indicating I expect a stubborn audience.

The highest rated comment to your vegetarianism post and your response demonstrate my general point here. You acknowledge that the points could have been in your main essay, but your responses are why you don't find them to be good objections to your framework. My overall suggestion could be summarized as a plea to take two steps back before making a post, to fill up content not with arguments, but with data about how people think. Summarize background assumptions and trace them to their resultant beliefs about the subject. Link us to existing opinions by people who you might imagine will take issue with your writing. Preempt a comment thread by considering how those existing opinions would conflict with yours, and decide to find that more interesting than the quality of your own argument.

These aren't requirements for a good post. I'm not saying you don't do these things to some extent. They are just things which, if they were more heavily focused, would make your posts much more useful to this data point (me).

It's difficult to offer an answer to that question. I think one problem is many of these discussions haven't (at least as far as I know) taken place in writing yet.

That seems initially unlikely to me. What do you find particularly novel about your Speculative Causes post that distinguishes it from previous Less Wrong discussions, where this has been the topic du jour and the crux of whether MIRI is useful as a donation target? Do you have a list of posts that are similar, but which fall short in a way your Speculative Causes post makes up for?

I'm confused. What's wrong with how they're currently laid out? Do you think there are certain arguments I'm not engaging with? If so, which ones?

Again, this post seems extremely relevant to your Speculative Causes post. This comment and its child are also well written, and link in other valuable sources. Since AI-risk is one of the most-discussed topics here, I would have expected a higher quality response than calling the AI-safety conclusion commonsense.

Those advocating existential risk reduction often argue as if their cause was unjustified exactly until the arguments started making sense.

What do you mean? Can you give me an example?

Certain portions of Luke's Story are the best example I can come up with after a little bit of searching through posts I've read at some point in the past. The way he phrases it is slightly different from how I have, but it suggests inferential distance for the AI form of X-Risk might be insurmountably high for those who don't have a similar "aha." Quoted from link:

Good’s paragraph ran me over like a train. Not because it was absurd, but because it was clearly true. Intelligence explosion was a direct consequence of things I already believed, I just hadn’t noticed! Humans do not automatically propagate their beliefs, so I hadn’t noticed that my worldview already implied intelligence explosion. I spent a week looking for counterarguments, to check whether I was missing something, and then accepted intelligence explosion to be likely.

And Luke's comment (child of So8res') suggests his response to your post would be along the lines of "lots of good arguments built up over a long period of careful consideration." Learned helplessness is the opposite of what I'm advocating. When laymen overtrivialize an issue, they fail to see how somebody who has made it a long-term focus could be justified in their theses.

I think that's equivocating two different definitions of "proven".

It is indeed. I was initially going to protest that your post conflated "proven in the Bayesian sense" and "proven as a valuable philanthropic cause," so I was trying to draw attention to that. Those who think that the probability of AI-risk is low might still think that it's high enough to overshadow nearly all other causes, because the negative impact is so high. AI-risk would be unproven, but its philanthropic value proven to that person.

As comments on your posts indicate, MIRI and its supporters are quite convinced.

The highest rated comment to your vegetarianism post and your response demonstrate my general point here. You acknowledge that the points could have been in your main essay, but your responses are why you don't find them to be good objections to your framework.

I think there's a real risk of making the essay too long by analyzing absolutely every consideration that could ever be brought up. There are dozens of additional considerations I could have elaborated on at length in my essay (the utilitarianism of it, other meta-ethics, free range, whether nonhuman animal lives actually aren't worth living, the logic of the larder, wild animal suffering, etc.), and it would be impossible to cover them all. Therefore, I preferred to let them come up in the comments.

But generally, should I hedge my claims more in light of more possible counterarguments? Yeah, probably.

~

That seems initially unlikely to me. What do you find particularly novel about your Speculative Causes post that distinguishes it from previous Less Wrong discussions, where this has been the topic du jour and the crux of whether MIRI is useful as a donation target?

I did read a large list of essays in this realm prior to writing this essay. A lot played on the decision theory angle and the concern with experts, but none mentioned the potential for biases in favor of x-risk or the history of commonsense.

~

a higher quality response than calling the AI-safety conclusion commonsense.

To be fair, the essay did include quite a lot more extended argument than just that. I do agree I could have engaged better with other essays on the site, though. I was mostly concerned with issues of length and amount of time spent, but maybe I erred too much on the side of caution.

Assuming that we don't discount future (or not-yet-existent) people, surely ripple effects far outweigh the lives directly saved. But many other actions have ripple effects too. Can anyone "prove" that the ripple effects of donating to the cause of saving lives in developing countries are more positive than those of, for example, improving US education? If not, why target current donations to charities that can prove they are effective at directly saving lives? This would be equivalent to education charities proving that they are effective at raising students' grades, which I think you'd agree would not by itself warrant donating to those charities.

What I want my main point to be, upon further reflection, is that if we find ourselves in a situation where we don't know enough to know what's most effective, the proper reaction is not to pursue impact, but instead to find ways to reduce our uncertainty.

So if we don't know about which has more ripple effects, we should invest in finding out, not pick at random.

So if we don't know about which has more ripple effects, we should invest in finding out, not pick at random.

I would agree that currently, we should invest in finding out, not pick at random, but we're likely to never achieve an understanding of ripple effects on par with our understanding of how well malaria nets or deworming efforts work, so if that's the bar you're setting (which you seem to be doing based on the post), then we'll never actually pick any object-level cause to support.

on par with our understanding of how well malaria nets or deworming efforts work, so if that's the bar you're setting (which you seem to be doing based on the post)

That's not what I'm saying. I actually intended my essay to argue/clarify against that.

One possible issue here is that even if a cause isn't "provable", it may still be worth pursuing if it has a good chance of having a positive effect.

For example, say you're running a foundation and have to decide how much money to give to mosquito nets now, and how much to invest in research to develop a malaria vaccine. Of course it's not "proven" that trying to develop a malaria vaccine is going to help people, and it can't be proven until it succeeds, but it is quite possible that it might. And if it does, it is likely to have a significantly larger effect than nets.

In other words: provability is certainly one issue to consider, but it's not the only one.

My goal in donating would be to first ensure the survival of all promising organizations (make sure they have enough funding to stay around) and then to buy as much information from them as I can.

This sounds like a goal for a wealthy philanthropist. For those of us who don't have enough money to ensure the survival of even one promising organization, but can only make a marginal impact, what do you recommend?

I'm by no means wealthy myself (I'm a student!), but one could pool resources with other people. Even if you don't have enough resources to ensure the survival of a promising organization, you might still be able to help buy information.


Instead, we should recognize that being "proven" is not a binary yes or no, but rather a sliding scale.

Or, to use Popper's model of falsifiability, we could say proven is a binary of no and maybe. Maybe is worth investigating and improving, no is no. If a charity cannot express how they know they have failed, I hesitate to trust they know they have succeeded. It can be a simple thing to express negation (if this soup kitchen does not give away X bowls of soup in Y period of time, we have failed and therefore Z) especially if the bar is set low, but I have never seen a charity do so.

My theory: people want to back the strong horse, so they avoid charities that say they might not be 100% successful. Charities talk mostly of how the world fails a cause (and therefore them) and not how they might fail. No exit strategy, too pure of heart to fail. This also explains mission creep in charities: if they succeed, they no longer have a reason to exist, and so must now adopt causes P and Q as well as R, because (now) they are all connected. And perhaps a bit of self-importance / parent shaming: this agency / generation is going to end homelessness ('cuz you other / older guys were too mean or too dumb to do so).