Comment author: gwern 01 April 2013 05:51:30PM *  49 points [-]

But I didn't bite any of the counterarguments to the extent that it would be necessary to counter the 10^100.

I don't think this is very hard if you actually look at examples of long-term investment. Background: http://www.gwern.net/The%20Narrowing%20Circle#ancestors and especially http://www.gwern.net/The%20Narrowing%20Circle#islamic-waqfs

First things:

Businesses and organizations suffer extremely high mortality rates; one estimate puts it at a 99% chance of mortality per century. (This ignores existential risks and luckily-averted risks like nuclear warfare, and so underestimates the true risks.) So the chance of any perpetuity surviving the full period is 0.01^120 = 10^-240. That's a good chunk of the reason to not bother with long-term trusts right there! We can confirm this empirically by observing that there must have been many scores of thousands of waqfs - perpetual charities - in the Islamic world, and very few survived or saw their endowments grow. (I have pointed Hanson at waqfs repeatedly, but he has yet to blog on that topic.)

Similarly, we can observe that despite the countless temples, hospitals, homes, and institutions with endowments in the Greco-Roman world just 1900 years ago or so - less than a sixth of the time period in question - we know of zero surviving institutions, all of them having fallen into decay/disuse/Christian-Muslim expropriation/vicissitudes of time. The many Buddhist institutions of India suffered a similar fate, between a resurgent Hinduism and Muslim encroachment.

We can also point out that many estimates ignore a meaningful failure mode: endowments or nonprofits going off-course and doing things the founder did not mean them to do. The American university case comes to mind, as does the British university case I cite in my essay, and there is a long vein of conservative criticism of American nonprofits (some of it summarized in Cowen's Good and Plenty) pointing out the 'liberal capture' of originally conservative institutions like the Ford Foundation, which obviously defeats the original point.
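
The survival arithmetic here can be sketched in a few lines (a toy calculation using the comment's assumed figures: 99% organizational mortality per century, over a horizon of roughly 12,000 years, i.e. 120 centuries):

```python
import math

# Assumed from the comment: 99% organizational mortality per century,
# compounded over ~12,000 years = 120 centuries.
survival_per_century = 1 - 0.99   # 1% of organizations survive any given century
centuries = 120

p_survive_all = survival_per_century ** centuries
print(f"P(perpetuity survives the whole period) ~ 10^{math.log10(p_survive_all):.0f}")
```

The point is less the exact figure than that independent per-century hazards compound multiplicatively, so even modest annual risks annihilate survival odds over millennia.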

(BTW, if you read the waqf link you'd see that excessive iron-clad rigidity in an organization's goal can be almost as bad, as the goals become outdated or irrelevant or harmful. So if the charter is loose, the organization is easily and quickly hijacked by changing ideologies or principal-agent problems like the iron law of oligarchy; but if the charter is rigid, the organization may remain on-target while becoming useless. It's hard to design a utility function for a potentially powerful optimization process. Hm.... why does that sentence sound so familiar... It's almost as if we needed a theory of Friendly Artificial General Organizations...)

Survivorship bias as a major factor in overestimating risk-free returns over time is well-known, and a new result on it came out recently. We can observe many reasons for survivorship bias in estimates of nonprofit and corporate survival in the 20th century (see previously) and also in financial returns: Czarist Russia, the Weimar and Nazi Germanies, Imperial Japan, all countries in the Warsaw Pact or otherwise communist such as Cuba/North Korea/Vietnam, Zimbabwe... While I have seen very few recent invocations of the old chestnut that 'stock markets deliver 7% return on a long-term basis' (perhaps that conventional wisdom has been killed), the survivorship work suggests that for just the 20th century we might expect more like 2%.
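
The gap those two rates imply over a single century is easy to check (illustrative numbers only, taken from the comment):

```python
def compound(rate, years, principal=1.0):
    """Value of `principal` compounded annually at `rate` for `years` years."""
    return principal * (1 + rate) ** years

# 7% "conventional wisdom" vs ~2% survivorship-adjusted, over one century:
naive = compound(0.07, 100)      # roughly 868x
adjusted = compound(0.02, 100)   # roughly 7.2x
print(f"7%: {naive:.0f}x; 2%: {adjusted:.1f}x")
```

A two-order-of-magnitude difference in terminal wealth, from a five-point difference in assumed annual return.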

The risk per year is related to the size of the endowment/investment; as has already been pointed out, there is fierce legal opposition to any sort of perpetuity, and at least two cases of perpetuities being wasted or stolen legally. Historically, fortunes which grow too big attract predators, become institutionally dysfunctional and corrupt, and fall prey to rare risks. Example: the non-profit known as the Catholic Church owned something like a quarter of all of England before it was expropriated precisely because it had so effectively gained wealth and invested it (property rights in England otherwise having been remarkably secure over the past millennium). The Buddhist monasteries in China and Japan had issues with growing so large and powerful that they became major political and military players, leading to extirpation by other actors such as Oda Nobunaga. Any perpetuity which becomes equivalent to a large or small country will suffer the same mortality rates.

And then there's opportunity cost. We have good reason to expect the upcoming centuries to be unusually risky compared to the past: even if you completely ignore new technological issues like nanotech or AI or global warming or biowarfare, we still suffer under a novel existential threat of thermonuclear warfare. This threat did not exist at any point before 1945, and systematically makes the future riskier than the past. Investing in a perpetuity, itself investing in ordinary commercial transactions, does little to help except possibly some generic economic externalities of increased growth (and no doubt there are economists who, pointing to current ultra-low interest rates and sluggish growth and 'too much cash chasing safe investments', would deprecate even this).

Compounding-wise, there are other forms of investment: investment into scientific knowledge, into more effective charity (surely saving people's lives can have compounding effects into the distant future?), and so on.

So to recap:

  1. organizational mortality is extremely high
  2. financial mortality is likewise extremely high; and both organizational & financial mortality are relevant
  3. all estimates of risk are systematically biased downwards, and estimates indicate that at least one of these biases is very large
  4. risks for organizations and finances increase with size
  5. opportunity cost is completely ignored

Any of these except perhaps #3 could be sufficient to defeat perpetuities, and combined, I think they leave the case for perpetuities completely non-existent.

Comment author: CarlShulman 02 March 2014 05:38:30AM *  0 points [-]

So the chance of any perpetuity surviving the full period is 0.01^120 = 10^-240.

The premises in this argument aren't strong enough to support conclusions like that. Expropriation risks have declined strikingly, particularly in advanced societies, and it's easy enough to describe scenarios in which the annual risk of expropriation falls to extremely low levels, e.g. a stable world government run by patient immortals, or with an automated legal system designed for ultra-stability.

ETA: Weitzman on uncertainty about discount/expropriation rates.

Comment author: NoSuchPlace 11 January 2014 11:43:46PM *  2 points [-]

I don't think that this is meant as a complete counter-argument against cryonics, but rather a point which needs to be considered when calculating the expected benefit of cryonics. For a very hypothetical example (which doesn't reflect my beliefs) where this sort of consideration makes a big difference:

Say I'm young and healthy, so that I can be 90% confident that I'll still be alive in 40 years' time, and I also believe that immortality and reanimation will become available at roughly the same time. Then the expected benefit of signing up for cryonics, all else being equal, would be about 10 times lower if I expected the relevant technologies to go online either very soon (within the next 40 years) or very late (later than I would expect cryonics companies to last) than if I expected them to go online some time after I had very likely died but before cryonics companies disappeared.
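
The factor of 10 in this hypothetical falls out of a two-line model (toy numbers from the comment, not a real estimate):

```python
# Benefit of signing up is roughly proportional to P(you die before the
# technology arrives AND your cryonics provider survives until reanimation).
p_die_within_40y = 0.10  # assumed: young and healthy, 90% chance of surviving 40y

benefit_if_tech_soon = p_die_within_40y  # tech within 40y: only helps the ~10% who die first
benefit_if_tech_mid = 1.0                # tech after your likely death, provider still around
benefit_if_tech_late = 0.0               # provider gone before reanimation arrives

print(benefit_if_tech_mid / benefit_if_tech_soon)  # -> 10.0
```
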

Edit: Fixed silly typo.

Comment author: CarlShulman 12 January 2014 01:31:53AM *  9 points [-]

That would make sense if you were doing something like buying a lifetime cryonics subscription upfront that could not be refunded even in part. But it doesn't make sense with actual insurance, where you stop buying it if it is no longer useful, so costs are matched to benefits.

  • Life insurance, and cryonics membership fees, are paid on an annual basis
  • The price of life insurance is set largely based on your annual risk of death: if your risk of death is low (young, healthy, etc) then the cost of coverage will be low; if your risk of death is high the cost will be high
  • You can terminate both the life insurance and the cryonics membership whenever you choose, ending coverage
  • If you die in a year before 'immortality' becomes available, then it does not help you

So, in your scenario:

  • You have a 10% chance of dying before 40 years have passed
  • During the first 40 years you pay on the order of 10% of the cost of lifetime cryonics coverage (higher because there is some frontloading, e.g. membership fees not being scaled to mortality risk)
  • After 40 years 'immortality' becomes available, so you cancel your cryonics membership and insurance after only paying for life insurance priced for a 10% risk of death
  • In this world the potential benefits are cut by a factor of 10, but so are the costs (roughly); so the cost-benefit ratio does not change by a factor of 10
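
The bullet points above amount to saying that costs and benefits scale together; as a stylized sketch (assumed, normalized numbers):

```python
# Stylized version of the argument: annually-paid insurance stops when
# cryonics becomes moot, so realized cost tracks realized benefit.
# (Frontloading, e.g. flat membership fees, makes realized cost somewhat higher.)
p_die_before_tech = 0.10      # from the scenario: 10% chance of dying in 40 years

full_lifetime_cost = 1.0      # normalize lifetime coverage cost to 1
full_lifetime_benefit = 1.0   # and the benefit of guaranteed-useful coverage to 1

realized_cost = p_die_before_tech * full_lifetime_cost        # premiums stop at year 40
realized_benefit = p_die_before_tech * full_lifetime_benefit  # only matters if you died first

print(realized_benefit / realized_cost)  # -> 1.0: the cost-benefit ratio is unchanged
```
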
Comment author: gjm 11 January 2014 08:15:01PM 6 points [-]

(My version of) the above is essentially my reason for thinking cryonics is unlikely to have much value.

There's a slightly subtle point in this area that I think often gets missed. The relevant question is not "how likely is it that cryonics will work?" but "how likely is it that cryonics will both work and be needed?". A substantial amount of the probability that cryonics does something useful, I think, comes from scenarios where there's huge technological progress within the next century or thereabouts (because if it takes longer then there's much less chance that the cryonics companies are still around and haven't lost their patients in accidents, wars, etc.) -- but conditional on that it's quite likely that the huge technological progress actually happens fast enough that someone reasonably young (like Chris) ends up getting magical life extension without needing to die and be revived first.

So the window within which there's value in signing up for cryonics is where huge progress happens soon but not too soon. You're betting on an upper as well as a lower bound to the rate of progress.

Comment author: CarlShulman 11 January 2014 09:20:58PM *  10 points [-]

There's a slightly subtle point in this area that I think often gets missed.

I have seen a number of people make (and withdraw) this point, but it doesn't make sense, since both the costs and benefits change (you stop buying life insurance when you no longer need it, so costs decline in the same ballpark as benefits).

Contrast with the following question:

"Why buy fire insurance for 2014, if in 2075 anti-fire technology will be so advanced that fire losses are negligible?"

You pay for fire insurance this year to guard against the chance of fire this year. If fire risk goes down, the price of fire insurance goes down too, and you can cancel your insurance at will.

Comment author: alicey 11 January 2014 04:03:03AM *  2 points [-]

in http://intelligenceexplosion.com/2012/engineering-utopia/ you say "There was once a time when the average human couldn’t expect to live much past age thirty."

this is false, right?

(edit note: life expectancy now somewhat matches "what the average human can expect to live to", but if you have a double hump of death at infancy/childhood and then old age, you can have a life expectancy of 30 but a life expectancy at age 15 of 60, in which case the average human can expect to live to either 1 or 60 (this is very different from "can't expect to live to >30"). or just "can expect to live to 60" if you too don't count infants as really human)
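
The double-hump point can be made concrete with a toy population (made-up numbers, chosen only to give a life expectancy of about 30):

```python
# Toy bimodal mortality: half of all people die in infancy at ~1, half at 60.
# Mean lifespan ("life expectancy at birth") is ~30, yet nobody dies near 30.
ages_at_death = [1] * 50 + [60] * 50

life_expectancy_at_birth = sum(ages_at_death) / len(ages_at_death)
survivors_of_childhood = [a for a in ages_at_death if a >= 15]
life_expectancy_at_15 = sum(survivors_of_childhood) / len(survivors_of_childhood)

print(life_expectancy_at_birth)  # -> 30.5
print(life_expectancy_at_15)     # -> 60.0
```
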

Comment author: CarlShulman 11 January 2014 10:06:28AM 0 points [-]

Life expectancy used to be very low, but it was driven by child and infant mortality more than later pestilence and the like.

Comment author: pianoforte611 06 January 2014 02:24:55AM *  3 points [-]

This still feels like a "we need fifty Stalins" critique.

For me the biggest problems with the effective altruism movement are:

1: Most people aren't utilitarians.

2: Maximizing QALYs isn't even the correct course of action under utilitarianism - it's short-sighted and silly. Which is worse under utilitarianism: Louis Pasteur dying in his childhood or 100,000 children in a third world country dying? I would argue that the death of Louis Pasteur is a far greater tragedy since his contributions to human knowledge have saved a lot more than 100,000 lives and have advanced society in other ways. But a QALY approach does not capture this. That's extreme obviously, but my issue is that all lives are not equal. People in developed countries matter way more than people in developing countries in terms of advancing technology and society in general.

Comment author: CarlShulman 06 January 2014 05:54:29AM *  1 point [-]

Maximizing DALY's isn't even the correct course of action under utilitarianism

That's an understatement! DALYs are defined as intrinsically bad: one DALY is the loss of one year of healthy life relative to a reference lifespan, or equivalent morbidity. QALYs are the good ones that you want to increase.

Comment author: peter_hurford 06 January 2014 02:31:29AM 2 points [-]

I'll speak up on this one. I am a booster of more such estimates, detailed enough to make assumptions and reasoning explicit.

I generally agree. But I think there's a large difference between "here's a first-pass attempt at a cost-effectiveness estimate purely so we can compare numbers" and "this is how much it costs to save a life". Another problem: one thing I don't think people take into account when comparing figures (e.g., comparing veg ads to GiveWell) is the difference in epistemic strength behind each number, and that could cause concern.

~

I think that on some of these questions there is also real variation in opinion that should not simply be summarized as a clear "mainstream" position.

I don't know how much variation there is. I don't claim to know a representative sample of EAs. But I do think there's not much variation among EA orgs on the issues I've described as mainstream.

Which positions are you thinking of?

Comment author: CarlShulman 06 January 2014 05:44:43AM *  9 points [-]

But I think there's a large difference between "here's a first-pass attempt at a cost-effectiveness estimate purely so we can compare numbers" and "this is how much it costs to save a life".

You still have to answer questions like:

  • "I can get employer matching for charity A, but not B, is the expected effectiveness of B at least twice as great as that for A, so that I should donate to B?"
  • "I have an absolute advantage in field X, but I think that field Y is at least somewhat more important: which field should I enter?"
  • "By lobbying this organization to increase funds to C, I will reduce support for D: is it worth it?"

Those choices imply judgments about expected value. Being evasive and vague doesn't eliminate the need to make such choices, and tacitly quantify the relative value of options.

Being vague can conceal one's ignorance and avoid sticking one's neck out far enough to be cut off, and it can help guard against being misquoted and PR damage, but you should still ultimately be more-or-less assigning cardinal scores in light of the many choices that tacitly rely on them.

It's still important to be clear on how noisy different inputs to one's judgments are, to give confidence intervals and track records to put one's analysis in context rather than just an expected value, but I would say the basic point stands, that we need to make cardinal comparisons and being vague doesn't help.

Comment author: peter_hurford 05 January 2014 02:59:52PM 20 points [-]

I'm glad to see more of this criticism as I think it's important for reflection and moving things forward. However, I'm not really sure who you're critiquing or why. My response would be that your critique (a) appears to misrepresent what the "EA mainstream" is, (b) ignores comparative advantage, or (c) says things I just outright disagree with.

~

The EA Mainstream

Perhaps the biggest example of this is the prevalence of “earning to give”. While this is certainly an admirable option, it should be considered as a baseline to improve upon, not a definitive answer.

I imagine we know different people, even within the effective altruist community. So I'll believe you if you say you know a decent amount of people who think "earning to give" is the best instead of a baseline.

However, 80,000 Hours, the career advice organization that basically started earning to give, has itself written an article called "Why Earning to Give is Often Not the Best Option" and says "A common misconception is that 80,000 Hours thinks Earning to Give is typically the way to have the most impact. We’ve never said that in any of our materials.".

Additionally, the earning-to-give people I know (including myself) all agree with the baseline argument but believe earning to give is either best for them relative to other opportunities (e.g., on comparative-advantage grounds) and/or actually best overall even when considering these arguments (e.g., out of skepticism of EA organizations).

~

Contrast this with, for instance, working at a start-up. Most start-ups are low-impact, but it is undeniable that at least some have been extraordinarily high-impact, so this seems like an area that effective altruists should be considering strongly. Why aren't there more of us at 23&me, or Coursera, or Quora, or Stripe?

I'm not quite sure what you mean by this:

If you're asking "why don't more people work in start-ups?", I don't think EAs are avoiding start-ups in any noticeable way. I'll be working in one, I know several EAs who are working in them, and it doesn't seem to be all that different from software engineers / web developers in non-startups, except as would be predicted by non start-ups providing even better hiring opportunities.

If you're asking "why don't more people start start-ups themselves?", I think you already answered your own question with regard to people being unwilling to take on high personal risk. 80,000 Hours advises people to do start-ups in essays like "Should More Altruists Consider Entrepreneurship?" and "Salary or Start-up: How Do-Gooders Can Gain More From Risky Careers". Also, I can think of a few EAs who have started their own start-ups on these considerations. So perhaps people are irrationally risk-averse -- that is a valid critique -- but I don't think it's unique to the EA movement or that we can do much about it.

If you're asking "why don't more people go into start-ups because these start-ups are doing high impact things themselves and therefore are good opportunities to have direct impact?", then I think you've hit on a valid critique that many people don't take seriously enough. I've heard some EAs mention it, but it is outside the EA mainstream.

~

We want to know what the best thing to do is, and we want a numerical value. This causes us to rely on scientific studies, economic reports, and Fermi estimates. It can cause us to underweight things like the competence of a particular organization, the strength of the people involved, and other “intangibles” (which are often not actually intangible but simply difficult to assign a number to).

I think the EA mainstream would agree with you on this one as well -- GiveWell, for example, has explicitly distanced themselves from numerical calculations (albeit recently) and several EAs have called into question the usefulness of cost-effectiveness estimates, a charge that was largely led by GiveWell.

~

Comparative Advantage

And beyond the “obvious” alternatives of start-ups and academia, what of the paths that haven't been created yet? GiveWell was revolutionary when it came about. Who will be the next GiveWell? And by this I don't mean the next charity evaluator, but the next set of people who fundamentally alter how we view altruism.

I definitely agree that fundamentally altering how people view altruism would be very high impact (if shifted in a beneficial way, of course). But I don't think everyone has the time, skills, or willingness to do this -- or that they even should. I think this ignores the benefits of some specialization and trade.

Likewise, instead of EAs taking classes on global security for themselves, many defer to GiveWell and expect GiveWell to perform higher-quality research on these giving opportunities. After all, if you have broad trust in GiveWell, it's hard to beat several full-time savvy analysts with your spare time. GiveWell has more comparative advantage here.

~

It also can cause us to over-focus on money as a unit of altruism, while often-times “it isn't about the money”: it's about doing the groundwork that no one is doing, or finding the opportunity that no one has found yet.

Right. But not everyone has the time or talents to do this groundwork. So it seems best if we set up some orgs to do this kind of groundwork (e.g., CEA, MIRI, etc.) and give money to them to let them specialize in these kinds of breakthroughs. And then the people who have the free time can start projects like Effective Fundraising or .impact.

If you're already raising a family and working a full-time job and donating 10%, I think in many cases it's not worth quitting your job or using your free time to look for more opportunities. We don't need absolutely everyone doing this search -- there's comparative advantage considerations here too.

~

Outright Disagreement

How many would have pointed out that saying that charities vary by a factor of 1,000 in effectiveness is by itself not very helpful, and is more a statement about how bad the bottom end is than how good the top end is?

I think this has been very helpful from a PR point of view. And even if you think flow-through effects even things out more so that charities only differ by 10x or 100x (which I currently don't), that's still significant.

And whether that's condemnation of the bad end or praise for the top end depends on your perspective and standards for what makes an org good or bad. At least, the slope of the curve suggests that a lot of the difference is coming from the best organizations being a lot better than the merely good ones as opposed to the very bad ones being exceptionally bad (i.e., the curve is skewed toward the top, not toward the bottom).

~

Quantitative estimates often also tend to ignore flow-through effects: [...] These effects are difficult to quantify but human and cultural intuition can do a reasonable job of taking them into account.

But can it? How do you know? I think you should take your own "research over speculation" advice here. I don't think we understand flow through effects well enough yet to know if they can be reliably intuited.

~

Outright Agreement

an effective altruist makes a bold claim, then when pressed on it offers a heuristic justification together with the claim that “estimation is the best we have”. [...] It can appear to an outside observer as though people are opting for the fun, easy activity (speculation) rather than the harder and more worthwhile activity (research).

I agree this is an unfortunate problem.

~

Conclusion

Lest this essay give a mistaken impression to the casual reader, I should note that there are many exemplary effective altruists who I feel are mostly immune to the issues above

This is where I get to the question of who your intended audience is. It seems like the EA mainstream either agrees with many of your critiques already (and therefore you're just trying to convince EAs to adopt the mainstream) or you're placing too much burden on EAs to ignore comparative advantage and have everyone become an EA trailblazer.

Comment author: CarlShulman 05 January 2014 06:19:14PM *  27 points [-]

GiveWell, for example, has explicitly distanced themselves from numerical calculations (albeit recently) and several EAs have called into question the usefulness of cost-effectiveness estimates, a charge that was largely lead by GiveWell.

I'll speak up on this one. I am a booster of more such estimates, detailed enough to make assumptions and reasoning explicit. Quantifying one's assumptions lets others challenge the pieces individually and make progress, where with a wishy-washy "list of considerations pro and con" there is a lot of wiggle room about their strengths. Sometimes doing this forces one to think through an argument more deeply only to discover big holes, or that the key pieces also come up in the context of other problems.

In prediction tournaments training people to use formal probabilities has been helpful for their accuracy.

Also I second the bit about comparative advantage: CEA recently hired Owen Cotton-Barratt to do cause prioritization/flow-through effects related work. GiveWell Labs is heavily focused on it. Nick Beckstead and others at the FHI also do some work on the topic.

It seems like the EA mainstream either agrees with many of your critiques already (and therefore you're just trying to convince EAs to adopt the mainstream)

I think that on some of these questions there is also real variation in opinion that should not simply be summarized as a clear "mainstream" position.

Comment author: ChrisHallquist 31 December 2013 04:56:24AM *  24 points [-]

One other point I should make: this isn't just about "someone" being wrong. It's about an author frequently cited by people in the LessWrong community on an important issue being wrong.

Indeed, I'm not sure I'd know about Taubes at all if not for the LessWrong community.

I've already mentioned Eliezer's "Correct Contrarian Cluster" as an example in another thread, but perhaps it would be helpful to mention other examples:

  • In a thread where someone asked what the evidence in favor of paleo was, Taubes was the main concrete source that came up. Specifically, Luke mentioned Taubes as the person he's "usually" referred to on this question, without taking a stand himself and saying he didn't have time to evaluate the evidence personally.
  • Sarah Constantin (commenter at Yvain's blog, author of reply to Yvain's non-libertarian FAQ, and I just learned a MetaMed VP) has cited Taubes a couple times partly to make a libertarian point.
  • Jack bringing up Taubes in offline conversation
  • Yvain's old blog had a review of Taubes which doesn't seem to be public right now, but which I remember as partly criticizing Taubes but also lauding him for things that now I don't think Taubes deserves credit for.

So Taubes was someone I could expect to see cited in the future when the issue of expert consensus gets discussed on LessWrong. In spite of all the people who didn't like these posts, I think I may have accomplished the goal of getting people to stop citing Taubes.

Comment author: CarlShulman 01 January 2014 12:22:05AM 10 points [-]

Taubes is now involved in an initiative with the Arnold Foundation doing randomized nutrition trials. It would be interesting to make predictions about some of those.

In response to comment by CarlShulman on Why CFAR?
Comment author: Benquo 29 December 2013 02:44:20AM *  13 points [-]

On reflection, this is an opportunity for me to be curious. The relevant community-builders I'm aware of are:

  • CFAR
  • 80,000 Hours / CEA
  • GiveWell
  • Leverage Research

Whom am I leaving out?

My model for what they're doing is this:

GiveWell isn't trying to change much about people at all directly, except by helping them find efficient charities to give to. It's selecting people by whether they're already interested in this exact thing.

80,000 Hours is trying to intervene in certain specific high-impact life decisions like career choice as well as charity choice, effectively by administering a temporary "rationality infusion," but isn't trying to alter anyone's underlying character in a lasting way beyond that.

CFAR has the very ambitious goal of creating guardians of humanity with hero-level competence, altruism, and epistemic rationality, but has so far mainly succeeded in some improvements in personal effectiveness for solving one's own life problems.

Leverage has tried to directly approach the problem of creating a hero-level community but doesn't seem to have a track record of concrete specific successes, replicable methods for making people awesome, or a measure of effectiveness

Do any of these descriptions seem off? If so, how?

PS I don't think I would have stuck my neck out & made these guesses in order to figure out whether I was right, before the recent CFAR workshop I attended.

In response to comment by Benquo on Why CFAR?
Comment author: CarlShulman 29 December 2013 03:40:07AM *  20 points [-]

Do any of these descriptions seem off? If so, how?

Some comments below.

GiveWell isn't trying to change much about people at all directly, except by helping them find efficient charities to give to. It's selecting people by whether they're already interested in this exact thing.

And publishing detailed analysis and reasons that get it massive media attention and draw in and convince people who may have been persuadable but had not in fact been persuaded. Also in sharing a lot of epistemic and methodological points on their blogs and site. Many GiveWell readers and users are in touch with each other and with GiveWell, and GiveWell has played an important role in the growth of EA as a whole, including people making other decisions (such as founding organizations and changing their career or research plans, in addition to their donations).

80,000 Hours is trying to intervene in certain specific high-impact life decisions like career choice as well as charity choice, effectively by administering a temporary "rationality infusion," but isn't trying to alter anyone's underlying character in a lasting way beyond that.

I would add that counseled folk and extensive web traffic also get exposed to ideas like prioritization, cause-neutrality, wide variation in effectiveness, etc, and ways to follow up. They built a membership/social networking functionality, but I think they are making it less prominent on the website to focus on the research and counseling, in response to their experience so far.

Separately, how much of a difference is there between a three-day CFAR workshop and a temporary "rationality infusion"?

CFAR has the very ambitious goal of creating guardians of humanity with hero-level competence, altruism, and epistemic rationality,

The post describes a combination of selection for existing capacities, connection, and training, not creation (which would be harder).

but has so far mainly succeeded in some improvements in personal effectiveness for solving one's own life problems.

As the post mentions, there isn't clear evidence that this happened, and there is room for negative effects. But I do see a lot of value in developing rationality training that works, as measured in randomized trials using life outcomes, Tetlock-type predictive accuracy, or similar endpoints. I would say that the value of CFAR training today is more about testing/R&D and creating a commercial platform that can enable further R&D than any educational value of their current offerings.

Leverage has tried to directly approach the problem of creating a hero-level community but doesn't seem to have a track record of concrete specific successes, replicable methods for making people awesome, or a measure of effectiveness

I don't know much about what they have been doing lately, but they have had at least a couple of specific achievements. They held an effective altruist conference that was well-received by several people I spoke with, and a small percentage of people donating or joining other EA organizations report that they found out about effective altruism ideas through Leverage's THINK.

They may have had other more substantial achievements, but they are not easily discernible from the Leverage website. Their team seems very energetic, but much of it is focused on developing and applying a homegrown amateur psychological theory that contradicts established physics, biology, and psychology (previous LW discussion here and here ). That remains a significant worry for me about Leverage.

In response to comment by peter_hurford on Why CFAR?
Comment author: Benquo 28 December 2013 09:35:47PM *  13 points [-]

I can give you a proof of concept, actual numbers and examples omitted.

Consider a simplified model where there are only two efficient charities, a direct one and CFAR, and no other helping is possible. If you give your charity budget to the direct charity, you help n people. If instead you give that money to CFAR, they transform two inefficient givers into efficient givers (or double the money an efficient giver like you can afford to give), helping 2n people. The second option gives you more value for money.
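
A sketch of that proof of concept, with a placeholder value for n:

```python
# Simplified two-charity world: a direct charity helps n people per donated
# budget; CFAR converts two inefficient givers into efficient ones, for 2n.
n = 100  # people helped per donated budget at the direct charity (placeholder)

helped_direct = n          # option 1: give directly
helped_via_cfar = 2 * n    # option 2: fund CFAR, creating two new n-sized givers

print(helped_via_cfar / helped_direct)  # -> 2.0: more value per dollar in this model
```
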

In addition CFAR is explicitly trying to build a network of competent rational do-gooders, with the expectation that the gains will be more than linear, because of division of labor.

Finally, neither CEA nor GiveWell is working (AFAIK) on the problem of creating a group of people who can identify new, nonobvious problems and solutions in domains where we should expect untrained human minds to fail.

In response to comment by Benquo on Why CFAR?
Comment author: CarlShulman 29 December 2013 12:37:44AM *  19 points [-]

CEA and GiveWell are both building communities, GiveWell to the point of more than doubling its community (by measures such as number of donors and money moved, with web traffic growing somewhat slower) every year, year after year. Giving What We Can's growth has been more linear, but 80,000 Hours has also had good growth (albeit somewhat less, and over a shorter time).

That makes the bar for something like CFAR much, much higher than your model suggests, although there is merit in experimenting with a number of different models (and the Effective Altruism movement needs to cultivate the "E" element as well as the "A", which something along the lines of CFAR may be especially helpful for).

ETA: I went through more GiveWell growth numbers in this post. Absolute growth excluding Good Ventures (a big foundation that has firmly backed GiveWell) was fairly steady for the 2010-2011 and 2011-2012 comparisons, although growth has looked more exponential in other years.
