Comment author: 01 April 2013 01:12:15PM 40 points [-]

Robin used a Dirty Math Trick that works on us because we're not used to dealing with large numbers. He used a large time scale of 12,000 years, and assumed exponential growth in wealth at a reasonable rate over that period. But to depreciate the value of the wealth for the chance that the intended recipients never actually receive it, he used a relatively small linear factor of 1/1000, which seems to have been pulled out of a hat.

It would make more sense to assume that there is some probability every year that the accumulated wealth will be wiped out by civil war, communist takeover, nuclear holocaust, etc etc. Even if this yearly probability were small, applied over a long period of time, it would still counteract the exponential blowup in the value of the wealth. The resulting conclusion would be totally dependent on the probability of calamity: if you use a 0.01% chance of total loss, then you have about a 30% chance of coming out with the big sum mentioned in the article. But if you use a 1% chance, then your likelihood of making it to 12000 years with the money intact is 4e-53.
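
The arithmetic behind those two figures is easy to check. A minimal sketch (the 12,000-year horizon and the loss probabilities are the ones from this comment; the constant, independent annual hazard is the simplifying assumption):

```python
def survival_probability(annual_loss_prob: float, years: int) -> float:
    """Chance the accumulated wealth survives every year without a total
    wipeout, assuming a constant, independent annual risk of loss."""
    return (1.0 - annual_loss_prob) ** years

print(survival_probability(0.0001, 12000))  # ~0.30: the 'big sum' scenario
print(survival_probability(0.01, 12000))    # ~4e-53: effectively impossible
```

The point generalizes: any fixed annual hazard rate compounds geometrically, just like the returns do, so the conclusion hinges entirely on which exponent wins.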

Comment author: 02 March 2014 05:45:51AM 0 points [-]

As I said in response to Gwern's comment, there is uncertainty over rates of expropriation/loss, and the expected value disproportionately comes from the possibility of low loss rates. That is why Robin talks about 1/1000: he's raising the possibility that the legal order will be such as to sustain great growth, and that the laws of physics will allow unreasonably large populations or wealth.
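
The point about uncertain loss rates can be made concrete with a toy mixture model (the two scenarios and their weights below are purely illustrative, not from the comment):

```python
def expected_survival(scenarios, years):
    """Probability-weighted survival chance when the annual loss rate is
    itself uncertain: a mix of per-scenario survival curves."""
    return sum(p * (1.0 - rate) ** years for p, rate in scenarios)

# Illustrative mix: 50% chance losses run at 1%/yr, 50% chance at 0.01%/yr.
mix = [(0.5, 0.01), (0.5, 0.0001)]
print(expected_survival(mix, 12000))  # ~0.15, almost all from the low-loss branch
```

After a long horizon the high-loss branch contributes essentially nothing, so the expectation is dominated by the most favorable scenario; this is the mechanism behind the Weitzman-style declining effective discount rate mentioned below.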

Now, it is still a pretty questionable comparison, because there are plenty of other possibilities for mega-influence, like changing the probability that such compounding can take place (and isn't pre-empted by expropriation, nuclear war, etc).

Comment author: 01 April 2013 05:51:30PM *  44 points [-]

But I didn't bite any of the counterarguments to the extent that it would be necessary to counter the 10^100.

I don't think this is very hard if you actually look at examples of long-term investment. Background: http://www.gwern.net/The%20Narrowing%20Circle#ancestors and especially http://www.gwern.net/The%20Narrowing%20Circle#islamic-waqfs

First things first:

Businesses and organizations suffer extremely high mortality rates; one estimate puts it at a 99% chance of mortality per century. (This ignores existential risks and luckily-averted catastrophes like nuclear warfare, and so understates the true risks.) So over the 12,000 years in question, any perpetuity has a survival probability of 0.01^120 ≈ 1e-240. That's a good chunk of the reason not to bother with long-term trusts right there! We can confirm this empirically by observing that there must have been many scores of thousands of waqfs - perpetual charities - in the Islamic world, and very few survived or saw their endowments grow. (I have pointed Hanson at waqfs repeatedly, but he has yet to blog on the topic.) Similarly, despite the countless temples, hospitals, homes, and institutions with endowments in the Greco-Roman world just 1,900 years ago or so - less than a sixth of the time period in question - we know of zero surviving institutions, all of them having fallen to decay, disuse, Christian and Muslim expropriation, or the vicissitudes of time. The many Buddhist institutions of India suffered a similar fate, caught between a resurgent Hinduism and Muslim encroachment. We can also point out that many estimates ignore a meaningful failure mode: endowments or nonprofits going off-course and doing things the founder did not mean them to do. The American university case comes to mind, as does the British university case I cite in my essay, and there is a long vein of conservative criticism (some of it summarized in Cowen's Good and Plenty) of American nonprofits like the Ford Foundation, pointing out the 'liberal capture' of originally conservative institutions - which obviously defeats the original point.

(BTW, if you read the waqf link you'd see that excessive iron-clad rigidity in an organization's goal can be almost as bad, as the goals become outdated or irrelevant or harmful. So if the charter is loose, the organization is easily and quickly hijacked by changing ideologies or principal-agent problems like the iron law of oligarchy; but if the charter is rigid, the organization may remain on-target while becoming useless. It's hard to design a utility function for a potentially powerful optimization process. Hm.... why does that sentence sound so familiar... It's almost as if we needed a theory of Friendly Artificial General Organizations...)

Survivorship bias as a major factor in overestimating risk-free returns over time is well-known, and a new result came out recently, actually. We can observe many reasons for survivorship bias in estimates of nonprofit and corporate survival in the 20th century (see previously) and also in financial returns: Czarist Russia, the Weimar and Nazi Germanies, Imperial Japan, all countries in the Warsaw Pact or otherwise communist such as Cuba/North Korea/Vietnam, Zimbabwe... While I have seen very few invocations recently of the old chestnut that 'stock markets deliver 7% return on a long-term basis' (perhaps that conventional wisdom has been killed), the survivorship work suggests that for just the 20th century we might expect more like 2%.
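
That 7%-vs-2% gap compounds dramatically. A quick sketch (the two return figures are the ones in the comment; the one-century horizon is my own choice for illustration):

```python
def compound(annual_return: float, years: int) -> float:
    """Growth multiple from reinvesting at a constant annual return."""
    return (1.0 + annual_return) ** years

print(compound(0.07, 100))  # ~868x over a century at the conventional 7%
print(compound(0.02, 100))  # ~7.2x at the survivorship-adjusted ~2%
```

Even before any expropriation risk is factored in, a five-point correction to the assumed return shrinks a century's growth by two orders of magnitude, and the discrepancy only widens over Hanson's 12,000-year horizon.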

The risk per year is related to the size of the endowment/investment; as has already been pointed out, there is fierce legal opposition to any sort of perpetuity, and at least two cases of perpetuities being wasted or stolen legally. Historically, fortunes which grow too big attract predators, become institutionally dysfunctional and corrupt, and fall prey to rare risks. Example: the non-profit known as the Catholic Church owned something like a quarter of all of England before it was expropriated, precisely because it had so effectively gained and invested wealth (property rights in England otherwise having been remarkably secure over the past millennium). The Buddhist monasteries in China and Japan had issues with growing so large and powerful that they became major political and military players, leading to extirpation by other actors such as Oda Nobunaga. Any perpetuity which becomes equivalent to a large or small country will suffer the same mortality rates.

And then there's opportunity cost. We have good reason to expect the upcoming centuries to be unusually risky compared to the past: even if you completely ignore new technological issues like nanotech or AI or global warming or biowarfare, we still suffer under a novel existential threat of thermonuclear warfare. This threat did not exist at any point before 1945, and systematically makes the future riskier than the past. Investing in a perpetuity, itself investing in ordinary commercial transactions, does little to help except possibly some generic economic externalities of increased growth (and no doubt there are economists who, pointing to current ultra-low interest rates and sluggish growth and 'too much cash chasing safe investments', would deprecate even this).

Compounding-wise, there are other forms of investment: investment into scientific knowledge, into more effective charity (surely saving peoples' lives can have compounding effects into the distant future?), and so on.

So to recap:

1. organizational mortality is extremely high
2. financial mortality is likewise extremely high; and both organizational & financial mortality are relevant
3. all estimates of risk are systematically biased downwards, with estimates indicating that at least one of these biases is very large
4. risks for organizations or finances increases with size
5. opportunity cost is completely ignored

Any of these except perhaps #3 could be sufficient to defeat perpetuities, and I think that, combined, they make the case for perpetuities completely non-existent.

Comment author: 02 March 2014 05:38:30AM *  0 points [-]

So over the 12,000 years in question, any perpetuity has a survival probability of 0.01^120 ≈ 1e-240.

The premises in this argument aren't strong enough to support conclusions like that. Expropriation risks have declined strikingly, particularly in advanced societies, and it's easy enough to describe scenarios in which the annual risk of expropriation falls to extremely low levels, e.g. a stable world government run by patient immortals, or with an automated legal system designed for ultra-stability.

ETA: Weitzman on uncertainty about discount/expropriation rates.

Comment author: 31 January 2014 06:21:13AM *  0 points [-]

And re: Pinker: if you had a bit more experience with trends in necessarily very noisy data, you would realize that such trends are virtually irrelevant to the probability of encountering some extremes (especially when those are not even that extreme - preceding the Cold War, you have Hitler). It's the exact same mistake committed by particularly low-brow Republicans when they go on about "ha ha, global warming" during a cold spell - because they think that a trend in noisy data has a huge impact on individual data points.

edit: furthermore, Pinker's data is on violence per capita - the total violence increased, it's just that the violence seems to scale sub-linearly with population. Population is growing, as is the number of states with nuclear weapons.

Comment author: 01 February 2014 06:29:16PM *  2 points [-]

Pinker's data is on violence per capita - the total violence increased, it's just that the violence seems to scale sub-linearly with population.

Did you not read the book? He shows big declines in rates of wars, not just per capita damage from war.

Comment author: 30 January 2014 12:14:46AM *  -1 points [-]

A rogue superpower - may I use this oxymoron? - could attack the 400 existing nuclear reactors and nuclear waste stores with its missiles, creating fallout equal to a doomsday machine.

Keep in mind that in a nuclear war, even if the nuclear reactors are not particularly well targeted, many (most?) reactors are going to melt down due to having been left unattended, and spent fuel pools may catch fire too.

@Carl:

I think you dramatically under-estimate both the probability and the consequences of nuclear war (by ignoring the non-small probability of a massive worsening of political relations, or a reversal of the tentative trend toward less warfare).

It's quite annoying to see the self-proclaimed "existential risk experts" (professional mediocrities) increasing the risks by undermining and under-estimating things that are not fancy pet causes from modern popular culture. Leave it to the actual scientists to occasionally give their opinions about, please; they're simply smarter than you.

Comment author: 30 January 2014 06:47:33AM 2 points [-]

I agree that the risk of war is concentrated in changes in political conditions, and that the post-Cold War trough in conflict is too small to draw inferences from. Re the tentative trend, Pinker's assembled evidence goes back a long time, and covers many angles. It may fail to continue, and a nuclear war could change conditions thereafter, but there are many data points over time. If you want to give detail, feel free.

I would prefer to use representative expert opinion data from specialists in all the related fields (nuclear scientists, political scientists, diplomats, etc.), and the work of panels trying to assess the problem, and would defer to expert consensus in their various areas of expertise (as with climate science). But one can't update on views that have not been made known. Martin Hellman has called for an organized effort to estimate the risk, but without success as yet. I have been raising the task of better eliciting expert opinion and improving forecasting in this area, and worked to get it on the agenda at the FHI (as I did re the FHI survey of the most cited AI academics) and at other organizations. Where I have found information about experts' views, I have shared it.

Comment author: 11 January 2014 11:43:46PM *  2 points [-]

I don't think that this is meant as a complete counter-argument against cryonics, but rather a point which needs to be considered when calculating the expected benefit of cryonics. For a very hypothetical example (which doesn't reflect my beliefs) where this sort of consideration makes a big difference:

Say I'm young and healthy, so that I can be 90% confident of still being alive in 40 years' time, and I also believe that immortality and reanimation will become available at roughly the same time. Then the expected benefit of signing up for cryonics, all else being equal, would be about 10 times lower if I expected the relevant technologies to go online either very soon (within the next 40 years) or very late (longer than I would expect cryonics companies to last) than if I expected them to go online some time after I had very likely died but before cryonics companies disappeared.

Edit: Fixed silly typo.

Comment author: 12 January 2014 01:31:53AM *  9 points [-]

That would make sense if you were doing something like buying a lifetime cryonics subscription upfront that could not be refunded even in part. But it doesn't make sense with actual insurance, where you stop buying it if it is no longer useful, so costs are matched to benefits.

• Life insurance, and cryonics membership fees, are paid on an annual basis
• The price of life insurance is set largely based on your annual risk of death: if your risk of death is low (young, healthy, etc) then the cost of coverage will be low; if your risk of death is high the cost will be high
• You can terminate both the life insurance and the cryonics membership whenever you choose, ending coverage
• If you die in a year before 'immortality' becomes available, then it does not help you

• You have a 10% chance of dying before 40 years have passed
• During the first 40 years you pay on the order of 10% of the cost of lifetime cryonics coverage (higher because of membership fees not being scaled to mortality risk)
• After 40 years 'immortality' becomes available, so you cancel your cryonics membership and insurance after only paying for life insurance priced for a 10% risk of death
• In this world the potential benefits are cut by a factor of 10, but so are the costs (roughly); so the cost-benefit ratio does not change by a factor of 10
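
The bullet points amount to the claim that scaling down the probability of needing cryonics scales the expected costs and expected benefits together. A toy model (all numbers hypothetical, chosen only to illustrate the ratio argument):

```python
def expected_cost_benefit(p_die_before_tech: float,
                          lifetime_cost: float,
                          benefit_if_revived: float):
    """Toy model: with annually priced insurance you pay roughly in
    proportion to your mortality risk before 'immortality' arrives, so
    expected cost and expected benefit scale together."""
    return (p_die_before_tech * lifetime_cost,
            p_die_before_tech * benefit_if_revived)

c1, b1 = expected_cost_benefit(0.10, 100_000, 1_000_000)  # tech arrives early
c2, b2 = expected_cost_benefit(1.00, 100_000, 1_000_000)  # tech arrives too late
assert b1 / c1 == b2 / c2  # the cost-benefit ratio is unchanged
```

Whatever value you assign to revival, halving the chance of needing it halves both sides of the ledger, which is why the scenario does not cut the attractiveness of signing up by a factor of 10.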

Comment author: 11 January 2014 08:15:01PM 6 points [-]

(My version of) the above is essentially my reason for thinking cryonics is unlikely to have much value.

There's a slightly subtle point in this area that I think often gets missed. The relevant question is not "how likely is it that cryonics will work?" but "how likely is it that cryonics will both work and be needed?". A substantial amount of the probability that cryonics does something useful, I think, comes from scenarios where there's huge technological progress within the next century or thereabouts (because if it takes longer then there's much less chance that the cryonics companies are still around and haven't lost their patients in accidents, wars, etc.) -- but conditional on that it's quite likely that the huge technological progress actually happens fast enough that someone reasonably young (like Chris) ends up getting magical life extension without needing to die and be revived first.

So the window within which there's value in signing up for cryonics is where huge progress happens soon but not too soon. You're betting on an upper as well as a lower bound to the rate of progress.

Comment author: 11 January 2014 09:20:58PM *  10 points [-]

There's a slightly subtle point in this area that I think often gets missed.

I have seen a number of people make (and withdraw) this point, but it doesn't make sense, since both the costs and benefits change (you stop buying life insurance when you no longer need it, so costs decline in the same ballpark as benefits).

Contrast with the following question:

"Why buy fire insurance for 2014, if in 2075 anti-fire technology will be so advanced that fire losses are negligible?"

You pay for fire insurance this year to guard against the chance of fire this year. If fire risk goes down, the price of fire insurance goes down too, and you can cancel your insurance at will.

Comment author: 11 January 2014 04:03:03AM *  1 point [-]

in http://intelligenceexplosion.com/2012/engineering-utopia/ you say "There was once a time when the average human couldn’t expect to live much past age thirty."

this is false, right?

(edit note: life expectancy now roughly matches "what the average human can expect to live to", but with a double hump of death at infancy/childhood and then old age, you can have a life expectancy at birth of 30 while the life expectancy of 15-year-olds is 60. In that case the average human can expect to live to either 1 or 60 - very different from "can't expect to live past 30" - or simply "can expect to live to 60" if you don't count infants as really human)
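
The double-hump arithmetic works out neatly. A sketch with hypothetical numbers (roughly half of births dying in infancy, survivors living to 60) that reproduces a life expectancy at birth of 30:

```python
def life_expectancy(infant_share: float, infant_age: float, adult_age: float) -> float:
    """Life expectancy at birth under a toy two-hump mortality model:
    a fraction dies in infancy, the rest die in old age."""
    return infant_share * infant_age + (1 - infant_share) * adult_age

# Hypothetical: ~51% die around age 1, the survivors die around age 60.
print(life_expectancy(30 / 59, 1, 60))  # 30.0 at birth, yet survivors reach 60
```

So a life expectancy of 30 is consistent with no adult expecting to die anywhere near 30, which is the commenter's point.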

Comment author: 11 January 2014 10:06:28AM 0 points [-]

Life expectancy used to be very low, but it was driven by child and infant mortality more than later pestilence and the like.

Comment author: 06 January 2014 02:24:55AM *  3 points [-]

This still feels like a "we need fifty Stalins" critique.

For me the biggest problems with the effective altruism movement are:

1: Most people aren't utilitarians.

2: Maximizing QALYs isn't even the correct course of action under utilitarianism - it's short-sighted and silly. Which is worse under utilitarianism: Louis Pasteur dying in his childhood, or 100,000 children in a third-world country dying? I would argue that the death of Louis Pasteur is the far greater tragedy, since his contributions to human knowledge have saved many more than 100,000 lives and have advanced society in other ways. But a QALY approach does not capture this. That's an extreme example, obviously, but my issue is that all lives are not equal: people in developed countries matter far more than people in developing countries in terms of advancing technology and society in general.

Comment author: 06 January 2014 05:54:29AM *  1 point [-]

Maximizing DALY's isn't even the correct course of action under utilitarianism

That's an understatement! DALYs are defined as intrinsically bad: one DALY is the loss of one year of healthy life relative to a reference lifespan, or equivalent morbidity. QALYs are the good ones that you want to increase.

Comment author: 06 January 2014 02:31:29AM 2 points [-]

I'll speak up on this one. I am a booster of more such estimates, detailed enough to make assumptions and reasoning explicit.

I generally agree. But I think there's a large difference between "here's a first-pass attempt at a cost-effectiveness estimate purely so we can compare numbers" and "this is how much it costs to save a life". Another problem is that I don't think people take into account, when comparing figures (e.g., comparing veg ads to GiveWell), the differences in epistemic strength behind each number, and that could cause concern.

~

I think that on some of these questions there is also real variation in opinion that should not simply be summarized as a clear "mainstream" position.

I don't know how much variation there is. I don't claim to know a representative sample of EAs. But I do think there's not much variation among EA orgs on the issues I've called mainstream.

Which positions are you thinking of?

Comment author: 06 January 2014 05:44:43AM *  8 points [-]

But I think there's a large difference between "here's a first-pass attempt at a cost-effectiveness estimate purely so we can compare numbers" and "this is how much it costs to save a life".

You still have to answer questions like:

• "I can get employer matching for charity A, but not B, is the expected effectiveness of B at least twice as great as that for A, so that I should donate to B?"
• "I have an absolute advantage in field X, but I think that field Y is at least somewhat more important: which field should I enter?"
• "By lobbying this organization to increase funds to C, I will reduce support for D: is it worth it?"

Those choices imply judgments about expected value. Being evasive and vague doesn't eliminate the need to make such choices, which tacitly quantify the relative value of options.

Being vague can conceal one's ignorance and avoid sticking one's neck out far enough to be cut off, and it can help guard against being misquoted and PR damage, but you should still ultimately be more-or-less assigning cardinal scores in light of the many choices that tacitly rely on them.

It's still important to be clear on how noisy different inputs to one's judgments are, to give confidence intervals and track records to put one's analysis in context rather than just an expected value, but I would say the basic point stands, that we need to make cardinal comparisons and being vague doesn't help.

Comment author: 05 January 2014 02:59:52PM 18 points [-]

I'm glad to see more of this criticism as I think it's important for reflection and moving things forward. However, I'm not really sure who you're critiquing or why. My response would be that your critique (a) appears to misrepresent what the "EA mainstream" is, (b) ignores comparative advantage, or (c) says things I just outright disagree with.

~

The EA Mainstream

Perhaps the biggest example of this is the prevalence of “earning to give”. While this is certainly an admirable option, it should be considered as a baseline to improve upon, not a definitive answer.

I imagine we know different people, even within the effective altruist community. So I'll believe you if you say you know a decent amount of people who think "earning to give" is the best instead of a baseline.

However, 80,000 Hours, the career advice organization that basically started earning to give, has itself written an article called "Why Earning to Give is Often Not the Best Option" which says "A common misconception is that 80,000 Hours thinks Earning to Give is typically the way to have the most impact. We’ve never said that in any of our materials.".

Additionally, the earning-to-give people I know (including myself) all agree with the baseline argument but believe earning to give is either best for them relative to other opportunities (e.g., on comparative-advantage grounds) and/or actually best overall even when considering these arguments (e.g., out of skepticism of EA organizations).

~

Contrast this with, for instance, working at a start-up. Most start-ups are low-impact, but it is undeniable that at least some have been extraordinarily high-impact, so this seems like an area that effective altruists should be considering strongly. Why aren't there more of us at 23&me, or Coursera, or Quora, or Stripe?

I'm not quite sure what you mean by this:

If you're asking "why don't more people work in start-ups?", I don't think EAs are avoiding start-ups in any noticeable way. I'll be working in one, I know several EAs who are working in them, and it doesn't seem to be all that different from software engineers / web developers in non-startups, except as would be predicted by non start-ups providing even better hiring opportunities.

If you're asking "why don't more people start start-ups themselves?", I think you already answered your own question with regard to people being unwilling to take on high personal risk. 80,000 Hours advises people to consider start-ups in essays like "Should More Altruists Consider Entrepreneurship?" and "Salary or Start-up: How Do-Gooders Can Gain More From Risky Careers". Also, I can think of a few EAs who have started their own start-ups on these considerations. So perhaps people are irrationally risk-averse - that is a valid critique - but I don't think it's unique to the EA movement, or that we can do much about it.

If you're asking "why don't more people go into start-ups because these start-ups are doing high impact things themselves and therefore are good opportunities to have direct impact?", then I think you've hit on a valid critique that many people don't take seriously enough. I've heard some EAs mention it, but it is outside the EA mainstream.

~

We want to know what the best thing to do is, and we want a numerical value. This causes us to rely on scientific studies, economic reports, and Fermi estimates. It can cause us to underweight things like the competence of a particular organization, the strength of the people involved, and other “intangibles” (which are often not actually intangible but simply difficult to assign a number to).

I think the EA mainstream would agree with you on this one as well -- GiveWell, for example, has explicitly distanced themselves from numerical calculations (albeit recently) and several EAs have called into question the usefulness of cost-effectiveness estimates, a charge that was largely led by GiveWell.

~

And beyond the “obvious” alternatives of start-ups and academia, what of the paths that haven't been created yet? GiveWell was revolutionary when it came about. Who will be the next GiveWell? And by this I don't mean the next charity evaluator, but the next set of people who fundamentally alter how we view altruism.

I definitely agree that fundamentally altering how people view altruism would be very high impact (if shifted in a beneficial way, of course). But I don't think everyone has the time, skills, or willingness to do this -- or that they even should. I think this ignores the benefits of specialization and trade.

Likewise, instead of EAs taking classes on global security for themselves, many defer to GiveWell and expect GiveWell to perform higher-quality research on these giving opportunities. After all, if you have broad trust in GiveWell, it's hard to beat several full-time savvy analysts with your spare time. GiveWell has more comparative advantage here.

~

It also can cause us to over-focus on money as a unit of altruism, while often-times “it isn't about the money”: it's about doing the groundwork that no one is doing, or finding the opportunity that no one has found yet.

Right. But not everyone has the time or talents to do this groundwork. So it seems best if we set up some orgs to do this kind of groundwork (e.g., CEA, MIRI, etc.) and give money to them to let them specialize in these kinds of breakthroughs. And then the people who have the free time can start projects like Effective Fundraising or .impact.

If you're already raising a family and working a full-time job and donating 10%, I think in many cases it's not worth quitting your job or using your free time to look for more opportunities. We don't need absolutely everyone doing this search -- there's comparative advantage considerations here too.

~

Outright Disagreement

How many would have pointed out that saying that charities vary by a factor of 1,000 in effectiveness is by itself not very helpful, and is more a statement about how bad the bottom end is than how good the top end is?

I think this has been very helpful from a PR point of view. And even if you think flow-through effects even things out more so that charities only differ by 10x or 100x (which I currently don't), that's still significant.

And whether that's condemnation of the bad end or praise for the top end depends on your perspective and standards for what makes an org good or bad. At least, the slope of the curve suggests that a lot of the difference is coming from the best organizations being a lot better than the merely good ones as opposed to the very bad ones being exceptionally bad (i.e., the curve is skewed toward the top, not toward the bottom).

~

Quantitative estimates often also tend to ignore flow-through effects: [...] These effects are difficult to quantify but human and cultural intuition can do a reasonable job of taking them into account.

But can it? How do you know? I think you should take your own "research over speculation" advice here. I don't think we understand flow through effects well enough yet to know if they can be reliably intuited.

~

Outright Agreement

an effective altruist makes a bold claim, then when pressed on it offers a heuristic justification together with the claim that “estimation is the best we have”. [...] It can appear to an outside observer as though people are opting for the fun, easy activity (speculation) rather than the harder and more worthwhile activity (research).

I agree this is an unfortunate problem.

~

Conclusion

Lest this essay give a mistaken impression to the casual reader, I should note that there are many exemplary effective altruists who I feel are mostly immune to the issues above

This is where I get to the question of who your intended audience is. It seems like the EA mainstream either agrees with many of your critiques already (and therefore you're just trying to convince EAs to adopt the mainstream) or you're placing too much burden on EAs to ignore comparative advantage and have everyone become an EA trailblazer.

Comment author: 05 January 2014 06:19:14PM *  26 points [-]

GiveWell, for example, has explicitly distanced themselves from numerical calculations (albeit recently) and several EAs have called into question the usefulness of cost-effectiveness estimates, a charge that was largely lead by GiveWell.

I'll speak up on this one. I am a booster of more such estimates, detailed enough to make assumptions and reasoning explicit. Quantifying one's assumptions lets other challenge the pieces individually and make progress, where with a wishy-washy "list of considerations pro and con" there is a lot of wiggle room about their strengths. Sometimes doing this forces one to think through an argument more deeply only to discover big holes, or that the key pieces also come up in the context of other problems.

In prediction tournaments, training people to use formal probabilities has been helpful for their accuracy.

Also I second the bit about comparative advantage: CEA recently hired Owen Cotton-Barratt to do cause prioritization/flow-through effects related work. GiveWell Labs is heavily focused on it. Nick Beckstead and others at the FHI also do some work on the topic.

It seems like the EA mainstream either agrees with many of your critiques already (and therefore you're just trying to convince EAs to adopt the mainstream)

I think that on some of these questions there is also real variation in opinion that should not simply be summarized as a clear "mainstream" position.
