Robin Hanson wrote, five years ago:
Very distant future times are ridiculously easy to help via investment. A 2% annual return adds up to a googol (10^100) return over 12,000 years, even if there is only a 1/1000 chance they will exist or receive it.
So if you are not incredibly eager to invest this way to help them, how can you claim to care the tiniest bit about them? How can you think anyone on Earth so cares? And if no one cares the tiniest bit, how can you say it is "moral" to care about them, not just somewhat, but almost equally to people now? Surely if you are representing a group, instead of spending your own wealth, you shouldn’t assume they care much.
So why do many people seem to care about policy that affects far future folk? I suspect our paternalistic itch pushes us to control the future, rather than to enrich it. We care that the future celebrates our foresight, not that they are happy.
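Hanson's arithmetic checks out, though it has to be done in log space, since 1.02^12000 overflows an ordinary double. A quick sketch:

```python
import math

# 2% annual return compounded over 12,000 years, in log10 space
log10_growth = 12_000 * math.log10(1.02)
print(f"1.02^12000 ~= 10^{log10_growth:.1f}")  # roughly 10^103

# Even discounted by the quoted 1/1000 chance the recipients
# exist or receive it, the expected multiplier still exceeds
# a googol (10^100):
log10_expected = log10_growth + math.log10(1 / 1000)
print(f"expected multiplier ~= 10^{log10_expected:.1f}")  # roughly 10^100
```

So the bare claim of a googol-fold return is, if anything, slightly understated - before accounting for any of the risks discussed below.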
In the comments some people gave counterarguments. For those in a rush, the best ones are Toby Ord's. But I didn't find any of the counterarguments compelling enough to counter the 10^100. I have some trouble conceiving of what could beat a consistent argument a googol-fold.
Not many things have changed my behavior significantly over the last few years, but I think I'm facing one of them. Understanding biological immortality was one: it meant 150,000 non-deaths per day. Understanding the posthuman potential was another. Then came the 10^52 potential lives lost in case of X-risk - or, if you are conservative and think only biological substrates can host morally relevant lives, 10^31. You can argue about which movie you'll watch, which teacher would be best to have, whom you should marry. But (if you are a consequentialist) you can't argue your way out of 10^31 or 10^52. You won't find a counteracting force that exactly matches, or that really reduces the value of future stuff by

3 000 000 634 803 867 000 000 000 000 000 000 777 000 000 000 999 fold

which is still way less than 10^52.
You may find a fundamental, qualitative counterargument ("actually, I'd rather future people didn't exist"), but you won't find a quantitative one. Thus I spend a lot of time on X-risk-related things.
Back to Robin's argument: so unless someone gives me a good argument against investing some money in the far future (and I fail to discover some technique, however vague, that gives it at least a one-in-a-million chance of working), I'll set aside a block of money X and a block of time Y, and will invest in future people 12 thousand years from now. If you don't think you can beat 10^100, join me.
And if you are not in a rush, read this also, for a bright reflection on similar issues.
I don't think this is very hard if you actually look at examples of long-term investment. Background: http://www.gwern.net/The%20Narrowing%20Circle#ancestors and especially http://www.gwern.net/The%20Narrowing%20Circle#islamic-waqfs
A few things:
1. Businesses and organizations suffer extremely high mortality rates; one estimate puts it at a 99% chance of mortality per century. (This ignores existential risks and lucky aversions like nuclear warfare, and so is an underestimate of the true risks.) So over the 120 centuries in question, any perpetuity has a survival probability of 0.01^120 = 10^-240. That's a good chunk of the reason not to bother with long-term trusts right there! We can confirm this empirically by observing that there must have been many scores of thousands of waqfs in the Islamic world - perpetual charities - and very few survived or saw their endowments grow. (I have pointed Hanson at waqfs repeatedly, but he has yet to blog on that topic.) Similarly, we can observe that despite the countless temples, hospitals, homes, and institutions with endowments in the Greco-Roman world just 1,900 years ago or so - less than a sixth of the time period in question - we know of zero surviving institutions, all of them having fallen into decay/disuse/Christian-Muslim expropriation/vicissitudes of time. The many Buddhist institutions of India suffered a similar fate, caught between a resurgent Hinduism and Muslim encroachment. We can also point out that many estimates ignore a meaningful failure mode: endowments or nonprofits going off-course and doing things the founder did not intend - the American university case comes to mind, as does the British university case I cite in my essay, and there is a long vein (some of it summarized in Cowen's Good and Plenty) of conservative criticism of American nonprofits like the Ford Foundation, pointing out the 'liberal capture' of originally conservative institutions, which obviously defeats the original point.
(BTW, if you read the waqf link you'd see that excessive iron-clad rigidity in an organization's goal can be almost as bad, as the goals become outdated or irrelevant or harmful. So if the charter is loose, the organization is easily and quickly hijacked by changing ideologies or principal-agent problems like the iron law of oligarchy; but if the charter is rigid, the organization may remain on-target while becoming useless. It's hard to design a utility function for a potentially powerful optimization process. Hm.... why does that sentence sound so familiar... It's almost as if we needed a theory of Friendly Artificial General Organizations...)
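The per-century hazard arithmetic above can be checked in a few lines - a back-of-the-envelope sketch, taking the quoted 99%-per-century mortality estimate at face value:

```python
import math

# 99% institutional mortality per century means 1% survival per century.
# Over 12,000 years (120 centuries), again in log10 space:
log10_survival = 120 * math.log10(0.01)  # = -240
print(f"P(survival) ~= 10^{log10_survival:.0f}")

# Set against the ~10^103 gain from 2% annual compounding, the
# expected payoff of the perpetuity is:
log10_growth = 12_000 * math.log10(1.02)  # roughly 103.2
print(f"expected payoff ~= 10^{log10_growth + log10_survival:.1f}")  # roughly 10^-137
```

On these numbers the mortality rate alone eats the googol and then some: the expected multiplier comes out around 10^-137, i.e. the perpetuity is overwhelmingly likely to be worth nothing.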
2. Survivorship bias is well known as a major factor in overestimating risk-free returns over time, and a new result on it came out recently, actually. We can observe many reasons for survivorship bias in estimates of nonprofit and corporate survival in the 20th century (see previously) and also in financial returns: Czarist Russia, Weimar and Nazi Germany, Imperial Japan, all the countries in the Warsaw Pact or otherwise communist such as Cuba/North Korea/Vietnam, Zimbabwe... While I have seen very few recent invocations of the old chestnut that 'stock markets deliver 7% return on a long-term basis' (perhaps that conventional wisdom has been killed), the survivorship work suggests that for just the 20th century we might expect more like 2%.
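To make that gap concrete, here is the naive figure against the survivorship-adjusted one, compounded over a single century (illustrative numbers only):

```python
# Naive 7% long-run equity return vs a survivorship-adjusted 2%,
# each compounded over 100 years:
naive = 1.07 ** 100
adjusted = 1.02 ** 100
print(f"naive: ~{naive:.0f}x, adjusted: ~{adjusted:.1f}x")  # ~868x vs ~7.2x
```

A five-percentage-point correction to the annual rate is a factor of over a hundred per century - and the discrepancy itself compounds over the 120 centuries at issue.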
3. The risk per year is related to the size of the endowment/investment; as has already been pointed out, there is fierce legal opposition to any sort of perpetuity, and at least two cases of perpetuities being legally wasted or stolen. Historically, fortunes which grow too big attract predators, become institutionally dysfunctional and corrupt, and fall prey to rare risks. Example: the non-profit known as the Catholic Church owned something like a quarter of all of England before it was expropriated, precisely because it had so effectively gained wealth and invested it (property rights in England otherwise having been remarkably secure over the past millennium). Not to mention the Papal States or its holdings elsewhere. The Buddhist monasteries in China and Japan had issues with growing so large and powerful that they became major political and military players, leading to war and extirpation by other actors such as Oda Nobunaga. Any perpetuity which becomes equivalent to a large or small country will suffer the same mortality rates.
4. And then there's opportunity cost. We have good reason to expect the upcoming centuries to be unusually risky compared to the past: even if you completely ignore new technological issues like nanotech or AI or global warming or biowarfare, we still suffer under the novel existential threat of thermonuclear warfare. This threat did not exist at any point before 1945, and it systematically makes the future riskier than the past. A perpetuity, itself investing in ordinary commercial transactions, does little to help with this, except possibly via some generic economic externalities of increased growth (and no doubt there are economists who, pointing to current ultra-low interest rates, sluggish growth, and 'too much cash chasing safe investments', would deprecate even this).
5. Compounding-wise, there are other forms of investment: investment into scientific knowledge, into more effective charity (surely saving people's lives can have compounding effects into the distant future?), and so on.
So to recap:
Any of these except perhaps #3 could be sufficient to defeat perpetuities, and combined, I think they leave the case for perpetuities completely non-existent.
Hi gwern, thanks for the reply.
I think you might be misunderstanding my points here. In particular, regarding point 2, I'm not suggesting that the waqfs split, or that anything at all like that happened. The "split waqfs" point is just meant to illustrate the fact that, when waqf failures are correlated for whatever reason, arbitrarily many closures with zero long-term survivors can be compatible with a relatively low annual hazard rate. The failure of a billion waqfs would be a valid observation, but it would be an observation…