There aren't strict guidelines, but if something isn't much upvoted and/or doesn't seem very important, I'll move it to Discussion. Trying to post to Main is not a crime. On the other hand, moving things back from Discussion to Main after an editor moves them is a crime.
What about the reverse? Moving from Discussion to Main once the author notices that not only his own introspective evidence but also other people's says the text is good?
But I didn't buy any of the counterarguments to the extent that would be necessary to counter the 10^100.
I don't think this is very hard if you actually look at examples of long-term investment. Background: http://www.gwern.net/The%20Narrowing%20Circle#ancestors and especially http://www.gwern.net/The%20Narrowing%20Circle#islamic-waqfs
First things:
Businesses and organizations suffer extremely high mortality rates; one estimate puts it at a 99% chance of mortality per century. (This ignores existential risks and lucky near-misses like nuclear warfare, and so underestimates the true risks.) So over 12,000 years, any perpetuity has a survival probability of just 0.01^120 = 10^-240. That's a good chunk of the reason not to bother with long-term trusts right there! We can confirm this empirically by observing that there must have been many scores of thousands of waqfs - perpetual charities - in the Islamic world, yet very few survived or saw their endowments grow. (I have pointed Hanson at waqfs repeatedly, but he has yet to blog on that topic.) Similarly, despite the countless temples, hospitals, homes, and institutions with endowments in the Greco-Roman world just 1,900 years ago or so - less than a sixth of the time period in question - we know of zero surviving institutions, all of them having fallen to decay, disuse, Christian or Muslim expropriation, or the vicissitudes of time. The many Buddhist institutions of India suffered a similar fate, between a resurgent Hinduism and Muslim encroachment. We can also point out that many estimates ignore a meaningful failure mode: endowments or nonprofits going off-course and doing things the founder did not intend - the American university case comes to mind, as does the British university case I cite in my essay, and there is a long vein of conservative criticism (some of it summarized in Cowen's Good and Plenty) of American nonprofits like the Ford Foundation pointing out the 'liberal capture' of originally conservative institutions, which obviously defeats the original point.
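A quick check of that survival arithmetic - 120 centuries of a 99%-per-century mortality rate, i.e. a 1% chance of surviving each century - using exact rational arithmetic to avoid floating-point noise:

```python
from fractions import Fraction

centuries = 12_000 // 100                # 12,000 years = 120 centuries
per_century_survival = Fraction(1, 100)  # 99% mortality per century

p_survive = per_century_survival ** centuries
# (1/100)^120 is exactly 10^-240
print(p_survive == Fraction(1, 10 ** 240))  # True
```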
(BTW, if you read the waqf link you'd see that excessive iron-clad rigidity in an organization's goal can be almost as bad, as the goals become outdated or irrelevant or harmful. So if the charter is loose, the organization is easily and quickly hijacked by changing ideologies or principal-agent problems like the iron law of oligarchy; but if the charter is rigid, the organization may remain on-target while becoming useless. It's hard to design a utility function for a potentially powerful optimization process. Hm.... why does that sentence sound so familiar... It's almost as if we needed a theory of Friendly Artificial General Organizations...)
Survivorship bias as a major factor in overestimating risk-free returns over time is well-known, and a new result actually came out recently. We can observe many reasons for survivorship bias in estimates of nonprofit and corporate survival in the 20th century (see previously) and also in financial returns: Czarist Russia, Weimar and Nazi Germany, Imperial Japan, all countries in the Warsaw Pact or otherwise communist such as Cuba/North Korea/Vietnam, Zimbabwe... While I have seen very few recent invocations of the old chestnut that 'stock markets deliver 7% return on a long-term basis' (perhaps that conventional wisdom has been killed), the survivorship work suggests that for just the 20th century we might expect more like 2%.
The risk per year is related to the size of the endowment/investment; as has already been pointed out, there is fierce legal opposition to any sort of perpetuity, and at least two cases of perpetuities being wasted or stolen legally. Historically, fortunes which grow too big attract predators, become institutionally dysfunctional and corrupt, and fall prey to rare risks. Example: the non-profit known as the Catholic Church owned something like a quarter of all of England before it was expropriated precisely because it had so effectively gained wealth and invested it (property rights in England otherwise having been remarkably secure over the past millennium). The Buddhist monasteries in China and Japan had issues with growing so large and powerful that they became major political and military players, leading to extirpation by other actors such as Oda Nobunaga. Any perpetuity which becomes equivalent to a large or small country will suffer the same mortality rates.
And then there's opportunity cost. We have good reason to expect the upcoming centuries to be unusually risky compared to the past: even if you completely ignore new technological issues like nanotech or AI or global warming or biowarfare, we still suffer under a novel existential threat of thermonuclear warfare. This threat did not exist at any point before 1945, and systematically makes the future riskier than the past. Investing in a perpetuity, itself investing in ordinary commercial transactions, does little to help except possibly some generic economic externalities of increased growth (and no doubt there are economists who, pointing to current ultra-low interest rates and sluggish growth and 'too much cash chasing safe investments', would deprecate even this).
Compounding-wise, there are other forms of investment: investment into scientific knowledge, into more effective charity (surely saving peoples' lives can have compounding effects into the distant future?), and so on.
So to recap:
1. organizational mortality is extremely high
2. financial mortality is likewise extremely high, and both organizational & financial mortality are relevant
3. all estimates of risk are systematically biased downwards; recent estimates indicate that one of these biases is very large
4. risks for organizations and finances increase with size
5. opportunity cost is completely ignored
Any of these except perhaps #3 could be sufficient to defeat perpetuities, and combined, I think, they leave the case for perpetuities completely non-existent.
I have some trouble conceiving of what would beat a consistent argument a googol fold.
Now I don't anymore.
I stand corrected.
Thank you Gwern.
The Unintuitive Power Laws of Giving
Unintuitive? Are the intuitions of your expected audience really so poorly calibrated?
(I was expecting something different from the title.)
I think he meant unintuitive in the sense of "not accessible by human intuition - type 1, fast thinking", not "hard to grasp upon reflection by my intended audience".
This is more like a conservative investment in various things by the managing funds for 200 years, followed by a reckless investment in the cities of Philadelphia and Boston at the end of those 200 years. It probably didn't do much more for the people 200 years later than it did for people in the interim.
Also, the most recent comment by cournot is interesting on the topic:
You may also be using the wrong deflators. If you use standard CPI or other price indices, it does seem to be a lot of money. But if you think about it in terms of relative wealth you get a different figure [and standard price adjustments aren't great for looking far back in the past]. I think a pound was about 5 dollars. So if we assume that 1000 pounds = 5000 nominal dollars and we use the Econ History's price deflators http://www.measuringworth.com/uscompare/ we find that this comes to over $2M if we use the unskilled wage and about $5M if we use nominal GDP. As a relative share of GDP, this figure would have been an enormous $380M or so. The latter is not an irrelevant calculation.
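The multipliers implicit in cournot's comment can be back-computed from the figures he quotes (the dollar values below are his, not independent data; the deflator names loosely follow the labels in his comment):

```python
# Franklin's 1,000 pounds, taken as $5,000 nominal per cournot's assumption
nominal_usd = 5_000
present_value = {
    "unskilled wage": 2_000_000,
    "nominal GDP": 5_000_000,
    "share of GDP": 380_000_000,
}
for deflator, value in present_value.items():
    print(f"{deflator}: x{value // nominal_usd:,}")
# unskilled wage: x400
# nominal GDP: x1,000
# share of GDP: x76,000
```

The spread between x400 and x76,000 is the point: which deflator you pick changes the answer by two orders of magnitude.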
Given how wealthy someone had to be (relative to the poor in the 18th century) to fork over a thousand pounds in Franklin's time, he might have done more good with it then than you could do with 2 to 5 million bucks today.
That is unreasonable, because we have more access to means of helping the poor today. If you expect the trend to go on into the future, then 2 million tomorrow is always better than a thousand today, which buys at most about 3 lives through AMF or SCI.
A Rational Altruist Punch in The Stomach
Robin Hanson wrote, five years ago:
Very distant future times are ridiculously easy to help via investment. A 2% annual return adds up to a googol (10^100) return over 12,000 years, even if there is only a 1/1000 chance they will exist or receive it.
So if you are not incredibly eager to invest this way to help them, how can you claim to care the tiniest bit about them? How can you think anyone on Earth so cares? And if no one cares the tiniest bit, how can you say it is "moral" to care about them, not just somewhat, but almost equally to people now? Surely if you are representing a group, instead of spending your own wealth, you shouldn’t assume they care much.
So why do many people seem to care about policy that affects far future folk? I suspect our paternalistic itch pushes us to control the future, rather than to enrich it. We care that the future celebrates our foresight, not that they are happy.
In the comments some people gave counterarguments. For those in a rush, the best ones are Toby Ord's. But I didn't buy any of the counterarguments to the extent that would be necessary to counter the 10^100. I have some trouble conceiving of what would beat a consistent argument a googol fold.
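Hanson's arithmetic checks out, as a quick sketch (working in base-10 logs to keep the numbers readable):

```python
import math

years = 12_000
rate = 0.02
log10_growth = years * math.log10(1 + rate)  # log10 of 1.02^12000
print(round(log10_growth, 1))                # 103.2 -- over a googol (10^100)
# a mere 1/1000 chance of delivery only subtracts 3 from the exponent:
print(round(log10_growth - 3, 1))            # 100.2 -- still about a googol
```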
Things that changed my behavior significantly over the last few years have not been many, but I think I'm facing one of them. Understanding biological immortality was one; it meant 150,000 non-deaths per day. Understanding the posthuman potential was another. Then came the 10^52 potential lives lost in case of X-risk - or, if you are conservative and think only biological beings can have moral lives, 10^31. You can argue about which movie you'll watch, which teacher would be best to have, whom you should marry. But (if consequentialist) you can't argue your way out of 10^31 or 10^52. You won't find a counteracting force that exactly matches, or that really reduces the value of future stuff by
3 000 000 634 803 867 000 000 000 000 000 000 777 000 000 000 999 fold
Which is way less than 10^52
You may find a fundamental and qualitative counterargument "actually I'd rather future people didn't exist", but you won't find a quantitative one. Thus I spend a lot of time on X-risk related things.
Back to Robin's argument: so unless someone gives me a good argument against investing some money in the far future (and provided I can discover some vague techniques of doing it that make delivery at least a one-in-a-million possibility), I'll set aside a block of money X and a block of time Y, and will invest in future people 12,000 years from now. If you don't think you can beat 10^100, join me.
And if you are not in a rush, read this also, for a bright reflection on similar issues.
Edit: This post is an argument against the conjunction of two things: Many Worlds, and the way in which we think of What Matters. The most natural interpretation is that Many Worlds is true, and thus my argument counts against our notion of What Matters. In fact my position lies more on the opposite side: our notion of What Matters is (strongly related to) What Matters, so Many Worlds is less likely.
Downvoted based on your edit. Your preferences have no bearing on how the multiverse is, one way or another. Setting up a dichotomy like this is a mistake. To the extent that you care about physics and metaphysical theories you should instead work out how to describe your preferences in such models in a way that adds up to normal.
That is the first time I've seen you say something that doesn't strike me as reasonable, and I've been a lurker for a long time.
Which indicates that I didn't understand you.
Could you please clarify what you mean by "is" when you say "how the multiverse is"?
For me it seems that we (humans) can talk about this multiverse thing. We can say stuff about other universes, like "they are epiphenomenal" or "they matter". It is hard for me to just say "they are" or "they exist" and truly think that I know what I mean by that. It feels like I'm saying "they emerge" or "they magic".
what matters
To whom?
BTW, following Gary Drescher in Good and Real, I think of “real” as an indexical, i.e. something is real if it's causally connected with the speaker, or with something that's causally connected with the speaker, or with something that's causally connected with something that's causally connected with the speaker, etc. And as far as I can introspect, I only care about things that are real in this sense.
You should say you are following David Lewis I suppose.
I'm confused. From your posts I get the impression that you take "existence of many worlds" seriously, yet from your comments it seems like you don't give this untestable idea much credence. Which is it?
The latter, which I was clarifying in an edit to the original post as you asked.
I still think it is productive to instrumentally talk of Many Worlds, to see which concepts break.
Still, it seems that you remain confident that the concepts whose role I'm doubting, under some considerations, do play a role.
Not sure what you mean here.
If you are secure about the role that "existence" plays in moral discussion, please clarify it.
I prefer not to use the term "existence" at all; people have an intuitive idea of what it means, but they tend to disagree a lot when trying to formalize it.
One way of doing that is by describing a function where one axis has the different theories about many-worlds (such as the ones I described in my previous post), and the other axis has what exists, given our epistemic evidence, if that theory turns out to be correct.
I don't find the notion of many worlds useful at all, so your suggested description does not work for me. The closest I come to many worlds is the decision-theoretic "possible worlds", i.e. potential outcomes resulting from one's potential actions, over which one either computes some sort of utility function or to which one applies deontological shortcuts. This explicitly excludes all the imaginable worlds you have no influence over, such as the "far worlds" you seem to be preoccupied with.
Fair enough. So basically if my post was trying to immunize readers, you'd be immune already.
I agree that people should refrain from using the word 'existence'. If they are many-worlds supporters, I think they still need some work done that the concept of existence was attempting to do, but which, I claimed here, it fails to do.
If, like you, they are not many-worlds supporters, then 'existence' only means 'causally connected to me', and the word can be avoided without paying any price by saying its equivalent.
I have been known to do that as well.
You read all posts?