Why is it rational to invest in retirement? I don't get it.
I know I said I'd be gone... but this was just a comment originally, and I noticed it may actually be relevant.
Elharo said in Munchkin Ideas:
Put as much money as you can afford into tax advantaged retirement accounts. In the U.S. that means 401K, 403b, IRA, SEP, etc.
I'm interested in the following:
Why should people invest in retirement? Or rather, why should someone invest as much in retirement as most people do?
A few facts that make this a puzzling question for me:
You are 10% to 20% likely to die before you enjoy even your first retirement year.
People adjust to harsh economic conditions much more than they believe they would. They remain happy, as many studies by Seligman and others show.
People who retire are only happier as retirees if they retired by choice (I lost the paper, sorry).
Most people here live in rich countries (darn, I hate being the exception!), and their state would happily provide them with at least the maximum legal retirement plan in my country (approx. 2,000 dollars/month), and would surely provide them with double the minimum (about 200/month) if they needed it.
If you have descendants, they may support you if you are still alive, and if you are not rich enough to keep a house, you have a good excuse to be in the company of loved ones (you have nowhere else to go).
Last, but not least: That person is not even you that much anyway.
Given all that, I have no clue what the whole fuss about retirement plans, and about being 60% of a rich old person with a crappy body, is all about, especially if you are in the grave.
I mean, in the cryopreservation chamber, of course.
Edit: A related question, not worth its own post but maybe worth discussing: should inheritance "jump" a generation, with everyone inheriting from grandparents instead of parents? Just the abstract ethical question, regardless of implementation procedure.
A Rational Altruist Punch in The Stomach
Robin Hanson wrote, five years ago:
Very distant future times are ridiculously easy to help via investment. A 2% annual return adds up to a googol (10^100) return over 12,000 years, even if there is only a 1/1000 chance they will exist or receive it.
So if you are not incredibly eager to invest this way to help them, how can you claim to care the tiniest bit about them? How can you think anyone on Earth so cares? And if no one cares the tiniest bit, how can you say it is "moral" to care about them, not just somewhat, but almost equally to people now? Surely if you are representing a group, instead of spending your own wealth, you shouldn’t assume they care much.
So why do many people seem to care about policy that affects far future folk? I suspect our paternalistic itch pushes us to control the future, rather than to enrich it. We care that the future celebrates our foresight, not that they are happy.
In the comments some people gave counterarguments. For those in a rush, the best ones are Toby Ord's. But none of the counterarguments persuaded me to the extent necessary to counter the 10^100. I have some trouble conceiving of what could beat a consistent argument by a googol-fold.
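Hanson's arithmetic, at least, checks out. A quick sanity check in plain Python (working in log10 so the numbers stay manageable):

```python
import math

# Compound growth: a 2% annual return over 12,000 years.
rate, years = 0.02, 12_000

# growth factor = (1 + rate) ** years; take log10 to read off the exponent.
log10_growth = years * math.log10(1 + rate)
print(f"growth factor ~ 10^{log10_growth:.1f}")  # ~ 10^103.2

# Even discounted by a 1/1000 chance that the recipients exist,
# the expected multiplier still exceeds a googol (10^100).
log10_expected = log10_growth - 3
print(log10_expected > 100)  # True
```

So the 1/1000 existence discount eats only three orders of magnitude out of roughly 103, which is why the quote can afford to be so generous with the probability.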
Few things have changed my behavior significantly over the last few years, but I think I'm facing one of them. Understanding biological immortality was one; it meant 150,000 non-deaths per day. Understanding the posthuman potential was another. Then came the 10^52 potential lives lost in case of X-risk (or, if you are conservative and think only biological substrates can host morally relevant lives, 10^31). You can argue about which movie you'll watch, which teacher would be best to have, whom you should marry. But (if consequentialist) you can't argue your way out of 10^31 or 10^52. You won't find a counteracting force that exactly matches, or that really reduces the value of future stuff by
3 000 000 634 803 867 000 000 000 000 000 000 777 000 000 000 999 fold
which is way less than 10^52.
You may find a fundamental and qualitative counterargument "actually I'd rather future people didn't exist", but you won't find a quantitative one. Thus I spend a lot of time on X-risk related things.
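For what it's worth, the comparison with that made-up 49-digit "counteracting force" is easy to verify directly, since Python integers have arbitrary precision (a throwaway check, not part of the argument):

```python
# The arbitrary 49-digit number from the text, as an integer:
n = 3_000_000_634_803_867_000_000_000_000_000_000_777_000_000_000_999

# It is on the order of 3 * 10^48: comfortably below 10^52,
# let alone the 10^100 from Hanson's investment argument.
print(len(str(n)))  # 49 digits
print(n < 10**52)   # True
print(n < 10**100)  # True
```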
Back to Robin's argument: unless someone gives me a good argument against investing some money in the far future (and I can discover some rough techniques for doing it that give it at least a one-in-a-million chance of working), I'll set aside a block of money X and a block of time Y, and will invest in future people 12,000 years from now. If you don't think you can beat 10^100, join me.
And if you are not in a rush, read this also, for a bright reflection on similar issues.
Let's make a "Rational Immortalist Sequence". Suggested Structure.
Why Don't Futurists Try Harder to Stay Alive?, asks Rob Wiblin at Overcoming Bias
Suppose you want to live for more than 10 thousand years. (I'll assume that suffices for the "immortalist" designation). Many here do.
Suppose in addition that this is by far, very far, your most important goal. You'd sacrifice a lot for it. Not all, but a lot.
How would you go about your daily life? In which direction would you change it?
I want to examine this in a sequence, but I don't want to write it on my own; I'd like to do it with at least one other person. I'll lay out the structure for the sequence here. Anyone who wants to help, by writing an entire post (these or others) or parts of many, please contact me in the comments or by message. Obviously we don't need all of these posts; they are just suggestions. The sequence won't be about whether this is a good idea. Just assume that the person wants to achieve some form of Longevity Escape Velocity. Taking as given that this is what an agent wants, what should she do?
1) The Ideal Simple Egoistic Immortalist - I'll write this one, the rest is up for grabs.
Describes the general goal of living long. Explains that it is not about living long in hell, about finding mathy or Nozickian paradoxes, or about solving the moral uncertainty problem. It is simply trying to somehow achieve a very long life worth living. Describes the two main classes of optimization: 1) optimizing your access to the resources that will grant immortality, and 2) optimizing the world so that immortality happens faster. Sets aside "3) diminish X-risk" for the moment, and moves on with a comparison of the two major classes.
2) Everything else is for nothing if A is not the case -
Shows the weak points (the A's) of different strategies. What if uploads don't inherit the properties in virtue of which we'd like to be preserved? What if cryonics facilities are destroyed by enraged people? What if some X-risk obtains and you die with everyone else? What if there is no personal identity in the relevant sense, and immortality is a desire without a referent (a possible future world in which the desired thing obtains)? And as many other things as the poster might like to add.
3) Immortalist Case study - Ray Kurzweil -
Examines Kurzweil's strategy, given his background (age, IQ, opportunities given while young, etc.). Emphasis, for Kurzweil and others, on how optimally they balance the two classes of optimization.
4) Immortalist Case study - Aubrey de Grey -
5) Immortalist Case study - Danila Medvedev -
Danila has been filming everything he does for hours a day. I don't know much else, but I suppose he is worth examining.
6) Immortalist Case study - Peter Thiel
7) Immortalist Case study - Laura Deming
She has been fighting death since she was 12, went to MIT to do research on it, and recently got a Thiel Fellowship and pivoted to fundraising. She's 20.
8) Immortalist Case study - Ben Best
Ben Best directs the Cryonics Institute. He has written extensively on mechanisms of ageing, on economics and resource acquisition, and on cryonics. Lots can be learned from his example.
9) Immortalist Case study - Bill Faloon
Bill is a long-time cryonicist; he founded the Life Extension Foundation decades ago and to this day makes a lot of money from it. He's a leading figure in both the Timeship project (a super-protected facility for frozen people) and in gathering the cryonics youth together.
10) How old are you? How much are you worth? How that influences immortalist strategies. - I'd like to participate in this one.
11) Creating incentives for your immortalism - this one I'll write
How to increase the number of times that reality strikes you with incentives that make you more likely to pursue the strategies you should pursue, as a simple egoistic immortalist.
12, 13, 14 .... If it suits the general topic, it could be there. Also previous posts about related things could be encompassed.
Edit: The suggestion is not that you have to really want to be the ideal immortalist to take part in writing a post. My goals are far from being those of a pure immortalist. But I would love to know: were that the case, what should I be doing? First we get the abstraction; then we factor in everything else about us, having learned something from the abstraction.
It seems people were afraid that by taking part in the sequence they'd be signalling that their only goal is to live forever. This misses both the concept of an assumption and the idea of an informative idealized abstraction.
What I'm suggesting we do here with immortality could just as well be done with some other goal, like "The Simple Ideal Anti-Malaria Fighter" or "The Simple Ideal Cirque du Soleil Wannabe".
So who wants to play?