I think this is an important question and am glad to see it brought up. I think it is a question that requires a good amount of Taking Ideas Seriously.
Zvi discusses it here. My interpretation/summary of his answer: "You should still save up money because there are some future scenarios where having a pile of money will be useful." I would have liked to see more elaboration. What future scenarios? How likely are they? What about the question of retirement?
Personally, here's how I think about it.
I'm not sure how likely each of these scenarios is. If we have FOOM, it does seem like 1a or 1b will happen rather than 1c or 1d. It's a hard thing to think about, though, and I have significant uncertainty.
But for me, the dominant consideration is probably that I want a sense of normalcy. Saving for retirement helps with that and for me personally doesn't have a large downside.
I also think what Zvi says makes sense. Having a pile of money seems wise for reasons other than retirement.
I like the topic, but I think this is bad advice and almost certainly wrong.
I'll ignore the saving vs investing distinction. I assume you meant investing, with a Kelly Criterion or similar risk/reward strategy.
But your main error is in thinking that you can't profitably use your saved money sooner, if circumstances warrant. Investing with a 10- or a 40-year horizon is darned near identical. You're really saving for 'the foreseeable future'. You're asserting that what you can get for $1-plus-future-growth averaged over all futures is higher utility than $1 spent today.
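Spelled out a bit (my notation; the scenario probabilities, growth rates, and utility function here are placeholders, not estimates): saving the dollar beats spending it exactly when

$$\sum_{s} p_s \, u\!\left((1+g_s)^{T_s}\right) \;>\; u(1),$$

where $p_s$ is the probability of future scenario $s$, $g_s$ the annualized return you'd see in it, and $T_s$ how long the money sits before you deploy it. The point is that $T_s$ isn't locked in at 40 years; you cash out whenever circumstances warrant, which is why the 10-year and 40-year horizons look so similar at the moment you decide to save.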
You're also just wrong about the correlation of happiness with income - there are population effects that make it confusing, but it never seems to fully plateau, just becomes smaller increments. More is (nearly) always better. I think you're wrong about utility-per-dollar as well, but I have weaker models of that.
The stock market has never declined over a 20-year period, but it has declined over 10-year periods, so if you're particularly risk-averse, that could be quite the difference.
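If you want to check that sort of claim yourself, here's a rough sketch of the calculation (the return series below is made up purely to show the shape of the computation; you'd substitute actual historical annual returns):

```python
# Rough sketch: count rolling N-year windows in which an index lost money.
# `annual_returns` must be replaced with real historical data (e.g. total
# returns per year as decimals); the values here are placeholders.
annual_returns = [0.07, -0.12, 0.15, 0.02, -0.05, 0.20, 0.09, -0.30, 0.25, 0.11,
                  0.04, 0.18, -0.08, 0.13, 0.06, -0.02, 0.22, 0.01, 0.10, -0.15,
                  0.08, 0.16]

def losing_windows(returns, window):
    """Count rolling `window`-year spans whose cumulative return is negative."""
    losers = 0
    for start in range(len(returns) - window + 1):
        growth = 1.0
        for r in returns[start:start + window]:
            growth *= 1.0 + r
        if growth < 1.0:
            losers += 1
    return losers

for window in (10, 20):
    print(f"{window}-year windows with a loss:", losing_windows(annual_returns, window))
```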
You're putting a lot of importance on the "likely" here:
> Superintelligence is likely to arrive within the next 20 years
What if it doesn't? Reaching retirement age with no savings sucks pretty badly (especially if you're used to spending your entire income). Short AI timelines might push you in the direction of smaller retirement savings (taking a risk of things being a little worse in timelines you think are unlikely), but probably not all the way to "don't save for retirement". You should also put at least some weight on AGI happening and somehow the world still existing in a recognizable form (since this is what has happened every other time a world-changing technology was created).
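To make the "smaller, not zero" point concrete, here's a toy expected-value comparison (every probability and payoff below is a number I made up solely to illustrate the shape of the argument, not a forecast):

```python
# Toy model: compare "save" vs "don't save" across crude future scenarios.
# Savings are assumed irrelevant in doom and utopia worlds, and valuable in
# any world that stays recognizable; all numbers are illustrative only.
scenarios = {
    # name: (probability, utility if you saved, utility if you didn't)
    "doom":               (0.30, 0.0, 0.0),
    "aligned utopia":     (0.30, 1.0, 1.0),
    "recognizable world": (0.40, 1.0, 0.2),  # you still hit retirement age
}

def expected_utility(saved):
    return sum(p * (u_save if saved else u_no)
               for p, u_save, u_no in scenarios.values())

print("save:      ", expected_utility(True))   # 0.70
print("don't save:", expected_utility(False))  # 0.38
```

As long as you give any real weight to the "recognizable world" row, saving keeps its edge; short timelines only shrink how much of that edge is left.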
Something else to consider is that retirement isn't the only reason to have savings. If you think AI timelines are short, you might want to have a giant pile of money you can strategically deploy to do things like quit your job and work on alignment for a few years (or pay someone else to).
An aside on the title: Only say 'rational' when you can't eliminate the word. More detail on the concept: Rationality: Appreciating Cognitive Algorithms.
First of all, it's entirely possible that "superintelligence" will not be a monolithic single sovereign entity that does what it wants and chooses whether to kill all humans. (So in futures where it kills humans your savings don't matter, and in futures where it makes Earth a utopia they don't matter either.)
There are many futures you are not considering including some that I personally think hold more probability mass.
Due to 1-3 there is some reason for retirement funds. And then there's the big one:
The obvious way to become the wealthiest company on Earth is to follow these steps:
All these experiments are providing the structured information to train the ASI to do its job
Set up shop in a country with a compatible regulatory regime and start offering cures for most diseases and aging. Each patient gets examined by Western doctors unaffiliated with your company, and their medical files are added to a blockchain before and after treatment so there can be no doubt about the outcomes.
Patients pay a percentage of their net assets. While their care is delivered primarily via robotics driven by ASIs, some human doctors sanity-check the ASI's actions, and capacity is finite at first. The wait list is sorted by both severity and wealth.
Number 6 means there would be a time period where having money might help save your life. Whether that window is 1 year or 20, I do not know. Obviously Western regulatory regimes will eventually fold, but I don't know how long that will take. (They have to fold: the above is ultimately "use whatever chemical compound the ASI thinks works in this scenario." The "procedure" is "do whatever the ASI thinks should be done next." The ASI may invent novel drugs while a specific patient is dying, change a surgical procedure upon discovering variant anatomy no surgeon has ever seen, and so on. It's practicing medicine like Stockfish; humans will need some time to even understand why a move was made.)
I think this is bad advice for a number of reasons. Previous commenters hit several of the main ones, but to add:
> Happiness has been shown to increase with income up to a certain threshold ($200K per year now, roughly speaking), beyond which the effect tends to plateau.
Do you have a citation for this? My understanding is that it's a logarithmic relationship — there's no threshold. (See the Income & Happiness section here.)
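For intuition on why "logarithmic" differs from "threshold": under a log model (an assumed functional form here, not a fitted curve), each doubling of income adds roughly the same increment of happiness, so returns per extra dollar shrink steadily but never hit a wall:

```python
import math

# Illustration of a logarithmic income-happiness relationship (assumed form,
# not fitted to survey data): every doubling adds the same increment, so
# there is no hard plateau, only diminishing returns per additional dollar.
def happiness(income, a=1.0):
    return a * math.log(income)

for low, high in [(25_000, 50_000), (100_000, 200_000), (400_000, 800_000)]:
    gain = happiness(high) - happiness(low)
    per_dollar = gain / (high - low)
    print(f"{low:>7} -> {high:>7}: gain {gain:.3f}, per extra dollar {per_dollar:.2e}")
```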
Niels Bohr supposedly said "Prediction is difficult, especially about the future". Even if he was mistaken about quantum mechanics, he was right about that.
Every generation seems to think it's special and will encounter new circumstances that turn old advice on its head. Jesus is coming back. We'll all die in a nuclear war. Space aliens are coming. A supernova cascade will sterilize Earth. The planets will align and destroy the Earth. Nanotech will turn us all into grey goo. Global warming will kill us all.
It's always something. Now it's AGI. Maybe it'll kill us. Maybe it'll usher in utopia, or transform us into gods via a singularity.
Maybe. But based on the record to date, it's not the way to bet.
Whatever you think the world is going to be like in 20 years, you'll find it easier to deal with if you're not living hand-to-mouth. If you find it difficult to save money, it's very tempting to find an excuse to not even try. Don't deceive yourself.
"... however it may deserve respect for its usefulness and antiquity, [predicting the end of the world] has not been found agreeable to experience." --Edward Gibbon, 'Decline and Fall of the Roman Empire'
Added: I do think Bohr was wrong and Everett (MWI) was right.
So think of it this way - you can only experience worlds in which you survive. Even if Yudkowsky is correct and in 99% of all worlds AGI has killed us all by 20 years from now, you will experience only the 1% of worlds in which that doesn't happen.
And in many of those worlds, you'll be wanting something to live on in your retirement.
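In conditional-probability terms (just restating the argument, not adding to it):

$$P(\text{no doom} \mid \text{you experience the 2040s}) \;=\; \frac{P(\text{no doom} \,\wedge\, \text{you experience them})}{P(\text{you experience them})} \;\approx\; 1,$$

even if the unconditional $P(\text{no doom})$ is only 1%, because on this view the futures you actually get to experience are precisely the ones where doom didn't happen.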
I've thought about this additional axiom, and it seems to bend reality too much, leading to possible [unpleasant outcomes](https://www.lesswrong.com/posts/4ARaTpNX62uaL86j6/the-hidden-complexity-of-wishes): for example, one where a person survives but is tortured indefinitely.
Also, it's unclear how this axiom could manage to preserve the ratios of probabilities between quantum states.
> Every generation seems to think it's special and will encounter new circumstances that turn old advice on its head.
I have a question: how did you come to know this, especially as a repeatable pattern? I'd really like to know, because this sounds like one of the more interesting arguments against AI being impactful at all.
I don't think he's trying to say AI won't be impactful (obviously it will), just that trying to predict it isn't an activity one ought to apply any surety to. Soothsaying isn't a thing. There's ALWAYS been an existential threat right around the corner: gods, devils, dynamite, machine guns, nukes, AGW (that one might still end up being the one that does us in, if the political winds don't change soon), and now AI. We think that AI might go foom, but there might be some limit we just won't know about till we hit it, and we have various estimations, all contradicting each other, of how bad, or good, it might be for us. Attempting to fix those odds in firm conviction, however, is not science; it's belief.
You could buy a "target retirement 2040" fund if you expect the eschaton around 2043 and want a little time before then to relax. If you are 20, that is different from the "target retirement 2075" fund you might otherwise get.
Diligent people in their 20s are probably better off diligently increasing their ability to be happy and productive and useful, rather than their retirement savings. Mostly the benefit of early retirement savings is to get into the habit and to make your financial mistakes while you don't yet have much money to lose.
The future appears unusually uncertain this century and insuring against various negative outcomes, to the extent that can be done simply, is probably best.
Generically, having more money in the bank gives you more options, being cash-constrained means you have fewer options. And, also generically, when the future is very uncertain, it is important to have options for how to deal with it.
If how the world currently works changes drastically in the next few decades, I'd like to have the option to just stop what I'm doing and do something else that pays no money or costs some money, if that seems like the situationally-appropriate response. Maybe that's taking some time to think and plan my next move after losing a job to automation, rather than having to crash-train myself in something new that will disappear next year. Maybe it's changing my location and not caring how much my house sells for. Maybe it's doing different work. Maybe it's paying people to do things for me. Maybe it's also useful to be invested in the right companies when the economy goes through a massive upswing before the current system collapses, so I for a brief time have a lot of wealth and can direct it towards goals that are aligned with my values rather than someone else's, thus, index funds that buy me into a lot of companies.
Even if we eventually get to a utopia, the path to that destination could be rocky, and having some slack is likely to be helpful in riding that time out.
Another form of slack is learning to live on much less than you make - so the discipline required to accumulate savings, could also pay off in terms of not being psychologically attached to a lifestyle that stops you from making appropriate changes as the world changes around you.
Of course "accumulate money so you have options when the world changes" is a different mindset than "save money so you can go live on a beach in 40 years". But money is sort of like fungible power, an instrumentally useful thing to have for many different possible goals in many different scenarios, and a useless thing to have in only a few.
Side note: "the amount a dollar can do goes up, the value of a dollar collapses" strikes me as implausible. Your story for how that could happen is people hit a point of diminishing returns in terms of their own happiness... but there are plenty of things dollars can be used for aside from buying more personal happiness. If things go well, we're just at the start of earth-originating intelligence's story, and there are plenty of ways for an investment made at the right time to ripple out across the universe. If I was a trillionaire (or a 2023-hundred-thousandaire where the utility of a dollar has gone up by a factor of 10 million, whatever), I could set up a utopia suited to my tastes and understanding of the good, for others, and that seems worth doing even if my subjective day-to-day experience doesn't improve as a result. As just one example. In any case, being at the beginning of a large expansion in the power of earth-originating intelligence, seems like just the sort of time when you'd like to have the ability to make a careful investment.
The logical assumption is that we save for retirement so we can have the funds to maintain a good quality of life in old age, when we can't or don't want to work. This safety-net ideal is the stated reason, but it seems to me that many people work hard to see the dollar amount increase: for the achievement, status, and satisfaction that come with work, and because their job becomes so tied into their identity. If you're in the latter camp, future value is irrelevant.
Are you in your 20s-40s and diligently saving for your retirement plan? You might want to reconsider your strategy.
Saving money with the intention of spending it in 20+ years will return (almost) nothing.
Superintelligence is likely to arrive within the next 20 years, probably sooner.
If superintelligence ends up being detrimental to humanity (doom), saving money is a waste.
What if we can align superintelligence with our values?
Technological advancements will significantly increase the utility per dollar. The exponential enhancement of utility per dollar brought on by superintelligence will eventually cause the value of a dollar to collapse.
Happiness has been shown to increase with income up to a certain threshold ($200K per year now, roughly speaking), beyond which the effect tends to plateau. This number will get ridiculously small as utility per dollar explodes (imagine $1/month gets you everything you need). That is, anyone can enjoy a quality of life comparable to that of the wealthy.
Whatever your life expectancy is, only plan the next 20 years. Beyond then, your dollars will return (almost) nothing.
What do you think?