lexande

Some altruistically-motivated projects would be valid investments for a Checkbook IRA. I guess if you wanted to donate 401k/IRA earnings to charity you'd still have to pay the 10% penalty (though not the tax, if the donation was deductible), but that seems the same whether the account is pretax or a heavily appreciated Roth.

lexande

The math in the comment I linked works the same whether the chance of money ceasing to matter in five years' time is for happy or unhappy reasons.

lexande

My impression is that the "Substantially Equal Periodic Payments" option is rarely a good idea in practice because it's so inflexible in not letting you stop withdrawals later, potentially even hitting you with severe penalties if you somehow miss a single payment. I agree that most people are better off saving into a pretax 401k when possible and then rolling the money over to Roth during low-income years or when necessary. I don't think this particularly undermines jefftk's high-level point that tax-advantaged retirement savings can be worthwhile even conditional on relatively short expected AI timelines.

> I prefer pre-tax contributions over Roth ones now because of my expectation that probably there will be an AI capabilities explosion well before I reach 59.5. If I had all or most of my assets in Roth accounts it would be terrible.

Why would money in Roth accounts be so much worse than having it in pretax accounts in the AI explosion case? If you wanted the money (which would then be almost entirely earnings) immediately you could get it by paying tax+10% either way. But your accounts would be up so much that you'd only need a tiny fraction of them to fund your immediate consumption; the rest you could keep investing inside the 401k/IRA structure.
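To make the "tax+10% either way" point concrete, here's a rough sketch comparing early (pre-59.5) withdrawals from a Roth account against a pretax account. The 24% marginal rate and the $20k contribution basis are hypothetical, and this simplification ignores state taxes and Roth distribution-ordering rules:

```python
def net_after_early_withdrawal(amount, roth_basis, tax_rate=0.24, penalty=0.10):
    """Rough after-tax proceeds of an early withdrawal.

    Roth: original contributions (basis) come back tax- and penalty-free;
    earnings are hit with income tax plus the 10% penalty.
    Pretax: the entire withdrawal is taxed and penalized.
    (Simplified: ignores state tax and Roth distribution-ordering rules.)
    """
    earnings = amount - roth_basis
    roth_net = roth_basis + earnings * (1 - tax_rate - penalty)
    pretax_net = amount * (1 - tax_rate - penalty)
    return roth_net, pretax_net

# After a huge run-up, the account is almost entirely earnings:
roth, pretax = net_after_early_withdrawal(1_000_000, roth_basis=20_000)
# roth and pretax differ by only about 1%
```

The larger the run-up, the smaller the basis is relative to the total, and the more the two account types converge for early-withdrawal purposes.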

lexande

You have to be really confidently optimistic or pessimistic about AI to justify a major change in consumption rates; if you assign a significant probability to "present rate no singularity"/AI winter futures then the benefits of consumption smoothing dominate and you should save almost as much (or as little) as you would if you didn't know about AI.

lexande

Note that it is entirely possible to invest in almost all "non-traditional" things within a retirement account; "checkbook IRA" is a common term for a structure that enables this (though the fees can be significant and most people should definitely stick with index funds). Somewhat infamously, Peter Thiel did much of his early angel investing inside his Roth IRA, winding up with billions of dollars in tax-free gains.

In particular it seems very plausible that I would respond by actively seeking out a predictable dark room if I were confronted with wildly out-of-distribution visual inputs, even if I'd never displayed anything like a preference for predictability of my visual inputs up until then.

It seems like a major issue here is that people often have limited introspective access to what their "true values" are. And it's not enough to know some of your true values: in the example you give, missing just one or two causes problems even if most of what you're doing is closely tied to other things you truly value. (And "just introspect harder" increases the risk of getting answers that are the product of confabulation and confirmation bias rather than true values, which causes problems of its own.)

Here's an attempt to formalize the "is partying hard worth so much" aspect of your example:

It's common (with some empirical support) to approximate utility as proportional to log(consumption). Suppose Alice has $5M of savings and expected future income that she intends to consume at a rate of $100k/year over the next 50 years, and that her zero-utility point is at $100/year of consumption (since it's hard to survive at all on less than that). Then she's getting log(100000/100) = 3 units of utility per year (using base-10 logs), or 150 over the 50 years.

Now she finds out that there's a 50% chance that the world will be destroyed in 5 years. If she maintains her old spending patterns her expected utility is .5*log(1000)*50 + .5*log(1000)*5 = 82.5. Alternatively, if interest rates were 0%, she might change her plan to spend $550k/year over the next 5 years and then $50k/year subsequently (if she survives). Then her expected utility is log(5500)*5 + .5*log(500)*45 = 79.4, which is worse. In fact her expected utility is maximized by spending $182k/year over the next five years and $91k/year after that, yielding an expected utility of about 82.9, only a tiny increase over not adjusting at all. If she has to pay extra interest to time-shift consumption (either by borrowing or by forgoing investment returns) she probably just won't bother. So it seems like you need very high confidence in very short timelines before it's worth giving up the benefits of consumption smoothing.
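The arithmetic above can be checked with a short script. This is a sketch of the same toy model (base-10 log utility, a $100/yr zero point, a $5M budget, and a 50% chance of doom after five years); the grid-search bounds are arbitrary:

```python
import math

def lifetime_utility(spend_plan, floor=100):
    """Expected lifetime utility with log10 utility and a $100/yr zero point.

    spend_plan: list of (annual_spend, years, probability_of_reaching_those_years)
    """
    return sum(p * years * math.log10(spend / floor)
               for spend, years, p in spend_plan)

# No-doom baseline: $100k/yr for 50 years -> 150 units.
# Keeping $100k/yr with a 50% chance the world ends after year 5 -> 82.5.
# "Party hard": $550k/yr for 5 years, then $50k/yr if she survives -> ~79.4 (worse).

# Grid search for the optimal front-loading, with budget 5x + 45y = $5M:
candidates = ((x, (5_000_000 - 5 * x) / 45) for x in range(100_000, 300_000, 1_000))
best = max(candidates,
           key=lambda xy: lifetime_utility([(xy[0], 5, 1.0), (xy[1], 45, 0.5)]))
# best is roughly ($182k/yr now, $91k/yr later), giving an EU of about 82.9
```

Because log utility is so flat near the optimum, the expected-utility surface barely rewards front-loading, which is the point: even a coin-flip chance of doom only justifies shifting consumption forward by a factor of two or so.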

Why would you expect her to be able to diminish the probability of doom by spending her million dollars? Situations where someone can have a detectable impact on global-scale problems by spending only a million dollars are extraordinarily rare. It seems doubtful that there are even ways to spend a million dollars on decreasing AI xrisk now when timelines are measured in years (as the projects working on it do not seem to be meaningfully funding-constrained), much less if you expected the xrisk to materialize with 50% probability tomorrow (less time than it takes to e.g. get a team of researchers together).

lexande

I think it generally makes sense to try to smooth personal consumption, but that for most people I know this still implies a high savings rate at their first high-paying job.

  • As you note, many of them would like to eventually shift to a lower-paying job, reduce work hours, or retire early.
  • Even if this isn't their current plan, burnout is a major risk in many high-paying career paths and might oblige them to do so, and so there's a significant probability of worlds where the value of having saved up money during their first high-paying job is large.
  • If they're software engineers in the US they face the risk that US software engineer salaries will revert to the mean of other countries and other professional occupations. https://www.jefftk.com/p/programmers-should-plan-for-lower-pay
  • If they want but don't currently have children, then even if their income is higher later in their career, it's likely that their income-per-household-member won't be. Childcare and college costs mean they should probably be prepared to spend more per child in at least some years than they currently do on their own consumption.