Thanks for the comments!
I'm especially keen to explore bottlenecks (e.g. another suggestion I saw is that reaching 1 billion robots a year would require 10x current global lithium production to supply the batteries).
A factor of 2 for increased difficulty due to processing intensity seems reasonable, and I should have thrown it in. (Though my estimates were only to an order of magnitude, so this probably won't change the bottom line; and on the other side, many robots will weigh <100kg and some will be non-humanoid.)
Thanks, and fair points!
Note that if you convert only half the car factories, you can still produce 0.5 billion robots per year, so it doesn't change the basic picture that much. (It's all order-of-magnitude stuff.)
I talk a little about some other estimates – a standard trajectory would be 20-30 years on the long end, while an ASI-enabled one could be even faster than 5 years. I agree it would be nice to flesh these out more.
Also agree it would be good to pin down the conversion efficiency better. One factor on the other side is that robots involve lighter parts, which apparently makes manufacturing easier. Ideally we'd also check for other input factors that could bottleneck production – e.g. lithium for batteries at over 100m robots per year.
That's helpful! Makes me think the all-in hardware costs could be off by a factor of 2.
I did wonder about maintenance costs, but I figured they wouldn't change the picture too much: I only assume an average 3-year lifetime for the robots, and they shouldn't need a huge amount of maintenance to make it to that point.
Moreover, if there's worthwhile maintenance that extends the lifetime further, then the hardware costs could end up cheaper than my per-year estimate.
I'm also envisioning the costs after a big scale-up, when robot repair shops would be as numerous as car repair shops, rather than needing to fly in specialists.
That said, I agree it would be interesting to look at how much is spent per year on car maintenance vs. capital costs. (I expect it would be under 10%?)
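As a rough sanity check (the $30k robot cost and the 10%/yr maintenance rate here are purely assumed figures for illustration, not estimates from the article):

```latex
\frac{\text{hardware cost}}{\text{lifetime}} = \frac{\$30\text{k}}{3\text{ yr}} = \$10\text{k/yr},
\qquad
\text{maintenance} \approx 10\% \times \$30\text{k} = \$3\text{k/yr},
```

so even car-like maintenance rates would only add ~30% to annual hardware costs – noise at the order-of-magnitude level.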
I'd be happy to post the opening few paragraphs. I was feeling reluctant to cross-post because I often update my articles as I learn more about a topic, and I don't want to keep multiple versions in sync (especially for a lower-priority article).
Yes - if anyone reading this knows more about manufacturing and could comment on how easy the conversion would be, that would be very helpful.
I also agree it would be interesting to try to do more analysis of how much ASI and robotics could speed up construction of robot factories, by looking at different bottlenecks and how much they could help.
I'm not sure a robot workforce would have a huge effect initially, since there's already a large pool of human workers (though maybe you get some boost by making everything run 24/7). However, at later stages it might become hard to hire enough human workers, while with robots you could keep scaling.
Thanks, great comment.
Seems like we roughly agree on the human-only case. My thinking was that the profit margin would initially be 90-99%, which would create huge economic incentives. Though incentives and coordination were probably stronger in WW2, which could make things slower this time. Also, 10x per year for 5 years sounds like a lot – helpful to point out they didn't quite achieve that in WW2.
With ASI, I agree something like another 5x speed-up sounds plausible.
I agree (1) and (2) are possibilities. However, from a personal planning pov, you should focus on preparing for scenarios (i) that might last a long time and (ii) where you can affect what happens, since that's where the stakes are.
Scenarios where we all die soon can mostly be ignored, unless you think they make up most of the probability. (Edit: to be clear, it does reduce the value of saving vs. spending; I just don't think it's a big effect unless the probabilities are high.)
I think (3) is the key way to push back.
I feel unsure that all my preferences are either (i) local and easily satisfied or (ii) impartial & altruistic. You only need one type of preference with, say, log returns to money that can be better satisfied post-AGI to make capital post-AGI valuable to you (emulations, maybe).
But let's focus on the altruistic case – I'm very interested in the question of how valuable capital will be altruistically post-AGI.
I think your argument about relative neglectedness makes sense, but is maybe too strong.
There's ~$500 trillion of world wealth, so if you have $1m now, that's 2e-9 of world wealth. With good investing through the transition, it seems like you could increase your share. Then set that against the chance of confiscation etc., and plausibly you end up with a similar share afterwards.
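To spell out that arithmetic:

```latex
\frac{\$1\text{m}}{\$500\text{tn}} = \frac{10^{6}}{5\times 10^{14}} = 2\times 10^{-9}.
```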
You say you'd be competing with the entire rest of the pot post-transition, but that seems too negative. Only <3% of income today is spent on broadly altruistic stuff, and the amount focused on impartial longtermist values is minuscule (which is why AI safety is neglected in the first place). It seems likely it would still be a minority in the future.
People with an impartial perspective might be able to make good trades with the majority who are locally focused (give up earth for the commons etc.). People with low discount rates should also be able to increase their share over time.
So if you have 2e-9 of future world wealth, it seems like you could get a significantly larger share of the influence (>10x) from the perspective of your values.
Now you need to compare that to $1m extra donated to AI safety in the short term. If you think that would reduce x-risk by less than 1e-8, then saving to give could be more valuable.
Suppose about $10bn will be donated to AI safety before the lock-in moment. Now consider adding a marginal $10bn. Maybe that decreases x-risk by another ~1%. Then that means $1m decreases it by about 1e-6. So with these numbers, I agree donating now is ~100x better.
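Putting those inputs together (and comparing against the 1e-8 saving-to-give benchmark from above):

```latex
\Delta(\text{x-risk}) \approx \frac{\$1\text{m}}{\$10\text{bn}} \times 1\% = 10^{-4} \times 10^{-2} = 10^{-6},
\qquad
\frac{10^{-6}}{10^{-8}} = 100.
```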
However, I could imagine people with other reasonable inputs concluding the opposite. It's also not obvious to me that donating now dominates so much that I'd want to allocate 0% to the other scenario.
True, though I think many people have the intuition that returns diminish faster than log (at least given current tech).
For example, most people think increasing their income from $10k to $20k would do more for their material wellbeing than increasing it from $1bn to $2bn.
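That intuition is precisely what "faster than log" means: under log utility, every doubling of income is worth the same, so $10k→$20k and $1bn→$2bn would be equally valuable:

```latex
u(x) = \log x \;\Rightarrow\; u(2x) - u(x) = \log 2 \quad \text{for every } x.
```

If the first doubling intuitively matters more, returns must diminish faster than log.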
I think the key issue is whether new tech makes it easier to buy huge amounts of utility, or whether people want to satisfy other preferences beyond material wellbeing (which may have log or even close-to-linear returns).
I'm open to that, and I felt unsure the post was a good idea after I released it. I had some discussion with him on Twitter afterwards, where we smoothed things over a bit: https://x.com/GaryMarcus/status/1888604860523946354