As my timelines have been shortening, I've been rethinking my priorities. As have many of my colleagues. It occurs to us that there are probably general considerations that should cause us to weight toward short-timelines plans or long-timelines plans (besides, of course, the relative probabilities of short and long timelines). For example, if timelines are short, then maybe AI safety is more neglected, and therefore higher-EV for me to work on, so maybe I should be systematically more inclined to act as if timelines are short.
We are at this point very unsure what the most important considerations are, and how they weigh against each other. So I'm polling the hive mind!
Lemme sketch out a model here. We start with all the people who have influence on the direction of AI. We then break out two subgroups - US and Asia - and hypothesize that the total influence of the US goes down over time, and the total influence of Asia goes up over time. Then we observe that you are in the US group, so this bodes poorly for your own personal influence. However, your own influence is small, which means that your contribution to the US total is small. This means your own influence can vary more-or-less independently of the US total; a delta in your influence is not large enough to cause a significant delta in the US total. Now, if there were some reason to think that your influence were strongly correlated with the US total, then the US total would matter. And there are certainly things we could think of which might make that true, but "US total influence" does not seem likely to be a stronger predictor of "Daniel's influence" than any of 50 other variables we could think of. The full pool of US AI researchers/influencers does not seem like all that great a reference class for Daniel Kokotajlo - and as long as your own influence is small relative to the total, a reference class is basically all it is.
An analogy: GDP is only very weakly correlated with my own income. If I had dramatically more wealth - like hundreds of millions or billions - then my own fortunes would probably become more tied to GDP. But as it is, using GDP to predict my income is effectively treating the whole US population as a reference class for me, and it's not a very good reference class.
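To see the "small component of a big total" point concretely, here's a toy simulation (all numbers and names are mine, made up purely for illustration): when one member's influence is a small share of an aggregate of N independently-varying members, the correlation between that member and the total comes out to roughly 1/sqrt(N).

```python
import numpy as np

# Toy model: N people, each with an independently-varying "influence" score.
# We ask how well the group total predicts any single member.
rng = np.random.default_rng(0)
n_people, n_worlds = 100, 10_000

# Rows are people, columns are hypothetical worlds; each person's influence
# is drawn independently in each world (heavy-tailed, like wealth).
influence = rng.lognormal(mean=0.0, sigma=1.0, size=(n_people, n_worlds))

total = influence.sum(axis=0)   # the group's total influence in each world
one_person = influence[0]       # a single member's influence in each world

# For independent members, corr(person, total) is about 1/sqrt(N) = 0.1 here;
# crank n_people up and the total becomes useless as a predictor.
print(np.corrcoef(one_person, total)[0, 1])
```

The total only starts to track the individual if you build in an explicit common factor, which is exactly the "some reason to think your influence is strongly correlated with the US total" caveat above.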
Anyway, the more interesting part...
I apparently have very different models of how the people working on AI are likely to shift over time. If everything were primarily resource-constrained, then I'd largely agree with your predictions. But even going by current trends, algorithmic/architectural improvements matter at least as much as raw resources. Giant organizations - especially governments - are not good at letting lots of people try their clever ideas and then quickly integrating the successful tricks into the main product. Big organizations/governments are all about coordinating everyone around one main plan, with the plan itself subject to lots of political negotiation and compromise, and then executing that plan. That's good for deploying lots of resources, but bad for rapid innovation.
Along similar lines, I don't think the primary world seat of innovation is going to shift from the US to China any time soon. China has the advantage in terms of raw population, but it's only a factor-of-4 advantage; really not that dramatic a difference in the scheme of things. On the other hand, from an outside view, Western culture seems dramatically and unambiguously superior in terms of producing innovation. China just doesn't produce breakthrough research nearly as often. Twenty years ago that could easily have been attributed to less overall wealth, but that explanation becomes less and less plausible over time - maybe I'm just not reading the right news sources, but China does not actually seem to be catching up in this regard. (That said, this is all mainly based on my own intuitions, and I could imagine data which would change my mind.)
That said, I also don't think a US/China shift is all that relevant here either way; it's only weakly correlated with the influence of this particular community. This community is a relatively small share of US AI work, so a large-scale shift would be dominated by the rest of the field, and the rationalist community in particular has many channels by which to grow or shrink in influence independently of the US AI community. It's essentially the same argument I made about your influence earlier, applied this time to the community as a whole.
I do think "various other things might happen that effectively impose a discount rate" is highly relevant here. That does cut both ways, though: where there's a discount rate, there's a rate of return on investment, and the big question is whether rationalists have a systematic advantage in that game.
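One stylized way to frame that trade-off (the symbols here are mine, introduced just for illustration, not anything from the discussion above): suppose outside events impose a hazard rate d, so that work deployed at time t still matters only with probability e^(-d*t), while accumulated resources and influence compound at rate r. Then the expected payoff of deploying at time t scales as e^((r-d)*t), so patience wins exactly when r > d; that is, when the community's rate of return on investment beats the effective discount rate, which is precisely the "systematic advantage" question.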