I could find little trace of thinking about these problems on LW.
I tend to think that's because it doesn't happen that successfully outside of LW.
A reminder of what the post states:
..."This argument is based on a pathway toward AGI. That is, while it will focus on the endpoint, where an AGI is created, it is likely that issues around resource distribution and relative power shifts within the international system caused by AI will come well before the development of AGI. The reason for focussing on the end point is the assumption that it would create an event horizon where the state that develops AGI achieves runaway power over its rivals economically, culturally and militarily. But many po
The scenario only needs two states in competition with each other to work. The entire Cold War and its associated nuclear risks were driven by a bipolar world order. Therefore, by your own metric of three powers capable of this, the scenario is realistic. By three powers, I assume you mean China, the US and the UK? Or were you perhaps thinking of China, the US and the EU? The latter doesn't have nuclear weapons because it doesn't have an army, unless you were including the French nuclear arsenal in your calculation?
"By endgame I mean a single winner ...
I'm not convinced this line of thinking works from the perspective of the structure of the international system. For example, not once are international security concerns mentioned in this post.
My post here draws out some fundamental flaws in this thinking:
https://www.lesswrong.com/posts/dKFRinvMAHwvRmnzb/issues-with-uneven-ai-resource-distribution
I'll take the point about misuse not being clear, and I've made a three-word edit to the text to cover your point.
However, I do also state prior to this that:
"This argument is based on a pathway toward AGI. That is, while it will focus on the endpoint, where an AGI is created, it is likely that issues around resource distribution and relative power shifts within the international system caused by AI will come well before the development of AGI."
If anything, your post above bolsters my argument. If states do not share resources they'll be in co...
I've covered that; did you read it?
"The lack of resource distribution has a twofold problem:
Your argument is that only certain states should develop AGI, and while that makes sense on the one hand, you're not accounting for how others will react to the non-diffusion of AI. I'm not arguing for the wider distribution of AI; rather, I'm pointing out how others w...
Regarding AGI R&D strategy and coordination questions: I've not seen one realistic proposal by "leading figures" in the field or by AI safety organisations. Beyond these people and organisations, I've seen even less thinking about it at all. Take the complete collapse of movement in the UN GGE on LAWS, which covers only a sliver of possible AI development and use: that should be the yardstick for people thinking about AGI R&D strategy and coordination, and it has mostly failed.