I appreciate the historical research, but since all of your examples are interstate conflicts, we have strayed far from the original context of revolutionary calculus.
The key differentiator is that the incumbent's army is not "vanquished" by a successful revolution; it is absorbed after an internal transfer of political legitimacy away from the incumbent and toward the revolution itself.
This process can be bloodless. The fall of the GDR comes to mind: mass demonstrations generated their own political legitimacy through sheer numbers, prompting restraint from security forces even though, on paper, those forces (in aggregate) outnumbered the demonstrators, outgunned them, and outmatched them on virtually every tactically relevant metric.
Returning to the original point of this post, this ultimately human mechanism whereby security forces hesitate when faced with a popular uprising of sufficient size (e.g. "there are so many, and they're not scared, maybe they're right", or "maybe some of my neighbours/friends/relatives agree with them or have joined in") can cease to exist in a regime where advanced AI is in charge of securing the state.
You and everybody else lose the two levers you could historically use to protest the worst indignities and abuses of power the state can subject you to: the lever of withholding your labour in a general strike (because human labour stops being a factor of production), and the lever of participating in a political revolution (because no critical mass of people can overwhelm the system, which loses nothing if you die). The disempowerment of the people is a predictable consequence of the current trajectory.
That's true, but it's in some ways as much of a tautology as, "The team that wins the game is the one that scores the most points."
That's not a tautology, and indeed there are games where the opposite is true, such as golf.
"I don't understand!", said commander Alice as her palace got surrounded. "There were so many more of us than there were of you". "Were there really, or was the number on your spreadsheet a fiction?", said Bob. "If they don't show up to the fight and switch over to my side en masse, why were they included in your troop count? In what way were they yours?"
Here's another example. At first glance, it looks like black should win the game easily, given the apparent points differential at the start. But what meaning does "point superiority" have when, once the game begins, black's pieces don't respond to commands and start switching colour in a cascading fashion, resulting in white's victory?
Oh yeah? I'm going to... try to convince the government to pass a law to stop you, and then call the police to sort you out! ... What do you mean you "already took care of them"?
I didn't say that Group B is bad, but I get where you're coming from. Indeed, "PETA is like the German Nazi Party in terms of their demonstrated commitment to animal welfare" is technically correct while also being misleading.
The strength of an analogy depends on how many crucial connections there are between the elements being compared.
What puts AI researchers closer to Leninism than other forms of paternalism is the vanguardist self-conception, the utopian vision, and the dismissal of criticism on the grounds of a teleological view of history driving inevitable outcomes. Beyond that, other forms of paternalism are distinguished from Leninism and AI research by their socially accepted legitimacy.
What pattern-matches it away from Leninism is, e.g., the specific ideological content, but the structural parallels are still oddly conspicuous, just like "your mom" being invoked in an ontological argument.
Surprisingly, AI researchers are like Leninists in a number of important ways.
In their story, they're part of the vanguard working to bring about the utopia.
The complexity inherent in their project justifies their special status and legitimizes their disregard of the people's concerns, which are dismissed as unenlightened.
Detractors are framed as too unsophisticated to understand how unpopular or painful measures are actually in their long-term interest, or a necessary consequence of a teleological inevitability underlying all of history.
I see what you mean, though the fact that those researchers wish to impose this outcome on everybody else without their consent is still basically dictatorial. It would be just as dictatorial if members of some political party started persecuting their opposition in service of their leader without themselves aspiring to take his position.
In both cases, those doing the bidding aspire to a place under the sun in the system they're trying to bring about.
I suppose that one quirk of the AI researchers might be the belief that everywhere becomes a place under the sun, though I doubt that any of them believe their role in bringing it about won't confer some special privilege or elite status on them, perhaps as members of a new priestly class. Then again, we've seen political movements of this type, with some pigs famously being more equal than others.
The 100k to 10M range is populated by abstract quantities—I think that for a measure to be useful here, it has to be imaginable.
Avogadro's number has the benefit of historical precedent as a way of describing quantities, and the coincidental property that, used as a denominator, it lets us express present-day training runs with numbers of the kind we see in the real world (outside of screens or print). It too might cease to be useful once exponents become necessary to describe training runs in terms of mol FLOPs.
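To make the unit concrete, here is a minimal sketch of the conversion: just divide a compute figure by Avogadro's number. The specific run sizes below are illustrative assumptions on my part, not sourced estimates.

```python
# Sketch of the "mol FLOPs" idea: dividing a training-run compute figure by
# Avogadro's number lands the result in an everyday numeric range.
# The run sizes below are illustrative assumptions, not official figures.

AVOGADRO = 6.02214076e23  # Avogadro's number (exact under the 2019 SI definition)

illustrative_runs = {
    "roughly GPT-3-scale run (assumed)": 3e23,
    "roughly frontier-scale run, mid-2020s (assumed)": 3e25,
}

for name, flops in illustrative_runs.items():
    print(f"{name}: {flops:.0e} FLOP = {flops / AVOGADRO:.2f} mol FLOP")
```

Under these assumed figures, present-day runs come out to roughly 0.5 to 50 mol FLOP, i.e. numbers you could count on your fingers or see on a price tag.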
This is interesting.
I do want to push back a little on:
entities that are out to get you will target those who signal suffering less.
I see the intuition here. I see it in someone calling in sick, in disability tax credits, in DEI (where "privilege" is something like the inverse of suffering), in draft evasion, in Kanye's apology.
But it's not always true: consider the depressed psychiatric ward inpatient who wants to get out due to the crushing lack of slack. Signalling suffering to the psychiatrist would be counterproductive here.
Where is the fault line?
I found something interesting in NVIDIA's 2026 CES keynote: their Cosmos physical reasoning model apparently referring to itself/the car as "the ego" in a self-driving test. See here.