I think the defining feature of the "weak pivotal act" idea was that it should be safe due to its weakness. So any pivotal act that depends on an aligned AGI (and would fail catastrophically if the AGI is not aligned) is not weak.
Ah, that makes sense! I assumed weak just meant "isn't super sketchy from a politics point of view", but I see how, under that definition, a weak pivotal act is very hard (probably impossible) to find.
Creating a self-replicating nanobot species that reliably detects and eats only dangerous AGIs would be an INCREDIBLE feat. Much more likely that there's a mutation somewhere, the error-correction mechanisms fail, and it starts eating other stuff too.
Also, it would plausibly break a whole bunch of treaties and thereby potentially start wars.
Also, creating a nanobot species that can compete with existing life for fuel is really, really hard actually. Much harder than AGI. Probably takes at least 6 months after AGI to do something like that! (I say, 6 months after LLM AGI.)
I've seen the phrase "there are no weak pivotal acts" pretty often, but I have not been able to locate where this is explained.
The prototypical example of a strong pivotal act is "nanobots that eat GPUs" (my understanding is that this is meant to be an oversimplified example). So, for example, what would make "nanobots that eat evil AGIs" not a weak pivotal act?