I've seen the phrase "there are no weak pivotal acts" pretty often, but I have not been able to locate where this is explained.

The prototypical example of a strong pivotal act is "nanobots that eat GPUs" (my understanding is that this is meant to be an oversimplified example). So for example, what would make "nanobots that eat evil AGIs" not a weak pivotal act:

  1. It actually is a weak pivotal act, and I'm right and everyone else is wrong. (If only, right?)
  2. It is not weak because weakness means being passive in some way, and detecting and eating evil AGIs is too active.
  3. Even if the nanobots only eat evil AGIs, it's not weak because it's technically illegal. (Note that I think other AI labs would actually like the evil-AGI-eating nanobots, because it means they no longer need to worry about safety. They can create any type of AI other than evil AGI, and if they accidentally make one, the nanobots eat it before it kills them.)
  4. Even an aligned superintelligence couldn't safely and accurately accomplish this act (for example, the nanobots might accidentally eat non-AGIs).
  5. Something else?

I think the defining feature of the "weak pivotal act" idea is that it should be safe because of its weakness. So any pivotal act that depends on an aligned AGI (and would fail catastrophically if the AGI is not aligned) is not weak.

Ah, that makes sense! I assumed "weak" just meant "isn't super sketchy from a politics point of view", but I see how, with that definition, it is very hard (probably impossible).

Creating a self-replicating nanobot species that reliably detects and eats only dangerous AGIs would be an INCREDIBLE feat. It's much more likely that there's a mutation somewhere, the error-correction mechanisms fail, and it starts eating other stuff too.

Also, it would plausibly break a whole bunch of treaties and thereby potentially start wars.

Also, creating a nanobot species that can compete with existing life for fuel is really, really hard. Much harder than AGI. It would probably take at least 6 months after AGI to do something like that! (6 months after LLM AGI, I'd say.)