I’ve thought about the potentially information-hazardous nature of this post, and was initially hesitant to ask. Here’s why I think discussing it will be net positive:
1. Only a limited number of people are in the situation where a powerful AI leader has a personal vendetta against them.
2. The people to whom this situation applies are already aware of the threat.
3. The lack of a counterargument to this threat is already leading to negative outcomes.
There is, of course, the possibility that an unrelated reader could develop an irrational fear. But they could do so more easily with a scenario that actually applies to them. Among all the topics one could be scared of, this one seems relatively safe, because most people don’t meet the premise.
I am, however, slightly worried that we are heading toward dynamics where powerful people in AI can no longer be challenged or held accountable by those around them. That may warrant its own post (written by someone other than me) to catalyze a broader conversation about unchecked power in AI and what to do about potentially misaligned human actors.