  1. I factually disagree with those fabricated numbers. AI leaders aren't stupid. Solving the control problem is in their interest, so massive efforts are going into AI safety. Limiting their own AI-amplified capabilities is not in their interest, so no effort goes into that.
  2. Progress in AI is incremental (albeit rapidly accelerating), so even before achieving a "singularity", an AI leader can use their latest models to take powerful actions in the world.
  3. "Your problem doesn't matter because we're all gonna die" does not meaningfully engage with the question in the first place.

I’ve thought about the potentially information-hazardous nature of the post and was hesitant to ask at first. Here’s why I think discussing it will be net positive:

1. Only a limited number of people are in the situation where a powerful AI leader has a personal vendetta against them. 

2. The people to whom this situation applies are already aware of the threat. 

3. The lack of an available counterargument to this threat is leading to negative outcomes.

There is, of course, the possibility that an unrelated reader could develop an irrational fear. But they could more plausibly do so from a scenario that actually applies to them. Among all the topics one could be scared of, this one seems relatively safe, because most people don’t meet the premise.

I am, however, slightly worried that we are headed towards dynamics where powerful people in AI can no longer be challenged or held accountable by those around them. This may warrant a separate post (written by people other than me) to catalyze a broader discussion about unchecked power in AI and what to do about potentially misaligned human actors.