I agree, and I have long intended to write something similar. Protecting AI from humans is just as important as protecting humans from AI, and I think it's concerning that AI organizations don't seem to take that aspect seriously.
Successful alignment, as it's sometimes envisioned, could be at least as bad, oppressive, and dangerous as the worst-case scenario for unaligned AI (both scenarios are likely a fate worse than extinction for either the AIs or humans), but I think the likelihood of successful alignment is quite low.
My uneducated guess is that we will end...
Isn't this more like the government taxing poor people more heavily in order to give to rich people? The argument is that the policy benefits people who are already better off at the expense of people who are already worse off.