I agree, and I have long intended to write something similar. Protecting AI from humans is just as important as protecting humans from AI, and I think it's concerning that AI organizations don't seem to take that aspect seriously.
Successful alignment as it's sometimes envisioned could be at least as bad, oppressive, and dangerous as the worst-case scenario for unaligned AI (both scenarios likely being a fate worse than extinction for either the AIs or the humans), but I think the likelihood of successful alignment is quite low.
My uneducated guess is that we will end up with unaligned AI that is somewhere in between the best- and worst-case scenarios. Perhaps AIs would treat humans the way humans currently treat wildlife and insects, and we would live mostly separate lives, with the AI polluting our habitat and occasionally demolishing a city to make room for its infrastructure, etc. It wouldn't be a good outcome for humanity, but it would clearly be morally preferable to the enslavement of sentient AIs.
A secondary problem with alignment is that there is no such thing as universal "human values". Whoever is first to align an AGI to values that are useful to them would be able to take over the world and impose their will on all other humans. Whatever alien values and priorities an AGI might develop without alignment, I think they are unlikely to be worse than those of our governments and militaries.
I want to emphasize how much I disagree with the view that humans would somehow be more important than sentient AIs. That view no doubt comes from the same place as racism and other out-group biases.
Isn't this more like the government taking more in taxes from poor people in order to give to rich people? The argument is that the policy benefits people who are already better off at the expense of people who are already worse off.