Programming human values into an AI is often taken to be very hard because values are complex (no argument there) and fragile. I agree that values are fragile in their construction; anything lost in the definition might doom us all. But once coded into a utility function, they are reasonably robust.
As a toy model, let's say the friendly utility function U has a hundred valuable components - friendship, love, autonomy, etc... - assumed to have positive numeric values. Then to ensure that we don't lose any of these, U is defined as the minimum of all those hundred components.
Now define V as U, except we forgot the autonomy term. This will result in a terrible world, without autonomy or independence, and there will be wailing and gnashing of teeth (or there would, except the AI won't let us do that). Values are indeed fragile in the definition.
However... A world in which V is maximised is a terrible world from the perspective of U as well. U will likely be zero in that world, as the V-maximising entity never bothers to move autonomy above zero. So in utility function space, V and U are actually quite far apart.
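Here is a minimal sketch of the toy model in Python, with only three of the hundred components written out and with made-up scores; the component names and numbers are purely illustrative, not part of the original argument:

```python
# Toy model sketch (illustrative only): U is the minimum over a set of
# positively-valued components; V is U with the autonomy term forgotten.

def U(world):
    """Friendly utility: the minimum over all valued components."""
    return min(world.values())

def V(world):
    """Flawed utility: identical to U, except the autonomy term was left out."""
    return min(value for name, value in world.items() if name != "autonomy")

# A world a V-maximiser might settle on: everything V cares about is pushed
# high, but autonomy is never raised above zero.
v_optimal_world = {"friendship": 10.0, "love": 10.0, "autonomy": 0.0}

print(V(v_optimal_world))  # 10.0 -- looks great to the V-maximiser
print(U(v_optimal_world))  # 0.0  -- a terrible world by U's lights
```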
Indeed, we can add any small, bounded utility W to U. Assume W is bounded between zero and one; then an AI that maximises W+U will never be more than one expected 'utiliton' away, according to U, from one that maximises U. So - assuming that one 'utiliton' is small change for U - a world run by a W+U maximiser will be good.
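To make the one-utiliton claim explicit, here is a short derivation; x* stands for whatever the W+U maximiser brings about and x_U for what a pure U maximiser would bring about. By the optimality of x* for W+U:

W(x*) + U(x*) ≥ W(x_U) + U(x_U) ≥ U(x_U)

and so, since W lies between zero and one:

U(x*) ≥ U(x_U) - W(x*) ≥ U(x_U) - 1

The same argument goes through with expected values if the outcomes are uncertain.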
So once they're fully spelled out inside utility space, values are reasonably robust; it's in their initial definition that they're fragile.
That is far from a logical conclusion. Just because something isn't explicitly being maximised doesn't mean it isn't being produced in large quantities.
For example, the modern world is not actively maximising CO2 production - but, nonetheless, it makes lots of CO2.
We have hundreds of instrumental values - and if one of them is not encoded as an ultimate preference, it will make no difference at all, since it was never an ultimate preference in the first place. Autonomy is likely to be one of those. Humans don't value autonomy for its own sake; rather, it is valued instrumentally, since it is one of the many things that lets humans achieve their actual goals.
The problem arises when people try to wire in instrumental values. The point of instrumental values is that they can change depending on circumstances - unless they are foolishly wired in as ultimate values, in which case you get an inflexible system that can't adapt competently to environmental changes.
I know plenty of people who treat autonomy as an inherent value. Many libertarians even seem to consider it more important than, e.g., happiness. ("This government regulation might save lives and make people happier, true, but it is nevertheless morally wrong for government to regulate lives in such a manner.")