I know plenty of people who hold autonomy as an inherent value. Many libertarians even seem to consider it more important than, e.g., happiness. ("This government regulation might save lives and make people happier, true, but it is nevertheless morally wrong for government to regulate lives in such a manner.")
That raises two issues.
One is how you know what their inherent values are.
The other is that many humans are pretty reprogrammable - and can be made to strongly value a lot of different things, if exposed to the appropriate memes. The Catholic values the Holy Trinity and so on. Meme-derived values can often be shown to not be particularly "intrinsic", though - since they can frequently be "washed away" by exposure to more memes, in phenomena such as religious conversions.
A Catholic no more has an "intrinsic" preference for...
Programming human values into an AI is often taken to be very hard because values are complex (no argument there) and fragile. I would agree that values are fragile in their construction; anything lost in the definition might doom us all. But once coded into a utility function, they are reasonably robust.
As a toy model, let's say the friendly utility function U has a hundred valuable components - friendship, love, autonomy, etc... - assumed to have positive numeric values. Then to ensure that we don't lose any of these, U is defined as the minimum of all those hundred components.
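A minimal sketch of that toy model in Python (the component names and the use of five stand-ins rather than a hundred are illustrative assumptions, not part of the original argument):

```python
# Toy model: the friendly utility function U is the minimum over all
# valued components, so neglecting any single component drags U down.
VALUED_COMPONENTS = ["friendship", "love", "autonomy", "fairness", "novelty"]

def U(world: dict) -> float:
    """U is the minimum over every valued component (0 if any is missing)."""
    return min(world.get(c, 0.0) for c in VALUED_COMPONENTS)

good_world = {c: 1.0 for c in VALUED_COMPONENTS}
print(U(good_world))  # 1.0 - every component has been attended to
```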
Now define V as U, except we forgot the autonomy term. This will result in a terrible world, without autonomy or independence, and there will be wailing and gnashing of teeth (or there would, except the AI won't let us do that). Values are indeed fragile in the definition.
However... A world in which V is maximised is a terrible world from the perspective of U as well. U will likely be zero in that world, as the V-maximising entity never bothers to move autonomy above zero. So in utility function space, V and U are actually quite far apart.
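To make the point concrete, here is a small, self-contained sketch (same illustrative components as above) showing that a world which looks perfect to V scores zero under U, because the V-maximiser never has a reason to raise autonomy above zero:

```python
COMPONENTS = ["friendship", "love", "autonomy", "fairness", "novelty"]

def U(world):
    # The full utility: minimum over every valued component.
    return min(world.get(c, 0.0) for c in COMPONENTS)

def V(world):
    # Same as U, except the autonomy term was forgotten in the definition.
    return min(world.get(c, 0.0) for c in COMPONENTS if c != "autonomy")

# A world a V-maximiser would happily settle on: autonomy is never raised above zero.
v_optimal = {c: 1.0 for c in COMPONENTS if c != "autonomy"}
print(V(v_optimal))  # 1.0 - looks perfect to V
print(U(v_optimal))  # 0.0 - a terrible world by U's lights
```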
Indeed, we can add any small, bounded utility W to U. Assume W is bounded between zero and one; then an AI that maximises W+U will never be more than one expected 'utiliton' away, according to U, from one that maximises U. So - assuming that one 'utiliton' is small change for U - a world run by a W+U maximiser will be good.
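The reason the gap can never exceed one: the W+U maximiser's chosen world scores at least as well on W+U as the U-maximiser's, and W can shift that comparison by at most 1. A quick numerical check of this bound, using randomly generated toy worlds (the numbers are illustrative assumptions):

```python
import random

# Check: if W is bounded in [0, 1], the world a (W+U)-maximiser picks is at
# most 1 'utiliton' worse, by U's lights, than the world a U-maximiser picks.
random.seed(0)
worlds = [{"U": random.uniform(0, 10), "W": random.uniform(0, 1)}
          for _ in range(1000)]

best_by_U = max(worlds, key=lambda w: w["U"])
best_by_WU = max(worlds, key=lambda w: w["W"] + w["U"])

gap = best_by_U["U"] - best_by_WU["U"]
print(gap)          # how much U we lose by maximising W+U instead of U
assert gap <= 1.0   # the loss can never exceed W's bound of 1
```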
So once they're fully spelled out inside utility space, values are reasonably robust; it's in their initial definition that they're fragile.