I think associating words like "friendship, love, autonomy" with elements of a toy model is wrong
Well, sure. All models are wrong. But models which are so constricted are likely to be more wrong than others. If someone tries to explain a math problem with "Suppose you have two apples, and someone gives you two more apples", then it's not always helpful to insist that they tell you how ripe each of the apples is, or what trees they were picked from.
If you're trying to make the point that something about the very concept of a multicomponent utility function is self-contradictory, then you should say so more plainly. If you just don't like some of these examples of components, then make your own preferred substitutions and see if his thesis starts to make sense.
And on that last point: "It doesn't make sense to me" does not imply "it doesn't make sense". "I don't understand your model; could you explain a few things?" would have been more polite, and less inaccurate, than "could have only literary merit and no basis".
Programming human values into an AI is often taken to be very hard because values are complex (no argument there) and fragile. I would agree that values are fragile in their construction: anything lost in the definition might doom us all. But once coded into a utility function, they are reasonably robust.
As a toy model, let's say the friendly utility function U has a hundred valuable components - friendship, love, autonomy, and so on - each assumed to take a positive numeric value. Then, to ensure that we don't lose any of these, U is defined as the minimum of those hundred components.
Now define V as U, except with the autonomy term forgotten. Maximising V will result in a terrible world, without autonomy or independence, and there will be wailing and gnashing of teeth (or there would be, except the AI won't let us do that). Values are indeed fragile in the definition.
However... A world in which V is maximised is a terrible world from the perspective of U as well. U will likely be zero in that world, as the V-maximising entity never bothers to move autonomy above zero. So in utility function space, V and U are actually quite far apart.
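This behaviour of the toy model is easy to check directly. Here is a minimal sketch, with three made-up components standing in for the hundred; the component names, world, and numbers are illustrative assumptions, not anything from the original post:

```python
# Toy model: a world assigns a numeric value to each component,
# and utility is the minimum over a chosen set of components.

def make_min_utility(components):
    """Return a utility function: the minimum over the named components."""
    return lambda world: min(world[c] for c in components)

components = ["friendship", "love", "autonomy"]  # stand-in for the hundred
U = make_min_utility(components)                  # the full utility
V = make_min_utility([c for c in components if c != "autonomy"])  # forgot autonomy

# A V-maximiser has no reason to spend resources on autonomy,
# so a V-optimal world can leave it at zero (hypothetical example):
v_world = {"friendship": 10, "love": 10, "autonomy": 0}

print(V(v_world))  # → 10: V thinks this world is fine
print(U(v_world))  # → 0: U thinks it is a disaster
```

The point the sketch makes concrete: a world that scores maximally under V can score zero under U, so V and U sit far apart in utility-function space despite differing in only one of a hundred terms.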
Indeed, we can add any small, bounded utility W to U. Assume W is bounded between zero and one; then an AI that maximises W+U will never end up more than one expected 'utiliton' away, according to U, from one that maximises U alone. So - assuming that one 'utiliton' is small change for U - a world run by a W+U maximiser will be good.
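The bound follows from a one-line argument: the W+U-optimal world w satisfies U(w) + W(w) ≥ max U, so U(w) ≥ max U - W(w) ≥ max U - 1. A quick numerical sketch, with randomly generated (U, W) pairs standing in for worlds (all values here are made up for illustration):

```python
# Check the bound: with W in [0, 1], the world that maximises W+U
# has a U-value within 1 of the best achievable U.
import random

random.seed(0)
# Each "world" is a hypothetical (U-value, W-value) pair.
worlds = [(random.uniform(0, 100), random.uniform(0, 1)) for _ in range(1000)]

best_U = max(u for u, w in worlds)                        # U-maximiser's score
chosen_U, chosen_W = max(worlds, key=lambda uw: uw[0] + uw[1])  # W+U-maximiser's pick

# The W+U maximiser loses at most one utiliton of U:
assert best_U - chosen_U <= 1.0
print(best_U - chosen_U)
```

Whatever random worlds are generated, the assertion cannot fail, because any world with U more than one utiliton below the optimum is beaten (under W+U) by the U-optimal world itself.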
So once they're fully spelled out inside utility space, values are reasonably robust; it's in their initial definition that they're fragile.