nshepperd comments on Not for the Sake of Pleasure Alone - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Indeed they are not necessarily the same thing, which is why my utility function should not value that which I "want" but that which I "like"! The top-level post all but concludes this. The conclusion the author draws just does not follow from what came before. The correct conclusion is that we may still be able to "just" program an AI to maximize pleasure. What we "want" may be complex, but what we "like" may be simple. In fact, that would be better than programming an AI to make the world into what we "want" but not necessarily "like".
If you mean that others' mental states matter just as much, then I agree (though this distracts from the point of the experience machine hypothetical). Nothing else could possibly matter.
Why's that?
A priori, nothing matters. But sentient beings cannot help but make value judgements regarding some of their mental states. This is why the quality of mental states matters.
Wanting something out there in the world to be some way, regardless of whether anyone will ever actually experience it, is different. A want is a proposition about reality whose apparent falsehood makes you feel bad. Why should we care about arbitrary propositions being true or false?
You haven't read or paid much attention to the metaethics sequence yet, have you? Or do you simply disagree with pretty much all the major points of the first half of it?
Also relevant: Joy in the merely real
I remember starting it and putting it away because, yes, I disagreed with so many things, especially on the present subject: I couldn't find any arguments for the insistence on satisfying wants rather than improving experience. I'll read it in full next week.