
DanielLC comments on Non-personal preferences of never-existed people - Less Wrong Discussion

12 Post author: Stuart_Armstrong 10 March 2011 07:54PM




Comment author: DanielLC 11 March 2011 06:30:50AM 3 points

I'm a classical utilitarian, so I don't have this problem.

If I were to accept preference utilitarianism, I'd say that fulfilled preferences are worth utility, and by bringing them into being I'd allow them to have fulfilled preferences.

Of course, I'd also say that you should lock people in small, brightly lit spaces to make them prefer big, empty, dark spaces, like most of the universe. Then they'd have really fulfilled preferences. Perhaps I just don't understand preference utilitarianism.

Comment author: atucker 12 March 2011 01:16:40AM 1 point

In general, I think that most desires aren't fulfilled on a viscerally emotional level by the mere existence of something so much as by actually receiving it. I'm not nearly as fulfilled by ice cream's existence as I am when I'm eating it.

I don't think those people would prefer having their preferences changed in that way.

Comment author: DanielLC 17 April 2011 06:30:39AM 0 points

If you mean they have to get the emotion of a preference being fulfilled, isn't that happiness?

Comment author: Stuart_Armstrong 11 March 2011 10:45:30AM 1 point

Care to specify the utility function that you claim to follow? :-)

Comment author: DanielLC 12 March 2011 05:50:44PM 0 points

Maximize pleasure minus pain.

Comment author: Stuart_Armstrong 16 March 2011 03:52:55PM 4 points

Now I have two undefined terms, rather than one.

I'm not trying to be a sophist here; I'm just pointing out that "classical utilitarians" are following a complicated, mostly unspecified utility function. This is OK! There is nothing wrong with it.

But there's also nothing wrong with having a different, complicated utility function that captures more of your values. Classical utilitarians do not have some special utility function, selected by some abstract simplicity criterion; they're in there with the rest of us (as long as we are utilitarians of some type).

Comment author: endoself 16 March 2011 07:41:40PM 2 points

Thank you for showing me this!

Comment author: Stuart_Armstrong 16 March 2011 08:26:09PM 0 points

Cheers :-)

Comment author: DanielLC 17 April 2011 06:33:06AM 0 points

Most people's ethics are based on their desires. People's desires are based on what makes them happy. That's as far down as it goes.

A somewhat simplistic definition of happiness is positive reinforcement. If you alter your preferences towards what's happening now, you're happy. If you alter them away, you're sad.

Comment author: Stuart_Armstrong 19 April 2011 03:39:12PM 0 points

A utility function is quantitative, not qualitative.

How would you go about transforming these vague statements into a precise mathematical definition?

(I'll grant you "black box rights": you can use terms - anger, doubt, etc. - that humans can understand, without having to define them mathematically. So if you come up with a scale of anger with generally understandable anecdotes attached to each level, that will be enough to specify the "anger" term in your overall utility function, which we will need when we start talking quantitatively about trading anger off against pain, love, pleasure, embarrassment...) Indirect ways of measuring utility - "utility is money" being the most trivial - are also valid if you don't want to wade into the mess of human psychology, but they come with their own drawbacks (e.g. confusing instrumental with terminal goals).

Comment author: DanielLC 19 April 2011 07:53:09PM 0 points

Utility is the dot product of the derivative of desires and the observations. Desires are what you attempt to make happen.

If you start trying to make what's currently happening happen more often, then you're happy.
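[Editor's note: read literally, the definition above can be sketched as a toy computation. This is an illustrative gloss, not a model DanielLC specifies; the feature vectors, weights, and numbers are invented for the example.]

```python
def utility(desires_before, desires_after, observation):
    """Toy gloss of 'utility = dot product of the derivative of desires
    and the observations': take the change in desire weights over world
    features and dot it with the currently observed feature vector."""
    d_desire = [after - before
                for before, after in zip(desires_before, desires_after)]
    return sum(d * o for d, o in zip(d_desire, observation))

# Hypothetical features: [in a big dark empty space, eating ice cream]
observation = [1.0, 0.0]  # currently in a big dark empty space

# Preferences shift toward what is observed -> positive ("happy")
print(utility([0.2, 0.8], [0.5, 0.8], observation))  # 0.3
# Preferences shift away from it -> negative ("sad")
print(utility([0.5, 0.8], [0.2, 0.8], observation))  # -0.3
```

On this gloss, "trying to make what's currently happening happen more often" means the desire weights move toward the observed state, so the dot product comes out positive - happiness - and moving away makes it negative.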

Comment author: atucker 12 March 2011 01:19:21AM 0 points

I don't think most utilitarians claim to follow (or even know) their utility function so much as assert that utility maximization is the proper way to resolve moral conflicts.

Kind of like how physicists claim that there should be a theory of everything without actually knowing what it is.

Comment author: Stuart_Armstrong 16 March 2011 03:59:00PM 1 point

I perfectly agree that utility maximisation is indeed the proper way to resolve common moral conflicts.

But utility functions can be as complex as you need them to be! Saying you have a utility function barely constrains you at all. Yet sometimes total utilitarians like to claim that their version is better because it is "simpler" or "more intuitive".

First of all, simplicity is not a virtue comparable with, say, human lives or happiness; secondly, I have different intuitions from them; and thirdly, their actual utility function, if it were fully specified, would be unbelievably complex anyway.

I don't want to pour important moral insights down the drain based on specious simplicity arguments.