Bayeslisk comments on Open thread, August 5-11, 2013 - Less Wrong Discussion

Post author: David_Gerard 05 August 2013 06:50AM

Comment author: Bayeslisk 05 August 2013 09:35:15PM 1 point [-]

Just curious: has anyone explored the idea of utility functions as vectors, and then extended this to the idea of a normalized utility function dot product? Because having thought about it for a long while, and remembering after reading a few things today, I'm utterly convinced that the happiness of some people ought to count negatively.
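The "normalized utility function dot product" mentioned here can be sketched concretely: represent each agent's utility function as a vector of weights over a shared list of possible outcomes, scale each vector to unit length, and take the dot product (cosine similarity). This is a minimal illustrative sketch; the vectors and function names are assumptions, not anything specified in the thread.

```python
# Sketch: utility functions as vectors over a shared outcome space,
# compared via normalized dot product (cosine similarity).
import math

def normalize(u):
    """Scale a utility vector to unit length (zero vectors stay zero)."""
    norm = math.sqrt(sum(x * x for x in u))
    return [x / norm for x in u] if norm else u

def utility_alignment(u, v):
    """Cosine similarity of two utility vectors: +1 identical, -1 opposed."""
    return sum(a * b for a, b in zip(normalize(u), normalize(v)))

alice = [1.0, 2.0, -1.0]   # made-up weights over three shared outcomes
bob   = [-1.0, -2.0, 1.0]  # exactly opposite preferences

print(utility_alignment(alice, alice))  # ≈ 1.0
print(utility_alignment(alice, bob))    # ≈ -1.0
```

On this reading, +1 means two agents want exactly the same things, 0 means unrelated preferences, and -1 means diametrically opposed preferences; a threshold on this number is one way to make "whose happiness counts negatively" precise.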

Comment author: Emile 07 August 2013 08:02:41PM *  3 points [-]

I was rereading Eliezer's old posts on morality, and in Leaky Generalizations ran across something pretty close to what you're talking about:

You can say, unconditionally and flatly, that killing anyone is a huge dose of negative terminal utility. Yes, even Hitler. That doesn't mean you shouldn't shoot Hitler. It means that the net instrumental utility of shooting Hitler carries a giant dose of negative utility from Hitler's death, and a hugely larger dose of positive utility from all the other lives that would be saved as a consequence.

Many commit the type error that I warned against in Terminal Values and Instrumental Values, and think that if the net consequential expected utility of Hitler's death is conceded to be positive, then the immediate local terminal utility must also be positive, meaning that the moral principle "Death is always a bad thing" is itself a leaky generalization. But this is double counting, with utilities instead of probabilities; you're setting up a resonance between the expected utility and the utility, instead of a one-way flow from utility to expected utility.

Or maybe it's just the urge toward a one-sided policy debate: the best policy must have no drawbacks.

In my moral philosophy, the local negative utility of Hitler's death is stable, no matter what happens to the external consequences and hence to the expected utility.

Of course, you can set up a moral argument that it's inherently a good thing to punish evil people, even with capital punishment for sufficiently evil people. But you can't carry this moral argument by pointing out that the consequence of shooting a man with a leveled gun may be to save other lives. This is appealing to the value of life, not appealing to the value of death. If expected utilities are leaky and complicated, it doesn't mean that utilities must be leaky and complicated as well. They might be! But it would be a separate argument.

(I recommend reading the whole thing, as well as the few previous posts on morality if you haven't already)

Comment author: Bayeslisk 07 August 2013 08:10:08PM 1 point [-]

I have read some, but not this one. I will certainly do so.

Comment author: Manfred 06 August 2013 01:44:40AM 3 points [-]

The dot product is just yer regular old integral over the domain, weighted in some (unspecified) way.

The thing is though, the average product over the whole infinite space of possibilities isn't much use when it comes to intelligent agents. This is because only one outcome really happens, and intelligent agents will try to choose a good one, not one that's representative of the average. If two wedding planners have opposite opinions about every type of cake except they both adore white cake with raspberry buttercream, then they'll just have white cake with raspberry buttercream - the fact that the inner product of their cake functions is negative a bajillion doesn't matter, they'll both enjoy the cake.
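Manfred's point can be checked with a toy example (all numbers made up): two planners whose preference vectors have a strongly negative dot product can still agree on the single best option, because each agent optimizes rather than settling for an average-of-preferences outcome.

```python
# Toy illustration: anti-correlated preferences, yet the same argmax.
cakes = ["lemon chiffon", "devil's food", "carrot", "white raspberry buttercream"]

planner1 = [-10.0, 5.0, -3.0, 9.0]   # hates lemon chiffon, loves the last one
planner2 = [5.0, -10.0, 3.0, 9.0]    # opposite on everything but the last one

# Unnormalized inner product of their preferences: strongly negative.
dot = sum(a * b for a, b in zip(planner1, planner2))
print(dot)  # -28.0

# But each planner just picks their own best cake...
best1 = cakes[max(range(len(cakes)), key=lambda i: planner1[i])]
best2 = cakes[max(range(len(cakes)), key=lambda i: planner2[i])]
print(best1 == best2)  # True: both pick "white raspberry buttercream"
```

The negative inner product summarizes disagreement across the whole option space, but only the chosen outcome actually happens.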

Comment author: Bayeslisk 06 August 2013 03:29:39PM 0 points [-]

Yeah, but Wedding Planner 1's deep, vitriolic moral hatred of the lemon chiffon cake that delights Wedding Planner 2 (who abused her as a young girl), or Wedding Planner 2's thunderous personal objection to the enslavement of his family that produced the cocoa for the devil's food cake Wedding Planner 1 adores, could easily make them refuse to share that delicious white cake with raspberry buttercream, to the point where either would very happily destroy it to prevent the other from getting any. This seems suboptimal, though.

Comment author: RomeoStevens 05 August 2013 10:14:03PM 1 point [-]

Why would you want to throw out scalar information in a multi-term utility function?

Comment author: Bayeslisk 05 August 2013 10:25:01PM 1 point [-]

To figure out how much you care about other people being happy, as defined by how much they want things similar or compatible to what you want, within a reasonably well-defined mathematical framework.

Comment author: RomeoStevens 08 August 2013 12:08:20AM 0 points [-]

Someone with the exact same utility terms but wildly different coefficients on them could well be considered quite unfriendly.

Comment author: Bayeslisk 08 August 2013 02:55:04AM 0 points [-]

Yes, that's the point. Everyone's utility vector would have the same dimension, with a component for everything it is conceivably possible to want. Otherwise, it would be difficult to take an inner product.

Comment author: Adele_L 05 August 2013 10:01:03PM 1 point [-]

I haven't explored that idea; can you be more specific about what this idea might bring to the table?

I'm utterly convinced that the happiness of some people ought to count negatively

Are you sure? You believe there are some people for whom the morally right thing to do is to inflict as much misery and suffering as you can, keeping them alive so you can torture them forever, and there is not necessarily even a benefit to yourself or anyone else in doing this?

Comment author: wedrifid 06 August 2013 06:53:47AM *  1 point [-]

Are you sure? You believe there are some people for whom the morally right thing to do is to inflict as much misery and suffering as you can, keeping them alive so you can torture them forever, and there is not necessarily even a benefit to yourself or anyone else in doing this?

The negative utility need not be boundless or even monotonic. A coherent preference system could count a modest amount of misery experienced by people fitting certain criteria to be positive while extreme misery and torture of the same individual is evaluated negatively.

Comment author: mwengler 07 August 2013 03:32:31PM 0 points [-]

The negative utility need not be boundless or even monotonic.

I also will upvote posts that have been downvoted too much, even if I wouldn't have upvoted them if they were at 0.

Comment author: Manfred 06 August 2013 01:17:29AM 1 point [-]

Trivially, a nega-you who hates everything you like (oh, you want to put them out of their misery? Too bad, they want to live now, since they don't want what you want). But such a being would certainly not be human.

Comment author: Adele_L 06 August 2013 03:37:49AM 2 points [-]

This is not a being in the reference class "people".

Comment author: Bayeslisk 07 August 2013 07:10:52PM 0 points [-]

I'm not sure why you're both hung up on the idea that the things hypothetical-me is interacting with need to be human. Manfred: I address a similar entity in a different post. Adele_L: ...and?

Comment author: Adele_L 07 August 2013 10:05:34PM 0 points [-]

You said this:

I'm utterly convinced that the happiness of some people ought to count negatively

In this context, 'people' typically refers to beings with moral weight. What we know about morality comes mostly from our intuitions, and we have an intuitive concept 'person' which counts morally in some way. (Not necessarily a human; sentient aliens probably count as 'people', perhaps even dolphins.) An arbitrary being which does not correspond to this intuitive concept needs to be flagged as such, as a warning that our intuitions are not directly applicable here.

Anyway, I get that you are basically trying to make a utility function with revenge. This is certainly possible, but having negative utility functions is a particularly bad way to do it.

Comment author: Bayeslisk 07 August 2013 10:10:28PM 0 points [-]

I was putting an upper bound on (what I thought of at the time as) how negative the utility vector dot product would have to be before I would actually desire them to be unhappy. As to the last part, I am reconsidering this as possibly inefficient in general.

Comment author: mwengler 07 August 2013 03:27:16PM *  0 points [-]

Upvoted because of your username.

But seriously, folks: what does it mean to dot one person's values/utility function into another's? It is actually the differences in individuals' utility functions that enable gains from trade. So the differences in our utility functions are probably what make us rich.

As a policy suggestion, is counting the happiness of some people negatively the same as saying "it is not enough that I win; it must also be that others lose"?

Comment author: Bayeslisk 07 August 2013 03:32:35PM 0 points [-]

I had initially thought it would be something along the lines of "here is a vector, each component of which represents one thing you could want; take the inner product in the usual way; the length always has to be 1." Gains from trade would be represented as "I don't want this thing as much as you do." I am now coming to the conclusion that this is at best incomplete, and that the suggestion of a weighted integral over a domain is probably better, if still incomplete.
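The weighted-integral suggestion has a simple discrete analogue: an inner product with a nonnegative weight on each outcome dimension, normalized under those same weights so the result stays in [-1, 1] (by Cauchy-Schwarz). The weights and preference vectors below are made-up illustrations, not anything from the thread.

```python
# Sketch of a weighted (discrete) inner product between utility vectors.
import math

def weighted_inner(u, v, w):
    """Inner product of u and v under nonnegative per-outcome weights w."""
    return sum(wi * a * b for wi, a, b in zip(w, u, v))

def weighted_alignment(u, v, w):
    """Weighted cosine similarity; lies in [-1, 1] for nonnegative weights."""
    nu = math.sqrt(weighted_inner(u, u, w))
    nv = math.sqrt(weighted_inner(v, v, w))
    return weighted_inner(u, v, w) / (nu * nv)

weights = [0.7, 0.2, 0.1]   # how much each outcome dimension "matters"
you     = [1.0, -1.0, 0.0]
trader  = [0.2, 1.0, 0.0]   # wants the thing you don't, and less of what you do

print(weighted_alignment(you, trader, weights))  # small and negative
```

A "gains from trade" pair like this ends up mildly negative under the weighting, rather than the -1 a plain unweighted opposition would give.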