gRR comments on The "Intuitions" Behind "Utilitarianism" - Less Wrong

Post author: Eliezer_Yudkowsky 28 January 2008 04:29PM


Comment author: NancyLebovitz 21 February 2012 01:49:43AM 0 points

How big are the error bars on the odds that the murderer will kill two more people?

Comment author: gRR 21 February 2012 02:02:14AM 1 point

Does it matter? The point is that (according to my morality computation) it is unfair to execute a 50%-probably innocent person, even though the "total number of saved lives" utility of this action may be greater than that of the alternative. And fairness of the procedure counts for something, even in terms of the "total number of saved lives".

Comment author: pedanterrific 21 February 2012 03:34:13AM * 2 points

So, let's say this hypothetical situation was put to you several times in sequence. The first time you decline on the basis of fairness, and the guy turns out to be innocent. Yay! The second time he walks out and murders three random people. Oops. After the hundredth time, you've saved fifty lives (because if the guy turns out to be a murderer you end up executing him anyway) and caused a hundred and thirty-five random people to be killed.

Success?
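The repeated version of the dilemma can be tallied as a simple expected-value calculation. A minimal sketch, under the bare assumptions of 100 independent cases, a 50% chance of guilt in each, and three bystanders killed per released murderer (the running totals quoted in the comment differ slightly from this clean expectation):

```python
# Expected tally for the repeated dilemma under illustrative assumptions:
# 100 independent cases, each suspect guilty with probability 0.5,
# each released murderer killing 3 bystanders before being caught.
trials = 100
p_guilty = 0.5
victims_per_murderer = 3

# Innocents spared by declining to execute (the "yay" cases).
innocents_spared = trials * (1 - p_guilty)
# Bystanders killed by the murderers released before being executed.
expected_victims = trials * p_guilty * victims_per_murderer

print(f"innocents spared: {innocents_spared:.0f}")
print(f"expected bystander deaths: {expected_victims:.0f}")
```

The point of the comment survives any reasonable choice of parameters: as long as each released murderer kills more than one person, the expected deaths outrun the innocents spared.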

Comment author: gRR 21 February 2012 04:32:42AM 3 points

No :( Not when you put it like that...

Do you conclude then that fairness is worth zero human lives? Not even a 0.0000000001% probability of saving a life should be sacrificed for its sake?

Maybe it's my example that was stupid and better ones exist.

Comment author: orthonormal 21 February 2012 04:42:38AM * 1 point

Upvoted for gracefully conceding a point. (EDIT: I mean, conceding the specific example, not necessarily the argument.)

I think that fairness matters a lot, but a big chunk of the reason for that can be expressed in terms of further consequences: if the connection between crime and punishment becomes more random, then punishment stops working so well as a deterrent, and more people will commit murder.

Being fair even when it's costly affects other people's decisions, not just the current case, and so a good consequentialist is very careful about fairness.

Comment author: gRR 21 February 2012 05:10:02AM 0 points

I thought of trying to assume that fairness only matters when other people are watching. But then, in my (admittedly already discredited) example, wouldn't the solution be "release the man in front of everybody, but kill him quietly later. Or, even better, quietly administer a slow fatal poison before releasing him?" Somehow, this is still unfair.

Comment author: orthonormal 21 February 2012 05:33:55AM 2 points

Well, that gets into issues of decision theory, and my intuition is that if you're playing non-zero-sum games with other agents smart enough to deduce what you might think, it's often wise to be predictably fair/honest.

(The idea you mention seems like "convince your partner to cooperate, then secretly defect", which only works if you're sure you can truly predict them and that they will falsely predict you. More often, it winds up as defect-defect.)
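The defect-defect outcome mentioned here is the textbook one-shot Prisoner's Dilemma equilibrium. A minimal sketch with assumed payoff values (T=5, R=3, P=1, S=0, satisfying the standard ordering T > R > P > S):

```python
# One-shot Prisoner's Dilemma with illustrative payoffs.
# Keys are (row_move, col_move); values are (row_payoff, col_payoff).
payoff = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # row is exploited
    ("D", "C"): (5, 0),  # row exploits
    ("D", "D"): (1, 1),  # mutual defection
}

def best_response(opponent_move):
    """Return the row player's payoff-maximizing move against a fixed opponent move."""
    return max(("C", "D"), key=lambda m: payoff[(m, opponent_move)][0])

# Whatever the opponent does, defecting pays more for a naive maximizer...
assert best_response("C") == "D" and best_response("D") == "D"
# ...so two such reasoners land on (D, D), which is worse for both than (C, C).
```

This is why "convince them to cooperate, then secretly defect" is unstable between agents who can model each other: each one's best response to an exploitable partner is defection, so mutual modeling collapses toward (D, D) unless being predictably fair changes the opponent's prediction.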

Comment author: gRR 21 February 2012 05:56:22AM * 2 points

Hmm. Decision theory and corresponding evolutionary advantages explain how the feelings and concepts of fairness/honesty first appeared. But now that they are already here, do we have to assume that these values are purely instrumental?

Well, maybe. I'm less sure than before.

But I'm still miles from relinquishing SPECKS :)

EDIT: Understood your comment better after reading the articles. Love the PD-3 and rationalist ethical inequality, thanks!

Comment author: [deleted] 21 February 2012 11:00:41PM 0 points

Decision theory and corresponding evolutionary advantages explain how the feelings and concepts of fairness/honesty first appeared. But now that they are already here, do we have to assume that these values are purely instrumental?

Instrumental to what? To providing "utility"? Concepts of fairness arose to enhance inclusive fitness, not utility. If these norms are only instrumental, then so are the norms of harm-avoidance that we're focused on.

Since these norms often (but not always) "over-determine" action, it's easy to conceive of one of them explaining the other--so that, for example, fairness norms are seen as reifications of tactics for maximizing utility. But the empirical research indicates that people use at least five independent dimensions to make moral judgments: harm-avoidance, fairness, loyalty, respect, and purity.

EY's program to "renormalize" morality assumes that our moral intuitions evolved to serve a single function, but fall short because of design defects (relative to present needs). But it's more likely that they evolved to solve different problems of social living.

Comment author: gRR 21 February 2012 11:39:17PM * 1 point

I meant "instrumental values" as opposed to "terminal values", something valued as means to an end vs. something valued for its own sake.

It is universally acknowledged that human life is a terminal value. Also, the "happiness" of said life, whatever that means. In your terms, these two would be the harm-avoidance dimension, I suppose. (Is it a good name?)

Then, there are loyalty, respect, and purity, which I, for one, immediately reject as terminal values.

And then, there is fairness, which is difficult. Intuitively, I would prefer to live in a universe which is more fair than in one which is less fair. But if it would cost lives, or the quality and happiness of those lives, etc., then... unclear. Fortunately, orthonormal's article shows that if you take the long view, fairness doesn't really oppose the principal terminal value in the standard moral "examples", which (like mine) usually only look one short step ahead.