Inseparably Right; or, Joy in the Merely Good

Post author: Eliezer_Yudkowsky 09 August 2008 01:00AM

Followup to: The Meaning of Right

I fear that in my drive for full explanation, I may have obscured the punchline from my theory of metaethics.  Here then is an attempted rephrase:

There is no pure ghostly essence of goodness apart from things like truth, happiness and sentient life.

What do you value?  At a guess, you value the life of your friends and your family and your Significant Other and yourself, all in different ways.  You would probably say that you value human life in general, and I would take your word for it, though Robin Hanson might ask how you've acted on this supposed preference.  If you're reading this blog you probably attach some value to truth for the sake of truth.  If you've ever learned to play a musical instrument, or paint a picture, or if you've ever solved a math problem for the fun of it, then you probably attach real value to good art.  You value your freedom, the control that you possess over your own life; and if you've ever really helped someone you probably enjoyed it.  You might not think of playing a video game as a great sacrifice of dutiful morality, but I for one would not wish to see the joy of complex challenge perish from the universe.  You may not think of telling jokes as a matter of interpersonal morality, but I would consider the human sense of humor as part of the gift we give to tomorrow.

And you value many more things than these.

Your brain assesses these things I have said, or others, or more, depending on the specific event, and finally affixes a little internal representational label that we recognize and call "good".

There's no way you can detach the little label from what it stands for, and still make ontological or moral sense.

Why might the little 'good' label seem detachable?  A number of reasons.

Mainly, that's just how your mind is structured—the labels it attaches internally seem like extra, floating, ontological properties.

And there's no one value that determines whether a complicated event is good or not—and no five values, either.  No matter what rule you try to describe, there's always something left over, some counterexample.  Since no single value defines goodness, this can make it seem like all of them together couldn't define goodness.  But when you add them up all together, there is nothing else left.

If there's no detachable property of goodness, what does this mean?

It means that the question, "Okay, but what makes happiness or self-determination, good?" is either very quickly answered, or else malformed.

The concept of a "utility function" or "optimization criterion" is detachable when talking about optimization processes.  Natural selection, for example, optimizes for inclusive genetic fitness.  But there are possible minds that implement any utility function, so you don't get any advice there about what you should do.  You can't ask about utility apart from any utility function.
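To make this concrete, here is a minimal sketch in Python (my own illustration, with invented names and toy weights, not anything from the original post): two optimization processes score the same outcome, and each score exists only relative to that process's utility function.  Ask for "the utility" of the outcome with no function specified, and there is nothing to compute.

    # Hypothetical illustration: "utility" is defined only relative to a utility function.

    def fitness_utility(outcome):
        # An optimizer that scores outcomes by number of surviving offspring.
        return outcome["offspring"]

    def humane_utility(outcome):
        # An optimizer that scores outcomes by happiness and truth (toy weights).
        return 2.0 * outcome["happiness"] + 1.0 * outcome["truths_discovered"]

    outcome = {"offspring": 3, "happiness": 7, "truths_discovered": 2}

    # Each question is well-formed only once a criterion has been fixed:
    print(fitness_utility(outcome))  # 3
    print(humane_utility(outcome))   # 16.0

    # "What is the utility of this outcome?" -- with no function specified --
    # is not an unasked question with a hidden answer; it is no question at all.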

When you ask "But which utility function should I use?" the word should is something inseparable from the dynamic that labels a choice "should"—inseparable from the reasons like "Because I can save more lives that way."

Every time you say should, it includes an implicit criterion of choice; there is no should-ness that can be abstracted away from any criterion.

There is no separable right-ness that you could abstract from pulling a child off the train tracks, and attach to some other act.

Your values can change in response to arguments; you have metamorals as well as morals.  So it probably does make sense to think of an idealized good, or idealized right, that you would assign if you could think of all possible arguments.  Arguments may even convince you to change your criteria of what counts as a persuasive argument.  Even so, when you consider the total trajectory arising out of that entire framework, that moral frame of reference, there is no separable property of justification-ness, apart from any particular criterion of justification; no final answer apart from a starting question.

I sometimes say that morality is "created already in motion".

There is no perfect argument that persuades the ideal philosopher of perfect emptiness to attach a perfectly abstract label of 'good'.  The notion of the perfectly abstract label is incoherent, which is why people chase it round and round in circles.  What would distinguish a perfectly empty label of 'good' from a perfectly empty label of 'bad'?  How would you tell which was which?

But since every supposed criterion of goodness that we describe, turns out to be wrong, or incomplete, or changes the next time we hear a moral argument, it's easy to see why someone might think that 'goodness' was a thing apart from any criterion at all.

Humans have a cognitive architecture that easily misleads us into conceiving of goodness as something that can be detached from any criterion.

This conception turns out to be incoherent.  Very sad.  I too was hoping for a perfectly abstract argument; it appealed to my universalizing instinct.  But...

But the question then becomes: is that little fillip of human psychology more important than everything else?  Is it more important than the happiness of your family, your friends, your mate, your extended tribe, and yourself?  If your universalizing instinct is frustrated, is that worth abandoning life?  If you represented rightness wrongly, do pictures stop being beautiful and does math stop being elegant?  Is that one tiny mistake worth forsaking the gift we could give to tomorrow?  Is it even really worth all that much in the way of existential angst?

Or will you just say "Oops" and go back to life, to truth, fun, art, freedom, challenge, humor, moral arguments, and all those other things that in their sum and in their reflective trajectory, are the entire and only meaning of the word 'right'?

Here is the strange habit of thought I mean to convey:  Don't look to some surprising unusual twist of logic for your justification.  Look to the living child, successfully dragged off the train tracks.  There you will find your justification.  What ever should be more important than that?

I could dress that up in computational metaethics and FAI theory—which indeed is whence the notion first came to me—but when I translated it all back into human-talk, that is what it turned out to say.

If we cannot take joy in things that are merely good, our lives shall be empty indeed.

 

Part of The Metaethics Sequence

Next post: "Sorting Pebbles Into Correct Heaps"

Previous post: "Morality as Fixed Computation"

Comments (29)

Comment author: GBM 09 August 2008 01:12:20AM 0 points

Eliezer, thank you for this clear explanation. I'm just now making the connection to your calculator example, which struck me as relevant if I could only figure out how. Now it's all fitting together.

How does this differ from personal preference? Or is it simply broader in scope? That is, if an individual's calculation includes "self-interest" and weighs it heavily, personal preference might be the result of the calculation, which fits inside your metamoral model, if I'm reading things correctly.

Comment author: Eliezer_Yudkowsky 09 August 2008 01:28:13AM 5 points

How does this differ from personal preference?

Most goods don't depend justificationally on your state of mind, even though that very judgment is implemented computationally by your state of mind. A personal preference depends justificationally on your state of mind.

Comment author: TGGP4 09 August 2008 02:09:24AM 2 points

If we cannot take joy in things that are merely good, our lives shall be empty indeed.

I suppose the ultimate in emptiness is non-existence. What's your opinion on anti-natalism?

Comment author: Tyrrell_McAllister2 09 August 2008 05:14:03AM 1 point

Eliezer, you write, "Most goods don't depend justificationally on your state of mind, even though that very judgment is implemented computationally by your state of mind. A personal preference depends justificationally on your state of mind."

Could you elaborate on this distinction? (IIRC, most of what you've written explicitly on the difference between preference and morality was in your dialogues, and you've warned against attributing any views in those dialogues to you.)

In particular, in what sense do "personal preferences depend justificationally on your state of mind"? If I want to convince someone to prefer rocky road ice cream over almond praline, I would most likely proceed by telling them about the ingredients in rocky road that I believe that they like more than the ingredients in almond praline. Suppose that I know that you prefer walnuts over almonds. Then my argument would include lines like "rocky road contains walnuts, and almond praline contains almonds." These would not be followed by something like "... and you prefer walnuts over almonds." Yes, I wouldn't have offered the comparison if I didn't believe that that was the case, but, so far as the structure of the argument is concerned, such references to your preferences would be superfluous. Rather, as you've explained with morality, I would be attempting to convince you that rocky road has certain properties. These properties are indeed the ones that I think will make the system of preferences within you prefer rocky road over almond praline. And, as with morality, that system of preferences is a determinate computational property of your mind as it is at the moment. But, just as in your account of moral justification as I understand it, I don't need to refer to that computational property to make my case. I will just try to convince you that the facts are such that certain things are to be found in rocky road. These are things that happen to be preferred by your preference system, but I won't bother to try to convince you of that part.

Actually, the more I think about this ice cream example, the more I wonder whether you wouldn't consider it to be an example of moral justification. So, I'm curious to know an example of what you would consider to be a personal preference but not a moral preference.

Comment author: Ben_Jones 09 August 2008 10:36:45AM 0 points

I too was hoping for a perfectly abstract argument; it appealed to my universalizing instinct. But...

Not to mention your FAI-coding instincts, huh?

Good summarizing post.

Comment author: Pyramid_Head3 09 August 2008 12:05:01PM 0 points

Good post, Eliezer. Now that I've read it (and the previous one), I can clearly see (I think) why you think CEV is a good idea, and how you arrived at it. And now I'm not as skeptical about it as I was before.

Comment author: Eliezer_Yudkowsky 09 August 2008 12:05:30PM 1 point

Ben, my FAI-coding instincts at the time were pretty lousy. The concept does not appeal to my modern instinct; and calling the instinct I had back then an "FAI-coding" one is praising it too highly.

Tyrrell, the distinction to which I refer is the role that "Because I like walnuts over almonds" plays in my justification for choosing rocky road, and presumably your motive for convincing me thereof if you're an altruist. We can see the presence of this implicit justification, whether or not it is mentioned, by asking the following moral question: "If-counterfactual I came to personally prefer almonds over walnuts, would it be right for me to choose praline over rocky road?" The answer, "Yes", reveals that there is an explicit, quoted, justificational dependency, in the moral computation, on my state of mind and preference.

This is not to be confused with a physical causal dependency of my output on my brain, which always exists, even for the calculator that asks only "What is 2 + 3?" The calculator's output depends on its transistors, but it has not asked a personal-preference-dependent question.
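A toy sketch of that counterfactual test (mine, with invented function names, not anyone's formalism): in the preference-dependent question, the chooser's state of mind appears inside the quoted criterion, so flipping the preference flips the verdict; the calculator's question contains no such quoted term, even though it runs on physical transistors.

    # Hypothetical sketch of the counterfactual test described above.

    def right_flavor(my_preference):
        # The criterion explicitly quotes my state of mind:
        # "choose whichever flavor contains the nut I prefer."
        return "rocky road" if my_preference == "walnuts" else "almond praline"

    def two_plus_three():
        # No reference to anyone's state of mind appears in this criterion,
        # even though the computation is physically implemented in a brain or chip.
        return 2 + 3

    print(right_flavor("walnuts"))  # rocky road
    print(right_flavor("almonds"))  # almond praline -- the verdict tracks the preference
    print(two_plus_three())         # 5 -- no counterfactual preference changes this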

Comment author: Robin_brandt3 09 August 2008 01:30:42PM 0 points

Beautiful, and very true indeed. Nothing new, but your way of expression is so elegant! Your mind is clear and genuine; this fills me with joy and hope!

Comment author: Caledonian2 09 August 2008 02:45:02PM 0 points

[deleted]

Comment author: George_Weinberg2 09 August 2008 04:47:04PM 1 point

I think everything you say in this post is correct. But there's nothing like a universal agreement as to what is "good", and although our ideas as to what is good will change over time, I see no reason to believe that they will converge.

Comment author: Eliezer_Yudkowsky 09 August 2008 05:40:10PM 5 points

The question of whose rightness gets to go into the AI still arises, and I don't think that the solution you have outlined is really up to the task of producing a notion of rightness that everyone on the planet agrees with.

That's not what CEV is for. It's for not taking over the world, or if you prefer, not being a jerk, to the maximum extent possible. The maximum extent impossible is not really on the table.

I would endorse a more limited effort which focused on recreating the most commonly accepted values of our society: namely rational western values.

Then you have very little perspective on your place in history, my dear savage barbarian child.

I would also want to work on capturing the values of our society as a narrow AI problem

That ain't a narrow AI problem and you ain't doin' it with no narrow AI.

I think that there is a lot to be said about realist and objective ethics

My metaethics is real and objective, just not universal. Fixed computations are objective, and at least halfway real.

Comment author: steven 09 August 2008 06:52:16PM 0 points

It seems to me human life has value insofar as dead people can't be happy, discover truth, and so on; but not beyond that.

Also I'd like to second TGGP's question.

Comment author: Eliezer_Yudkowsky 09 August 2008 07:13:55PM 10 points

My position on natalism is as follows: If you can't create a child from scratch, you're not old enough to have a baby.

This rule may be modified under extreme and unusual circumstances, such as the need to carry on the species in the pre-Singularity era, but I see no reason to violate it under normal conditions.

Comment author: steven 09 August 2008 07:43:46PM 0 points

Presumably anti-natalists would deny the need to carry on the species because they expect the negative value of future suffering to outweigh the positive value of future happiness, truth, etc.

Comment author: Eliezer_Yudkowsky 09 August 2008 09:44:45PM 0 points

Yes, Roko, and the answer to the question "Was the child successfully dragged off the train tracks?" does not depend on the belief or feelings of any person or group of persons; if the child is off the train tracks, that is true no matter what anyone believes, hopes, wishes, or feels. As this is what I identify with the meaning of the term, 'good'...

Comment author: Wiseman 09 August 2008 10:17:45PM 0 points

@Eliezer: "As this is what I identify with the meaning of the term, 'good'..."

I'm still a little cloudy about one thing, though, Eliezer, and this seems to be the point Roko is making as well. Once you have determined what physically has happened in a situation, and what has caused it, how do you inarguably decide that it is "good" or "bad"? Based on what system of preferring one physical state over another?

Obviously, saving a child from death is good, but how do you decide in trickier situations where intuition can't do the work for you, and where people just can't agree on anything, like, say, abortion?

Comment author: Hopefully_Anonymous 09 August 2008 10:58:52PM 0 points

I think the child on train tracks/orphan in burning building tropes you reference back to prey on bias, rather than seek to overcome it. And I think you've been running from hard questions rather than dealing with them forthrightly (like whether we should give primacy to minimizing horrific outcomes or to promoting social aesthetics like "do not murder children" or minimizing horrific outcomes). To me this sums up to you picking positions for personal status enhancement rather than for solving the challenges we face. I understand why that would be salient for a non-anonymous blogger. I hope you at least do your best to address them anonymously. Otherwise we could be left with a tragedy of the future outcomes commons, with all the thinkers vying for status over maximizing our future outcomes.

Comment author: Hopefully_Anonymous 10 August 2008 12:44:43AM 0 points

should read: (like whether we should give primacy to minimizing horrific outcomes or to promoting social aesthetics like "do not murder children").

Comment author: Richard4 10 August 2008 12:57:24AM 0 points

"My notion of goodness may be slightly different to yours - how can we have a sensible conversation where you insist on using the word "morality" to refer to morality_Eliezer2008?"

This is an important objection, which I think establishes the inadequacy of Eliezer's analysis. It's a datum (which any adequate metaethical theory must account for) that there can be substantive moral disagreement. When Bob says "Abortion is wrong", and Sally says, "No it isn't", they are disagreeing with each other.

I don't see how Eliezer can accommodate this. On his account, what Bob asserted is true iff abortion is prohibited by the morality_Bob norms. How can Sally disagree? There's no disputing (we may suppose) that abortion is indeed prohibited by morality_Bob. On the other hand, it would be changing the subject for Sally to say "Abortion is right" in her own vernacular, where this merely means that abortion is permitted by the morality_Sally norms. (Bob wasn't talking about morality_Sally, so their two claims are - on Eliezer's account - quite compatible.)

Since there is moral disagreement, whatever Eliezer purports to be analysing here, it is not morality.

[For more detail, see 'Is Normativity Just Semantics?']

Comment author: Larry_D'Anna 10 August 2008 01:13:01AM 1 point

Roko: "And, of course, this lack of objectivity leads to problems, because different people will have their own notions of goodness."

Don't forget the psychological unity of mankind. Whatever is in our DNA that makes us care about morality at all is a complex adaptation, so it must be pretty much the same in all of us. That doesn't mean everyone will agree about what is right in particular cases, because they have considered different moral arguments (or in some cases, confused mores with morals), but that-which-responds-to-moral-arguments is the same.

Comment author: Larry_D'Anna 10 August 2008 01:16:34AM 1 point

Richard: Abortion isn't a moral debate. The only reason people disagree about it is because some of them don't understand what souls are made of, and some of them do. Abortion is a factual debate about the nature of souls. If you know the facts, the moral conclusions are indisputable and obvious.

Comment author: Richard4 10 August 2008 01:33:14AM 0 points

Larry, not that the particular example is essential to my point, but you're clearly not familiar with the strongest pro-life arguments.

Comment author: steven 10 August 2008 02:00:54AM 1 point

There's no disputing (we may suppose) that abortion is indeed prohibited by morality_Bob.

This isn't the clearest example, because it seems like abortion is one of those things everyone would come to agree on if they knew and understood all the arguments. A clearer example is a pencil-maximizing AI vs a paperclip-maximizing AI. Do you think that these two necessarily disagree on any facts? I don't.

Comment author: Allan_Crossman 10 August 2008 03:00:40AM 0 points

It's a datum (which any adequate metaethical theory must account for) that there can be substantive moral disagreement. When Bob says "Abortion is wrong", and Sally says, "No it isn't", they are disagreeing with each other.

I wonder though: is this any more mysterious than a case where two children are arguing over whether strawberry or chocolate ice cream is better?

In that case, we would happily say that the disagreement comes from their false belief that it's a deep fact about the universe which ice cream is better. If Eliezer is right (I'm still agnostic about this), wouldn't moral disagreements be explained in an analogous way?

Comment author: Larry_D'Anna 10 August 2008 03:43:17AM 1 point

Richard: You were correct. That is indeed the strongest pro-life argument I've ever read. And although it is quite wrong, the error is one of moral reasoning and not merely factual.

Comment author: steven 10 August 2008 10:53:14PM 0 points

HA: To me this sums up to you picking positions for personal status enhancement rather than for solving the challenges we face. I understand why that would be salient for a non-anonymous blogger. I hope you at least do your best to address them anonymously. Otherwise we could be left with a tragedy of the future outcomes commons, with all the thinkers vying for status over maximizing our future outcomes.

If you were already blogging and started an anonymous blog, how would you avoid giving away your identity in your anonymous blog through things like your style of thinking, or the sort of background justifications you use? It doesn't seem to me like it could be done.

Comment author: David_J._Balan 12 August 2008 01:30:04AM 0 points

It seems to me like the word "axioms" belongs in here somewhere.

Comment author: Manon_de_Gaillande 30 August 2008 06:19:54PM -2 points

Eliezer, sure, but that can't be the *whole* story. I don't care about some of the stuff most people care about. Other people whose utility functions differ in similar but different ways from the social norm are called "psychopaths", and most people think they should either adopt their morals or be removed from society. I agree with this.

So why should I make a special exception for myself, just because that's who I happen to be? I try to behave as if I shared common morals, but it's just a gross patch. It feels tacked on, and it is.

I expected (though I had no idea how) you'd come up with an argument that would convince me to fully adopt such morals. But what you said would apply to *any* utility function. If a paperclip maximizer wondered about morality, you could tell it: "'Good' means 'maximizes paperclips'. You can think about it all day long, but you'd just end up making a mistake. Is that worth forsaking the beauty of tiling the universe with paperclips? What do you care that there exist in mindspace minds that drag children off train tracks?" and it'd work just as well. Yet if you could, I bet you'd choose to make the paperclip maximizer adopt your morals.

Comment author: Shinnen 21 October 2011 02:48:13PM -1 points

Hi. This discussion borders on a thought that I've had for a long time and haven't quite come to terms with: the idea that there are places where reason can explain things, and places where reason cannot explain things, the latter being by far the more frequent. It seems to me that the basis of most of our actions, motivations, and thoughts is really grounded in feelings/desires/emotions... what we want, what we like, what we want to be... and that the application of reason to our lives is, in most cases, a means of justifying and acting on these feelings and desires. We are not rational creatures. We can, and do, apply reason very effectively to certain areas of human endeavour, but in most of the things we do, it's not very effective. I'm not knocking reason... it can be very useful. I'm sure that I have not explained myself very well. Perhaps someone with more knowledge and insight into what I'm trying to say can flesh it out. I apologize if I did not address the issue under discussion, but it provided me with an opportunity to get this idea out, and see what others have to say about it. ... john