Alicorn comments on Strong moral realism, meta-ethics and pseudo-questions. - Less Wrong

18 [deleted] 31 January 2010 08:20PM




Comment author: Alicorn 01 February 2010 01:09:04AM 5 points [-]

The rampant dismissal of so many restatements of your position has tempted me to try my own. Tell me if I've got it right or not:

There is a topic, which covers such subtopics as those listed here, which is the only thing in fact referred to by the English word "morality" and associated terms like "should" and "right". It is an error to refer to other things, like eating babies, as "moral" in the same way it would be an error to refer to black-and-white Asian-native ursine creatures as "lobsters": people who do it simply aren't talking about morality. Once the subject matter of morality is properly nailed down, and all other facts are known, there's no room for disagreement about morality, what ought to be done, what actions are wrong, etc. any more than there is about the bachelorhood of unmarried men. However, it happens that the vast majority of kinds of possible minds don't give a crap about morality, and while they might agree with us about what they should do, they wouldn't find that motivating. Humans, as a matter of a rather lucky causal history, do care about morality, in much the same way that pebblesorters care about primes - it's just one of the things we're built to find worth thinking about and working towards. By a similar token, we are responsive to arguments about features of situations that give them moral character of one sort or another.

Comment author: Eliezer_Yudkowsky 01 February 2010 01:26:08AM *  1 point [-]

...sounds mostly good so far. Except that there's plenty of justification for thinking about morality besides "it's something we happen to think about". They're just... well... there's no other way to put this... perfectly valid, moving, compelling, heartwarming, moral justifications. They're actually better justifications than being compelled by some sort of ineffable transcendent compellingness stuff - if I've got to respond to something, those are just the sort of (logical) facts I'd want to respond to! (I think this may be the part Roko still doesn't get.) Also, the "lucky causal history" isn't luck at all, of course.

It's also quite possible that human beings, from time to time, are talking about different subject matters when they have what looks like a moral disagreement; but this is a rather drastic assumption to make in our current state of ignorance, and I feel that a sort of courtesy should be extended, to the extent of hearing out each other's arguments and proceeding on the assumption that we actually are disagreeing about something.

Comment author: Unknowns 01 February 2010 07:28:36AM 7 points [-]

Eliezer, I don't understand how you can say that the "lucky causal history" wasn't luck, unless you also say "if humans had evolved to eat babies, babyeating would have been right."

If it wouldn't have been right even in that event, then it took a stupendous amount of luck for us to evolve in just such a way that we care about things that are right, instead of other things.

Either that or there is a shadowy figure.

Comment author: aleksiL 01 February 2010 04:43:14PM 2 points [-]

As I understand Eliezer's position, when babyeater-humans say "right", they actually mean babyeating. They'd need a word like "babysaving" to refer to what's right.

Morality is what we call the output of a particular algorithm instantiated in human brains. If we instantiated a different algorithm, we'd have a word for its output instead.

I think Eliezer sees translating babyeater word for babyeating as "right" as an error similar to translating their word for babyeaters as "human".

Comment author: Unknowns 01 February 2010 05:04:36PM 3 points [-]

Precisely. So it was luck that we instantiate this algorithm, instead of a different one.

Comment author: Alicorn 02 February 2010 06:00:31AM *  6 points [-]

I'm curious about how your idea handles an edge case. (I am merely curious - not to downplay curiosity, but you shouldn't consider it a reason to devote considerable brain-cycles on its own if it'd take considerable brain-cycles to answer, because I think your appropriation of moral terminology is silly and I won't find the answer useful for any specific purpose.)

The edge case: I have invented an alien species called the Zaee (for freeform roleplaying game purposes; it only recently occurred to me that they have bearing on this topic). The Zaee have wings, and can fly starting in early childhood. They consider it "loiyen" (the Zaee word that most nearly translates as "morally wrong") for a child's birth mother to continue raising her offspring (call it a son) once he is ready to take off for the first time; they deal with this by having her entrust her son to a friend, or a friend of the father, or, in an emergency, somebody who's in a similar bind and can just swap children with her. Someone who has a child without a plan for how to foster him out at the proper time (even if it's "find a stranger to swap with") is seen as being just as irresponsible as a human mother who had a child without a clue how she planned to feed him would be (even if it's "rely on government assistance").

There is no particular reason why a Zaee child raised to adulthood by his biological mother could not wind up within the Zaee-normal range of psychology (not that they'd ever let this be tested experimentally); however, they'd find this statement about as compelling as the fact that there's no reason a human child, kidnapped as a two-year-old from his natural parents and adopted by a duped but competent couple overseas, couldn't grow up to be a normal human: it still seems a dreadful thing to do, and to the child, not just to the parents.

When Zaee interact with humans they readily concede that this precept of their <moral system> has no bearing on any human action whatever: human children cannot fly. And in the majority of other respects, Zaee are like humans in their <morality> - if you plopped a baby Zaee brain in a baby human body (and resolved the body dysphoria and aging rate issues) and he grew up on Earth, he'd be darned quirky, but wouldn't be diagnosed with a mental illness or anything.

Other possibly relevant information: when Zaee programmers program AIs (not the recursively self-improving kind; much more standard-issue sci-fi types), they apply the same principle, and don't "keep" the AIs in their own employ past a certain point. (A particular tradition of programming frequently has its graduates arrange beforehand to swap their AIs.) The AIs normally don't run on mobile hardware, which is irrelevant anyway, because the point in question for them isn't flight. However, Zaee are not particularly offended by the practice of human programmers keeping their own AIs indefinitely. The Zaee would be very upset if humans genetically engineered themselves to have wings from birth which became usable before adulthood and this didn't yield a change in human fostering habits. (I have yet to have cause to get a Zaee interacting with another alien species that can also fly in the game for which they were designed, but anticipate that if I did so, "grimly distasteful bare-tolerance" would be the most appropriate attitude for the Zaee in the interaction. They're not very violent.)

And the question: Are the Zaee "interested in morality"? Are we interested in <Zaee word that most nearly translates as "morality">? Do the two referents mean distinct concepts that just happen to overlap some or be compatible in a special way? How do you talk about this situation, using the words you have appropriated?

Comment author: Alicorn 01 February 2010 02:28:12AM 3 points [-]

They're actually better justifications

"Better" by the moral standard of betterness, or by a standard unconnected to morality itself?

if I've got to respond to something, those are just the sort of (logical) facts I'd want to respond to!

Want to respond to because you happen to be the sort of creature that likes and is interested in these facts, or for some reason external to morality and your interest therein?

It's also quite possible that human beings, from time to time, are talking about different subject matters when they have what looks like a moral disagreement; but this is a rather drastic assumption to make in our current state of ignorance

Why does this seem like a "drastic" assumption, even given your definition of "morality"?

Comment author: Eliezer_Yudkowsky 01 February 2010 02:31:53AM *  0 points [-]

I don't see why I'd want to use an immoral standard. I don't see why I ought to care about a standard unconnected to morality. And yes, I'm compelled by the sort of logical facts we name "moral justifications" physically-because I'm the sort of physical creature I am.

It's drastic because it closes down the possibility of further discourse.

Comment author: Alicorn 01 February 2010 02:32:59AM 7 points [-]

Is there some way in which this is not all fantastically circular?

Comment author: Psy-Kosh 01 February 2010 03:31:30AM 11 points [-]

How about something like this: there's a certain set of semi-abstract criteria that we call 'morality'. And we happen to be the sorts of beings that (for various reasons) happen to care about this morality stuff as opposed to caring about something else. Should we care about morality? Well, what is meant by "should"? It sure seems like that's a term we use simply to point to the same morality criteria/computation. In other words, "should we care about morality" seems to translate to "is it moral to care about morality", or "apply the morality function to 'care about morality' and check the output".

It would seem also that the answer is yes, it is moral to care about morality.

Some other creatures might somewhere care about something other than morality. That's not a disagreement about any facts or theory or anything, it's simply that we care about morality and they may care about something like "maximize paperclip production" or whatever.

But, of course, morality is better than paper-clip-ality. (And, of course, when we say "better", we mean "in terms of those criteria we care about"... ie, morality again.)

It's not quite circular. Us and the paperclipper creatures wouldn't really disagree about anything. They'd say "turning all the matter in the solar system into paperclips is paperclipish", and we'd agree. We'd say "it's more moral not to do so", and they'd agree.

The catch is that they don't give a dingdong about morality, and we don't give a dingdong about paperclipishness. And indeed that does make us better. And if they scanned our minds to see what we mean by "better", they'd agree. But then, the criterion we were referring to by the term "better" is simply not something the paperclippers care about.

"We happen to care about it" is not the justification. "It's moral" is the justification. It's just that our criterion for valid moral justification is, well... morality. Which is as it should be. etc etc.

Morality seems to be an objective criterion. Actions can be judged good or bad in terms of morality. We simply happen to care about morality instead of something else. And this is indeed a good thing.

Comment author: byrnema 01 February 2010 04:02:53AM *  9 points [-]

I don't understand two sentences in a row. Not here, not in the meta-ethics sequence, not anywhere where you guys talk about morality.

I don't understand why I seem to be cognitively fine on other topics on Less Wrong, but then all of a sudden am Flowers for Algernon here.

I'm not going to comment anymore on this topic; it just so happens meta-morality or meta-ethics isn't something I worry about anyway. But I would like to part with the admonition that I don't see any reason why LW should be separating so many words from their original meanings -- "good", "better", "should", etc. It doesn't seem to be clarifying things even for you guys.

I think that when something is understood -- really understood -- you can write it down in words. If you can't describe an understanding, you don't own it.

Comment author: Psy-Kosh 01 February 2010 04:17:35AM 2 points [-]

Huh? I'm asserting that most people, when they use words like "morality", "should"(in a moral context), "better"(ditto), etc, are pointing at the same thing. That is, we think this sort of thing partly captures what people actually mean by the terms. Now, we don't have full self knowledge, and our morality algorithm hasn't finished reflecting (that is, hasn't finished reconsidering itself, etc), so we have uncertainty about what sorts of things are or are not moral... But that's a separate issue.

As far as the rest... I'm pretty sure I understand the basic idea. Anything I can do to help clarify it?

How about this: "morality is objective, and we simply happen to be the sorts of beings that care about morality as opposed to, say, evil psycho alien bots that care about maximizing paperclips instead of morality"

Does that help at all?

Comment author: Alicorn 01 February 2010 03:36:41AM 4 points [-]

It looks circular to me. Of course, if you look hard enough at any views like this, the only choices are circles and terminating lines, and it seems almost an aesthetic matter which someone goes with, but this is such a small circle. It's right to care about morality and to be moral because morality says so and morality possesses the sole capacity to identify "rightness", including the rightness of caring about morality.

Comment author: Psy-Kosh 01 February 2010 03:54:53AM 6 points [-]

It's almost, well, I hate to say this, but more a matter of definitions.

ie, what do you MEAN by the term "right"?

Just keep poking your brain about that, and keep poking your brain about what you mean by "should" and what you actually mean by terms like "morality" and I think you'll find that all those terms are pointing at the same thing.

It's not so much "there's this criterion of 'rightness' that only morality has the ability to measure" but rather that an appeal to morality is what we mean when we say stuff like "'should' we do this? is it 'right'?" etc...

The situation is more, well, like this:

Humans: "Morality says that, among other things, it's better and moral to be, well, moral. It is also moral to save lives, help people, bring joy, and a whole lot of other things"

Paperclipers: "having scanned your brains to see what you mean by these terms, we agree with your statement."

Paperclippers: "Converting all the matter in your system into paperclips is paperclipish. Further, it is better and paperclipish to be paperclipish."

Humans: "having scanned your minds to determine what you actually mean by those terms, we agree with your statement."

Humans: "However, we don't care about paperclipishness. We care about morality. Turning all the matter of our solar system (including the matter we are composed of) into paperclips is bad, so we will try to stop you."

Paperclippers: "We do not care about morality. We care about paperclipishness. Resisting the conversion to paperclips is unpaperclipish. Therefore we will try to crush your resistance."

This is very different from what we normally think of as circular arguments, which take the form "A, therefore B, therefore A, QED", while the other side says "no! not A"

Here, all sides agree about stuff. It's just that they value different things. But the fact of humans valuing the stuff isn't the justification for valuing that stuff. The justification is that it's moral. But the fact is that we happen to be moved by arguments like "it's moral", rather than the wicked paperclippers that only care about whether it's paperclipish or not.

Comment author: Breakfast 01 February 2010 05:56:54AM *  0 points [-]

But why should I feel obliged to act morally instead of paperclippishly? Circles seem all well and good when you're already inside of them, but being inside of them already is kind of not the point of discussing meta-ethics.

Comment author: Psy-Kosh 01 February 2010 06:05:32AM 4 points [-]

"should"

What do you mean by "should"? Do you actually mean anything by it other than an appeal to morality in the first place?

Comment author: RomanDavis 24 May 2010 04:22:55PM 2 points [-]

Oh shit. I get it. Morality exists outside of ourselves in the same way that paperclips exist outside clippies.

Babyeating is justified by some of the same impulses as baby-saving: protecting one's own genetic line.

It's not necessarily as well motivated by the criterion of saving sentient creatures from pain, but you might be able to make an argument for it. Maybe if you took the opposite path and said not that pain was bad, but that sentience / long life / grandchildren were good, and babyeating was a "moral decision" for having grandchildren.

Comment author: Psy-Kosh 24 May 2010 04:45:44PM 2 points [-]

First part yes, rest... not quite. (or maybe I'm misunderstanding you?)

"Protecting one's own genetic line" would be more the evolutionary reason. ie, part of the process that led to us valuing morality as opposed to valuing paperclips. (or, hypothetically fictionally alternately, part of the process that led to the Babyeaters valuing babyeating instead of valuing morality.)

But that's not exactly a moral justification as much as it is part of an explanation of why we care about morality. We should save babies... because! ie, Babies (or people in general, for that matter) dying is bad. Killing innocent sentients, especially those that have had the least opportunity to live, is extra bad. The fact that I care about this is ultimately in part explained via evolutionary processes, but that's not the justification.

The hypothetical Babyeaters do not care about morality. That's kind of the point. It's not that they've come to different conclusions about morality as much as the thing that they value isn't quite morality in the first place.

Comment author: RomanDavis 29 May 2010 04:45:00PM 0 points [-]

I... don't think so. One theory of morality is that killing is bad. Sure, that's at least a component of most moral systems, but there are certain circumstances under which killing is good or okay: if the person you're killing is a Nazi or a werewolf, a fetus you could not support to adulthood, someone trying to kill you, or a death row inmate guilty of a crime by rule of law.

Justifications for killing are often moral.

Babyeaters are, in a way at least possessing similarities to human morality, justified by giving the fewer remaining children a chance at a life with the guidance of adult babyeaters, and more resources since they don't have to compete against millions of their siblings.

This allows babyeaters to develop something like empathy, affection, bonding, love, and happiness for the surviving babyeater kind. Without this, babyeaters would be unable to make a babyeater society, and it's really easy to apply utilitarianism to it in the same way utilitarian theory can be applied to human morality.

It's also justified because it's an individual sacrifice of your own genetic line, rather than eating other babyeaters' children, which is what a pure grandchildren-maximizer would do. The needs of the many > the wants of the few, which also plays a part in various theories of morality.

I'd say they reached the same conclusion that we did about most things, it's just they took necessary and important moral sacrifice, and turned it into a ritual that is now detached from morality.

It damn well sounds like we're talking about the same thing. The only objection I can think of is that they're aliens and that this would be highly improbable, but if morality is just an evolutionary optimization strategy among intelligent minds, even something that could be computed mathematically, then it isn't necessarily any more unlikely than that certain parts of human and plant anatomy follow the Fibonacci sequence.

Comment author: Eliezer_Yudkowsky 01 February 2010 03:35:12AM 3 points [-]

Only in the sense that "2 + 2 = 4" is not fantastically circular.

Comment author: prase 03 February 2010 01:04:31PM *  0 points [-]

In some sense, the analogy between morality and arithmetic is right. On the other hand, the meaning of arithmetic can be described precisely enough that everybody means the same thing by the word. Here, I don't know exactly what you mean by morality. Yes, saving babies, not committing murder, and all that stuff, but when it comes to details, I am pretty sure that you will often find yourself disagreeing with others about what is moral. Of course, in your language, any such disagreement means that somebody is wrong about the facts. What I am uncomfortable with is the lack of an unambiguous definition.

So, there is a computation named "morality", but nobody knows exactly what it is, and nobody gives methods for discovering new details of the still-incomplete definition. Fair, but I don't see any compelling argument for attaching words to only partly defined objects, or for caring too much about them. It seems to me that this approach pictures morality as ineffable stuff, though of a different kind than standard bad philosophy does.

Comment author: Rain 09 February 2010 08:44:39PM *  0 points [-]

It seems you've encountered a curiosity-stopper, and are no longer willing to consider changes to your thoughts on morality, since that would be immoral. Is this the case?

Comment author: Eliezer_Yudkowsky 10 February 2010 12:53:33AM 2 points [-]

Wha? No. But you'd have to offer me a moral reason, as opposed to an immoral one.

Comment author: Alicorn 10 February 2010 01:00:52AM 3 points [-]

How about amoral reasons? Are those okay?

Comment author: Eliezer_Yudkowsky 10 February 2010 12:51:31PM 0 points [-]

...I'd like to see an example?

Comment author: Alicorn 10 February 2010 01:18:03PM 0 points [-]

Under your definition I'm not sure if such things exist; I was mostly being silly.

Comment author: Zack_M_Davis 01 February 2010 03:59:33AM 5 points [-]

this is a rather drastic assumption to make in our current state of ignorance, and I feel that a sort of courtesy should be extended

Yes, but do you see why people get annoyed when you build that courtesy into your terminology?

Comment author: LauraABJ 01 February 2010 03:32:52AM 6 points [-]

Ah, so moral justifications are better justifications because they feel good to think about. Ah, happy children playing... Ah, lovers reuniting... Ah, the Magababga's chief warrior being roasted as dinner by our chief warrior who slew him nobly in combat...

I really don't see why we should expect 'morality' to extrapolate to the same mathematical axioms if we applied CEV to different subsets of the population. Sure, you can just define the word morality to include the sum total of all human brains/minds/wills/opinions, but that wouldn't change the fact that these people, given their druthers and their own algorithms, would morally disagree. Evolutionary psychology is a very fine just-so story for many things that people do, but people's, dare I say, aesthetic sense of right and wrong is largely driven by culture and circumstance. What would you say if Omega looked at the people of Earth and said, "Yes, there is enough agreement on what 'morality' is that we need only define 80,000 separate logically consistent moral algorithms to cover everybody!"

Comment author: byrnema 01 February 2010 01:43:09AM *  0 points [-]

However, it happens that the vast majority kinds of possible minds don't give a crap about morality, and while they might agree with us about what they should do, they wouldn't find that motivating.

What about the minds that disagree with us about what they should do, and yet do care about doing what they think they should? Would your position hold that it is unlikely for them to have a different list or that they must be mistaken about the list -- that caring about what you "should" do means having the list we have?

Comment author: Eliezer_Yudkowsky 01 February 2010 01:48:21AM *  1 point [-]

What about the minds that disagree with us about what they should do, and yet do care about doing what they think they should?

How'd they end up with the same premises and different conclusions? Broken reasoning about implications, like the human practice of rationalization? Bad empirical pictures of the physical universe leading to poor policy? If so, that all sounds like a perfectly ordinary situation.

Comment author: byrnema 01 February 2010 02:03:59AM *  0 points [-]

How'd they end up with the same premises and different conclusions?

They care about doing what is morally right, but they have different values. The baby-eaters, for example, thought it was morally right to optimize whatever they were optimizing with eating the babies, but didn't particularly value their babies' well-being.

Comment author: orthonormal 01 February 2010 02:40:53AM *  4 points [-]

Er, you might have missed the ancestor of this thread. In the conflict between fundamentally different systems of preference and value (more different than those of any two humans), it's probably more confusing than helpful to use the word "should" with the other one. Thus we might introduce another word, should2, which stands in relation to the aliens' mental constitution (etc) as should stands to ours.

This distinction is very helpful, because we might (for example) conclude from our moral reasoning that we should respect their moral values, and then be surprised that they don't reciprocate, if we don't realize that that aspect of should needn't have any counterpart in should2. If you use the same word, you might waste time trying to argue that the aliens should do this or respect that, applying the kind of moral reasoning that is valid in extrapolating should; when they don't give a crap for what they should do, they're working out what they should2 do.

(This is more or less the same argument as in Moral Error and Moral Disagreement, I think.)

Comment author: byrnema 01 February 2010 02:52:07AM 3 points [-]

I'm not sure. How can there be any confusion when I say they "do care about doing what they think they should?" I clearly mean should2 here.

Comment author: Douglas_Knight 01 February 2010 05:11:16AM 1 point [-]

I'm not sure. How can there be any confusion when I say they "do care about doing what they think they should?" I clearly mean should2 here.

I think it's perfectly clear. Eliezer seems to disapprove of this usage and I think he claims that it is not clear, but I'm less sure of that.

I propose that a moral relativist is someone who likes this usage.

Comment author: TheAncientGeek 29 May 2014 03:35:29PM -1 points [-]

There remains a third option in addition to evolutionary hardwired stuff and ineffable, transcendent stuff.

Comment author: aausch 01 February 2010 01:42:39AM 1 point [-]

This is the interpretation I also have of Eliezer's view, and it confuses me, as it applies to the story.

For example, I would expect aliens which do not value morality would be significantly more difficult to communicate with.

Also, the back story for the aliens gives a plausible argument for their actions as arising from a different path towards the same ultimate morality.

I interpreted the story as showing aliens which, as a quirk of their history and culture, have significant holes in their morality - holes which, given enough time, I would expect will disappear.

Comment author: orthonormal 01 February 2010 02:48:49AM 2 points [-]

Also, the back story for the aliens gives a plausible argument for their actions as arising from a different path towards the same ultimate morality.

Really? Although babyeater_should coincides with akon_should on the notion of "toleration of reasonable mistakes" and on the Prisoner's Dilemma, it seems clear from the story that these functions wouldn't converge on the topic of "eating babies". (If the Superhappies had their way, both functions would just be replaced by a new "compromise" function, but neither the Babyeaters nor the humans want that, and it appears to be the wrong choice according to both babyeater_should and akon_should.)