Roko comments on Strong moral realism, meta-ethics and pseudo-questions. - Less Wrong

18 [deleted] 31 January 2010 08:20PM


Comment author: Eliezer_Yudkowsky 31 January 2010 08:31:18PM 5 points [-]

I think there's an ambiguity between "realism" in the sense of "these statements I'm making about 'what's right' are answers to a well-formed question and have a truth value" and "the subject matter of moral discourse is a transcendent ineffable stuff floating out there which compels all agents to obey and which could make murder right by having a different state". Thinking that moral statements have a truth value is cognitivism, which sounds much less ambiguous to me, and that's why I prefer to talk about moral cognitivism rather than moral realism.

As a moral cognitivist, I would look at your diagram and disagree that the Baby-Eating Aliens and humans have different views of the same subject matter; rather, we and they are talking about different subject matters, and it is an error of the computer translation programs that the word comes out as "morality" in both cases. Morality is about how to save babies, not eat them; everyone knows that, and they happen to be right. If we could get past the difficulties of translation, the babyeaters would agree with us about what is moral, we would agree with them about what is babyeating, and we would agree about the physical fact that we find different sorts of logical facts to be compelling.

I have a pending post-to-write on how, to the best of my knowledge, there are only two sorts of things that can make a proposition "true", namely physical events and logical implications, and of course mixtures of the two. I mention this because we have a legitimate epistemic preference for simpler hypotheses about the causes of physical events, but no such thing as an epistemic preference for "simpler axioms" when we are talking about logical facts. We may have an aesthetic preference for simpler axioms in math, but that is not the same thing. If there's no preference for simpler assumptions, that doesn't mean the issue is not a factual one, but it may suggest that we are dealing with logical facts rather than physical facts (statements which are made true by which conclusions follow from which premises, rather than the state of a causal event).

Added: Since I have a definite criterion for something being a "fact", I defend the notion of fact-ness against the charge of being a floating extra.

Comment deleted 31 January 2010 09:04:25PM [-]
Comment author: Eliezer_Yudkowsky 31 January 2010 09:06:59PM 1 point [-]

But I just described two kinds of subject matter that are the only two kinds of subject matter I know about: physical facts and mathematical facts. "What should be done with the universe" invokes a criterion of preference, "should", which compels humans but not Babyeaters. If you look at the fact that the Babyeaters are out trying to make a different sort of universe, and the fact that the humans are out trying to make the universe look the way it should, and you call these two facts a "disagreement", I don't understand what physical fact or logical fact is supposed to be the common subject matter being referred to. They do the babyeating thing, we do the right thing; that's not a subject matter.

Comment author: Alicorn 01 February 2010 01:09:04AM 5 points [-]

The rampant dismissal of so many restatements of your position has tempted me to try my own. Tell me if I've got it right or not:

There is a topic, which covers such subtopics as those listed here, which is the only thing in fact referred to by the English word "morality" and associated terms like "should" and "right". It is an error to refer to other things, like eating babies, as "moral", in the same way it would be an error to refer to black-and-white Asian-native ursine creatures as "lobsters": people who do it simply aren't talking about morality. Once the subject matter of morality is properly nailed down, and all other facts are known, there's no room for disagreement about morality, what ought to be done, what actions are wrong, etc., any more than there is about the bachelorhood of unmarried men. However, it happens that the vast majority of kinds of possible minds don't give a crap about morality, and while they might agree with us about what they should do, they wouldn't find that motivating. Humans, as a matter of a rather lucky causal history, do care about morality, in much the same way that pebblesorters care about primes - it's just one of the things we're built to find worth thinking about and working towards. By a similar token, we are responsive to arguments about features of situations that give them moral character of one sort or another.

Comment author: Eliezer_Yudkowsky 01 February 2010 01:26:08AM *  1 point [-]

...sounds mostly good so far. Except that there's plenty of justification for thinking about morality besides "it's something we happen to think about". They're just... well... there's no other way to put this... perfectly valid, moving, compelling, heartwarming, moral justifications. They're actually better justifications than being compelled by some sort of ineffable transcendent compellingness stuff - if I've got to respond to something, those are just the sort of (logical) facts I'd want to respond to! (I think this may be the part Roko still doesn't get.) Also, the "lucky causal history" isn't luck at all, of course.

It's also quite possible that human beings, from time to time, are talking about different subject matters when they have what looks like a moral disagreement; but this is a rather drastic assumption to make in our current state of ignorance, and I feel that a sort of courtesy should be extended, to the extent of hearing out each other's arguments and proceeding on the assumption that we actually are disagreeing about something.

Comment author: Unknowns 01 February 2010 07:28:36AM 7 points [-]

Eliezer, I don't understand how you can say that the "lucky causal history" wasn't luck, unless you also say "if humans had evolved to eat babies, babyeating would have been right."

If it wouldn't have been right even in that event, then it took a stupendous amount of luck for us to evolve in just such a way that we care about things that are right, instead of other things.

Either that or there is a shadowy figure.

Comment author: aleksiL 01 February 2010 04:43:14PM 2 points [-]

As I understand Eliezer's position, when babyeater-humans say "right", they actually mean babyeating. They'd need a word like "babysaving" to refer to what's right.

Morality is what we call the output of a particular algorithm instantiated in human brains. If we instantiated a different algorithm, we'd have a word for its output instead.

I think Eliezer sees translating the babyeater word for babyeating as "right" as an error similar to translating their word for babyeaters as "human".

Comment author: Unknowns 01 February 2010 05:04:36PM 3 points [-]

Precisely. So it was luck that we instantiate this algorithm, instead of a different one.

Comment author: Alicorn 02 February 2010 06:00:31AM *  6 points [-]

I'm curious about how your idea handles an edge case. (I am merely curious - not to downplay curiosity, but you shouldn't consider it a reason to devote considerable brain-cycles on its own if it'd take considerable brain-cycles to answer, because I think your appropriation of moral terminology is silly and I won't find the answer useful for any specific purpose.)

The edge case: I have invented an alien species called the Zaee (for freeform roleplaying game purposes; it only recently occurred to me that they have bearing on this topic). The Zaee have wings, and can fly starting in early childhood. They consider it "loiyen" (the Zaee word that most nearly translates as "morally wrong") for a child's birth mother to continue raising her offspring (call it a son) once he is ready to take off for the first time; they deal with this by having her entrust her son to a friend, or a friend of the father, or, in an emergency, somebody who's in a similar bind and can just swap children with her. Someone who has a child without a plan for how to foster him out at the proper time (even if it's "find a stranger to swap with") is seen as being just as irresponsible as a human mother who had a child without a clue how she planned to feed him would be (even if it's "rely on government assistance").

There is no particular reason why a Zaee child raised to adulthood by his biological mother could not wind up within the Zaee-normal range of psychology (not that they'd ever let this be tested experimentally); however, they'd find this statement about as compelling as the fact that there's no reason a human child, kidnapped as a two-year-old from his natural parents and adopted by a duped but competent couple overseas, couldn't grow up to be a normal human: it still seems a dreadful thing to do, and to the child, not just to the parents.

When Zaee interact with humans they readily concede that this precept of their <moral system> has no bearing on any human action whatever: human children cannot fly. And in the majority of other respects, Zaee are like humans in their <morality> - if you plopped a baby Zaee brain in a baby human body (and resolved the body dysphoria and aging rate issues) and he grew up on Earth, he'd be darned quirky, but wouldn't be diagnosed with a mental illness or anything.

Other possibly relevant information: when Zaee programmers program AIs (not the recursively self-improving kind; much more standard-issue sci-fi types), they apply the same principle, and don't "keep" the AIs in their own employ past a certain point. (A particular tradition of programming frequently has its graduates arrange beforehand to swap their AIs.) The AIs normally don't run on mobile hardware, which is irrelevant anyway, because the point in question for them isn't flight. However, Zaee are not particularly offended by the practice of human programmers keeping their own AIs indefinitely. The Zaee would be very upset if humans genetically engineered themselves to have wings from birth which became usable before adulthood and this didn't yield a change in human fostering habits. (I have yet to have cause to get a Zaee interacting with another alien species that can also fly in the game for which they were designed, but anticipate that if I did so, "grimly distasteful bare-tolerance" would be the most appropriate attitude for the Zaee in the interaction. They're not very violent.)

And the question: Are the Zaee "interested in morality"? Are we interested in <Zaee word that most nearly translates as "morality">? Do the two words refer to distinct concepts that just happen to overlap some, or to be compatible in a special way? How do you talk about this situation, using the words you have appropriated?

Comment author: Alicorn 01 February 2010 02:28:12AM 3 points [-]

They're actually better justifications

"Better" by the moral standard of betterness, or by a standard unconnected to morality itself?

if I've got to respond to something, those are just the sort of (logical) facts I'd want to respond to!

Want to respond to because you happen to be the sort of creature that likes and is interested in these facts, or for some reason external to morality and your interest therein?

It's also quite possible that human beings, from time to time, are talking about different subject matters when they have what looks like a moral disagreement; but this is a rather drastic assumption to make in our current state of ignorance

Why does this seem like a "drastic" assumption, even given your definition of "morality"?

Comment author: Eliezer_Yudkowsky 01 February 2010 02:31:53AM *  0 points [-]

I don't see why I'd want to use an immoral standard. I don't see why I ought to care about a standard unconnected to morality. And yes, I'm compelled by the sort of logical facts we name "moral justifications" physically-because I'm the sort of physical creature I am.

It's drastic because it closes down the possibility of further discourse.

Comment author: Alicorn 01 February 2010 02:32:59AM 7 points [-]

Is there some way in which this is not all fantastically circular?

Comment author: Psy-Kosh 01 February 2010 03:31:30AM 11 points [-]

How about something like this: There's a certain set of semi-abstract criteria that we call 'morality'. And we happen to be the sorts of beings that (for various reasons) happen to care about this morality stuff as opposed to caring about something else. Should we care about morality? Well, what is meant by "should"? It sure seems like that's a term we use simply to point to the same morality criteria/computation. In other words, "should we care about morality" seems to translate to "is it moral to care about morality", or "apply the morality function to 'care about morality' and check the output".

It would seem also that the answer is yes, it is moral to care about morality.

Some other creatures might somewhere care about something other than morality. That's not a disagreement about any facts or theory or anything, it's simply that we care about morality and they may care about something like "maximize paperclip production" or whatever.

But, of course, morality is better than paper-clip-ality. (And, of course, when we say "better", we mean "in terms of those criteria we care about"... ie, morality again.)

It's not quite circular. We and the paperclipper creatures wouldn't really disagree about anything. They'd say "turning all the matter in the solar system into paperclips is paperclipish", and we'd agree. We'd say "it's more moral not to do so", and they'd agree.

The catch is that they don't give a dingdong about morality, and we don't give a dingdong about paperclipishness. And indeed that does make us better. And if they scanned our minds to see what we mean by "better", they'd agree. But then, that criterion we were referring to by the term "better" is simply not something the paperclippers care about.

"We happen to care about it" is not the justification; "it's moral" is the justification. It's just that our criterion for valid moral justification is, well... morality. Which is as it should be. Etc., etc.

Morality seems to be an objective criterion. Actions can be judged good or bad in terms of morality. We simply happen to care about morality instead of something else. And this is indeed a good thing.
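
A minimal sketch of the non-disagreement described above, in Python; the function names and toy scores are hypothetical stand-ins, not anything from the thread:

```python
# Two criteria, rendered as toy functions over hypothetical action names.
def morality(action):
    """Stand-in for the (enormously complex) morality computation."""
    return {"save the babies": 1.0,
            "tile the solar system with paperclips": 0.0,
            "care about morality": 1.0}.get(action, 0.5)

def paperclipishness(action):
    """Stand-in for the paperclipper's criterion."""
    return {"save the babies": 0.0,
            "tile the solar system with paperclips": 1.0,
            "care about morality": 0.0}.get(action, 0.5)

action = "tile the solar system with paperclips"
# Both agents compute both functions and get identical answers -- no
# factual disagreement anywhere:
assert morality(action) == 0.0 and paperclipishness(action) == 1.0

# "Should we care about morality?" unpacks to applying the morality
# function to that very act:
print(morality("care about morality"))  # 1.0 -- yes, it is moral to care about morality
```

On this picture the agents differ only in which function they are wired to act on, which is the whole of the "dispute".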

Comment author: byrnema 01 February 2010 04:02:53AM *  9 points [-]

I don't understand two sentences in a row. Not here, not in the meta-ethics sequence, not anywhere where you guys talk about morality.

I don't understand why I seem to be cognitively fine on other topics on Less Wrong, but then all of a sudden am Flowers for Algernon here.

I'm not going to comment anymore on this topic; it just so happens meta-morality or meta-ethics isn't something I worry about anyway. But I would like to part with the admonition that I don't see any reason why LW should be separating so many words from their original meanings -- "good", "better", "should", etc. It doesn't seem to be clarifying things even for you guys.

I think that when something is understood -- really understood -- you can write it down in words. If you can't describe an understanding, you don't own it.

Comment author: Alicorn 01 February 2010 03:36:41AM 4 points [-]

It looks circular to me. Of course, if you look hard enough at any views like this, the only choices are circles and terminating lines, and it seems almost an aesthetic matter which someone goes with, but this is such a small circle. It's right to care about morality and to be moral because morality says so and morality possesses the sole capacity to identify "rightness", including the rightness of caring about morality.

Comment author: RomanDavis 24 May 2010 04:22:55PM 2 points [-]

Oh shit. I get it. Morality exists outside of ourselves in the same way that paperclips exist outside clippies.

Babyeating is justified by some of the same impulses as baby-saving: protecting one's own genetic line.

It's not necessarily as well motivated by the criterion of saving sentient creatures from pain, but you might be able to make an argument for it. Maybe if you took the opposite path and said not that pain was bad, but that sentience/long life/grandchildren were good, and babyeating was a "moral decision" for having grandchildren.

Comment author: Eliezer_Yudkowsky 01 February 2010 03:35:12AM 3 points [-]

Only in the sense that "2 + 2 = 4" is not fantastically circular.

Comment author: prase 03 February 2010 01:04:31PM *  0 points [-]

In some sense, the analogy between morality and arithmetic is right. On the other hand, the meaning of arithmetic can be described precisely enough that everybody means the same thing by using that word. Here, I don't know exactly what you mean by morality. Yes, saving babies, not committing murder and all that stuff, but when it comes to details, I am pretty sure that you will often find yourself disagreeing with others about what is moral. Of course, in your language, any such disagreement means that somebody is wrong about the facts. What I am uncomfortable with is the lack of an unambiguous definition.

So, there is a computation named "morality", but nobody knows exactly what it is, and nobody gives a method for discovering new details of the yet-incomplete definition. Fair, but I don't see any compelling argument for attaching words to only partly defined objects, or for caring much about them. It seems to me that this approach pictures morality as ineffable stuff, although of a different kind than standard bad philosophy does.

Comment author: Rain 09 February 2010 08:44:39PM *  0 points [-]

It seems you've encountered a curiosity-stopper, and are no longer willing to consider changes to your thoughts on morality, since that would be immoral. Is this the case?

Comment author: Eliezer_Yudkowsky 10 February 2010 12:53:33AM 2 points [-]

Wha? No. But you'd have to offer me a moral reason, as opposed to an immoral one.

Comment author: Alicorn 10 February 2010 01:00:52AM 3 points [-]

How about amoral reasons? Are those okay?

Comment author: Zack_M_Davis 01 February 2010 03:59:33AM 5 points [-]

this is a rather drastic assumption to make in our current state of ignorance, and I feel that a sort of courtesy should be extended

Yes, but do you see why people get annoyed when you build that courtesy into your terminology?

Comment author: LauraABJ 01 February 2010 03:32:52AM 6 points [-]

Ah, so moral justifications are better justifications because they feel good to think about. Ah, happy children playing... Ah, lovers reuniting... Ah, the Magababga's chief warrior being roasted as dinner by our chief warrior who slew him nobly in combat...

I really don't see why we should expect 'morality' to extrapolate to the same mathematical axioms if we applied CEV to different subsets of the population. Sure, you can just define the word morality to include the sum total of all human brains/minds/wills/opinions, but that wouldn't change the fact that these people, given their druthers and their own algorithms, would morally disagree. Evolutionary psychology is a very fine just-so story for many things that people do, but people's, dare I say, aesthetic sense of right and wrong is largely driven by culture and circumstance. What would you say if Omega looked at the people of Earth and said, "Yes, there is enough agreement on what 'morality' is that we need only define 80,000 separate logically consistent moral algorithms to cover everybody!"

Comment author: byrnema 01 February 2010 01:43:09AM *  0 points [-]

However, it happens that the vast majority kinds of possible minds don't give a crap about morality, and while they might agree with us about what they should do, they wouldn't find that motivating.

What about the minds that disagree with us about what they should do, and yet do care about doing what they think they should? Would your position hold that it is unlikely for them to have a different list or that they must be mistaken about the list -- that caring about what you "should" do means having the list we have?

Comment author: Eliezer_Yudkowsky 01 February 2010 01:48:21AM *  1 point [-]

What about the minds that disagree with us about what they should do, and yet do care about doing what they think they should?

How'd they end up with the same premises and different conclusions? Broken reasoning about implications, like the human practice of rationalization? Bad empirical pictures of the physical universe leading to poor policy? If so, that all sounds like a perfectly ordinary situation.

Comment author: byrnema 01 February 2010 02:03:59AM *  0 points [-]

How'd they end up with the same premises and different conclusions?

They care about doing what is morally right, but they have different values. The baby-eaters, for example, thought it was morally right to optimize whatever they were optimizing with eating the babies, but didn't particularly value their babies' well-being.

Comment author: orthonormal 01 February 2010 02:40:53AM *  4 points [-]

Er, you might have missed the ancestor of this thread. In the conflict between fundamentally different systems of preference and value (more different than those of any two humans), it's probably more confusing than helpful to use the word "should" with the other one. Thus we might introduce another word, should2, which stands in relation to the aliens' mental constitution (etc) as should stands to ours.

This distinction is very helpful, because we might (for example) conclude from our moral reasoning that we should respect their moral values, and then be surprised that they don't reciprocate, if we don't realize that that aspect of should needn't have any counterpart in should2. If you use the same word, you might waste time trying to argue that the aliens should do this or respect that, applying the kind of moral reasoning that is valid in extrapolating should; when they don't give a crap for what they should do, they're working out what they should2 do.

(This is more or less the same argument as in Moral Error and Moral Disagreement, I think.)
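
A toy sketch of the should/should2 distinction, assuming purely hypothetical contents for both criteria:

```python
# Our extrapolated criterion happens to endorse respecting the other
# side's values; the aliens' criterion contains no such clause.
def should(policy):
    return policy in {"pursue our values", "respect their values"}

def should2(policy):
    return policy in {"pursue their values"}

assert should("respect their values")     # we conclude we should reciprocate...
assert not should2("respect our values")  # ...but should2 never returns the courtesy
```

Arguing that the aliens "should" respect our values is then a claim about the wrong function: they are computing should2, where the clause simply doesn't exist.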

Comment author: byrnema 01 February 2010 02:52:07AM 3 points [-]

I'm not sure. How can there be any confusion when I say they "do care about doing what they think they should?" I clearly mean should2 here.

Comment author: TheAncientGeek 29 May 2014 03:35:29PM -1 points [-]

There remains a third option in addition to evolutionarily hardwired stuff and ineffable, transcendent stuff.

Comment author: aausch 01 February 2010 01:42:39AM 1 point [-]

This is the interpretation I also have of Eliezer's view, and it confuses me, as it applies to the story.

For example, I would expect aliens which do not value morality to be significantly more difficult to communicate with.

Also, the back story for the aliens gives a plausible argument for their actions as arising from a different path towards the same ultimate morality.

I interpreted the story as showing aliens which, as a quirk of their history and culture, have significant holes in their morality - holes which, given enough time, I would expect will disappear.

Comment author: orthonormal 01 February 2010 02:48:49AM 2 points [-]

Also, the back story for the aliens gives a plausible argument for their actions as arising from a different path towards the same ultimate morality.

Really? Although babyeater_should coincides with akon_should on the notion of "toleration of reasonable mistakes" and on the Prisoner's Dilemma, it seems clear from the story that these functions wouldn't converge on the topic of "eating babies". (If the Superhappies had their way, both functions would just be replaced by a new "compromise" function, but neither the Babyeaters nor the humans want that, and it appears to be the wrong choice according to both babyeater_should and akon_should.)

Comment author: loqi 01 February 2010 10:01:13AM 3 points [-]

The problem I have with this use of the words "should" and "good" is that it treats them like semantic primitives, rather than as functions of context. We use them in explicitly delimited contexts all the time:

  • "If you want to see why the server crashed, you should check the logs."
  • "You should play Braid, if platformers are your thing."
  • "You should invest in a quality fork, if you plan on eating many babies."
  • "They should glue their pebble heaps together, if they want them to retain their primality."

Since I'm having a hard time parting with the "should" of type "Goal context -> Action on causal path to goal", the only sense I can make out of your position is that "if your goal is [extensional reference to the stuff that compels humans]" is a desirable default context.

If you agree that "What should be done with the universe" is a different question than "What should be done with the universe if we want to maximize entropy as quickly as possible", then either you're agreeing that what we want causally affects should-ness, or you're agreeing that the issue isn't really "should"'s meaning, it's what the goal context should be when not explicitly supplied. And you seem to be saying that it should be an extensional reference to commonplace human morality.
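
Read that way, the type "Goal context -> Action on causal path to goal" might be sketched like this in Python (the causal model and goal strings are invented for illustration):

```python
from functools import partial

def should(goal, causal_model):
    """Return the actions the causal model puts on the path to the goal."""
    return {action for action, outcome in causal_model.items() if outcome == goal}

causal_model = {
    "check the logs": "find out why the server crashed",
    "invest in a quality fork": "eat many babies",
    "glue the pebble heaps together": "heaps retain their primality",
}

print(should("find out why the server crashed", causal_model))  # {'check the logs'}

# On this reading, the bare moral "should" is the same function with a
# default goal context bound in -- an extensional pointer to whatever
# compels humans:
moral_should = partial(should, "the stuff that compels humans")
```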

Comment author: Wei_Dai 31 January 2010 09:25:33PM *  3 points [-]

But I just described two kinds of subject matter that are the only two kinds of subject matter I know about: physical facts and mathematical facts.

Suppose I ask

  • What is rationality?
  • Is UDT the right decision theory?
  • What is the right philosophy of mathematics?

Am I asking about physical facts or logical/mathematical facts? It seems like I'm asking about a third category of "philosophical facts".

We could say that the answer to "what is rationality" is whatever my meta-rationality computes, and hence reduce it to a physical+logical fact, but that really doesn't seem to help at all.

Comment author: Eliezer_Yudkowsky 31 January 2010 10:49:02PM 1 point [-]

These all sound to me like logical questions where you don't have conscious access to the premises you're using, and can only try to figure out the premises by looking at what seem like good or bad conclusions. But with respect to the general question of whether we are talking about (a) the way events are or (b) which conclusions follow from which premises, it sounds like we're doing the latter. Other "philosophical" questions (like 'What's up with the Born probabilities?' or 'How should I compute anthropic probabilities?') may actually be about (a).

Comment author: Wei_Dai 01 February 2010 09:24:46AM *  3 points [-]

Your answer seemed wrong to me, but it took me a long time to verbalize why. In the end, I think it's a map/territory confusion.

For comparison, suppose I'm trying to find the shortest way from home to work by visualizing a map of the city. I'm doing a computation in my mind, which can also be viewed as deriving implications from a set of premises. But that computation is about something external; and the answer isn't just a logical fact about what conclusions follow from certain premises.

When I ask myself "what is rationality?" I think the computation I'm doing in my head is also about something external to me, and it's not just a logical question where I don't have conscious access to the premises that I'm using, even though that's also the case.

So my definition of moral realism would be that when I do the meta-moral computation of asking "what moral premises should I accept?", that computation is about something that is not just inside my head. I think this is closer to what most people mean by the phrase.

Given the above, I think your meta-ethics is basically a denial of moral realism, but in such a way that it causes more confusion than clarity. Your position, translated into the "shortest way to work" example, would be as if someone told you that there is no fact of the matter about the shortest way to work because the whole city is just a figment of your imagination, and you replied that there is a fact of the matter about the computation in your mind, and that's good enough for you to call yourself a realist.

Comment author: Eliezer_Yudkowsky 01 February 2010 09:47:22AM 2 points [-]

When I ask myself "what is rationality?" I think the computation I'm doing in my head is also about something external to me

Well, if you're asking about human rationality, then the prudent-way-to-think involves lots of empirical info about the actual flaws in human cognition, and so on. If you're asking about rationality in the sense of probability theory, then the only reference to the actual that I can discern is about anthropics and possibly prudent priors - things like the Dutch Book Argument are math, which we find compelling because of our values.

If you think that we're referring to something else - what is it, where is it stored? Is there a stone tablet somewhere on which these things are written, on which I can scrawl graffiti to alter the very fabric of rationality? Probably not - so where are the facts that the discourse is about, in your view?

Comment author: Wei_Dai 01 February 2010 10:32:11AM 0 points [-]

I think "what is rationality" (and by that I mean ideal rationality) is like "does P=NP". There is some fact of the matter about it that is independent of what premises we choose to, or happen to, accept. I wish I knew where these facts live, or exactly how it is that we have any ability to determine them, but I don't. Fortunately, I don't think that really weakens my argument much.

Comment author: Eliezer_Yudkowsky 01 February 2010 10:42:58AM 4 points [-]

This is exactly what I refer to as a "logical fact" or "which conclusions follow from which premises". Wasn't that clear?

Actually, I guess it could be a bit less clear if you're not already used to thinking of all math as being about theorems derived from axioms which are premise-conclusion links, i.e., if the axioms are true of a model then the theorem is true of that model. Which is, I think, conventional in mathematics, but I suppose it could be less obvious.

In the case of P!=NP, you'll still need some axioms to prove it, and the axioms will identify the subject matter - they will let you talk about computations and running time, just as the Peano axioms identify the subject matter of the integers. It's not that you can make 2 + 2 = 5 by believing differently about the same subject matter, but that different axioms would cause you to be talking about a different subject matter than what we name the "integers".

Is this starting to sound a little familiar?
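
As a concrete rendering of "which conclusions follow from which premises" (a sketch; the code is not from the thread): Peano-style addition makes 2 + 2 = 4 a premise-conclusion link, and changing the premises changes the subject matter rather than the fact:

```python
def add(m, n):
    """Peano addition, with ints standing in for unary numerals S(S(...0)):
    add(m, 0) = m; add(m, S(n)) = S(add(m, n))."""
    return m if n == 0 else 1 + add(m, n - 1)

assert add(2, 2) == 4  # believing differently wouldn't change this link

def add_mod3(m, n):
    """A different axiom set: addition in Z/3Z."""
    return (m + n) % 3

assert add_mod3(2, 2) == 1  # true -- but of Z/3Z, not of "the integers"
```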

Comment author: Wei_Dai 01 February 2010 12:22:10PM *  1 point [-]

Actually, I guess it could be a bit less clear if you're not already used to thinking of all math as being about theorems derived from axioms which are premise-conclusion links

But that's not all that math is. Suppose we eventually prove that P!=NP. How did we pick the axioms that we used to prove it? (And suppose we pick the wrong axioms. Would that change the fact that P!=NP?) Why are we pretty sure today that P!=NP without having a chain of premise-conclusion links? These are all parts of math; they're just parts of math that we don't understand.

ETA: To put it another way, if you ask someone who is working on the P!=NP question what he's doing, he is not going to answer that he is trying to determine whether a specific set of axioms proves or disproves P!=NP. He's going to answer that he's trying to determine whether P!=NP. If those axioms don't work out, he'll just pick another set. There is a sense that the problem is about something that is not identified by any specific set of axioms that he happens to hold in his brain, that any set of axioms he does pick is just a map to a territory that's "out there". But according to your meta-ethics, there is no "out there" for morality. So why does it deserve to be called realism?

Perhaps more to the point, do you agree that there is a coherent meta-ethical position that does deserve to be called moral realism, which asserts that moral and meta-moral computations are about something outside of individual humans or humanity as a whole (even if we're not sure how that works)?

Comment author: ata 01 February 2010 10:47:36AM *  4 points [-]

I haven't finished reading your meta-ethics sequence, so I apologize in advance if this is something that you've already addressed, but just from this exchange, I'm wondering:

Suppose that instead of talking about humans and Babyeaters, we talk about groups of humans with equally strong feelings of morality but opposite ideas about it. Suppose we take one person who feels moral when saving a little girl from being murdered, and another person who feels moral when murdering a little girl as punishment for having been raped. This seems closely analogous to your "Morality is about how to save babies, not eat them, everyone knows that and they happen to be right." It would sound just as reasonable to say that everybody knows that morality is about saving children rather than murdering them, but sadly, it's not the case that "everybody knows" this: as you know, there are cultures existing right now where a girl would be put to death by honestly morally-outraged elders for the abominable sin of being raped, horrifying though this fact is.

So let's take two people (or two larger groups of people, if you prefer) from each of these cultures. We could have them imagine these actions as intensely as possible, and scan their brains for relevant electrical and chemical information, find out what parts of the brain are being used and what kinds of emotions are active. (If a control is needed, we could scan the brain of someone intensely imagining some action everyone would consider irrelevant to morality, such as brushing one's teeth. I don't think there are any cultures that deem that evil, are there?) If the child-rescuer and child-murderer seem to be feeling the same emotions, having the same experience of righteousness, when imagining their opposite acts, would you still conclude that it is a mistranslation/misuse to identify our word "morality" with whatever word the righteous-feeling child-murderer is using for what appears to be the same feeling? Or would you conclude that this is a situation where two people are talking about the same subject matter but have drastically opposing ideas about it?

If the latter is the case, then I do think I get the point of the Babyeater thought experiments: although they appear to us to have some mechanism of making moral judgments (judgments that we find horrible), this mechanism serves different cognitive functions for them than our moral intuition does for us, and it originated in them for different reasons. Therefore, they cannot be reasonably considered to be differently-calibrated versions of the same feature. Is that right?

Comment author: Eliezer_Yudkowsky 01 February 2010 06:14:23PM *  4 points [-]

If the child-rescuer and child-murderer seem to be feeling the same emotions, having the same experience of righteousness, when imagining their opposite acts, would you still conclude that it is a mistranslation/misuse to identify our word "morality" with whatever word the righteous-feeling child-murderer is using for what appears to be the same feeling?

Depends. If the child-murderer knew everything about the true state of affairs and everything about the workings of their own inner mind, would they still disagree with the child-rescuer? If so, then it's pretty futile to pretend that they're talking about the same subject matter when they talk about that-which-makes-me-experience-a-feeling-of-being-justified. It would be like if one species of aliens saw green when contemplating real numbers and another species of aliens saw green when contemplating ordinals; attempts to discuss that-which-makes-me-see-green as if it were the same mathematical subject matter are doomed to chaos. By the way, it looks to me like a strong possibility is that reasonable methods of extrapolating volitions will give you a spread of extrapolated-child-murderers some of which are perfectly selfish hedonists, some of which are child-rescuers, and some of which are Babyeaters.

And yes, this was the approximate point of the Babyeater thought experiment.

Comment deleted 31 January 2010 09:35:36PM *  [-]
Comment author: Eliezer_Yudkowsky 31 January 2010 10:46:53PM 1 point [-]

Mm... I can agree that a treaty has subject matter and is talked about by both parties, and refers to subsequent physical events. It has a treaty-kept-condition which is not quite the same thing as its being "true". (Note: in the original story, no treaty was actually discussed with the Babyeaters.) Where does that put it on a fact/opinion chart?

Comment author: TheAncientGeek 29 May 2014 03:22:45PM 0 points [-]

It looks like you can disagree about values as well as facts.